Software Engineering Intern – MLOps (LLM & Agent Systems)

DNV

About us

We are the independent expert in assurance and risk management. Driven by our purpose of safeguarding life, property, and the environment, we empower our customers and their stakeholders with facts and reliable insights so that critical decisions can be made with confidence.

As a trusted voice for many of the world’s most successful organizations, we use our knowledge to advance safety and performance, set industry benchmarks, and inspire and invent solutions to tackle global transformations.

About Energy Systems

We help customers navigate the complex transition to a decarbonized and more sustainable energy future. We do this by assuring that energy systems work safely and effectively, using solutions that are increasingly digital. We also help industries and governments navigate the many complex, interrelated transitions taking place globally and regionally in the energy industry.

About the role

We are seeking a Software Engineering Intern to join our Machine Learning Operations (MLOps) team, with a strong emphasis on Large Language Models (LLMs), agent-based systems, and production AI tooling.

While this role touches traditional ML concepts, the majority of our work focuses on LLM-powered systems—including prompt-driven workflows, tool-calling agents, orchestration layers, and model integrations—rather than classical model training or research.

This is a hands-on, production-focused internship. You will contribute directly to internal platforms and services that power AI-driven features used by real users, with close collaboration and mentorship from the engineering team.

This role is on-site at our DNV office in Houston, TX.

What You’ll Do

Depending on team priorities, you may contribute to:

  • Backend services that power LLM-driven workflows and AI agents
  • Integration of LLM providers (e.g., OpenAI, Gemini) into production systems
  • Model Context Protocol (MCP) tools and servers for safe, structured tool access
  • Agent orchestration logic (multi-step reasoning, tool calling, handoffs)
  • Frontend interfaces (Vue or React) for configuring and interacting with AI workflows
  • Observability, logging, and cost tracking for LLM usage
  • Improving reliability and developer experience of AI-enabled systems

Responsibilities

  • Implement features in production backend and frontend codebases
  • Build and maintain APIs that interact with LLMs and internal tools
  • Write clean, maintainable, and testable code
  • Participate in code reviews and technical design discussions
  • Debug issues across distributed systems (APIs, agents, UI)
  • Document workflows, agent behavior, and system decisions
  • Learn and apply best practices for shipping AI systems responsibly

About you

What is Required

  • Currently pursuing (or recently completed) a degree in Computer Science, Software Engineering, or a related field
  • Hands-on experience with one or more of the following:
    • Python
    • Node.js
    • Vue.js or React
  • Strong fundamentals in programming and APIs
  • Familiarity with Git and collaborative development workflows
  • Ability to take ownership of tasks and work independently after initial guidance
  • Strong problem-solving skills and curiosity about how systems work end-to-end
  • Strong written and verbal English communication skills

Please note that we conduct pre-employment drug and background screening.

What is Preferred

  • We strongly prefer that you submit a one-page cover letter along with your resume when applying
  • Experience working with LLMs (prompting, function/tool calling, embeddings, etc.)
  • Familiarity with Model Context Protocol (MCP) or similar tool-calling frameworks
  • Exposure to agentic workflows, orchestration, or multi-step AI systems
  • Experience building or consuming REST APIs
  • Familiarity with databases (SQL or NoSQL)
  • Exposure to Docker, cloud platforms (AWS, Azure, GCP), or CI/CD pipelines
  • Personal projects, internships, or open-source contributions involving AI or LLMs

What We’re Looking For in an Intern

  • Interest in production AI systems, not just experimentation
  • Comfort working across backend and frontend boundaries
  • Ability to take a task, ask the right questions, and move it forward
  • Willingness to debug issues, test locally, and iterate before submitting code
  • Curiosity about reliability, scalability, and correctness in AI systems

Immigration-related employment benefits, for example visa sponsorship, are not available for this position.