Bain & Company Inc

  • Platform Infrastructure Engineer

    Job Location US-CA-Palo Alto | US-CA-Los Angeles | US-MA-Boston | US-TX-Dallas | US-WA-Seattle
    Advanced Analytics
    Regular Full-Time
  • Company Overview

    Bain & Company is the management consulting firm that the world’s business leaders come to when they want results. Bain advises clients on strategy, operations, information technology, organization, private equity, digital transformation and strategy, and mergers and acquisitions, developing practical insights that clients act on and transferring skills that make change stick. The firm aligns its incentives with clients by linking its fees to their results. Bain clients have outperformed the stock market 4 to 1. Founded in 1973, Bain has 57 offices in 36 countries, and its deep expertise and client roster cross every industry and economic sector.


    Department Overview 

    Bain’s Advanced Analytics Group is a team of high-impact quantitative technology specialists who solve the statistical, machine learning, and data engineering challenges that we encounter in client engagements. AAG team members hold advanced degrees in subjects ranging across statistics, mathematics, computer science, and other quantitative disciplines, and have backgrounds in a variety of fields including data science, marketing analytics, and academia.


    Position Summary

    You will solve cutting-edge problems for a variety of industries as a software engineer specializing in Platform Infrastructure and DevOps. As a member of a diverse engineering team, you will participate in the full engineering life cycle which includes designing, developing, optimizing, and deploying new machine learning solutions and infrastructure at the production scale of the world’s largest companies.


    Core Responsibilities and Requirements

    • Partner with Data Science, Data Engineering, and Machine Learning Engineering teams to develop and deploy production quality code
    • Develop and champion modern infrastructure concepts to technical audiences and business stakeholders
    • Implement new and innovative deployment techniques, tooling, and infrastructure automation within Bain and at our clients
    • This position may be based in Palo Alto, Los Angeles, Boston, Dallas, Austin, or Seattle, or held remotely
    • Travel is required (~20%)

    Build and deploy highly available, scalable, and fault-tolerant platforms to run production applications that solve business problems:


    • Understand the needs and challenges of a client across operations and development, and then formulate solutions that advance their business and technical goals.
    • Develop solutions encompassing technology, process, and people for:
      • Continuous Delivery
      • Infrastructure strategy & operations
      • Build and release management
    • Work closely with development teams to ensure that solutions are designed with user experience, scale/performance, and operability in mind.

    Develop infrastructure and a deployment platform that enable production data science and machine learning engineering:

    • Participate in the full software development life cycle including designing distributed systems, writing documentation and unit/integration tests, and conducting code reviews.
    • Develop and improve infrastructure including CI/CD, microservice frameworks, distributed computing, and cloud infrastructure needed to support this platform.

    Provide technical guidance to external clients and internal stakeholders in Bain:


    • Explore new technical innovations in machine learning and data engineering to improve customer results.
    • Advise and coach engineering teams on technology stack best practices and operational models to raise their DevOps capabilities.




    Required Experience
    • 4+ years of experience using one of the following IaC frameworks: CloudFormation, Terraform
    • 4+ years of experience working with Docker containers
    • 4+ years of experience working on public cloud environments (AWS, GCP, or Azure), and associated deep understanding of failover, high-availability, high scalability, and security
    • 2+ years of experience with Unix/Linux system administration and scripting
    • 2+ years of experience administering and managing Kubernetes clusters (EKS, GKE, or AKS); Helm experience a plus
    • 2+ years of experience programming with Python, C/C++, Java, Go, or a similar programming language
    • 2+ years of experience with authentication mechanisms including LDAP, Active Directory, and SAML
    • One or more configuration management tools: Ansible, Salt, Puppet, or Chef
    • One or more monitoring and analytics platforms: Grafana, Prometheus, Splunk, SumoLogic, NewRelic, DataDog, CloudWatch, Nagios
    • CI/CD deployment pipelines: Jenkins, TravisCI, Gitlab CI, AWS CodePipeline
    • Version control and git workflows


    Preferred Experience
    • HashiCorp Vault and integrating it with Kubernetes for secret management
    • Deploying end-to-end logging solutions such as the EFK stack
    • Deploying Prometheus and various exporters (Postgres, Elasticsearch, etc.)
    • Hadoop framework
    • Distributed databases and query languages such as SQL or HQL: Hive, Aster Data, Greenplum, Cassandra, Vertica, Amazon Redshift, Snowflake
    • Developing frameworks, platforms, APIs
    • Developing and maintaining rigorous technical documentation and runbooks
    • Collaborating with the Networking and Security infrastructure teams to achieve and maintain baseline security standards
    • Serverless frameworks
    • Agile development methodology
    • Grafana dashboards


