3T Biosciences is solving a major bottleneck in the field of immunotherapy – identifying novel targets that can be used to generate therapies to treat cancer in broad patient populations. We are developing transformative T cell receptor therapies for cancer and other immune-related diseases. Our proprietary experimental and computational technology, 3T TRACE, allows us to identify novel T cell receptor targets as well as potential clinical toxicities, enabling us to bring safe and effective therapies to patients.
The 3T computational team is an interdisciplinary team with core strengths in engineering, algorithms, and bioinformatics. Our core platform, 3T TRACE, involves iterative cycles of wet-lab experimentation, computational analysis, and model updates to accelerate target discovery and therapeutic development. The ideal candidate is someone who is excited by the opportunity to work across multiple areas in a dynamic environment. 3T will consider well-qualified candidates for either a full-time remote position or an on-site role at our South San Francisco office.
We’re committed to making a difference for patients. Our focused, creative, and ambitious team makes 3T Biosciences the perfect engine to drive therapeutic solutions to reality. We’re looking for enthusiastic and self-motivated individuals to bring their talents to a fast-paced environment.
Your Typical Responsibilities:
- Develop, maintain, and deploy applications that analyze data via machine learning and provide visualizations
- Develop, maintain, and deploy data lake, warehouse, and ETL infrastructure to process and store data supporting novel target discovery, therapeutic development, and program development
- Develop, maintain, and deploy AWS cloud infrastructure using CloudFormation
- Coordinate software releases while maintaining a CI/CD pipeline comprising a GitHub monorepo, GitHub Actions, and container builds
- Orchestrate computational tasks via directed acyclic graphs (DAGs)
- Contribute to computational infrastructure for IND filings
- Interface with a team of computational and experimental scientists to aid in data infrastructure, pipeline automation, and data usability
- Collaborate with multiple internal groups on cross-functional projects
- Create presentations and provide written and verbal updates on scientific findings
Qualifications:
- Bachelor’s degree or equivalent in a relevant engineering-focused field
- 3+ years of industry experience in software development
- Proficiency in Python
- Strong backend development experience
- Experience developing database infrastructure (e.g., MySQL, Redshift)
- Experience creating and maintaining APIs (e.g., Flask, FastAPI)
- Experience with AWS cloud infrastructure
- Previous experience with Docker and container registries
- Familiarity with CI/CD pipelines (e.g., GitHub Actions)
- Familiarity with CloudFormation or other infrastructure-as-code tools (e.g., Terraform, Pulumi)
- Familiarity with data warehouse and data lake schema design and ETL integration design concepts
- An ability to communicate and collaborate with life sciences experts
- Previous experience in biotech/life sciences
- Prior experience with workflow orchestration (e.g., Airflow, Argo, Kubeflow, AWS Batch, AWS Lambda)
- Prior experience developing computational infrastructure for IND filings
Please send your cover letter and resume to email@example.com, referencing Req# 100-06.