Data Scientist – Multimodal & Foundation Models (2-4 years)


Role Overview

This role sits at the critical intersection of Applied Research and Machine Learning Engineering. We are looking for a Data Scientist with 2+ years of experience who possesses a first-principles understanding of attention-based models or a proven track record of working with complex SoTA (State-of-the-Art) architectures.

You won't just be integrating APIs; you will be responsible for understanding, training, modifying, and optimizing Transformer-based architectures, Diffusion models, and emerging paradigms. You will build sophisticated multimodal pipelines (Image, Video, Audio) and ensure these models—ranging from standard LLMs to Large Multimodal Models (LMMs)—are fine-tuned, evaluated, and deployed into production.

Core Responsibilities

  • Model Research & Modification: Analyze and improve Transformer architectures. Work deep inside training pipelines for SoTA models: implement custom loss functions and experiment with advanced architectural variants (e.g., Mixture of Experts (MoE), State Space Models (SSM)).
  • Multimodal Pipeline Development: Apply LLMs and Foundation models to script understanding and scene breakdown. Construct complex prompts for generative outputs across image, video, and audio modalities.
  • Fine-Tuning & Optimization: Execute domain-specific fine-tuning (LoRA, QLoRA, PEFT) and implement efficiency techniques like mixed precision, quantization, and pruning to make SoTA models production-viable.
  • Evaluation & Benchmarking: Design structured testing frameworks to benchmark model quality, creative intent, and failure modes. Document findings in technical logs and research notes.
  • Production Engineering: Transition research from Jupyter notebooks to production-ready code. Develop and expose model capabilities via REST APIs and collaborate with engineering to integrate solutions into media pipelines.
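To make the fine-tuning responsibilities above concrete: LoRA and its variants all rest on one idea — freeze the pretrained weight W and train only a low-rank update scaled as (alpha / r) · BA. A minimal pure-Python sketch of that arithmetic (illustrative only; production work would use PyTorch and the HuggingFace PEFT library, and the function names here are our own):

```python
def lora_delta(A, B, alpha, r):
    """LoRA update (alpha / r) * (B @ A) added to a frozen weight W.

    A is r x d_in, B is d_out x r; these low-rank factors are the only
    trainable parameters, while W itself stays frozen.
    """
    d_out, d_in = len(B), len(A[0])
    scale = alpha / r
    return [[scale * sum(B[i][k] * A[k][j] for k in range(r))
             for j in range(d_in)]
            for i in range(d_out)]

def trainable_fraction(d_in, d_out, r):
    """Fraction of parameters LoRA trains versus full fine-tuning."""
    return r * (d_in + d_out) / (d_in * d_out)
```

For a 4096x4096 projection with rank r=8, `trainable_fraction` comes out below 0.4%, which is why LoRA makes high-parameter models tractable to fine-tune on modest hardware.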

Eligibility Requirements (Mandatory)

[!IMPORTANT]
  • Advanced Model Experience: Applicants must have hands-on experience training, modifying, or scaling complex SoTA models (e.g., Llama 3, SDXL, Sora-like architectures, or Whisper). Candidates whose experience is limited to using hosted APIs (OpenAI/Anthropic) or prompt engineering without working at the architecture/training level will not be considered.
  • Experience: 2+ years of hands-on experience in Data Science/ML.
  • Architecture Depth: Deep theoretical and implementation-level understanding of Transformers (Encoder-Decoder/Decoder-only), attention mechanisms, and scaling behavior.
  • Training Expertise: Proven ability to fine-tune models from checkpoints or from scratch. Experience managing training stability and convergence for high-parameter models.
  • Research Literacy: Ability to read, summarize, and implement techniques directly from recent ML research papers (e.g., Diffusion, GenAI, FlashAttention, MoE).
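As a yardstick for the "architecture depth" requirement above: candidates should be able to write the attention mechanism from memory — at its core, softmax(QKᵀ/√d_k)V. A dependency-free sketch (illustrative only; real implementations are batched, masked tensor ops in PyTorch, and the function names here are our own):

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

A query equidistant from all keys yields uniform weights, so the output is simply the mean of the value vectors — a useful sanity check when debugging attention layers.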

Technical Proficiency

  • Frameworks: Python, PyTorch (preferred), HuggingFace (Transformers, Diffusers, PEFT)
  • Model Expertise: LLMs, LMMs (Large Multimodal Models), Diffusion, MoE, TTS/Voice Cloning
  • Techniques: LoRA/QLoRA, Instruction Tuning, Custom Training Loops, Mixed Precision
  • Engineering: GPU performance debugging (e.g., CUDA OOM troubleshooting), REST APIs, inference optimization
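One of the inference-optimization techniques named above, post-training quantization, reduces to mapping float weights onto int8 with a shared scale. A toy symmetric-quantization sketch (illustrative only; real deployments use calibrated, often per-channel schemes via libraries such as bitsandbytes, and the function names here are our own):

```python
def quantize_int8(xs):
    """Symmetric int8 quantization: map floats into [-127, 127] with one scale."""
    m = max(abs(x) for x in xs)
    scale = m / 127 if m > 0 else 1.0
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    """Recover approximate floats; round-trip error is at most scale / 2."""
    return [v * scale for v in q]
```

Storing int8 codes plus one float scale cuts weight memory roughly 4x versus float32, at the cost of a bounded rounding error — the basic trade-off behind quantized inference.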

Good to Have (Bonus)

  • Experience with multimodal generation (Vision/Audio Transformers, Image/Video generation).
  • Familiarity with efficient attention implementations (e.g., FlashAttention-2) or orchestration libraries like LangChain.
  • Contributions to open-source machine learning projects or independent research.
  • An interest in creative AI and entertainment technology.

The Ideal Candidate

  • Thinks like a researcher, builds like an engineer: You stay current with the latest arXiv papers and are comfortable running systematic experiments to diagnose and fix training instability.
  • Deep Ownership: You prefer deep understanding over black-box usage and are comfortable diagnosing model failures at the tensor level.
  • Iterative Mindset: You enjoy the cycle of Research → Prototyping → Production Integration.

Required Skills

Applied machine learning research