
Applied AI Engineer

Mem0 · San Francisco, CA, United States
Full-time

Role Summary:

Own the 0→1. You’ll turn vague customer use cases into working proofs-of-concept that showcase what Mem0 can do. This means rapid full-stack prototyping, stitching together AI tools, and aggressively experimenting with memory retrieval approaches until the use case works end-to-end. You’ll partner closely with Research and Backend, communicate trade-offs clearly, and hand off winning prototypes that can be hardened for production.

What You'll Do:

Build POCs for real use cases: Stand up end-to-end demos (UI + APIs + data) that integrate Mem0 in the customer’s flow.

Experiment with memory retrieval: Try different embeddings, indexing, hybrid search, re-ranking, chunking/windowing, prompts, and caching to hit task-level quality and latency targets.

Prototype with Research: Implement paper ideas and new techniques from scratch, compare baselines, and keep what wins.

Create eval harnesses: Define small gold sets and lightweight metrics to judge POC success; instrument demos with basic telemetry.

Integrate AI tooling: Combine LLMs, vector DBs, Mem0 SDKs/APIs, and third-party services into coherent workflows (see the sketch after this list).

Collaborate tightly: Work with Backend on clean contracts and data models; with Research on hypotheses; share learnings and next steps.

Package & handoff: Write concise docs, scripts, and templates so Engineering can productionize quickly.
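
To make the prototyping loop concrete, here is a minimal sketch of the kind of POC wiring described above: recall a user's memories with the Mem0 SDK and ground an LLM reply in them. It assumes the open-source mem0 Python package's Memory.add / Memory.search interface and the openai chat-completions client; the model name, prompt, and result handling are illustrative assumptions, not Mem0's prescribed pattern.

# Minimal POC sketch (assumptions: open-source `mem0` package and `openai` client).
from mem0 import Memory
from openai import OpenAI

memory = Memory()   # default local backend; a real POC configures store/embedder per use case
llm = OpenAI()      # reads OPENAI_API_KEY from the environment

def answer(user_id: str, question: str) -> str:
    # Retrieve memories relevant to this question for this user.
    hits = memory.search(question, user_id=user_id)
    items = hits.get("results", []) if isinstance(hits, dict) else hits
    context = "\n".join(h.get("memory", "") for h in items)

    # Ground the LLM response in the retrieved memories.
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": f"Known facts about the user:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    reply = resp.choices[0].message.content

    # Persist the exchange so future sessions remember it.
    memory.add(
        [{"role": "user", "content": question},
         {"role": "assistant", "content": reply}],
        user_id=user_id,
    )
    return reply

During experimentation, the retrieval step inside answer() is the part that gets swapped: different embeddings, hybrid search, re-ranking, or caching can sit behind the same function while the eval harness measures quality and latency.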

Minimum Qualifications

Full-stack fluency: Next.js/React on the front end and Python backends (FastAPI/Django/Flask) or Node where needed.

Strong Python and TypeScript/JavaScript; comfortable building APIs, wiring data models, and deploying quick demos.

Hands-on with the LLM/RAG stack: embeddings, vector databases, retrieval strategies, prompt engineering.

Track record of rapid prototyping: moving from idea → demo in days, not months; clear documentation of results and trade-offs.

Ability to design small, meaningful evaluations for a use case (quality + latency) and iterate based on evidence.

Excellent communication with Research and Backend; crisp specs, readable code, and honest status updates.

Nice to Have:

Model serving/fine-tuning experience (vLLM, LoRA/PEFT) and lightweight batch/async pipelines.

Deployments on Vercel/serverless, Docker, basic k8s familiarity; CI for demo apps.

Data visualization and UX polish for compelling demos.

Prior Forward-Deployed/Solutions/Prototyping role turning customer needs into working software.

About Mem0

We're building the memory layer for AI agents. Think long-term memory that enables AI to remember conversations, learn from interactions, and build context over time. We're already powering millions of AI interactions. We are backed by top-tier investors and are well capitalized.

Our Culture

Office-first collaboration - We're an in-person team in San Francisco. Hallway chats, impromptu whiteboard sessions, and shared meals spark ideas that remote calls can't.

Velocity with craftsmanship - We build for the long term, not just shipping features. We move fast but never sacrifice reliability or thoughtful design - every system needs to be fast, reliable, and elegant.

Extreme ownership - Everyone at Mem0 is a builder-owner. If you spot a problem or opportunity, you have the agency to fix it. Titles are light; impact is heavy.

High bar, high trust - We hire for talent and potential, then give people room to run. Code is reviewed, ideas are challenged, and wins are celebrated—always with respect and curiosity.

Data-driven, not ego-driven – The best solution wins, whether it comes from a founder or an engineer who joined yesterday. We let results and metrics guide our decisions.
