Backend Engineer
Power the future of immersive AI roleplay with fast, reliable, and intelligent infrastructure.
Seoul
Engineering
Full-time
On-site
We’re looking for a backend or infra-focused engineer with a deep interest in AI systems, low-latency performance, and multiplayer-style synchronization.
At astrsk.ai, we’re building an AI roleplay platform that feels alive — meaning every millisecond matters. Whether it’s handling streaming outputs from LLMs, syncing dialogue states across clients, or optimizing token usage, your work will shape the backbone of how users experience story, conversation, and presence.
The role
You’ll help design and implement the technical foundation for how interactive AI sessions work at scale. You’ll collaborate with frontend, design, and product teams to create realtime, intelligent, and resilient systems for thousands of concurrent users.
What you'll do
Design and optimize LLM-based streaming and inference systems
Build low-latency session sync logic (think multiplayer, voice, or chat systems)
Explore P2P or hybrid sync strategies for performance and reliability
Improve caching, session management, and AI conversation memory systems
Work with frontend engineers to shape seamless, magical user experiences
Prototype new AI-enhanced infrastructure tools that support creators
Required Skills
5+ years of backend, infra, or MLOps experience
Strong understanding of distributed systems, multiplayer game logic, or low-latency data flows
Familiarity with P2P, WebRTC, or state sync in client-server models
Experience integrating LLMs (OpenAI, Claude, Mistral, etc.) in a production environment
Ability to make smart tradeoffs between speed, cost, and quality in AI pipelines
Strong collaboration skills — you’re comfortable working with design and frontend
Bonus Skills
Experience with local-first architectures or conflict-free data structures (CRDTs)
Prior work in gaming, chat, or streaming platforms
Passion for storytelling tools, creative AI, or character systems
What to expect in the first 6-12 months
Help define architecture for session storage, recovery, and versioning
Optimize token usage, cost, and latency across multiple LLM providers
Build tools to support internal AI testing, evaluation, and debugging
Collaborate on new creative AI features that push the boundaries of the product
We want to hear from you
This is an opportunity to build deeply technical systems that directly impact user creativity and expression. If you’re someone who loves performance, infrastructure, and intelligent tooling — and you want to see your code spark new forms of digital storytelling — we want to hear from you.