Hesiod Technologies Logo
est. 2025
ZĂĽrich, CH

Hello đź‘‹

We're Hesiod Technologies, a Zurich-based research-first company investigating how neural networks process and generate real-world information.

current focus

Audio generation using text- and game-state-conditioned diffusion models, and analysis of reasoning mechanisms and failure modes in vision-language models.

approach

We apply our research insights to real-time game systems while pursuing a deeper understanding of fundamental model behaviors.

Long-term vision: Upon securing sufficient funding, we aim to train a foundation model with deep spatial intelligence—understanding three-dimensional space, physical principles, and causal relationships. Our goal is to advance Europe's position in creating AI systems that can genuinely reason about the physical world around us.

Research Collaboration

We're actively seeking collaborations in the following areas:

Audio generation and diffusion models for video game environments (adaptive soundscapes, NPC voice synthesis, combat audio systems)
Vision-language model interpretability and reasoning mechanisms
Representation learning

Whether you're interested in informal research discussions or potential collaborations, we'd love to hear from you.


Current Research

2 projects

Audio Diffusion Models

project::audio-generation

low-key research preview

We develop latent diffusion models for video game audio generation. The system achieves two types of alignment: semantic alignment with text descriptions and temporal alignment with real-time video frame features. Through these alignments, we maintain long-horizon coherence over multi-second outputs.
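
To make the conditioning scheme concrete, here is a minimal sketch assuming cross-attention over concatenated text and frame embeddings inside a latent-diffusion denoiser; all module names, dimensions, and the overall structure are simplified placeholders, not our production architecture.

```python
import torch
import torch.nn as nn

class ConditionedAudioUNet(nn.Module):
    """Illustrative latent-diffusion denoiser conditioned on text and video frames.

    Sketch only: semantic alignment comes from text-prompt embeddings, temporal
    alignment from per-frame visual features; both are injected via cross-attention
    at every denoising step.
    """

    def __init__(self, latent_dim=64, cond_dim=512, n_heads=8):
        super().__init__()
        self.in_proj = nn.Linear(latent_dim, cond_dim)
        self.cross_attn = nn.MultiheadAttention(cond_dim, n_heads, batch_first=True)
        self.out_proj = nn.Linear(cond_dim, latent_dim)
        self.time_mlp = nn.Sequential(
            nn.Linear(1, cond_dim), nn.SiLU(), nn.Linear(cond_dim, cond_dim)
        )

    def forward(self, z_t, t, text_emb, frame_emb):
        # z_t:       (B, T_audio, latent_dim)  noisy audio latents
        # text_emb:  (B, T_text,  cond_dim)    prompt embeddings (semantic alignment)
        # frame_emb: (B, T_video, cond_dim)    per-frame features (temporal alignment)
        h = self.in_proj(z_t) + self.time_mlp(t.view(-1, 1)).unsqueeze(1)
        cond = torch.cat([text_emb, frame_emb], dim=1)   # joint conditioning sequence
        h, _ = self.cross_attn(query=h, key=cond, value=cond)
        return self.out_proj(h)                          # predicted noise / velocity

# One denoising step on dummy tensors.
model = ConditionedAudioUNet()
z_t = torch.randn(2, 256, 64)                 # 2 clips, 256 latent frames each
t = torch.rand(2)                             # diffusion timesteps in [0, 1]
text_emb = torch.randn(2, 16, 512)            # encoded text prompt
frame_emb = torch.randn(2, 120, 512)          # encoded video frames
eps_hat = model(z_t, t, text_emb, frame_emb)  # (2, 256, 64)
```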


The model serves as a complete audio companion for games, generating both immediate Foley effects and ongoing soundscapes. It creates context-aware sound effects for player actions while maintaining background audio and music that respond to the gameplay situation.


Our approach works across diverse game genres, including RPGs, action-adventure, fast-paced FPS, and racing games. Our distillation-based sampler reduces the number of inference steps while preserving audio quality, enabling sub-150ms response times for real-time local inference on consumer GPUs (RTX 30/40 series, e.g. RTX 3090 and RTX 4090).
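
To illustrate why fewer denoising steps translate directly into lower latency, here is a generic few-step sampling loop with a rough timing harness; the Euler update, step count, and stand-in denoiser are placeholders for exposition, not our actual distilled sampler.

```python
import time
import torch

@torch.no_grad()
def sample_few_step(denoiser, text_emb, frame_emb, n_steps=4, shape=(1, 256, 64)):
    """Generic few-step sampler (illustrative only).

    A distilled model keeps n_steps small (e.g. 2-8) instead of the hundreds of
    steps an undistilled diffusion model needs, which is what makes sub-150ms
    local inference plausible on a single consumer GPU.
    """
    z = torch.randn(shape)
    ts = torch.linspace(1.0, 0.0, n_steps + 1)
    for i in range(n_steps):
        t = ts[i].expand(shape[0])
        eps = denoiser(z, t, text_emb, frame_emb)
        z = z + (ts[i + 1] - ts[i]) * eps   # simple Euler update; real samplers differ
    return z

# Stand-in denoiser so the timing harness runs end to end; a real run would plug
# in a distilled denoiser like the one sketched above.
def dummy_denoiser(z, t, text_emb, frame_emb):
    return torch.zeros_like(z)

start = time.perf_counter()
latents = sample_few_step(dummy_denoiser, torch.randn(1, 16, 512), torch.randn(1, 120, 512))
print(f"4-step sampling took {(time.perf_counter() - start) * 1000:.1f} ms")
```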


Game designers interested in validating and refining this approach are welcome to reach out, and we will soon release an API for game developers to integrate this technology into their development pipeline. In the near future, gamers will have direct access to audio customization features in supported games.
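
Purely as a thought experiment of what such an integration could look like (the API below does not exist yet, and every name in it is invented for illustration):

```python
# Hypothetical integration sketch: no client library has been released, and all
# names here (GameAudioClient, GameEvent, on_event) are placeholders.
from dataclasses import dataclass

@dataclass
class GameEvent:
    kind: str          # e.g. "footstep", "sword_hit", "ambient_shift"
    surface: str = ""  # optional context such as the surface material

class GameAudioClient:
    """Stand-in for a future streaming client: send game state, receive audio."""

    def __init__(self, endpoint: str, style_prompt: str):
        self.endpoint = endpoint
        self.style_prompt = style_prompt

    def on_event(self, event: GameEvent) -> bytes:
        # A real client would stream frame features and game state to the model
        # and return low-latency PCM audio; this placeholder returns silence.
        return b"\x00" * 4800  # ~100 ms at 48 kHz, 8-bit mono

client = GameAudioClient(
    endpoint="https://example.invalid/audio",   # placeholder URL
    style_prompt="tense ambient soundscape with subtle percussion",
)
pcm = client.on_event(GameEvent(kind="footstep", surface="gravel"))
```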

Latency: ~150ms
GPU: CUDA 11.7+
RAM: 16GB
Tech Stack: PyTorch • U-Net • ONNX

Model input: Video frames, Godot Engine TPS Demo (MIT)
Text prompt: "tense ambient soundscape with subtle percussion and deep bass, building suspense"

Model input: Video frames from a text-to-video model (from X user @bilawalsidhu)
Text prompt: "emotional and serious music with soft string plucking, deep drum rolls, and a quiet, echoing chant for a climactic moment"

Vision-Language Model Interpretability

project::vlm-interpretability

research

We study reasoning capabilities and systematic failure modes in vision-language models. Current work focuses on sparse autoencoders for mechanistic interpretation of internal model representations and on understanding emergent behaviors in multimodal reasoning.
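
For readers unfamiliar with the technique, here is a minimal sparse-autoencoder sketch of the kind used for this sort of analysis: it reconstructs frozen model activations under an L1 sparsity penalty so that individual latent units can be read as candidate features. Dimensions, hyperparameters, and the toy training step are illustrative placeholders, not our actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    """Overcomplete SAE over frozen model activations (illustrative sketch).

    Each decoder column is a candidate 'feature'; the L1 penalty pushes most
    latent units to zero so individual features stay interpretable.
    """

    def __init__(self, d_model=1024, d_hidden=8192):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, acts):
        codes = F.relu(self.enc(acts))   # sparse feature activations
        recon = self.dec(codes)          # reconstruction of the original activations
        return recon, codes

def sae_loss(recon, codes, acts, l1_coeff=1e-3):
    # Reconstruction error plus an L1 sparsity penalty on the codes.
    return F.mse_loss(recon, acts) + l1_coeff * codes.abs().mean()

# Toy training step on random "activations" standing in for a VLM layer.
sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
acts = torch.randn(256, 1024)            # batch of residual-stream activations
recon, codes = sae(acts)
loss = sae_loss(recon, codes, acts)
loss.backward()
opt.step()
print(f"loss={loss.item():.4f}  active fraction={(codes > 0).float().mean().item():.3f}")
```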

Model Scale: 1B-7B params
Primary Focus: Visual Reasoning
Tools & Methods: Vision Transformers • Sparse AE • Activation Analysis
Vision-Language Model Interpretability Visualization

Research Background

The company embodies the research vision of Muriz Serifovic, centered on fundamental questions in machine learning theory. Over the past decade, this work has evolved from studying geometric properties of representation spaces to investigating emergent behaviors in large language models.

2024-2025 → Current Research Focus

Analysis of emergent capabilities in Transformer architectures through sparse autoencoders. How do reasoning capabilities emerge from lower-level features, and can we characterize this emergence mathematically?

areas: mechanistic interpretability • reasoning patterns
2021-2023 → Optimization Theory

Theoretical analysis of implicit regularization in deep learning, with a focus on SGD dynamics and architecture-induced priors.

areas: loss landscapes • convergence analysis • architectural biases
2018-2020 → Semi-Supervised Learning

Investigation of representation learning in high-dimensional spaces with limited labeled data. How can architectural priors enable sample-efficient learning when labels are scarce?

areas: manifold learning • entropy minimization
2015-2017 → Representation Theory

Formal analysis of disentanglement properties in deep neural networks, studying the geometric structure of learned manifolds and the role of regularization in shaping representation spaces.

areas: information bottleneck • geometric deep learning • invariance principles

AI Consultancy

AI veterans (10 years)

AI Implementation & Deployment for Your Organization

From strategy to deployment, we deliver scalable AI solutions that drive measurable business outcomes

accepting clients

At the intersection of AI research and implementation, we translate academic innovations into functioning systems. We bring deep technical expertise to practical business challenges, understanding AI not just as implementers, but as architects who shape its fundamental behaviors.

Whether you're looking to implement large language models at scale, optimize your AI infrastructure, or develop custom solutions, our research-first approach ensures you're not just following industry trends—you're implementing solutions built on solid theoretical foundations and practical insights.

From strategic planning through to production deployment, we focus on delivering scalable and robust AI solutions that drive real business value. Based in Zurich, we combine technical precision with pragmatic implementation to ensure your AI initiatives are both innovative and reliable.

AI Strategy & Roadmap

Strategic planning and implementation roadmap for AI integration, aligned with business objectives

Custom AI Development

Tailored AI solutions built for your specific use cases and technical requirements

MLOps & Production

End-to-end ML pipeline management and production deployment automation

GenAI Solutions

Generative AI implementation with focus on practical business applications

AI Optimization & Scaling

Performance tuning and scaling solutions for production AI systems

Data Engineering & Architecture

Robust data infrastructure design and implementation for AI systems

Why Choose Us

core advantages

Production-First Approach

Solutions designed for production from day one, with scalability and reliability built-in

Rapid Time-to-Value

Quick implementation cycles with measurable business impact from early stages

Security & Compliance Focus

Built-in compliance with EU AI Act and robust security measures

Success-Based Pricing

Flexible pricing model aligned with your project success metrics

Development Process

git log --oneline

Discovery & Strategy

Understanding requirements and defining strategic approach

Proof of Concept

Rapid prototyping and validation of core concepts

Development & Testing

Iterative development with continuous testing and refinement

Production Deployment

Seamless deployment with monitoring and scaling setup

Continuous Optimization

Ongoing performance monitoring and iterative improvements

Security & Standards

security first

Technical Security

End-to-end encryption (TLS 1.2+)
Data encryption at rest (AES-256)
Secure cloud infrastructure

Data Protection

Strict data isolation
Privacy by design
Local data processing when possible

AI Development

Version controlled development
Systematic testing
Documented implementations

Transparency

Clear data usage policies
Regular client updates
Open communication

Ready to Advance Your AI Strategy?

Choose your preferred way to start the conversation:

typical response time: 3-24h

Location

$ status --location

â—Ź Connected to: ZĂĽrich Network

â—† Region: Europe/Zurich

⬢ Datacenter: Swiss Alps-01

Last updated: 1.2ms ago

From our base in Zurich, we benefit from proximity to ETH Zurich, established research laboratories, and an active machine learning community. This environment supports both our technical research goals and our commitment to responsible AI development. Being in Switzerland allows us to maintain close ties with academic researchers while pursuing independent industrial research.