The Frontier of Generative AI at ABEMA: How We're Shaping Development and Engineering Growth in a New Era

TECH&CREATIVE

At ABEMA, our development structure continues to evolve alongside our service growth. In recent years, we have been actively promoting the use of generative AI within our development team to enhance the content experience and improve operational efficiency. In this article, we interviewed Principal Engineer Hato, who has been leading the adoption of generative AI, and Engineering Manager Suga, who oversees the development of recommendation features. We discussed the current state of generative AI use at ABEMA, including specific use cases and workplace changes, as well as the learning mindset required for engineers.

Profile

  • Yuji Hato became Principal Product Engineer at AbemaTV, Inc., in 2024 and now leads product engineering. He joined in 2011 and developed Android and iOS apps for Ameba while also working on shared backend infrastructure. He later helped launch the music streaming service "AWA." In 2016, he joined the ABEMA iOS app development team, going on to serve as Engineering Manager for the iOS/Android teams and Head of the Client Strategy Office.

  • Shunya Suga is the Engineering Manager for the Product Backend team of AbemaTV. He joined as a new graduate in 2021 and has worked on new features for large-scale sporting events and on the replacement of ABEMA's search infrastructure. He currently develops and expands new features to enhance the user experience.

How Generative AI Is Enhancing ABEMA's Features

── To begin, could you tell us about your backgrounds as engineers?

Hato: I'm a Principal Engineer in the Product Development Division here at ABEMA. I'm responsible for the product development efforts of about 60 engineers, which includes setting our technical direction and driving initiatives to improve quality.

Suga: I'm a Backend Engineering Manager in the same division. My current focus is on developing ABEMA's recommendation features to enhance our users' viewing experience.

Hato: We shape the organizational structure and technology strategy for our division. A major part of that is driving the adoption of generative AI in product development. It might look like generative AI is a switch you can flip for instant productivity, but the reality is that cost, risk management, and security must be carefully vetted. We're working closely with promotion teams to select the right tools, establish clear implementation processes, and validate their effectiveness, ensuring we're fully and responsibly leveraging AI.

── With generative AI evolving so quickly, what has ABEMA's development team done to keep up?

Hato: Over the last two to three years, as generative AI has exploded, there's been a company-wide push at CyberAgent to explore its potential. In 2023, for instance, we held a company-wide "GenAI Utilization Contest" with a ¥10 million prize pool, which generated a ton of great ideas and proofs of concept. At ABEMA specifically, we're seeing more and more successful product integrations. Initiatives like generating banner images and news articles are already live in our production workflow.

Suga: We're also integrating generative AI into the recommendation features I work on. I actually presented on this at Google Cloud Next Tokyo '24, where I discussed our recommendation system built with vector search.

AlloyDB Powering ABEMA: Building a Recommendation System with Vector Search

The core idea is to take metadata from our content—like a show's synopsis—and use AI to summarize and structure it. We then convert that into a vector and use vector search to find similar content. So if a viewer watches a specific show, we can automatically recommend other content under a heading like "Because you watched..."
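
To make that flow concrete, here is a minimal sketch in Python. It is an illustration only, not ABEMA's production pipeline (which runs on AlloyDB's vector search): `summarize` and `embed` are placeholder stand-ins for a real LLM and embedding model, and similarity is plain cosine similarity over toy unit vectors.

```python
import hashlib
import numpy as np

def summarize(synopsis: str) -> str:
    # Placeholder: imagine an LLM turning the raw synopsis into a
    # structured summary here.
    return synopsis

def embed(text: str) -> np.ndarray:
    # Toy embedding so the example runs end to end; a real system would
    # call an embedding model and store the vectors in a vector database.
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(8)
    return v / np.linalg.norm(v)

catalog = {
    "Show A": "A detective drama set in Tokyo.",
    "Show B": "A romance that builds to a dramatic confession scene.",
    "Show C": "A mystery drama about a Tokyo detective.",
}
vectors = {title: embed(summarize(s)) for title, s in catalog.items()}

def because_you_watched(title: str, k: int = 2) -> list[str]:
    """Return the k nearest titles by cosine similarity (vectors are unit-norm)."""
    q = vectors[title]
    scored = sorted(
        ((other, float(q @ v)) for other, v in vectors.items() if other != title),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [other for other, _ in scored[:k]]

# Feed a "Because you watched..." shelf from the nearest neighbors.
print(because_you_watched("Show A", k=1))
```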

Since launching this feature, we've seen a measurable improvement of several percentage points in our service metrics, proving that generative AI can directly improve the user experience. Our next frontier is to move beyond program-level analysis to the scene level. We're exploring how to analyze popular moments—such as dramatic confession scenes or exciting surprise reveals—to create even more precise, contextually relevant recommendations.

From Code Completion to Workflow Design: A Look at Generative AI at ABEMA Today

── What do you think are the most effective ways to introduce generative AI into a development workflow?

Hato: We take two main approaches for boosting development productivity with generative AI. The first is what I'd call "developer augmentation" through tools integrated into the daily workflow. This includes things like real-time code completion and suggestions in your IDE. More recently, this has evolved to include more advanced support, like AI agents that can autonomously break down and execute tasks. The second is a systems-level approach, in which we examine the entire workflow to improve productivity across the whole team or organization. This involves optimizing processes themselves or embedding agents to assist with business operations, creating efficiencies on a much broader scale.

The first approach is exemplified by tools like GitHub Copilot, which directly impact daily work by assisting developers as they write code. These tools are relatively easy to adopt: CyberAgent rolled Copilot out to all engineers in April 2023, and over 1,000 engineers were using it right away. In fact, according to GitHub at the time, we were number one in Japan for both prompt submissions and suggestion acceptances. Two years on, I can say it's seamlessly integrated into our daily routine.

The second approach, however, demands a more structural solution. This involves using custom-designed workflows and AI agents to automate things like test execution and business processes. It’s not just about plugging in a tool; it requires optimizing based on a deep understanding of the business design and system architecture. The pace of change in generative AI is incredibly fast, so for both approaches, the key is to experiment, iterate, and improve quickly to find what works best.

── Large projects often suffer from high cognitive load due to system complexity, which hurts productivity and maintainability. How is generative AI helping to address these challenges on your existing projects?

Suga: From the perspective of reducing cognitive load, code assistant tools like GitHub Copilot have proven somewhat effective by letting developers ask questions about the entire codebase, such as "Where is this feature located?" or "How is it structured?" However, domain knowledge like specifications and requirements is difficult to grasp from code alone. ABEMA's backend, for example, consists of numerous microservices, making the overall system structure quite complex. That's why, at ABEMA, we use a documentation tool to centralize the management of specifications, requirements, and product requirements documents (PRDs).

Furthermore, to make such documents accessible via generative AI, we developed an in-house hybrid search tool, which combines RAG and full-text search. When a question like "What are the specifications for ABEMA's international support?" is asked, it extracts relevant information from documents and returns a summarized answer. Through these initiatives, we feel that catching up on domain knowledge and gathering information have become smoother, gradually reducing cognitive load.
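
The internals of the in-house tool aren't public, so the following is only a hedged sketch of how a hybrid ranker might blend the two signals: a cosine score over embeddings for the RAG side and a crude term-overlap score for the full-text side, mixed with a tunable weight.

```python
import numpy as np

def vector_score(query_vec: np.ndarray, doc_vec: np.ndarray) -> float:
    # Semantic signal: cosine similarity between precomputed embeddings.
    return float(query_vec @ doc_vec /
                 (np.linalg.norm(query_vec) * np.linalg.norm(doc_vec)))

def keyword_score(query: str, doc_text: str) -> float:
    # Full-text signal, reduced to term overlap for this sketch; a real
    # system would use a proper full-text index (BM25 or similar).
    terms = query.lower().split()
    return sum(t in doc_text.lower() for t in terms) / len(terms)

def hybrid_rank(query: str, query_vec: np.ndarray, docs: list, alpha: float = 0.6):
    """Rank (title, text, embedding) docs by a weighted blend of both signals."""
    scored = [
        (title,
         alpha * vector_score(query_vec, emb)
         + (1 - alpha) * keyword_score(query, text))
        for title, text, emb in docs
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

The top-ranked documents would then be handed to an LLM to produce the kind of summarized answer described above.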

── Generative AI could be a huge boost to documentation culture. Are you actively using it to auto-generate docs from code?

Hato: If your goal is to generate documentation for an existing codebase, generative AI is excellent for creating a first draft quickly. However, what it can generate is limited to things that are explicitly present in the code—project structure, architecture diagrams, detailed design info. It struggles with the implicit knowledge: domain specs, unwritten business rules, the reasoning behind a design, non-functional requirements. For now, you have to recognize the limits of automation and understand that human oversight is still required to ensure the quality and completeness of the documentation.
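
As a rough sketch of that "first draft" workflow, under stated assumptions: collect the explicit facts the code does contain (file layout, source excerpts) and hand them to an LLM, stubbed out here. The `draft_docs` helper is hypothetical, not a real API.

```python
from pathlib import Path

def collect_context(root: str, suffix: str = ".py") -> str:
    # Gather what is explicit in the code: file layout and source excerpts.
    parts = []
    for path in sorted(Path(root).rglob(f"*{suffix}")):
        parts.append(f"## {path}")
        parts.append(path.read_text(encoding="utf-8")[:2000])  # cap long files
    return "\n".join(parts)

def draft_docs(context: str) -> str:
    # Stub for a real LLM call. Note the limit described above: the model
    # can only document what the sources make explicit; design rationale
    # and unwritten business rules still need human input.
    prompt = ("Draft an architecture overview of this project "
              "from the following sources:\n" + context)
    return prompt  # imagine llm.complete(prompt) here
```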

Of course, the quality of the context you provide dramatically affects the quality of the output. This is where emerging ideas like Project as Code (PaC)—managing every aspect of a project as code—become interesting. If we can feed that structured information to an LLM via a mechanism such as the Model Context Protocol (MCP), we believe we can significantly improve the quality of the generated output. We're currently testing this with a tool we're calling "esa MCP." This is why meticulously documenting everything from requirements to design rationale as structured knowledge is becoming more critical than ever.
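
The details of "esa MCP" weren't shared in this interview, but the official MCP Python SDK shows the general shape. A hedged sketch: an MCP server exposing a single document-search tool that an LLM client can call for project context, where the `search` helper is a stand-in for a real documentation store.

```python
from mcp.server.fastmcp import FastMCP  # official MCP Python SDK

mcp = FastMCP("project-docs")

def search(query: str) -> str:
    # Stand-in for a real lookup, e.g. the hybrid document search
    # described earlier in this interview.
    return f"(excerpts from project docs matching {query!r})"

@mcp.tool()
def search_docs(query: str) -> str:
    """Search project specs, PRDs, and design docs; return matching excerpts."""
    return search(query)

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an LLM client can call the tool
```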

── What about for a field like testing, which is multi-stepped and demands high reliability? What's the impact of generative AI there?

Suga: Testing is one of the areas where generative AI shines. For instance, generating unit tests for a specific function can be done with very high accuracy. Even a prompt like, "Generate test code based on the spec for this feature," yields very practical results today. That said, this is still firmly in the assistance category. For self-contained tasks like code completion, adoption is smooth and the benefits are immediate.
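
As a concrete illustration of that "assistance" level, a prompt like the one above typically yields table-driven tests of roughly this shape. The function and cases below are invented for the example; a reviewer still vets edge cases and intent before merging.

```python
import pytest

def apply_coupon(price: int, percent_off: int) -> int:
    """Toy function under test: apply a percentage discount."""
    if not 0 <= percent_off <= 100:
        raise ValueError("percent_off must be between 0 and 100")
    return price * (100 - percent_off) // 100

# The kind of parametrized tests an assistant drafts from a spec.
@pytest.mark.parametrize(
    "price, percent_off, expected",
    [(1000, 0, 1000), (1000, 10, 900), (999, 50, 499), (0, 30, 0)],
)
def test_apply_coupon(price, percent_off, expected):
    assert apply_coupon(price, percent_off) == expected

def test_apply_coupon_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_coupon(1000, 120)
```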

Hato: But for something like end-to-end testing, which involves multiple connected steps, you need an integrated system that can handle everything from test case creation to automated execution and result assertion. If you zoom out to the entire development lifecycle, from requirements gathering to testing, it’s a multi-stage process. It's not realistic to think one tool can cover all of that. We need to choose the right tool for each phase and integrate them effectively.

We're also very excited about the potential of AI agents. A system that can autonomously break down tasks and make decisions based on context can fundamentally change the development process. For a large, multi-domain service like ABEMA, figuring out how to leverage this is a massive but worthwhile challenge. There are still hurdles in accurately handling vast contexts, but we're optimistic about technological progress toward practical use. Our strategy is to start in areas where we can deliver clear value—like single, well-defined tasks—and expand from there.

Growth Strategies for Engineers and the Reality of Crossing Borders in the AI-Native Era

── As generative AI becomes standard, how is it impacting skill development and new-hire onboarding?

Suga: I’ve particularly noticed the effects when training junior engineers and student interns. My team has interns working on the recommendation feature, and by using generative AI, they're able to get up to speed on practical tasks much faster. For example, they can consult the AI while coding or get help with APIs and design patterns. I feel that they're acquiring development know-how that would normally take years to learn, but in a much shorter timeframe. This ability for young engineers to grow so quickly is a massive shift.

── Given the complexity of a service like ABEMA, the amount of domain knowledge a new developer has to absorb must be huge. How do you handle that?

Suga: At ABEMA, we have clear domain-based teams. One for the home screen UI, another for core functions like billing and authentication, an infrastructure team, and so on. These areas are fairly independent, so even I don't have a perfect grasp of everything. When I first joined, it took me about 6 months to a year to really understand the core domains, such as the basic viewing functionality and its data flows. Each area has its own specs and design philosophies, so you really have to learn them one by one.

Looking forward, as technologies like AI with long-term memory or multi-agent systems evolve, they could provide personalized support that adapts to an individual's context. That would be a game-changer for reducing cognitive load and streamlining onboarding, potentially allowing engineers to be highly productive right from the beginning.

── We're seeing more people learning and applying skills outside their core expertise. Suga-san, I heard you're a backend engineer by trade but are now heavily involved in machine learning for recommendations. Do you feel it's gotten easier to cross job roles?

Suga: I originally joined as a backend engineer, but my team also has machine learning specialists. We're increasingly building AI-powered systems together, blending our different expertise. Developing the recommendation feature requires machine learning knowledge, but generative AI has made it dramatically easier to understand and get up to speed in a new domain.

As generative AI has evolved, cloud platforms have rolled out powerful solutions, such as vector search. This makes it much easier to build features for similar content discovery. It feels like we now have an environment where you can get hands-on and experiment even in highly specialized areas.

── Tools like Cursor and Devin are evolving at an incredible pace. What skills and qualities do you think engineers need to truly master them?

Hato: Generative AI is great at getting you a "pretty good" result, but to achieve the quality required for a product, a human in the loop is essential. You need human judgment at every step. At this point, ensuring final output quality requires a fair amount of human verification and adjustment. Even a simple prompt like "Build this new feature for ABEMA" requires strong skills in articulating requirements and in software design. Accurately conveying the design philosophy—including security, data, non-functional requirements, and context—in natural language alone is still a high bar.

But the models and tools are evolving faster than we can imagine. As the quality and quantity of context they can handle improve, the scope of what they can do will expand dramatically. In that world, an engineer's value won't come from coding skills alone. The ideal strength will be the ability to leverage AI effectively to contribute to the team and the product.

And when AI-native development becomes the default, our entire way of working will change. Expertise will still be needed to validate an AI's output, but the most critical skills will become more abstract: the ability to correctly translate requests into requirements, strategic thinking, and the architectural mindset for designing entire systems.

I believe engineers will need to be those who can leverage their core expertise while learning across job roles, constantly asking "What should we build?" and driving their teams and products forward. It's incredibly encouraging to see engineers like Suga, who started in backend, work backward from "What does ABEMA need?" and cross over into machine learning.

Usually, stepping into a different technical field requires significant effort. Generative AI lowers that barrier—both in terms of learning cost and psychological hurdles. That's why I encourage everyone to actively use it to accelerate their own growth and drive team success.
