How the gaming industry reacted to the launch of Project Genie 3 — an AI model capable of creating virtual worlds
Fazil Dzhyndzholiia
Last week, Google finally released Project Genie 3 to the general public (for now, only in the US) — a prototype AI model capable of generating three-dimensional spaces from text descriptions. These environments can be explored much like in a conventional video game, with direct character control. We wrote about Project Genie last year: even then, the technology looked intriguing, although based on Google’s early presentations it was difficult to assess its real potential. Now, following the public launch, it has become clearer what the project is capable of — and how it is already beginning to affect the gaming industry.
What the technology is about
In Project Genie, the user interacts with two dialogue windows. The first is used to enter a prompt describing the virtual environment, while the second defines the appearance of the character who will move through it. A camera perspective can also be selected — either first-person or third-person.
Before generating a full 3D world, Project Genie first displays a static image of the future scene (under the hood, Genie relies on Nano Banana Pro and Gemini), allowing the user to refine or adjust the request. After that, generation begins. Movement follows a classic PC control scheme: W, A, S, D and the space bar. A generated space can be explored for no more than one minute, after which a new scene must be created.
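The flow described above (prompt, static preview, optional refinement, then a timed exploration session) can be sketched as a simple state machine. To be clear, everything in this sketch is hypothetical: `GenieSession`, `preview`, `refine`, and `explore` are invented names for illustration, not Google's actual API.

```python
from dataclasses import dataclass

SESSION_LIMIT_S = 60  # per-scene exploration cap reported for Genie 3


@dataclass
class GenieSession:
    """Hypothetical model of the Genie prompt-to-exploration flow."""
    world_prompt: str
    character_prompt: str = ""
    camera: str = "third-person"  # or "first-person"
    elapsed_s: float = 0.0
    state: str = "prompting"      # prompting -> preview -> exploring -> expired

    def preview(self) -> str:
        # A static image is produced first, so the user can refine the prompt
        # before the (more expensive) interactive world is generated.
        self.state = "preview"
        return f"[static preview of: {self.world_prompt}]"

    def refine(self, new_prompt: str) -> None:
        assert self.state == "preview", "can only refine after previewing"
        self.world_prompt = new_prompt

    def explore(self, seconds: float) -> float:
        # Exploration uses classic WASD + space controls; a scene lives at
        # most SESSION_LIMIT_S seconds before a new one must be created.
        self.state = "exploring"
        self.elapsed_s = min(self.elapsed_s + seconds, SESSION_LIMIT_S)
        if self.elapsed_s >= SESSION_LIMIT_S:
            self.state = "expired"
        return SESSION_LIMIT_S - self.elapsed_s


s = GenieSession(world_prompt="a temple in the mountains")
s.preview()
s.refine("a temple in the mountains at sunset")
remaining = s.explore(75)  # over the cap: the scene expires
```

The only hard rule the sketch encodes is the one Google has actually stated: a scene's lifetime is capped, and crossing the cap forces a fresh generation.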
In addition to text descriptions, the system supports references. Images can be uploaded to more precisely convey the desired environment, or, for example, a photo of a landscape or one’s own apartment can be added and recreated in interactive form. Essentially, Project Genie allows photos to be turned into short, simple games. Moreover, you can take a picture of, say, your cat or dog and make them playable characters.
For best results, detailed prompts are, of course, recommended. However, even with more generalized requests, the neural network is capable of adding details on its own in real time. For instance, if you ask it to generate a car trip along a highway without further clarification, the model will independently add oncoming traffic for plausibility. Similarly, when creating an architectural structure — such as a temple in the mountains — and then entering it, Genie will promptly generate an interior.

The official purpose of this toolset is still not entirely clear. In interviews, Google engineers openly say that they themselves want to see which use cases users will discover.
The most obvious one is game development, where Project Genie could speed up the earliest stages of production: conceptualization and vision-building. Brainstorming ideas is simply easier when they can be visualized immediately. Unity Technologies CEO Matthew Bromberg writes about this directly. At the company, Genie is seen not as a threat, but as a potential accelerator for game creation: models like this can quickly sketch out scene and environment prototypes, which can then be transferred and refined in a traditional engine such as Unity using familiar systems — physics, logic, and networking code.
Another possible direction is training and education: for example, teaching people how to behave during natural disasters. You simulate a hurricane and show where one should take shelter.
The 60-second limit per scene remains a serious constraint on Genie 3’s potential. However, the developers have already stated that this strict limit may be extended in the future.
What practical applications do you think Project Genie will have?
“We have The Legend of Zelda at home”
With the official release of the Project Genie prototype, the internet was quickly flooded with videos showcasing early user creations — ranging from photorealistic “bodycam recordings” to virtual tours of Disneyland and simulators of domestic cats wandering around a room.
Against the backdrop of all this content, however, the most striking videos are those featuring Genie-generated worlds that look almost identical to popular games — or at least strongly resemble them. One user, for example, created a “GTA 5 in Greenland” — complete with characters suspiciously similar to Trevor and Michael. In this particular case, the neural network merely “drew inspiration” from Rockstar Games’ hit, but in others it outright copied the visual style of projects such as Dark Souls 3 or Minecraft.
The Verge journalist Jay Peters received early access to Genie and described his experience in detail. Notably, he used the tool in a way Google likely did not anticipate. Instead of abstract scenes, Peters deliberately attempted to recreate primitive clones of well-known Nintendo games — Super Mario 64, Metroid Prime, and The Legend of Zelda: Breath of the Wild.
Genie offered little resistance, generating worlds visually very close to these copyrighted projects. That said, it is worth remembering that we are talking about interactive spaces, not full-fledged games. There is almost nothing to do in them: no goals, tasks, or progression systems. All interaction is limited to movement and jumping. Moreover, these scenes not only exist for just 60 seconds, but are also rendered at 720p and 24 frames per second, with noticeable input latency making controls uncomfortable. The worlds themselves often behave unstably: objects may forget their state or suddenly change.
It is obvious that using the tool for imitation purposes raises legal concerns. Therefore, one should not expect the ability to create worlds featuring characters like Mario to remain in Genie for long — especially given how aggressively companies like Nintendo protect their IP. Some restrictions are already present from day one and will likely be expanded: in his article, Peters notes, for instance, that the neural network refused to generate a world based on the Kingdom Hearts series.
Investor panic and developer calm
Although Project Genie in its current form only produces short demos with minimal interactivity rather than full-fledged games, its release instantly shook gaming industry stocks. It was not developers who panicked, however, but investors, who seemingly do not fully understand the technology.
In recent days, several major gaming companies took a financial hit: Unity Technologies’ stock fell by 12%, CD Projekt RED by roughly 8%, and Roblox Corporation by 10%. Even Take-Two Interactive, one of the largest publishers, lost more than $3.5 billion in market capitalization in a single day — a drop of 8%.
Solutions like Genie 3 are mistakenly perceived by investors not as auxiliary development tools, but as potential full-fledged alternatives to classic game engines. This, in turn, weighs on the valuation of companies that build games using traditional production models. Bloomberg reporter Jason Schreier commented on the situation as follows:
Gaming stocks are plummeting today after Google's rollout of Project Genie, an AI tool that lets users create and explore virtual worlds for 60 seconds. This is the result of a market that does not actually understand how video games are made. Allow me to suggest that the Street read Blood, Sweat, and Pixels...
Game developers and industry experts, on the other hand, are responding to Google’s neural network with restraint and are in no hurry to sound the alarm — at this stage, nothing fundamentally changes for game development. Yes, as mentioned earlier, conceptualizing ideas will become easier, but in the near future no one will start making high-quality games using Genie 3 alone.
Former lead engineer at Unity and Ubisoft, Sebastian Aaltonen, shared his thoughts on the new AI model:
Genie 3 has <3 min of persistence. Then it forgets what happened […] Doesn't support interacting with anything, NPCs or enemies […] the tech is basically a video generator. I don't think it can handle collisions of objects that are not currently visible […] Rendering is 720p 24Hz. With extreme hardware requirements. 50x less pixels than 4K 144Hz.
This is super nice tech for virtual experiences (Unlimited Detail was eventually used for that purpose too), but I don't see a clear path for this kind of tech becoming a game dev tool anytime soon. I would buy the dip.
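Aaltonen’s “50x less pixels” figure is easy to verify with back-of-the-envelope arithmetic: compare the pixel throughput of Genie 3’s reported 720p at 24 Hz against a high-end 4K 144 Hz target (the numbers come from his quote; the comparison itself is just illustrative arithmetic).

```python
# Pixel throughput (pixels rendered per second) for each target.
genie = 1280 * 720 * 24       # 720p at 24 Hz, as reported for Genie 3
gaming = 3840 * 2160 * 144    # 4K at 144 Hz, a high-end gaming target

ratio = gaming / genie
print(f"4K@144Hz pushes {ratio:.0f}x more pixels per second")  # -> 54x
```

The exact ratio is 54, so the quoted “50x” is, if anything, a slight understatement.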
***
Even taking all its limitations and shortcomings into account, it is hard not to acknowledge that Project Genie is impressive in many ways. It truly represents the next major step in the evolution of neural networks. At the same time, Genie clearly does not yet aspire to be a “killer” of traditional game development and is unlikely to compete with it in the near term.
It is possible that in five years the technology will be capable of things that are difficult to imagine today. Almost certainly, though, human talent will continue to play the decisive role in development: it is precisely this factor that makes it possible to create genuinely new games like The Legend of Zelda, rather than simply reproduce or imitate existing ones.
And what do you think about Genie? Share your thoughts in the comments.
How do you feel about the growing integration of AI into the creative process of game developers?
Related coverage:
Chinese Developers Release Open-Source Genie 3 Alternative for Building Game Worlds
GTA 6 Powered by AI? Investors Get Nervous Over Google Genie 3 as Take-Two and Unity Stocks Begin to Slide
Unity Stock Plummets 35% Following Google's AI Tool 'Project Genie' Announcement
What We Know About Genie 3 — The Neural Network That Could Change the Gaming Industry Forever
Google Begins Testing Project Genie for AI-Powered World Creation

