
Commit 70012ee

docs(nx-dev): new blog post about integrating LLMs with Nx generate UI
1 parent e1c0be3 commit 70012ee

8 files changed: +135 −3 lines changed

docs/blog/2025-04-15-nx-mcp-vscode-copilot.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@ tags: ['nx', 'nx-console', 'ai']
cover_image: /blog/images/articles/bg-copilot-nx-mcp.avif
description: 'Learn how to enhance VS Code Copilot with Nx MCP integration, providing rich monorepo context for smarter AI assistance.'
youtubeUrl: https://youtu.be/dRQq_B1HSLA
+ pinned: true
---

{% callout type="deepdive" title="Series: Making your LLM smarter" expanded=true %}

docs/blog/2025-05-07-migrate-ui.md

Lines changed: 0 additions & 1 deletion
@@ -6,7 +6,6 @@ tags: ['nx', 'nx-console']
cover_image: /blog/images/2025-05-07/migrate-ui-header.avif
description: 'Introducing the new Migrate UI in Nx Console, a visual interface that simplifies the migration process.'
youtubeUrl: 'https://youtu.be/5xe9ziAV3zg'
- pinned: true
---

{% callout type="deepdive" title="Nx 21 Launch Week" expanded=true %}

docs/blog/2025-05-08-improved-module-federation.md

Lines changed: 0 additions & 1 deletion
@@ -5,7 +5,6 @@ authors: ['Colum Ferry']
tags: ['nx', 'module-federation']
cover_image: /blog/images/2025-05-08/module-federation.avif
description: 'Nx 21 introduces native support for Module Federation with Inferred Tasks and Continuous Tasks, enabling streamlined Rspack configs and seamless multi-app serving for improved developer experience.'
- pinned: true
---

{% callout type="deepdive" title="Nx 21 Launch Week" expanded=true %}

docs/blog/2025-05-09-continuous-tasks.md

Lines changed: 0 additions & 1 deletion
@@ -6,7 +6,6 @@ tags: ['nx']
cover_image: /blog/images/2025-05-09/continuous-tasks.avif
description: 'Learn how to use continuous tasks in Nx 21 to improve your developer experience.'
youtubeUrl: https://youtu.be/AD51BKJtDBk
- pinned: true
---

{% callout type="deepdive" title="Nx 21 Launch Week" expanded=true %}
Lines changed: 134 additions & 0 deletions
@@ -0,0 +1,134 @@
---
title: 'Combining Predictability and Intelligence With Nx Generators and AI'
slug: nx-generators-ai-integration
authors: ['Juri Strumpflohner']
tags: ['nx', 'nx-console', 'ai']
cover_image: /blog/images/articles/bg-llm-nx-generators.avif
description: 'Learn how you can combine the predictability of Nx code generators with the intelligence of LLMs, which are able to integrate them into your workspace-specific context.'
youtubeUrl: https://youtu.be/PXNjedYhZDs
---

{% callout type="deepdive" title="Series: Making your LLM smarter" expanded=true %}

- [Nx Just Made Your LLM Way Smarter](/blog/nx-just-made-your-llm-smarter)
- [Making Cursor Smarter with an MCP Server For Nx Monorepos](/blog/nx-made-cursor-smarter)
- [Nx MCP Now Available for VS Code Copilot](/blog/nx-mcp-vscode-copilot)
- [Nx and AI: Why They Work so Well Together](/blog/nx-and-ai-why-they-work-together)
- [Save Time: Connecting Your Editor, CI and LLMs](/blog/nx-editor-ci-llm-integration)
- **Enhancing Nx Generators with AI: Predictability Meets Intelligence**

{% /callout %}

In a world where AI coding assistants are increasingly capable of generating entire applications from scratch, you might wonder: **what's the point of code generators anymore?** At Nx, we've been thinking about this exact question, exploring how combining predictable code generators with intelligent LLMs can create an improved developer experience.

Let's dive into how this works.

{% toc /%}

## The Value of Predictability in Code Generation

Nx plugins often come with code generators that create new projects, libraries, or components following best practices. For example, the `@nx/react` plugin includes generators for scaffolding React applications and libraries with the correct structure and configuration, taking away a lot of the low-level configuration and setup burden.

```shell
nx g @nx/react:lib packages/some-lib
```

These generators guide you through a series of prompts and then scaffold out a project with a clean, predictable structure. Unlike **AI-generated code that might vary with each prompt, generators produce consistent results every time**.

This predictability is particularly valuable in enterprise settings where:

- Teams need to maintain consistent coding standards
- New libraries should follow established architectural patterns
- Customized setups need to be reproducible across projects

We see a lot of Nx users either tailor existing generators to their needs or [create entirely custom ones](/extending-nx/recipes/local-generators), ensuring that new code follows team standards perfectly.
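
For illustration, here is a minimal sketch of what such a local generator could look like: a thin wrapper around the official `@nx/react` library generator that bakes in team-wide defaults. The chosen option values are assumptions for this example, and exact option names can vary between Nx versions.

```typescript
import { Tree, formatFiles } from '@nx/devkit';
import { libraryGenerator } from '@nx/react';

// Options for this hypothetical wrapper generator
interface ReactLibGeneratorSchema {
  directory: string;
}

export default async function reactLibGenerator(
  tree: Tree,
  options: ReactLibGeneratorSchema
) {
  // Delegate to the official @nx/react library generator,
  // pinning the defaults the team has agreed on
  await libraryGenerator(tree, {
    directory: options.directory,
    linter: 'eslint',
    style: 'css',
    unitTestRunner: 'vitest',
    bundler: 'none',
  });

  await formatFiles(tree);
}
```

Registered as a local plugin, it can then be invoked like any other generator, for example `nx g @myorg/tools:react-lib packages/some-lib` (the plugin and generator names here are made up).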

## Where AI Shines: Context and "Intelligence"

While generators excel at predictability, they lack awareness of your workspace context. Imagine generating a new React library, but also importing that library's main component into an existing project. You could encode that behavior in the generator itself, but

- it is costly to implement, since you'd need to account for all sorts of special edge cases
- it adds additional configuration burden on the user's side, who needs to provide the application's name and location in the workspace structure
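
To make that cost concrete, the following is a rough, hypothetical sketch of what such wiring logic could look like inside a generator using `@nx/devkit` helpers. The file paths and the `workspace:*` version specifier are assumptions, and a real implementation would have to handle many more cases.

```typescript
import { Tree, readProjectConfiguration, updateJson } from '@nx/devkit';

// Hypothetical helper: wire a freshly generated library into an existing app
export function connectLibraryToApp(
  tree: Tree,
  appName: string,
  importPath: string
) {
  // Look up where the target application lives in the workspace
  const app = readProjectConfiguration(tree, appName);

  // Register the library as a dependency of the app
  updateJson(tree, `${app.root}/package.json`, (json) => {
    json.dependencies = { ...json.dependencies, [importPath]: 'workspace:*' };
    return json;
  });

  // Naively inject an import into the app's entry component. This is already
  // brittle: entry file locations, formatting and frameworks vary per workspace.
  const entryPath = `${app.root}/src/app/app.tsx`;
  const contents = tree.read(entryPath, 'utf-8');
  if (contents) {
    tree.write(entryPath, `import '${importPath}';\n${contents}`);
  }
}
```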

This is where AI assistants prove invaluable. An LLM assistant is perfectly able to take the generated output, interpret it, and use the Nx MCP to

1. Understand your workspace structure and project relationships
2. Identify the application the library should be connected to
3. Adjust the source code and make the necessary changes

## How This Works: LLM Handing Control Over to a Human

> When vibe coding you just iterate fast and backtrack, but in a real-world enterprise environment you might want more control over the flow, inspecting intermediate values and being in a constant conversation with the LLM.

If we ignore "vibe coding" for a moment (where you want to iterate fast and backtrack), in a real-world enterprise setting you want more control: staying in a conversation with the AI assistant and being able to adjust values or course correct.

Our latest enhancement creates a seamless workflow between LLMs and generators:

1. You describe what you want to create to your AI assistant
2. The assistant uses the Nx MCP server to identify available generators
3. It selects the appropriate generator and configuration
4. Instead of running it directly, it opens the Nx Console Generate UI
5. You can review and adjust the options before generating
6. After generation, the assistant helps integrate the new code with your existing projects

Let's dive into how this works based on an example. Instead of manually invoking the generator yourself, you let the coding assistant drive the interaction by asking something like:

```plaintext
Create a new React library in the packages/orders/feat-cancel-orders folder
and give the library the same name as the folder. Afterwards, also connect
it to the main shop application and make sure you link the library properly
in the package.json of the main shop application.
```

> Note that there are two different parts here: the first part can be satisfied perfectly by an Nx generator, while the second part (`Afterwards, also connect it to the main shop application...`) is where the intelligence of the LLM comes in, connecting the resulting code to your codebase.

Your coding assistant (in this case VS Code Copilot) invokes the [Nx MCP](/features/enhance-AI) to better understand the underlying workspace structure and then invokes the MCP's tools for code generation:

- `nx_generators` - Returns a list of available generators in the workspace
- `nx_generator_schema` - Provides detailed schema information for a specific generator, such as the available options that can be passed to it

This allows the LLM to map the user query onto the options of an available Nx generator. But **instead of invoking the generator directly, our new flow hands control over to the developer** to inspect the values and potentially make adjustments.

![LLM invoking the Nx generate UI](/blog/images/articles/llm-nx-generate-ui.avif)

Meanwhile, the LLM waits. Once you as a developer confirm and run the generator via the Nx Console UI, the LLM gets notified and continues its execution, **making contextual decisions based on your workspace structure**. In our example, it automatically connects the new library to the existing data access and UI libraries (aligning it with other libraries that are already in the workspace) and wires it up to the main application.
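
What that last step looks like depends on your workspace, but conceptually the assistant ends up making small, targeted edits along these lines (a hypothetical sketch; the `@myorg` import path, the component name and the shop app's file layout are assumptions):

```tsx
// apps/shop/src/app/app.tsx (hypothetical entry component of the shop app)
import { FeatCancelOrders } from '@myorg/orders/feat-cancel-orders';

export function App() {
  return (
    <>
      {/* ...existing shop routes and components... */}
      <FeatCancelOrders />
    </>
  );
}
```

Alongside an edit like this, the assistant would also add the new library to the shop application's `package.json` dependencies, as requested in the prompt.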

This workflow combines the predictability of generators with the intelligence of AI, while keeping you in control of the process.

## Why This Approach Works So Well

This integration addresses several key challenges:

1. **Predictability and Intelligence**: You get the consistency and predictability of generators with the customization abilities of AI.

2. **Speed vs. Control**: The process is fast but keeps you in control through the Nx Generate UI.

3. **Context Awareness and Deep Integration**: The AI understands your workspace architecture and can make appropriate adjustments to the generated code to deeply integrate it into your workspace.

As [Victor noted in his recent post](/blog/nx-and-ai-why-they-work-together):

> "LLMs excel at impressive demonstrations but struggle with consistency and correctness... Being able to access a large library of annotated generators helps LLMs reduce variability of what they generate, which improves consistency and quality. They use a generator and make some small modifications on top instead of trying to author everything from scratch."

## Getting Started and Looking Forward

To use this feature, you'll need:

1. [Nx Console](/getting-started/editor-setup) installed in VS Code or Cursor (we're working on IntelliJ)
2. The [Nx MCP server configured](/features/enhance-AI) for your editor

Once set up, you can start leveraging this powerful combination of predictable generators and intelligent AI assistance.

This integration is just one example of our broader vision for AI-enhanced development with Nx, providing useful and deep integration of LLMs into your development workflow. Currently we're looking into integrating our [new Nx terminal UI](/blog/nx-21-terminal-ui) with your coding assistant, allowing for some interesting new AI-powered workflows.

In the meantime, subscribe to our [YouTube channel](https://www.youtube.com/@nxdevtools) or [sign up for our newsletter](https://go.nx.dev/nx-newsletter) for future announcements and demonstrations.

---

Learn more:

- 🧠 [Nx AI Docs](/features/enhance-AI)
- 🛠️ [Nx Generators](/features/generate-code)
- 👩‍💻 [Nx GitHub](https://github.com/nrwl/nx)
- 👩‍💻 [Nx Console GitHub](https://github.com/nrwl/nx-console)
- 💬 [Nx Official Discord Server](https://go.nx.dev/community)
- 📹 [Nx YouTube Channel](https://www.youtube.com/@nxdevtools)
Binary file not shown.
Binary file not shown.
