# johnnyreilly

The blog of John Reilly ❤️🌻
## Where AI-assisted coding accelerates development — and where it doesn’t
November 12, 2025 · 19 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
AI is a fantastic tool. In a short time, it has changed the way the whole industry builds software. Where we used to write every line ourselves, now the code we produce is generally a collaboration between an engineer and AI tooling. The likes of GitHub Copilot, Claude Code, ChatGPT, Replit etc., all combine to make being a software developer in 2025 quite different from what it was in 2022. The rate of change has been giddying.
This post exists to dig into a little of the nuance around how software development has been changed by the innovations of AI: both the positives and the negatives. We’ll also discuss the tooling and approaches that have carried the industry forward, and the brand new pitfalls that are now revealing themselves.
This post won’t be exhaustive. Given how fast AI has changed and keeps changing, many of the reference points in this post may seem out of date from the moment of publication. But hopefully, the underlying principles should be evergreen.
This will be a personal story, informed by the experiences both from the world of open source and from colleagues and friends in the industry.
## About me
A little information about myself: I work as a software engineer for Investec, a multinational financial services group. The story of engineering at Investec is relevant to this piece. Investec has been an early and enthusiastic adopter of AI generally, and AI software tooling particularly. I’ll seek to draw on this experience.
I also work on open source software outside of work and have done for many years, primarily in the world of TypeScript.
## Setting the table: GitHub Copilot and the emergence of agent mode
One of the most well-known AI coding tools is GitHub Copilot. In many ways, it was the original AI coding tool, and competitors like Claude Code and Cursor still can’t match Copilot’s dominance. Let’s consider for a moment what makes these IDE AI tools useful.
GitHub Copilot has ask and edit features which are not radically different from what you’d get with say ChatGPT, but something like agent mode was truly the game-changer. This feature transformed the way I approach complex coding tasks, shifting from line-by-line assistance to higher-level problem solving.
Rather than implementing a feature myself from scratch, I can describe a feature or a bug fix in natural language, and the AI will generate the necessary code changes across multiple files. This holistic approach to coding feels more like collaborating with a junior developer who can take on significant chunks of work. This allows me to focus on reviewing and refining the output rather than getting bogged down in the minutiae of implementation. AI is just fantastic at boilerplate, it turns out.
Another aspect that agent mode shines at is its ability to generate bash scripts and automation tasks rather than just making direct code changes. There’s something to be said for deterministic results. When AI generates a script that you can review, understand, and then execute, you maintain control over the process. This approach is far more reliable than having AI make direct modifications to your codebase that might introduce subtle bugs or architectural inconsistencies.
The predictable nature of scripted solutions means you can verify the approach before execution, understand exactly what will happen, and easily roll back if needed. It’s a more methodical approach that aligns well with engineering best practices around reproducible builds and transparent processes.
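As a sketch of what this scripted approach looks like in practice: rather than letting the agent edit files directly, you might ask it to emit a script you can read first. The script below is a hypothetical example of the kind of thing an agent might produce — a deterministic migration you can review, run, and roll back.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical AI-generated migration script: review it before you run it.
# Renames every .jsx file in the given directory to .tsx, deterministically.
rename_jsx_to_tsx() {
  local dir="$1"
  for file in "$dir"/*.jsx; do
    [ -e "$file" ] || continue          # nothing to do if no matches
    mv -- "$file" "${file%.jsx}.tsx"    # strip .jsx, append .tsx
  done
}
```

Because the whole operation is visible up front, you can verify the glob, the rename rule, and the error handling before anything touches your working tree.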
With that context in mind, let’s look at how these AI tools have reshaped application development — sometimes brilliantly, sometimes problematically.
## AI generated applications
One of the most interesting aspects of AI-assisted coding is the notion of prompting your way to an application using a tool like Replit.
Historically, you may have run multiple workshops, the output of which may have been a vague design and some wireframes. But with something like Replit, you can leave a workshop with a functioning application. Crucially, you also have access to the source code of the AI-generated application, so it’s possible to take that, leave Replit, and deploy it to your environment of choice.
Let’s walk through some of the aspects of this.
### Prototype at speed
The time saved in the early stages of application development is considerable. The number of applications that would struggle to cross the chasm from idea to code is now vastly reduced. If you have an idea, it’s now possible to realise some form of it in hours. That’s incredibly useful.
A number of times in my career, I’ve been confronted by the idea of an application that just doesn’t work. A number of the underlying ideas are in conflict, in non-obvious ways. Discovering that information when you’ve already engaged your team and started developing is a waste of time, money, and good humor.
Using a tool like Replit to prototype drastically reduces the possibility that this might happen. By prompting AI to actually build a prototype of the app for you, it’s possible to surface poorly thought-out aspects of design before the actual build. This is really helpful and increases the chances of success.
### Build bespoke apps
Tools like Replit offer the ability to build bespoke applications. Recently, at Investec, the organisation was looking at a tool to manage vendors for the organisation. There are a number of products in the marketplace that perform this function.
But as the company examined these tools, it was clear that each tool was opinionated. If we wanted to use one of the tools, it wouldn’t quite fit our existing organisational processes and structures. We could use one of them, but we’d either be working around the differences or adjusting the way we worked to use the tool effectively.
The idea occurred: why not build our own? Historically, that would have taken a long time to achieve, and may not have been cost-effective given the number of users. But maybe we could prompt our way to an application that aligned with our processes?
Three people got in a room for an afternoon and “maybe” became “actually.” They left the room with an AI-generated application that was more aligned with Investec’s preferences and processes.
The question then was, can we take our prototype application and “productionise” it? To achieve that, we wanted:
- The application to live in our source control, where we run tooling like GitHub Advanced Security to ensure code quality
- The application tech stack adjusted towards one that better aligned with Investec’s standard choices
- The application to have a deployment pipeline and use CI/CD to deploy new versions

We found that with a couple of engineers and five days’ work, we were able to achieve that. That’s going from idea to a fully working application in just over a week. It’s kind of mind-blowing when you think about it.
There’s maybe more detail here than you need, but what hopefully shines through is how it’s possible to build applications for dedicated purposes, in ways that wouldn’t have been practical previously. Brilliant stuff.
### Limitations of AI-generated applications
While AI-generated applications offer tremendous advantages, they’re not without their challenges. The very speed and ease that make these tools so appealing can also mask significant issues that only become apparent later in the development process. Let’s explore some of the key limitations and pitfalls.
### When app prompting fails
One of the most frustrating experiences with AI app generation occurs when you’re working with a tool like Replit and it starts making changes to one part of your stack while rendering another part meaningless. I’ve encountered situations where Replit created an app with a Python backend and a TypeScript front end.
At some point in the prompting journey, though, it stopped updating the TypeScript frontend and converted the Python into a full-stack web application, effectively leaving half the application non-functional.
The people prompting the app were not aware this had happened, and it wasn’t until we tried to migrate the application from Replit that we realised what had occurred. We’ve talked about AI saving time, but in this case, it cost us a good amount of effort to unravel what had happened.
This highlights a broader issue with AI-generated applications: they often struggle with the complexity of modern full-stack development, where changes in one layer can have cascading effects throughout the system.
### I'm depending on you: The issues of dependency management
Tools like Dependabot will often flag AI-generated applications for using outdated libraries with known vulnerabilities. This isn’t necessarily the AI’s fault; it’s working with training data that includes examples using older versions of libraries. But it does mean that the “finished” application isn’t actually ready for production without significant security updates.
Code quality is another concern. While AI can generate working code quickly, it doesn’t always generate good code. The applications work, but they may not follow best practices, lack proper error handling, or may have performance issues that only become apparent under load. AI application generation works exceptionally well when you “own the domain”, i.e. when the application functionality is self-contained and doesn’t rely heavily on external integrations.
AI tools can, however, create a false sense of completion. This is especially true if parts of your app functionality depend on external systems, APIs that don’t exist in standardised forms, or data that isn’t readily available. The app appears finished, but the hard work of integration is still ahead of you.
Perhaps most concerning is when AI tools use libraries that simply don’t have vulnerability-free versions available. This puts you in the immediate position of choosing between security and functionality: a choice that shouldn’t exist.
At Investec, we are very biased in the direction of security. So when this presents, we will take some time to identify and securely resolve this. This often involves more work, which is fine. The point to note here is that AI will not necessarily land you with production-grade code.
### Design system challenges
For organisations with established design systems (and Investec certainly has one) AI tools present an interesting challenge. AI can generate visually appealing interfaces that work well functionally, but they often don’t align with your company’s design language and brand guidelines.
You might end up with a beautiful application that looks nothing like the rest of your company’s digital properties. This disconnect between AI capabilities and organisational standards creates additional work to bring AI-generated UIs into compliance with house styles.
### The AI tech stack / library choices
One of the unexpected consequences of AI-generated applications is how they expose the assumptions and biases embedded in AI training data and the system prompts. When you prompt for an application, the AI doesn’t just write code: it makes opinionated choices about which libraries, frameworks, and architectural patterns to use.
These choices often reflect what was popular in the AI’s training data, rather than what’s current or best suited to your specific needs. You might find your AI-generated application using React class components when functional components with Hooks would be more appropriate, or choosing older state management libraries when simpler solutions exist.
The challenge becomes more pronounced when you consider that AI tools tend to default to “safe” choices: libraries and patterns that were widely used and well-documented in their training period. While this reduces the likelihood of completely broken code, it can result in applications that feel outdated from the moment they’re generated.
This presents an interesting dilemma: do you accept the AI’s technology choices for the sake of speed, or do you invest time in modernising the stack to align with your preferences and current best practices? The answer often depends on whether you’re building a quick prototype or something intended for longer-term use.
### The knowledge gap challenge
One of the most significant benefits of AI coding tools is how they can help engineers work with languages and frameworks they’re not experts in. The AI becomes a bridge, allowing a Python developer to confidently work with TypeScript or a frontend engineer to write backend services. This democratisation of technical knowledge is genuinely powerful.
However, this strength also reveals a critical weakness. When AI generates code in domains where the engineer lacks expertise, it becomes much harder to review the output critically. The AI might know APIs that you don’t, and while this can accelerate development, it can also lead to situations where engineers ship code they don’t fully understand.
I’ve seen examples where smart engineers have relied on AI to write infrastructure code — Bicep templates, for instance — but then struggled when debugging issues arose. The problem isn’t that the AI wrote bad code; often, the code works perfectly. The issue is that when you don’t understand what’s been written, you can’t effectively maintain, debug, or extend it.
This creates a fundamental principle: if you don’t understand the code, you shouldn’t ship it. It’s important to hold fast to this principle for the long-term benefits of maintainable code. It’s easy to ignore the principle when AI makes it so easy to generate working solutions. But it pays off to do this in the long term.
### The testing paradox
Interestingly, one area where AI consistently excels is writing and fixing tests. AI-generated tests often surface actual issues in the codebase that human engineers might have missed. This creates an almost heretical situation for TDD advocates; the AI is finding bugs through tests that were written after the fact, not before.
While this approach might make purists uncomfortable, the practical benefit is undeniable. AI-generated tests serve as a safety net, catching edge cases and potential issues that improve overall code quality.
The flip side of this is that it can make some very silly decisions when writing tests. It may cover edge cases that are irrelevant, or miss important scenarios that a human would consider obvious. It often stubs out the implementation that you actually want to test.
Again, we’re highlighting the importance of human review and understanding when working with AI-generated code. We augment with AI; we don’t replace with AI.
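The stubbing failure mode is easier to see with a concrete sketch. The `formatPrice` function below is invented for illustration — but the pattern, where a generated test quietly replaces the very thing it claims to test, is one I have seen AI produce.

```typescript
// A made-up function under test.
function formatPrice(pence: number): string {
  return `£${(pence / 100).toFixed(2)}`;
}

// Anti-pattern sometimes produced by AI: the "test" stubs out the very
// function it claims to exercise, so it can never fail.
const stubbedFormatPrice = (_pence: number) => "£1.00";
console.assert(stubbedFormatPrice(9999) === "£1.00"); // passes, proves nothing

// A meaningful test exercises the real implementation, including edge cases.
console.assert(formatPrice(100) === "£1.00");
console.assert(formatPrice(0) === "£0.00");
console.assert(formatPrice(99) === "£0.99");
```

The stubbed version is green forever, which is exactly why it needs a human eye to spot.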
## The curse of AI generated pull requests (PRs)
As with many things in my life, it was open source software that got me thinking about this topic. Some OSS pals, Josh and Brad, posted about this on Bluesky recently.

For a while now, the land of OSS has been awash with AI-generated PRs — pull requests whose code was written by AI. This problem hasn't particularly impacted my projects, but I've certainly been aware of the toll it takes on other projects in terms of maintainers' time spent on reviews.

The most benign reading of the sort of PR the post describes is that the contributor thinks they know what they want, but believes AI will do a better job of writing the code than they will.

This isn't just an OSS concern; it's a general software development concern. AI isn't the problem here; humans are. We talk often about working effectively with AI by having a "human in the loop". The issue we're seeing here is the use of AI without sufficient humans.
I've long had a personal rule when coding: *If you submit a PR you must be the first reviewer.*
This predates Copilot by some years. The idea essentially came to me when someone reviewed one of my PRs and raised some perfectly reasonable questions. Essentially as I'd been working on something, I'd changed approaches a few times, and what ended up in the PR wasn't entirely coherent.
So now, before I share a PR, I try to review it and see if it all makes sense.
This rule of thumb has served me well, and the use of AI coding tools only heightens the need for something similar.
### Architectural concerns
While agent mode is undoubtedly a game-changer, the code it creates can sometimes be questionable from an architectural standpoint. I’ve seen AI choose unusual solutions that work but are suboptimal.
Here’s an example: I’d prompted Copilot to build a particular feature. The nature of the feature is not interesting, but the approach it used was. Rather than having an array of objects that represented the data it should use, it instead created a string array.
Each string in the array contained text that represented the data needed in a human-readable format. It then also created various regex-powered parsing mechanisms to extract the data it needed from the strings later on. It was a very roundabout way of achieving what I wanted. It was also very inefficient and buggy. The approach worked, but it was neither performant nor maintainable.
This highlights the importance of understanding not just what AI-generated code does, but how it does it. The “how” often reveals whether the solution will scale, perform well, and be maintainable over time. Other, less serious concerns I’ve seen include generating poorly factored code, using inefficient algorithms, or creating convoluted logic that is hard to follow.
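The shape of that anecdote, reconstructed with invented data, looks something like this — the stringly-typed approach the AI chose, next to the typed objects it should have used:

```typescript
// The AI-style approach: encode structured data as human-readable strings...
const usersAsStrings = ["Alice, age 30", "Bob, age 25"];

// ...then regex your way back to the data every time you need it.
function ageOf(entry: string): number {
  const match = entry.match(/age (\d+)/);
  if (!match) throw new Error(`Could not parse: ${entry}`);
  return Number(match[1]);
}

// The straightforward approach: model the data as typed objects from the start.
interface User {
  name: string;
  age: number;
}
const users: User[] = [
  { name: "Alice", age: 30 },
  { name: "Bob", age: 25 },
];

// Both "work", but only one is typed, efficient, and maintainable:
console.assert(ageOf(usersAsStrings[0]) === users[0].age);
```

The string version compiles and passes a happy-path test, which is precisely why it can slip through a casual review.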
## The broader implications of AI-assisted coding
Beyond anecdotal successes and challenges, the AI-assisted coding trend brings big-picture implications to the world of software development.
### AI verbosity and the default trap
One consistent characteristic of AI-generated code is its verbosity. AI tends to be wordy, both in code comments and in implementation approaches. While thorough documentation isn’t inherently bad, brevity often leads to more maintainable code. In my experience, developers tend to accept AI’s verbose defaults without question, but this can lead to codebases that are unnecessarily complex and harder to maintain.
The same principle applies to pull request descriptions and documentation; AI tends toward comprehensive but overly detailed explanations when concise clarity would be more valuable. Oh, and emojis, always with the emojis! It can be steered to be more concise, but that requires deliberate prompting and review.
### The acceleration metaphor
In many ways, AI functions like a travelator (aka a moving walkway) in an airport; it accelerates your movement in the direction you’re already going. If you’re heading in the right direction with solid engineering fundamentals, AI can dramatically speed up your progress. But if your approach or architecture is flawed, AI will help you get to the wrong destination much faster.
This acceleration effect means that the foundational skills of software engineering — understanding requirements, designing systems, and making architectural decisions — become even more critical in an AI-assisted world.
### Are we still learning?
I started out long before AI was a thing. I learned to code by reading books, writing code, making mistakes, and learning from those mistakes. I learned to debug by debugging. I learned to architect systems by studying good architecture and bad architecture and learning from both. With AI tooling, it’s possible to go a long way without really learning these foundational skills.
It’s too early to know what the implications of this will be. Will we see a generation of engineers who can deliver features but don’t understand the underlying principles? Will debugging skills atrophy because AI can often generate working code?
These are open questions, but they highlight the importance of maintaining a strong foundation in software engineering principles even as we embrace AI tools.
### Trust, but verify
The Zero Trust security principle operates on the concept of “never trust, always verify.” This means that no user, device, or application is inherently trusted, regardless of whether they are inside or outside the network perimeter. Although AI is a very different kettle of fish, this principle remains useful when considering AI inputs into your ecosystem.
GitHub Copilot wrote you some code that seems to do the job? Great! Look hard at what you received and be sure that you’re happy with what it’s doing and how it’s doing it. Source control becomes your safety belt when coding with AI; it provides the rollback mechanism when AI takes you down an unexpected path.
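As a sketch of the safety-belt loop (in a throwaway scratch repo; the file name is invented): commit before the AI touches anything, review the diff it produces, and use `git restore` as the rollback mechanism.

```shell
set -euo pipefail

# A scratch repo to demonstrate the loop; everything here is throwaway.
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo

echo "original" > app.ts
git add app.ts
git commit -qm "feat: initial version"   # safety belt fastened

# Suppose the AI then rewrote app.ts in a way we don't like...
echo "ai rewrite" > app.ts
git diff --stat                          # review exactly what changed

# ...rolling back is one command:
git restore app.ts
```

Small, frequent commits make this cheap: the smaller the diff the AI produced since your last commit, the easier it is to review — or to throw away.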
### The flow state dilemma
One unexpected consequence of AI-assisted coding is its impact on flow state. The traditional deep, uninterrupted focus that characterises productive programming sessions becomes harder to achieve when you’re constantly context-switching between writing code and reviewing AI suggestions.
Personally speaking, I derive great joy from getting deep into flow state as I build an application or a feature. It’s a wonderfully meditative state and genuinely improves my mental health.
The collaborative nature of AI-assisted development, while powerful, can disrupt this meditative quality of coding that many engineers cherish. It’s a trade-off between speed and the satisfying rhythm of sustained, focused work.
Every now and then, I’ll have a “feel the force, Luke” moment, turn off my Copilot, and intentionally enter into flow, unaccompanied by my AI buddy. I bet I’m not alone.
## Conclusion
AI has undoubtedly transformed software development, offering unprecedented speed in prototyping, unprecedented access to knowledge across domains, and unprecedented assistance in solving complex problems. It’s a great unblocker; you don’t always need to find the expert (or “Marcel,” as we call him at Investec) when AI can provide guidance.
However, the key to successful AI-assisted development lies in maintaining the human element. AI excels at generating code, but humans remain essential for understanding requirements, making architectural decisions, reviewing outputs critically, and ensuring that solutions align with business needs and engineering standards.
The future of software development isn’t about replacing engineers with AI; it’s about engineers learning to work effectively with AI as a powerful tool in their toolkit. The most successful teams will be those that embrace AI’s capabilities while maintaining rigorous standards for code quality, architectural soundness, and security practices.
As we continue to navigate this rapidly evolving landscape, the principles of good engineering remain constant: understand what you’re building, review what you’re shipping, and never lose sight of the bigger picture. AI can accelerate the journey, but human judgment must still set the destination.
This post was originally published on LogRocket.
**Tags:**
- AI
## Keeping front end and back end in sync with NSwag generated clients
October 12, 2025 · 6 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
For many years I've been a big fan of using NSwag to generate TypeScript and CSharp clients for APIs. I've written about it before in Generate TypeScript and CSharp clients with NSwag.
You're likely aware of the popularity of excellent projects like tRPC which provide a way to use TypeScript end-to-end. However, if you're working in a polyglot environment where your back end is written in C# or \[insert other language here\], and your front end is written in TypeScript, then you cannot take advantage of that. But by generating front end clients from a server's OpenAPI specs, it's possible to have integration tests that check your front end and your back end are aligned.
This post will show you how to do that using NSwag.
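As a rough sketch of the setup (the spec URL and output path here are placeholders, and this assumes the `nswag` npm package is installed as a dev dependency): wiring client generation into `package.json` keeps regeneration one command away, so the generated client can be refreshed whenever the server's OpenAPI spec changes.

```json
{
  "scripts": {
    "generate-client": "nswag openapi2tsclient /input:https://localhost:5001/swagger/v1/swagger.json /output:src/api-client.ts"
  }
}
```

Running `npm run generate-client` in CI, and failing the build if the output differs from what's committed, is one way to catch front end / back end drift.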
**Tags:**
- Swagger
- C#
- Azure
- TypeScript
**Read More**
## Azure DevOps: merging pull requests with conventional commits
August 29, 2025 · 9 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
There was a time in my life when I didn't really care about commit messages. I would just write whatever I felt like, and it was fine. Over time, I learned that good commit messages are important for understanding the history of a project, especially when working in a team. And also, because I tend to forget what I've been working on surprisingly quickly.
There are also more technical reasons to care about commit messages. For example, if you're using a tool like semantic-release to automate your release process, it relies on conventional commit messages to determine the next version number and generate release notes. It turns out that Azure DevOps has some challenges when it comes to maintaining a git commit history of conventional commits, especially when merging pull requests. By default, Azure DevOps uses a commit strategy that creates a merge commit with a message like "Merge PR 123: Title of pull request". This works against conventional commits.
You can use the UI to change the commit message when completing a pull request, but it's very easy to forget to do this. And if you're using squash merges, you lose the individual commit messages from the feature branch, which can be a problem if you're trying to maintain a history of conventional commits.
There is a way to bend Azure DevOps to our will; to allow us to control our commit messages. In this post, I'll show you how to do just that using the Azure DevOps API, some TypeScript and build validations. The fact this mechanism lives in a build validation means you cannot forget to set the commit message. That's the feature.
This post is not, in fact, specifically about using conventional commits. That's just a common use case. Rather this post is about being able to control the commit message when merging pull requests in Azure DevOps.
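A minimal sketch of the idea: a pure function that composes the merge commit message from the pull request's (conventional-commit) title, with the actual API update indicated in a comment. The `PullRequestInfo` shape is simplified for illustration; the real types come from `azure-devops-node-api`.

```typescript
// Simplified PR shape; the real one comes from azure-devops-node-api.
interface PullRequestInfo {
  pullRequestId: number;
  title: string; // e.g. "feat: add vendor screen"
  description?: string;
}

// Keep the conventional-commit title, append the PR number for traceability.
function buildMergeCommitMessage(pr: PullRequestInfo): string {
  const title = `${pr.title} (#${pr.pullRequestId})`;
  return pr.description ? `${title}\n\n${pr.description}` : title;
}

// With azure-devops-node-api you could then apply it (a sketch, not verbatim):
//   const gitApi = await connection.getGitApi();
//   await gitApi.updatePullRequest(
//     { completionOptions: { mergeCommitMessage: buildMergeCommitMessage(pr) } },
//     repositoryId, pr.pullRequestId, project);

console.assert(
  buildMergeCommitMessage({ pullRequestId: 123, title: "feat: add vendor screen" }) ===
    "feat: add vendor screen (#123)"
);
```

Because the message-building logic is pure, it can be unit tested independently of the Azure DevOps API call that applies it.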
**Tags:**
- TypeScript
- Azure DevOps
- Node.js
**Read More**
## Azure DevOps: merging pull requests and setting autocomplete with the API
July 25, 2025 · 4 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Have you ever wanted to merge a pull request in Azure DevOps using the Azure DevOps API? Or set a pull request to autocomplete, so it automatically merges when all policies are satisfied? If so, you're in the right place. In this post, I'll show you how to do just that using the Azure DevOps Client for Node.js.
I'm using the Azure DevOps Client for Node.js; but if you want to use the REST API directly, you can do that too. The principles are the same, but you'll need to make HTTP requests instead of using the client library.
To get up and running with the Azure DevOps Client for Node.js, you can see how we work with it in this post on dynamic required reviewers in Azure DevOps post. This will help you set up your environment and authenticate with Azure DevOps.
If you'd like to read about setting commit messages when merging pull requests in Azure DevOps, you can check out my post on merging pull requests with conventional commits in Azure DevOps.
**Tags:**
- TypeScript
- Azure DevOps
- Node.js
**Read More**
## Azure DevOps: using DefaultAzureCredential in an Azure Pipeline with AzureCLI@2
July 18, 2025 · 6 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
I frequently build scripts that work against Azure resources using the Azure SDK for JavaScript. I use the `DefaultAzureCredential` to authenticate against Azure resources - this is also available in other platforms such as .NET.
The `DefaultAzureCredential` is a great way to authenticate locally; I can `az login` and then run my script, safe in the knowledge that the `DefaultAzureCredential` will authenticate successfully. However, how can I use the `DefaultAzureCredential` in an Azure DevOps pipeline?
This post will show you how to use the `DefaultAzureCredential` in an Azure DevOps pipeline, specifically by using the `AzureCLI@2` task.
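A sketch of the pattern (the service connection name is a placeholder): `addSpnToEnvironment: true` exposes the service principal details to the script, which you can map onto the environment variables `DefaultAzureCredential` reads. Note this assumes a client-secret service connection; workload identity federation connections expose different variables.

```yaml
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder name
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    addSpnToEnvironment: true
    inlineScript: |
      export AZURE_CLIENT_ID="$servicePrincipalId"
      export AZURE_TENANT_ID="$tenantId"
      export AZURE_CLIENT_SECRET="$servicePrincipalKey"
      node ./my-script.js
```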
**Tags:**
- TypeScript
- Azure DevOps
- Node.js
**Read More**
## Azure DevOps: pull requests and dynamic required reviewers
June 25, 2025 · 11 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Have you ever wanted to have required reviewers for a pull request in Azure DevOps? Probably. And that's an inbuilt feature of Azure DevOps. By using branch policies, you can set required reviewers for a pull request. If you want to ensure the code is reviewed by the appropriate people before it is merged into the main branch, this can prove very useful.
However, the required reviewers are static. You can set them up in the branch policies, but they don't change dynamically based on the code being altered or the people involved in the pull request. I spent many moons trawling the internet for an answer to this question, and I found that many people were asking the same question. The answer was always the same: "You can't do that."
However, there is a way. It is, hand on heart, marginally clunky. But the clunk is marginal, and more than acceptable. It involves co-opting build validations to achieve the desired effect. In this post, I'll show you how to do that.
**Tags:**
- TypeScript
- Azure DevOps
- Node.js
**Read More**
## TypeScript is going Go: Why it’s the pragmatic choice
May 20, 2025 · 16 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Ashley Claymore
Bloomberg. TC39.
TypeScript is being ported to Go. This is known as "TypeScript 7" (it is currently on 5.8). It's quite likely that you know this by now, as there have been excellent communications from the TypeScript team in a variety of forums. In fact, hats off to the team; it's been an object lesson in how to communicate well: straightforward, clear and open.
There's no shortage of content out there detailing what is known about the port. This piece is not that. Rather, it's the reflections of two people in the TypeScript community: our thoughts, feelings and reflections on the port.
It's going to be a somewhat unstructured wander through our reactions and hopes. Buckle up for opinions and feelings.
**Tags:**
- TypeScript
**Read More**
## Microsoft Graph client: how to filter by endswith
May 11, 2025 · 8 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
In this post we're going to look at filtering using an `endswith` filter with the Microsoft Graph client. This falls into the category of "Advanced query capabilities on Microsoft Entra ID objects", and I found it tricky to get working.
Performing an `endsWith` or similar filter shouldn't be difficult. But how to do so isn't obvious. If you've ever encountered a message like this:
> Operator 'endsWith' is not supported because the 'ConsistencyLevel:eventual' header is missing. Refer to https://aka.ms/graph-docs/advanced-queries for more information
Then this blog post is for you.
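To give a flavour of the fix, here's a minimal sketch of the request shape an advanced query needs: the `ConsistencyLevel: eventual` header paired with `$count=true`. (The endpoint and the filtered property are illustrative, not from the post.)

```typescript
// Sketch: the request shape for an advanced Graph query using endsWith.
// Advanced queries need the ConsistencyLevel: eventual header and $count=true.
const baseUrl = "https://graph.microsoft.com/v1.0/users";
const params = new URLSearchParams({
  $filter: "endsWith(mail,'@example.com')", // illustrative filter
  $count: "true",
});
const headers = { ConsistencyLevel: "eventual" };
const url = `${baseUrl}?${params.toString()}`;
// url now targets users whose mail ends with @example.com
```

Without the header and the `$count` parameter, the service rejects the `endsWith` operator with the error above.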
**Tags:**
- Microsoft Graph
- TypeScript
**Read More**
## List Pipelines with the Azure DevOps API
April 6, 2025 · 3 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Listing Azure Pipelines using the Azure DevOps REST API and TypeScript is possible, but if you use the official Azure DevOps Client for Node.js, you might have issues. This is because it does not support pagination. So if you have a project with a large number of pipelines, using the official client might mean you cannot retrieve them all.
This post implements an alternative mechanism, directly using the Azure DevOps API and thus handling pagination. If you're curious as to how to create a pipeline, then check out my post on creating a pipeline with the Azure DevOps API.
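The core of the technique can be sketched like this. It's a simplified version (the function and type names are mine, not from the post) that follows the `x-ms-continuationtoken` response header the Azure DevOps REST API uses for paging:

```typescript
// A simplified sketch: page through the pipelines list endpoint by
// following the x-ms-continuationtoken response header.
type Pipeline = { id: number; name: string };

async function listAllPipelines(
  organization: string,
  project: string,
  pat: string,
  fetchImpl: typeof fetch = fetch
): Promise<Pipeline[]> {
  const pipelines: Pipeline[] = [];
  let continuationToken: string | undefined;
  do {
    const url = new URL(
      `https://dev.azure.com/${organization}/${project}/_apis/pipelines`
    );
    url.searchParams.set("api-version", "7.1");
    if (continuationToken) {
      url.searchParams.set("continuationToken", continuationToken);
    }
    const response = await fetchImpl(url, {
      headers: {
        // PATs are sent as the password half of basic auth
        Authorization: `Basic ${Buffer.from(`:${pat}`).toString("base64")}`,
      },
    });
    const body = (await response.json()) as { value: Pipeline[] };
    pipelines.push(...body.value);
    // a non-null header means there are more pages to fetch
    continuationToken =
      response.headers.get("x-ms-continuationtoken") ?? undefined;
  } while (continuationToken);
  return pipelines;
}
```

The injectable `fetchImpl` parameter is just a convenience for testing; in real use you'd call it with the defaults.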
**Tags:**
- Azure Pipelines
- Azure DevOps
- TypeScript
**Read More**
## Static Web Apps CLI: local authentication emulation with ASP.NET
March 29, 2025 · 18 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
When developing web applications that have some dependency on authentication, it can be tricky to get a local development setup that allows you to manage authentication effectively. However, there's a way to achieve this, using the Static Web Apps CLI local authentication emulator.
I build a lot of SPA style applications that run JavaScript / TypeScript on the front end and C# / ASP.NET on the back end. The majority of those apps require some kind of authentication. In fact I'd struggle to think of many apps that don't. This post will walk through how to integrate ASP.NET authentication with the Static Web Apps CLI local authentication emulator to achieve a great local development setup. Don't worry if that doesn't make sense right now, once we have walked through the setup, it will.
This post builds somewhat on posts I've written about using the Static Web Apps CLI with the Vite proxy server for enhanced performance and how to use the `--api-location` argument to connect to a separately running backend API. However, you need not have read either post to understand what we're doing.
We're going to first walk through what we're trying to achieve, and then we'll walk through the steps to get there. When it comes to implementation, we're going to use Vite as our front end server, and ASP.NET as our back end server. The Static Web Apps CLI will be used for local authentication emulation.
**Tags:**
- Azure Static Web Apps
- Node.js
- ASP.NET
- Static Web Apps CLI
**Read More**
## Node.js, Azure Application Insights, and Fastify
February 17, 2025 · 5 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
If you deploy a Node.js application to Azure, you might want to use Azure Application Insights to monitor it. This post shows you how to set up a Node.js application with Azure Application Insights. It also includes a Fastify plugin to automatically track requests. (The out-of-the-box mechanism for tracking requests does not work with Fastify.)
This is one of those posts that gathers together information I found doing research and puts it in one place.
**Tags:**
- Azure
- Node.js
- TypeScript
**Read More**
## Get Service Connections with the Azure DevOps API (REST and TypeScript)
January 25, 2025 · 5 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
If you work with Azure Pipelines, you'll likely have come upon the need to create service connections. These are the connections to external services that your pipelines need to run. You can interrogate these connections using the Azure DevOps REST API. This post goes through how to do this; both using curl and using TypeScript.
I'm writing this post because when I attempted to use the Azure DevOps Client for Node.js package to acquire them I found it lacking, and not for the first time. I am going to allow myself a little moan here; ever since Microsoft acquired GitHub, the Azure DevOps ecosystem feels like it has had insufficient investment.
However, as is often the case, there is a way. The Azure DevOps REST API is there for us, and with a little `fetch` we can get the job done.
**Tags:**
- Azure Pipelines
- Azure DevOps
- TypeScript
**Read More**
## Slash command your deployment with GitHub Actions
January 2, 2025 · 12 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
In the world of computing, slash commands have a proud and noble history. They are a way to interact with a system by typing a command into a chat or terminal, usually with a `/` preceding the command; hence the name "slash commands". GitHub has its own slash commands that you can use in issues and pull requests to add code blocks and tables etc. The slash commands are, in truth, quite limited.
However, through clever use of the GitHub Actions platform, it's possible to build something quite powerful which is "slash-command-shaped". In this post, we'll look at how to implement a `/deploy` slash command which, when invoked in a pull request, will deploy an Azure Container App with GitHub Actions.
While the technique we'll use covers a deployment use case, as we'll see, it could be adapted to many other scenarios.
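The comment-parsing step at the heart of this is simple. Here's a hypothetical helper (mine, not taken from the post) for pulling the command out of a PR comment body:

```typescript
// Sketch: extract a slash command like "/deploy" from a comment body.
// Returns the command name, or undefined if the comment isn't a slash command.
function parseSlashCommand(commentBody: string): string | undefined {
  const match = commentBody.trim().match(/^\/([a-z][\w-]*)/i);
  return match?.[1];
}
```

In a workflow triggered on `issue_comment`, you'd run something like this over the comment body and dispatch to a deployment job when it returns `deploy`.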
**Tags:**
- GitHub Actions
- Azure Container Apps
**Read More**
## Smuggling .gitignore, .npmrc and friends in npm packages
December 22, 2024 · 5 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
I recently needed to include a number of `.gitignore` and `.npmrc` files in an npm package. I was surprised to find that the `npm publish` command strips these out of the published package by default. As a consequence, this broke my package, and so I needed to find a way to get round this shortcoming.
This post shows how to use zipping and unzipping with `postinstall` and `prepare` scripts to include these files in your npm package; it's the approach I ended up using myself.
**Tags:**
- Node.js
**Read More**
## npx and Azure Artifacts: the secret CLI delivery mechanism
December 8, 2024 · 8 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
The `npx` command is a powerful tool for running CLI tools shipped as npm packages, without having to install them globally. `npx` is typically used to run packages on the public npm registry. However, if you have a private npm feed, you can also use `npx` to run packages available on that feed.
Azure Artifacts is a feature of Azure DevOps that supports publishing npm packages to a feed for consumption. (You might want to read this guide on publishing npm packages to Azure Artifacts.) By combining `npx` and Azure Artifacts, you can deliver your CLI tool to consumers in a way that's easy to use and secure.
This post shows how to use `npx` and Azure Artifacts to deliver your private CLI tool to consumers.
**Tags:**
- Azure DevOps
- Node.js
**Read More**
## Azure Artifacts: Publish a private npm package with Azure DevOps
December 1, 2024 · 3 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Azure DevOps has a feature called Azure Artifacts that supports publishing npm packages to a feed for consumption. Publishing a private npm package with Azure DevOps is a common scenario for teams that want to share code across projects or organizations. This post shows how to publish a private npm package with Azure DevOps.
Publishing a private npm package with Azure DevOps is fairly straightforward, but surprisingly the documentation is a little sparse.
**Tags:**
- Azure DevOps
- Node.js
**Read More**
## Introducing azdo-npm-auth (Azure DevOps npm auth)
November 9, 2024 · 6 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
Azure DevOps has a feature called Azure Artifacts that supports publishing npm packages to a feed for consumption. Typically those npm packages are intended to be consumed by a restricted audience. To install a package published to a private feed you need to configure authentication, and for non-Windows users this is a convoluted process.
`azdo-npm-auth` exists to ease setting up local authentication to Azure DevOps npm feeds, particularly for non-Windows users.
**Tags:**
- Azure DevOps
- Node.js
**Read More**
## Azure DevOps API: Set User Story column with the Azure DevOps Client for Node.js
November 1, 2024 · 5 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
When I attempted to set the column of a User Story in Azure DevOps using the Azure DevOps Client for Node.js, I was surprised to find that the field `System.BoardColumn` was read-only and I bumped into the error:
> TF401326: Invalid field status 'ReadOnly' for field 'System.BoardColumn'.
This post explains how to set the column of a User Story in Azure DevOps using the Azure DevOps Client for Node.js and it's based in part on a Stack Overflow question.
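To sketch the shape of the workaround: the column actually lives in a board-specific field named `WEF_{id}_Kanban.Column`, where the id is particular to your board (you can discover it by inspecting the work item's fields). A hypothetical helper for building the JSON patch might look like this; the id below is a placeholder:

```typescript
// Sketch: System.BoardColumn is read-only; updates go to a board-specific
// WEF_{id}_Kanban.Column field instead. The board field id is a placeholder.
type JsonPatchOperation = { op: "add"; path: string; value: string };

function buildColumnPatch(
  boardFieldId: string,
  column: string
): JsonPatchOperation[] {
  return [
    {
      op: "add",
      path: `/fields/WEF_${boardFieldId}_Kanban.Column`,
      value: column,
    },
  ];
}
```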
**Tags:**
- TypeScript
- Azure DevOps
- Node.js
**Read More**
## module ws does not provide an export named WebSocketServer
October 15, 2024 · One min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
I use Playwright for testing and mock Web Socket calls with the ws package. I recently did an `npm upgrade` and found myself hitting this error message when I tried to run tests:
> SyntaxError: The requested module 'ws' does not provide an export named 'WebSocketServer'
It was caused by the following code:
```ts
import { WebSocketServer } from "ws"; // this goes bang!
// ...
const mockWsServer = new WebSocketServer({ port: 5000 });
```
The fix was surprisingly simple to implement but hard to search for. That's why I'm writing this.
## Resolving "The requested module 'ws' does not provide"...
The fix is as simple as switching the code to:

```ts
import ws from "ws";
// ...
const mockWsServer = new ws.Server({ port: 5000 });
```
And that should resolve the issue.
## Static Typing for MUI React Data Grid Columns
October 7, 2024 · 6 min read
John Reilly
OSS Engineer - TypeScript, Azure, React, Node.js, .NET
The MUI X Data Grid is a really handy component for rendering tabular data in React applications. But one thing that is not immediately obvious is how to use TypeScript to ensure that the columns you pass to the component are correct. This post will show you how to do that.
Why does it matter? Well, look at this screenshot of the Data Grid with incorrect column names:
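The essence of the technique is tying each column's `field` to the keys of the row type. This simplified stand-in (not MUI's actual `GridColDef`; just an illustration of the idea) shows the shape:

```typescript
// Sketch: constrain column definitions so `field` must be a key of the row
// type. MUI's own GridColDef is more elaborate; this illustrates the idea.
interface Row {
  id: number;
  title: string;
  author: string;
}

interface TypedColDef<T> {
  field: keyof T & string;
  headerName?: string;
}

const columns: TypedColDef<Row>[] = [
  { field: "title", headerName: "Title" },
  { field: "author", headerName: "Author" },
  // { field: "athor" } would now be a compile-time error
];
```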
**Tags:**
- React
- TypeScript
- MUI
**Read More**
Copyright © 2012 - 2025 John Reilly. Built with Docusaurus.