
LocalStack Adds Ability to Visually Debug AWS Apps on Local Machines

LocalStack today announced it has extended its simulation of Amazon Web Services (AWS) environments with the ability to debug applications before they are deployed.

Company CEO Colin Neagle said the new App Inspector tool makes it possible for developers to debug their applications running in a simulated AWS environment inside a container on a local server.

Simulating the full application stack within a local sandbox container makes it easier to understand application behavior, such as data flows between AWS services, event execution paths and resource dependencies that may have been inadvertently misconfigured, noted Neagle.

App Inspector then generates a visual representation of the interactions between services in the local environment, making it simpler to debug applications without digging through logs and then uploading a fix to a staging server running in the AWS cloud.

That capability doesn’t replace the need for an observability platform, but it does reduce the overall amount of friction software engineering teams are likely to encounter as they build applications on a laptop or local server, said Neagle.

In general, application developers still prefer to build software on a local machine that they have more control over. LocalStack was created to provide a local instance of an AWS environment that enables developers to build software that is destined to run in a cloud environment. The goal is to reduce the number of instances where code that runs on a local machine doesn’t actually run as intended in the cloud.
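In practice, developers target a LocalStack sandbox rather than the real AWS cloud by pointing their AWS SDK clients at LocalStack's local edge endpoint, which conventionally listens on port 4566. A minimal sketch of that configuration in Python follows; the helper function name is illustrative and not part of LocalStack itself:

```python
import os

# LocalStack exposes all emulated AWS services behind a single local
# "edge" endpoint, conventionally http://localhost:4566.
DEFAULT_EDGE_PORT = 4566


def localstack_endpoint(host=None, port=None):
    """Build the endpoint URL an AWS SDK client should use to reach a
    local LocalStack container instead of the real AWS cloud.
    (Helper name is hypothetical, for illustration only.)"""
    host = host or os.environ.get("LOCALSTACK_HOST", "localhost")
    port = port or DEFAULT_EDGE_PORT
    return "http://{}:{}".format(host, port)


# With boto3 installed and LocalStack running, a client would then be
# created roughly like this (commented out to keep the sketch
# dependency-free):
#
#   import boto3
#   s3 = boto3.client("s3", endpoint_url=localstack_endpoint(),
#                     region_name="us-east-1",
#                     aws_access_key_id="test",
#                     aws_secret_access_key="test")

print(localstack_endpoint("localhost"))
```

Because only the endpoint URL changes, the same application code can run against the local sandbox during development and against real AWS once deployed.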

At present, LocalStack supports AWS, but the company is also building a set of emulation services for Microsoft Azure that will be made available later this year.

It’s not clear how much code is developed first on a notebook, desktop PC or local server, but in the age of artificial intelligence (AI) the amount of code being generated by individual developers is increasing exponentially. Every time code has to be sent back for revision, it slows the overall pace of application development. DevOps and platform engineering teams will need to streamline workflows in a way that ultimately improves productivity.

The speed at which DevOps teams make that adjustment will naturally vary from one team to another, but the sooner everyone realizes that a piece of AI-generated code might not run as intended in a cloud computing environment, the less stress there is likely to be for all concerned. The challenge and the opportunity, of course, is to identify those points of friction before the amount of code being created becomes too overwhelming to effectively deploy and manage.

Ultimately, DevOps teams need to meet application developers where they proverbially are rather than forcing them all to develop applications in the cloud. The simple truth is that many developers, if so inclined, will simply evade any cloud mandate using shadow IT resources. The better part of valor, then, is to enable application developers to build applications on the platforms they prefer in ways that can still be centrally managed by a DevOps team.



from DevOps.com https://ift.tt/yVcxaqj
