Building Cloud Scout: When Your Code Works But Reality Changes

I built a dashboard to solve a real problem: teams missing critical cloud infrastructure announcements because they weren’t on the right distribution lists or didn’t have portal access to every provider they used.

The application worked. Then technical roadblocks with vendor-specific APIs (often discovered after midnight, during what I call Founder's Quiet Hours) revealed what's actually sustainable to build, and reshaped the business model around that reality.

As an Azure Engineer and Solutions Architect, I've spent years in cloud infrastructure handling escalations and migrations. That experience surfaced the use cases that inspired this project as a value-add for customers. Let's consider a couple of real-world examples that could fit any number of cloud provider scenarios:

The Problem

VNet Injection to Private Endpoints: Teams running production databases discovered they had months, not years, to complete mandatory networking reconfigurations. Complex changes touching their entire infrastructure.

PostgreSQL Single Server to Flexible Server: A required migration to completely different architecture, pricing models, and operational characteristics. Not a simple upgrade.

In both cases, announcements existed. Documentation was published. Migration guides were available.

But engineers found out late. They weren’t on the right distribution lists. They didn’t have portal access. The person who got the emails had left the company months ago.

By the time they found out, 18-month planned migrations became 90-day emergency scrambles.

And it wasn’t just one cloud provider. These teams had infrastructure across AWS, GCP, Oracle, Databricks, Snowflake. Each with their own status pages, notification systems, announcement methods. Nobody had time to check them all. Most engineers didn’t have portal access to all of them.

I kept thinking: someone should aggregate this into one dashboard.

Then I realized: I could build that.

What I Built

Cloud Scout aggregates status information, deprecation notices, and maintenance announcements from multiple cloud providers into a single dashboard.

The goal: Stop portal hopping. Stop missing critical announcements. Give teams one place to see what’s coming across their entire infrastructure without needing portal access to every provider.

Frontend: React dashboard with filters for provider categories (Cloud, Data, AI), severity levels, and timeframes. Mobile-responsive. LocalStorage caching for instant loads.

Backend: Python serverless functions on Vercel fetching status from 11 providers:

  • AWS and Azure via RSS feeds
  • GCP through JSON API
  • Databricks, Snowflake, MongoDB via Statuspage.io
  • AI providers (Claude, OpenAI, Gemini) via Statuspage.io
  • Red Hat security advisories
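Statuspage.io-hosted pages conventionally expose JSON status endpoints under `/api/v2/`. A minimal sketch of how one of these integrations might look; the endpoint path follows Statuspage's public API convention, and the parsing shape is an assumption, not Cloud Scout's actual code:

```python
import json
import urllib.request


def parse_statuspage(payload: dict) -> dict:
    """Normalize a Statuspage.io /api/v2/status.json payload into a
    small internal shape, tolerating missing keys."""
    status = payload.get("status", {})
    return {
        "indicator": status.get("indicator", "unknown"),
        "description": status.get("description", ""),
    }


def fetch_statuspage(base_url: str) -> dict:
    """Fetch and normalize the overall status for one provider,
    e.g. base_url='https://status.databricks.com'."""
    with urllib.request.urlopen(f"{base_url}/api/v2/status.json", timeout=10) as resp:
        return parse_statuspage(json.load(resp))
```

Separating the fetch from the parse keeps the normalization testable offline, which matters once providers start changing formats under you.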

Database: PostgreSQL on Supabase handling different alert types, severities, and historical tracking.
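A schema along these lines can hold alerts from heterogeneous sources; this is an illustrative sketch, not the actual Supabase tables:

```sql
-- Illustrative schema, not the actual Cloud Scout tables.
CREATE TABLE alerts (
    id           BIGSERIAL PRIMARY KEY,
    provider     TEXT        NOT NULL,  -- 'aws', 'azure', 'gcp', ...
    source       TEXT        NOT NULL,  -- 'rss', 'json_api', 'statuspage'
    alert_type   TEXT        NOT NULL,  -- 'incident', 'deprecation', 'maintenance'
    severity     TEXT        NOT NULL,  -- normalized: 'info' | 'minor' | 'major' | 'critical'
    title        TEXT        NOT NULL,
    published_at TIMESTAMPTZ NOT NULL,
    raw          JSONB,                 -- original payload, kept for debugging
    UNIQUE (provider, alert_type, title, published_at)  -- dedupe on re-fetch
);
```

Keeping the raw payload in a JSONB column is cheap insurance when upstream formats change without notice.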

I deployed to production. Data flowed. All 11 integrations worked.

The application functioned exactly as designed.

For a few weeks, it did what I wanted. Then reality hit.

The 403 Errors Started

I started seeing 403 Forbidden errors in my logs.

Not all at once. One provider first. Then another. Then another.

403 means: “I know who you are, and you’re not allowed to access this.”

These weren’t rate limits (429 errors). These weren’t server issues (5xx errors). These were deliberate access restrictions.

The endpoints hadn’t moved. My code hadn’t changed. Authentication was still valid. But requests that worked yesterday were rejected today.
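The distinction matters in code as well as in logs, because each class of failure calls for a different response. A hypothetical triage helper, not the project's actual code:

```python
def classify_http_failure(status_code: int) -> str:
    """Rough triage of fetch failures by HTTP status code.

    Each class calls for a different response: rate limits get backoff,
    server errors get retries, but a 403 means stop and rethink.
    """
    if status_code == 403:
        return "blocked"         # deliberate access restriction
    if status_code == 429:
        return "rate_limited"    # back off and retry later
    if status_code == 401:
        return "auth_failed"     # credentials wrong or expired
    if 500 <= status_code < 600:
        return "provider_issue"  # transient: retry with backoff
    return "unknown"
```

Retrying a 403 the way you'd retry a 503 just generates more 403s; the only real fixes are organizational, not technical.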

What Was Actually Happening

Providers were blocking programmatic access. Not as bugs. As features.

Vendors want customers on their sites. Period. It's their business model, not personal.

When you visit a provider’s status page directly:

  • You see their other services
  • You stay engaged with their platform
  • You might click through to their console
  • You remain in their ecosystem

When a third-party tool aggregates that information elsewhere, providers lose that engagement. So they design systems accordingly.

The 403 errors were technical implementation of business decisions.

The Technical Restrictions

As providers realized aggregation was happening, restrictions increased:

  • 403 errors from cloud hosting IP ranges
  • Aggressive rate limiting (429 errors for previously fine patterns)
  • RSS feed format changes with no announcement
  • Webhook endpoints disabled
  • New authentication requirements on previously open APIs
  • Response format changes breaking data normalization

None documented. No deprecation notices. No migration guides. Just errors in logs and broken integrations.
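When a feed breaks without warning, the least-bad behavior is serving stale data instead of an empty dashboard. A sketch of that graceful-degradation pattern, assuming a hypothetical `fetcher` callable that returns a list of normalized alerts:

```python
import time

# Hypothetical in-memory cache of the last successful fetch per provider.
_last_good: dict = {}


def fetch_with_fallback(provider: str, fetcher) -> list:
    """Call a provider's fetcher; on any failure, serve the last
    known-good data rather than nothing."""
    try:
        alerts = fetcher()
        _last_good[provider] = (time.time(), alerts)
        return alerts
    except Exception:
        # Provider blocked us or changed format: degrade to stale data.
        _, stale = _last_good.get(provider, (0.0, []))
        return stale
```

This keeps the dashboard up while you debug, but it also quietly masks how broken the integrations are, which is its own trap.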

The Maintenance Burden

What worked at deployment required constant debugging:

Fix AWS, and Azure returns 403s. Resolve Azure, and Databricks webhooks stop. Fix Databricks, and GCP changes its JSON format. Update GCP, and the AWS RSS structure changes.

This meant I spent more time maintaining integrations than the dashboard would save users.

This wasn’t a code bug. The code was fine. The architecture was sound. The problem was fundamental: I built on dependencies designed to serve different purposes.

The AI Tools Reality

I used AI coding assistants for boilerplate, architectural patterns, scaffolding components. Saved setup time.

But for actual API integrations, the AI-generated code was based on outdated documentation. Authentication methods had changed. Response formats were different. Endpoints had moved.

I caught this during testing, but spent significant time verifying and correcting integration code against current documentation.

The lesson: AI tools excel at structure. For third-party API integration, verify everything against current documentation.

The AI didn’t know about VNet injection migration timelines or PostgreSQL deprecation. Training data was old. Suggested integrations reflected already-changed APIs.

What I Actually Learned

1. Business Models Shape Technical Architecture

I thought I was making smart technical choices: official APIs, documented patterns, proper integrations.

But provider business interests drive technical architecture. Status pages exist to bring customers to provider sites. Making them easy for third parties to consume conflicts with that goal.

The 403 errors weren’t bugs. They were systems working as designed.

2. “Public” Doesn’t Mean “For You”

Publicly accessible doesn’t mean designed for programmatic consumption by external tools.

Status pages are public because customers need to see them on provider sites. Not because providers want third-party services built on top.

3. Experiencing Both Sides

I’ve been on both sides of this pattern:

Provider side: Helping customers understand why integrations broke when APIs changed. Explaining business decisions implemented as technical restrictions.

Customer side: Discovering APIs I depended on weren’t meant for how I used them.

Experiencing both taught me more than either alone.

4. Support Channels Matter

Customers with enterprise support get advance notice, direct channels, account teams, better communication.

Building on public APIs meant:

  • No account team
  • No support contract
  • No advance notice
  • Just 403 errors and nobody to contact

No support relationship means no reliable integration.

The Technical Skills Gained

Despite everything, I learned valuable skills:

Frontend: React state management with Sets, LocalStorage caching with background refresh, mobile-responsive design, optimistic UI with error handling

Backend: Serverless architecture on Vercel, RSS parsing, JSON API consumption, webhook design, data normalization across formats

Integration: Working with 11 API formats, normalizing severity levels, rate limiting strategies, graceful degradation, debugging 403 errors
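Normalizing severity across 11 feeds mostly comes down to a lookup table with a safe default. A hypothetical sketch; the labels on the left are examples of what different feeds emit, not an exhaustive or verified list:

```python
# Hypothetical mapping from provider-specific severity labels to one scale.
SEVERITY_MAP = {
    "none": "info", "informational": "info",
    "minor": "minor", "degraded_performance": "minor",
    "major": "major", "partial_outage": "major",
    "critical": "critical", "full_outage": "critical",
}


def normalize_severity(raw: str) -> str:
    """Map a provider's severity label onto four internal levels,
    defaulting to 'info' for anything unrecognized."""
    return SEVERITY_MAP.get(raw.strip().lower(), "info")
```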

Deployment: Vercel pipeline, environment management, database schema for multi-source data, monitoring for serverless functions

The code demonstrates production-ready patterns. These skills transfer.

What I’d Do Differently

Assess Business Model Alignment First

Ask before writing code: “Does my product depend on behavior conflicting with my dependency’s business model?”

Cloud Scout aggregated information providers wanted consumed on their sites. Fundamental conflict.

Test For Restrictions Early

Monitor APIs for a month before building:

  • Do they block programmatic access?
  • Are there unexplained 403s?
  • Do they change without notice?

I would have seen this coming.
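One cheap way to run that month-long check is a scheduled probe that just records status codes over time. A hypothetical sketch; the fetcher is injected so the probe can be tested without a network:

```python
import datetime


def probe(url: str, fetch) -> dict:
    """Record one observation of an endpoint. In real use, `fetch` would
    issue an HTTP GET and return the status code."""
    try:
        status = fetch(url)
    except Exception as exc:
        status = f"error: {exc}"
    return {
        "url": url,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": status,
    }


def summarize(observations: list) -> dict:
    """Count observed statuses. A month of these answers the questions
    above: are there unexplained 403s, and how often do things change?"""
    counts = {}
    for obs in observations:
        counts[obs["status"]] = counts.get(obs["status"], 0) + 1
    return counts
```

A cron job writing these records to a file for thirty days costs almost nothing and tells you whether a dependency is safe to build on.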

Verify API Documentation Directly

Use AI for scaffolding. Verify all integration code against current provider documentation. Don’t assume generated code reflects current endpoints or authentication.

Time “saved” trusting AI gets spent debugging why it doesn’t work.

Build What You Control

If core value depends on external dependencies, you don’t control core value.

Should have built more monitoring infrastructure myself instead of depending on external status pages designed to drive traffic elsewhere.

What This Experience Taught Me

This wasn't a failed project. It proved technical capability while teaching me its limits.

I proved I can build: React frontends with complex state management. Python backends with serverless architecture. Multi-provider API integration. Database design for multi-source data. Production deployment.

The application worked. Code was sound. Architecture was reasonable.

I learned the constraint: Technical capability isn’t the limit. Business context and dependency control are.

I had technical skills but no organizational backing, support contracts, or recourse when providers restricted access.

The hardest lesson: Knowing when external dependencies create unsustainable maintenance burden.

I could have kept fixing integrations. Found workarounds for 403s. Rebuilt parsers when formats changed.

But that’s not solving problems. That’s fighting how systems are designed to work.

Recognizing when not to continue building is as valuable as knowing how to build.

For Other Developers

Enterprise Support Infrastructure Matters

Without it, you can’t fix 403 errors with phone calls. You don’t get advance notice or escalation paths.

Understand Dependency Business Models

If your product’s value conflicts with your dependency’s revenue model, expect restrictions.

Vendors want customers on their sites. If your tool keeps them off, expect 403s.

“Technically Possible” Doesn’t Equal “Sustainable”

Just because you can build something doesn’t mean you can maintain it. External dependencies you don’t control create maintenance burden you can’t eliminate through better code.

AI Tools Have Outdated Information

For third-party API integration, verify everything. AI training data gets old fast, especially for cloud provider APIs.

VNet injection migrations, PostgreSQL deprecations, authentication changes: none of it was in the training data. I learned it by running into errors.

Watch For 403 Errors Early

They’re not bugs. They’re signals you’re building on something not meant for your use case.

When you see 403s during development, that’s dependencies telling you how they want APIs used. Listen.

Where Cloud Scout Stands Now

Code is on GitHub demonstrating:

  • React state management and caching
  • Python serverless functions
  • Multi-source data integration
  • Production deployment workflows
  • Graceful error handling

Some integrations still work. Some don’t, because providers enforce restrictions aligning with their business models.

The codebase shows what’s technically possible and what happens when you build on dependencies designed for different purposes.

The Bottom Line

I built Cloud Scout to solve a real problem: teams missing critical migration announcements because they weren’t on distribution lists or didn’t have portal access.

I succeeded at building a working solution. The application functioned. Integrations worked. The dashboard solved the problem.

I also learned that understanding provider business models didn’t prevent running into them as constraints.

The code was fine. Architecture was sound. The problem was fundamental: I built value on dependencies designed to serve their own business interests.

These aren’t lessons from tutorials. You learn them by:

  • Building real things
  • Watching 403 errors appear
  • Understanding they’re not bugs but business logic as technical restrictions
  • Deciding when to stop fighting how systems are designed

Because both successes and constraints are part of the learning experience.


Project Details:

  • Built With: React, Python, PostgreSQL, Vercel, Supabase
  • Integrations: 11 cloud and data providers (AWS, Azure, GCP, Oracle, Databricks, Snowflake, MongoDB, Red Hat, Claude, OpenAI, Gemini)

Key Learnings:

  • Full-stack development from concept to production
  • Multi-provider API integration patterns
  • Business models vs technical capability
  • When external dependencies create unsustainable maintenance
  • Enterprise vs independent development contexts

References

GitHub

Demo Site


Part of my journey documenting the transition from enterprise cloud engineering to independent development. Sharing both technical wins and real-world constraints shaping what’s sustainable to build.

