Building FactoryOS - Week 1: Laying the Foundation

TL;DR: I'm building a platform to streamline launching multiple products quickly. In this first week I focused on core infrastructure: a monorepo, a dual SaaS/self-host build system, licensing (ECDSA-signed JWTs with a 14-day offline grace period), an identity module with RBAC for multi-tenancy, billing via a Polar integration in Laravel, feature flags via Laravel Pennant, and a CI pipeline with SBOM generation and Cosign signing.

Why a platform at all?

After years of client work, I'm shifting gears to launch my own products. Rather than build each app from scratch (and reinvent auth, billing, etc. every time), I decided to create a platform that provides the common foundation for any SaaS product I spin up. FactoryOS is that platform – essentially an operating system for a one-person startup factory. The goal is simple: ship new products in days, not months, by reusing core modules across all products.

Of course, starting with a platform-first approach is a bit risky. Platforms usually succeed only if a real product proves the concept. I'm mindful of this – so I've set a rule: no new platform feature unless my first product actually needs it within a week. This keeps me focused on solving real product requirements, not gold-plating infrastructure. In fact, my plan for Week 2 is to develop a minimal first product on FactoryOS to validate that the abstractions hold up.

Another reason I'm investing in a unified platform is to enable some ambitious automation down the road. I eventually want to leverage AI-driven agents to handle repetitive dev and growth tasks (market research, code scaffolding, testing, sales outreach). FactoryOS’s consistent structure could allow "agent" scripts to plug in and run parts of the build or marketing process for each product. That's far-off vision stuff, but building with that in mind from day one might pay off later.

In short, I'm betting that a solid shared foundation will let me launch and iterate on multiple ideas quickly, while maintaining quality (since auth, licensing, payments, etc. are solved in one place) and speed (since I'm not reinventing the wheel each time). Week 1 was all about laying that groundwork.

What I shipped this week

This week was heads-down on core infrastructure. Here’s what I got done in the first 7 days:

  1. Monorepo & Tooling: I set up a single repository to host all my products and the platform code. The structure has a /platform directory with Laravel packages (modules like identity, licensing, billing, etc.) and a /products directory where each product lives as its own Laravel app. I initialized the monorepo with a Makefile for common tasks (build, test, release), and Composer path repositories so each product can require the local platform packages easily. By the end of the week, I could run one command to build and test everything, which was a big developer experience win (no more juggling multiple repos for related projects).
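
For reference, the wiring in each product's composer.json looks roughly like this – a minimal sketch; the package names and glob path are illustrative, not my exact manifest:

{
    "repositories": [
        { "type": "path", "url": "../../platform/*" }
    ],
    "require": {
        "factoryos/identity": "*",
        "factoryos/licensing": "*",
        "factoryos/billing": "*"
    }
}

With path repositories, Composer symlinks the local packages into the product's vendor directory, so platform changes show up in every product immediately.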

  2. Dual Build System (SaaS & Self-Hosted): From day one, each product needs to ship in two forms: a multi-tenant cloud version (SaaS) and a single-tenant self-hosted version. I created two Dockerfiles and a unified build pipeline to produce both container images from the same codebase. Using environment toggles, the SaaS build runs with MULTI_TENANT=true (meaning one instance can serve multiple orgs/accounts), while the self-hosted build uses SELF_HOST_MODE=true to enforce isolation and license checks. With a single make release command, I get two output images (e.g. myrepo/product:saas-<gitsha> and myrepo/product:self-<gitsha>) – one optimized for my cloud deployment, and one for customers to run on their own infrastructure. Both images pass the same test suite to ensure consistency. (Trade-off: I considered using a single image with a runtime mode switch, which would simplify releases, but opted for two images to keep things straightforward and avoid any config drift at runtime. I may revisit this decision later if maintenance becomes an issue.)
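
To make the toggles concrete, here's a minimal sketch of how the app reads them – a hypothetical config/factory.php; the key names are illustrative:

<?php

// config/factory.php – surfaces the build-time toggles as app config.
// Key names are illustrative, not my exact configuration.

return [
    // SaaS build: one instance serves many orgs.
    'multi_tenant' => env('MULTI_TENANT', false),

    // Self-hosted build: single tenant, license checks enforced.
    'self_host' => env('SELF_HOST_MODE', false),
];

Downstream code branches on config('factory.self_host') rather than calling env() directly, which keeps the configuration cacheable in production.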

  3. Licensing Module: To commercialize the self-hosted offerings, I built a licensing system into the platform. It uses ECDSA P-256 public/private keys to sign license tokens (JWTs). I even added an artisan command, php artisan license:generate, to issue licenses for a given organization, with details like expiry, seat count, and feature tier. On the application side, there's a ValidateLicense middleware that runs on startup and on each request in self-hosted mode. It loads the license JWT, verifies the signature and claims locally (no external call needed), and decides if the app should continue serving. If the license is missing or invalid (e.g. expired or too many users), the middleware blocks the request – currently returning an HTTP 402 Payment Required status. (Yes, 402 is a real but rarely-used status code. I might switch this to a 403 Forbidden with a specific error code in the JSON response for clarity.) I've also built in a 14-day grace period for offline use – if the app can't reach my server to refresh the license, it keeps working up to two weeks past expiry. This licensing setup should prevent unauthorized use of the self-host product while being resilient to network issues.
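
Here's roughly what that middleware looks like – a sketch assuming the firebase/php-jwt package; the claim names, file locations, and config keys are illustrative, not my exact implementation:

<?php

namespace FactoryOS\Licensing\Http\Middleware;

use Closure;
use Firebase\JWT\ExpiredException;
use Firebase\JWT\JWT;
use Firebase\JWT\Key;
use Illuminate\Http\Request;

class ValidateLicense
{
    public function handle(Request $request, Closure $next)
    {
        // Licensing only applies to self-hosted builds.
        if (! config('factory.self_host')) {
            return $next($request);
        }

        // Treat the exp claim as soft for 14 days – the offline grace period.
        JWT::$leeway = 14 * 86400;

        try {
            $claims = JWT::decode(
                file_get_contents(storage_path('license.jwt')),
                new Key(config('licensing.public_key'), 'ES256') // P-256 public key (PEM)
            );
        } catch (ExpiredException $e) {
            return response()->json(['error' => 'license_expired'], 402);
        } catch (\Throwable $e) {
            return response()->json(['error' => 'license_invalid'], 402);
        }

        // Further claim checks (seat count, feature tier) happen here.
        return $next($request);
    }
}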

  4. Identity & Multi-Tenancy: Every product I launch will need organizations (tenants), user accounts, roles, and invites – the basic multi-tenant SaaS scaffolding. I implemented an Identity package in the platform to handle this once. It provides models and APIs for org management, user registration/login (built on Laravel's auth), inviting team members by email, assigning roles (with role-based access control checks), and scoping data by tenant. Out of the box, a product can use this to support teams with roles like "Owner", "Admin", and "Member". I used Laravel Gates/Policies for permissions. There's also middleware to resolve the current org context (for example, based on subdomain or an X-Org-ID header for API calls), as sketched below. In single-tenant (self-hosted) mode, the identity module defaults to one organization and enforces user limits from the license claims (e.g. if your license says 5 seats, the 6th user cannot be invited). By the end of the week, I had the backend in place for org creation, user invites, role updates, etc., tested via API calls. (The front-end UI for these actions is still bare-bones – that's on the to-do list.)
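
A condensed sketch of that org-resolution middleware – the Org model, slug lookup, and container binding are illustrative, assuming the header/subdomain conventions above:

<?php

namespace FactoryOS\Identity\Http\Middleware;

use Closure;
use FactoryOS\Identity\Models\Org;
use Illuminate\Http\Request;
use Illuminate\Support\Str;

class ResolveOrgContext
{
    public function handle(Request $request, Closure $next)
    {
        if (config('factory.multi_tenant')) {
            // Prefer the explicit API header, fall back to the subdomain.
            $slug = $request->header('X-Org-ID')
                ?? Str::before($request->getHost(), '.');
            $org = Org::where('slug', $slug)->firstOrFail();
        } else {
            // Self-hosted mode: exactly one org exists.
            $org = Org::sole();
        }

        // Bind the current org so policies and queries can scope to it.
        app()->instance(Org::class, $org);

        return $next($request);
    }
}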

  5. Billing Integration: I don't want to delay revenue features, so I integrated billing in Week 1 as well. I chose Polar (a subscription payments platform) and hooked it in via their Laravel integration. The platform now has a billing package that can create subscriptions for an org, handle checkout links, and listen to webhook events from Polar. When a payment succeeds, fails, or a subscription is canceled, the system gets notified and updates the org's subscription status in our database (sketched below). This ties into the licensing/entitlements too – for instance, if a customer's subscription lapses, we'll automatically turn off their paid features. The beauty of having this as a shared module is that none of my product-specific apps need to contain payment logic or provider API calls; it's all abstracted behind the platform. (Side note: "Laravel integration" might be a generous term – it's a custom integration following Polar's API, but it's cleanly wrapped in the platform code. If I switch to another payments provider later, I can do so in one place.)
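
Conceptually, the webhook handler boils down to this sketch – the event types, payload fields, and handler methods are invented for illustration; the real ones follow Polar's schema:

<?php

namespace FactoryOS\Billing\Http\Controllers;

use Illuminate\Http\Request;

class PolarWebhookController
{
    public function __invoke(Request $request)
    {
        $event = $request->json()->all();

        // Route each event type to the matching state change on the org.
        match ($event['type'] ?? null) {
            'subscription.active'   => $this->markSubscribed($event['data']),
            'payment.failed'        => $this->markPastDue($event['data']),
            'subscription.canceled' => $this->revokeEntitlements($event['data']),
            default => null, // ignore event types we don't handle
        };

        // Always ack so the provider doesn't retry forever (see trade-offs).
        return response()->noContent();
    }
}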

  6. Feature Flags & Entitlements: To manage feature tiers (free vs. premium features) across both SaaS and self-host models, I built an Entitlements system using Laravel Pennant for feature flags. This allows me to define features and toggle them per organization or per license. For example, a feature like advanced reporting can be flagged as "pro" – enabled if the user's subscription plan includes it, or if their self-host license token includes that feature. I set up Pennant to use the database for flag storage and wrote some glue code so that when a subscription is created/updated, the relevant feature flags for that org are flipped on or off. Likewise, when a self-host app boots up, it reads the features array from the license JWT and initializes the flags accordingly. This way, the app's code can simply check Feature::active('some-feature') or use Pennant's @feature Blade directive and not worry about the underlying licensing logic. Here's how a product's Blade template might gate a premium feature:

@feature('pro_reporting')
    <livewire:reports.pro-dashboard />
@else
    <x-reports.upgrade-cta />
@endfeature

That's how easy it is for any product to show (or hide) UI elements based on entitlements. I've also added middleware to protect routes and backend logic in the same way. The result: one codebase can serve both free and paid tiers, and I can even roll out features gradually or run beta tests by toggling flags for specific users or orgs.
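
The glue itself is small. A sketch of both sync paths – the plan-to-feature mapping, subscription helper, and license claim name are all illustrative:

<?php

use Laravel\Pennant\Feature;

// 1. When a subscription is created/updated, sync the org's flags.
foreach ($plan->features as $flag) {
    $subscription->isActive()
        ? Feature::for($org)->activate($flag)
        : Feature::for($org)->deactivate($flag);
}

// 2. On self-host boot, seed flags from the license JWT's features claim.
foreach ($license->features ?? [] as $flag) {
    Feature::for($org)->activate($flag);
}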

  7. DevOps, CI/CD, and Security: I didn't neglect the ops side. I created a GitHub Actions CI workflow that kicks off on each push. It runs the tests, then builds the two Docker images, generates a Software Bill of Materials (SBOM) using Syft, and signs the images using Sigstore Cosign. Every image build is cryptographically signed, which will let others (and me) verify that the image hasn't been tampered with. I also prepared a Helm chart for Kubernetes deployments and a Docker Compose file for easy self-hosted setup. Deployment for any product should be as simple as pulling the latest image and updating a version tag. All this might be overkill for week one, but I wanted the "factory assembly line" to be ready early. Now I have high confidence that when I do push a product live, the pipeline from code to running container is solid and auditable.

  8. Command Center CLI (Go TUI): As a bonus, I started building a little command-line dashboard for FactoryOS. Under the ops/factory directory, I'm using Go with the Bubble Tea TUI framework to create a Terminal User Interface for managing my products and platform. It's pretty minimal so far – basically an ASCII art FACTORYOS logo on startup and menu items you can navigate using the keyboard. This isn't strictly necessary, but I have a vision of a one-stop CLI to monitor the "factory floor" – seeing all products, their status, maybe even triggering deployments or running AI agent workflows from a terminal UI. For now, it's just a fun side project (and a nice break from PHP), but it lays groundwork for future automation.

[Screenshot: FactoryOS ASCII dashboard and menu]

Each of these accomplishments builds on the others. By dogfooding the platform with a dummy product (I created a placeholder app called FeedbackMetric to test everything in practice), I confirmed that I can add a new product in the /products directory, wire it up to the platform packages, and get a working app with all the baseline functionality (auth, org management, billing, etc.) out of the gate. In one week, FactoryOS went from zero to a multi-feature skeleton that can support real SaaS applications.

Architecture Snapshot

To visualize how everything is organized, here's a simplified view of the FactoryOS monorepo structure:

/platform
├── identity        # Org management, roles, SSO hooks
├── licensing       # License key generation & validation
├── billing         # Subscription integration (Polar API)
├── entitlements    # Feature flags and plan entitlements
├── ui-kit          # Shared UI components (Livewire/Blade)
/products
├── feedbackmetric  # Example product (Laravel app) 
├── (your next product here)
/ops
├── factory         # Go TUI for command-center (dev tools)
├── ci              # CI workflows and scripts
├── docker          # Dockerfiles for builds, Compose, etc.
└── charts          # Helm charts for Kubernetes deploys
Makefile            # One-command builds, tests, releases

The platform packages are built as reusable Laravel packages (loaded via Composer). Each product is a full Laravel app that pulls in those packages. This means I can create a new product by simply cloning a template app into /products/newproduct and hooking it up – it will automatically have all the platform capabilities (auth, licensing, etc.) available. The Makefile and ops scripts ensure that whether I'm building locally or in CI, every product is built and tested consistently.

A few implementation details: in SaaS mode, some platform components behave slightly differently (for example, licensing is a no-op in hosted mode, since you wouldn't license yourself – instead you rely on billing subscriptions). In self-host mode, certain multi-tenant features are disabled or assume a single-tenant context. I've tried to keep those differences minimal and well-encapsulated. In the end, it's one codebase and one monorepo powering all deployment models.

The Trade-offs

It wasn't all smooth sailing. A few things I wrestled with or consciously traded off:

License Enforcement UX: Right now, a missing or invalid license gets an HTTP 402 Payment Required response. A more standard approach could be HTTP 403 Forbidden with a machine-readable error body (e.g. { "error": "license_expired" }). I may adjust this as I refine the self-host licensing flow, especially once real users are involved and need clear messaging on what to do when their license expires.

Two Docker Images vs. One: I debated whether maintaining two separate images (SaaS and self-host) is the right approach. Dual images ensure totally isolated configs (no chance of a self-host-only feature accidentally running in SaaS, and vice versa), but they also mean double the build artifacts and potential drift if I'm not careful. The alternative is a single image with a runtime switch (an env variable) to toggle SaaS vs. self-host mode. That would simplify distribution (only one image per product), but every deployment would carry the code for both modes. For now, I've stuck with two images because it keeps testing and the licensing logic simpler (each image has only what it needs).

Complexity vs. YAGNI: I definitely built more than a typical "Week 1 MVP". Things like SBOM signing, Helm charts, and a whole feature flag system are usually bells and whistles for later. I justified doing them early because they're part of the platform value prop: I want every product to be enterprise-ready (or close to it) out of the gate. Still, I have to be careful not to over-engineer. The rule I mentioned (no new platform features unless needed immediately) is my guardrail. I did catch myself almost adding a "policy engine" module and then stopped – I won't build that until a product truly requires custom fine-grained permissions or something. Iteration speed and tangible product output remain the priority.

Multi-Tenancy Edge Cases: Implementing orgs and roles in a generic way brought up questions. For example, should each product define its own roles, or use a common set? I decided on a common baseline (Owner/Admin/Member) that products can extend if needed. Also, tenancy in APIs vs. UIs: I built a header/subdomain mechanism to auto-scope requests to an org, but I'll need to ensure it's flexible for different product architectures (some might use subdomains for custom domains, etc.). These are things I'll likely refine once I have a real product using them and I see how it feels.

Polar Integration and Billing Logic: Using an external service for billing (Polar) saved me time, but I'm relying on their API and webhooks being solid. In testing, I had to tweak a few things to ensure idempotency of webhook handling, so one event doesn't double-update an org's status (sketched below). Also, I'm returning HTTP 200 to webhooks even if my internal processing fails (to avoid Polar retrying endlessly) – instead I log the failure and will reconcile later. These are typical challenges with any billing system; I'll keep an eye on them once real money is involved.
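
The idempotency guard is simple in sketch form – assuming each event carries a unique id and a processed_webhooks table with a unique index on event_id (table, field, and handler names illustrative):

<?php

use Illuminate\Support\Facades\DB;

// insertOrIgnore returns 0 when the unique index already contains this
// event_id, i.e. we've processed this delivery before.
$fresh = DB::table('processed_webhooks')->insertOrIgnore([
    'event_id'     => $event['id'],
    'processed_at' => now(),
]);

if ($fresh === 0) {
    return response()->noContent(); // duplicate delivery – ack and skip
}

$this->apply($event); // first delivery: safe to update the org's status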

Overall, nothing major broke; most trade-offs were about choosing simplicity now vs. flexibility later. I tried to strike a balance: implement the core must-haves, but not build too far ahead without a need.

Next steps

Heading into Week 2, my focus is shifting from platform to product. I need to prove that FactoryOS can actually accelerate building an app that users find valuable. My plan is to take one of my product ideas and build a walking skeleton: a thin vertical slice that goes from user signup -> create org -> upgrade to paid plan -> access a gated feature -> deploy it live. Even if it's not fully useful, just wiring that flow end-to-end will validate the platform approach.

Success for Week 2 will be seeing a real (albeit tiny) product running on FactoryOS, using all the pieces I built: a user signing up on the hosted version, perhaps installing a self-hosted instance with a license, and me not having to write any new infrastructure code to make it happen. That will prove out the "days, not months" promise.

My longer-term next steps include finishing the front-end UI kit (so all products share a consistent look), adding optional SSO integration for enterprise clients, and fleshing out documentation so that if I ever onboard a collaborator (or an AI agent contributor), they can understand the system quickly. But those can wait until I have the first product in motion.

Ask

If you've made it this far, thanks for reading!

I'd love any feedback or thoughts, especially on the platform approach. Am I over-engineering too early, or missing something obvious? How would you handle the SAAS vs. self-host build differences, or the license enforcement UX? Also, if you have ideas for small SaaS products that could benefit from this kind of platform, let me know – I'm still validating what to focus on.

Building in public means I'm learning as I go, and any insights from fellow builders or potential users are incredibly valuable.

Feel free to reach out on X (Twitter). Here's to shipping fast and learning faster!