Tuesday, July 22, 2025

Can You Feel the Vibe?


“Vibe coding,” if we have to name it, is the instinct to follow an idea straight into a running system before the bureaucracy wakes up. It’s intoxicating. It’s useful. And left alone, it’s shadow IT with better syntax highlighting.

So: keep the spark, lose the shrapnel.

The Instinct Is Fine. The Invisibility Isn’t.

People aren’t vibe coding because they hate process; they’re doing it because process is slow at the exact moment their brain is fast. You know this dance:

  • The idea shows up fully formed (or close enough).

  • You need a REPL, a repo, and a data source—now.

  • Every extra form to fill is a chance the idea evaporates.

We can wag our fingers about “proper intake” or we can design a place for this energy to land safely. The behavior isn’t going away. Pretending it will just pushes it deeper underground.

Design a Sandbox, Not a Maze

Imagine a paved path that feels like a dirt trail. That’s the job.

I want a sandbox where anyone can mash ideas into code, hit “deploy,” get real feedback… and where we still capture an audit trail, enforce basic guardrails, and promote winners to the grown-up pipeline automatically. Freedom up front, governance in the substrate.

Call it Spark → Prove → Promote:

  1. Spark (Fast & Loose, but Logged):

    • Spin up a temp repo/branch from a template (vibe-template), auto-tagged as experimental.

    • Ephemeral env with pre-approved IAM role, read-only test data, and a 24-hour TTL.

    • Everything auto-logs: who did what, when, and which data they touched.

  2. Prove (Make It Real Enough):

    • Smoke tests or a tiny checklist: “Does it run? Is there a README? Any secrets in code?”

    • Observability baked in (stdout goes to a sandboxed log store; errors page you, not the on-call team).

    • A lightweight “promote” button that kicks off a review workflow.

  3. Promote (Graduate or Archive):

    • If it’s useful, a pipeline migrates code to the main org, assigns owners, and wires alerting.

    • If it’s junk, auto-archive and clean up the infra. No guilt, no clutter.

A win is a win. A miss evaporates without leaving landmines.
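The "ephemeral by default" piece of Spark can be a dumb scheduled reaper. A minimal sketch in Python, assuming each sandbox resource carries a hypothetical `expires_at` tag (the tag name, record shape, and `destroy` hook are all illustrative, not a real cloud API):

```python
from datetime import datetime, timedelta, timezone

def find_expired(resources, now=None):
    """Return sandbox resources whose 'expires_at' tag is in the past."""
    now = now or datetime.now(timezone.utc)
    return [r for r in resources
            if datetime.fromisoformat(r["tags"]["expires_at"]) < now]

def reap(resources, destroy):
    """Tear down every expired resource and return what was destroyed."""
    expired = find_expired(resources)
    for r in expired:
        destroy(r)  # in real life: a cloud API delete or `terraform destroy`
    return expired

now = datetime.now(timezone.utc)
sandbox = [
    {"id": "env-1", "tags": {"expires_at": (now - timedelta(hours=1)).isoformat()}},
    {"id": "env-2", "tags": {"expires_at": (now + timedelta(hours=23)).isoformat()}},
]
destroyed = reap(sandbox, destroy=lambda r: print("destroying", r["id"]))
print([r["id"] for r in destroyed])  # ['env-1']
```

The polite-warning variant is the same loop run a few hours earlier with `destroy` swapped for a Slack ping.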

Guardrails Without Killing the Vibe

Here are the rails I care about. Most are invisible at run time, very visible at audit time.

  • Identity Everywhere: Every scratch environment is tied to an actual person and a ticketable artifact (even if the ticket is created behind the scenes). “Anonymous Lambda” is not a vibe.

  • Data Diets: Only synthetic or scrubbed data in Spark. Production data is opt-in, logged, and red-flagged. Boring but mandatory.

  • Ephemeral by Default: Time-to-live on resources. If no one renews, it self-destructs (with polite warnings).

  • Policy-as-Code, Quietly: Template repos include OPA/Snyk/whatever scans that fail hard on secrets and license issues. People shouldn’t have to think about it; it just works.

  • Observability from Line One: Drop a logger, get a dashboard. If your experiment crashes, you get the ping, not PagerDuty at 2 a.m.

  • Clear Promotion Path: Don’t make success painful. One click (or simple PR label) should kick off “make this production-ready” automations: code owners, IaC modules, secrets rotation, etc.
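To make "fail hard on secrets" concrete: at its simplest it's a pattern match over the code. A toy sketch covering just AWS-style access key IDs (real scanners like TruffleHog check hundreds of patterns plus entropy; this is illustrative only):

```python
import re

# AWS access key IDs have a well-known shape: "AKIA" followed by
# 16 uppercase alphanumerics. This is one pattern of many.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_for_secrets(text):
    """Return all suspected AWS access key IDs found in the text."""
    return AWS_KEY_RE.findall(text)

clean = 'bucket = "my-vibe-bucket"'
dirty = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'  # AWS's documented example key

print(scan_for_secrets(clean))  # []
print(scan_for_secrets(dirty))  # ['AKIAIOSFODNN7EXAMPLE']
```

The point is that the check lives in the template and runs on every push; nobody opts in, and nobody has to remember it exists.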

Tiny Example: The “Vibe Template” Repo

```yaml
# vibe-template/.github/workflows/ci.yml
name: vibe-ci
on: [push]
jobs:
  test-n-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: pytest --maxfail=1
      - name: secret-scan
        uses: trufflesecurity/trufflehog-actions-scan@v3
      - name: opa-policy
        run: opa eval --fail-defined -i policy.rego -d src
```

Nobody asked for a pipeline. It’s just there—fast, opinionated, and loud only when something’s wrong.

Three Questions Before You Push “Run”

If you’re about to vibe code (or bless someone who is), ask:

  1. Can this leak anything we’d be fired for leaking? No? Proceed.

  2. Will anyone be paged if this melts? Yes? Rethink. Make the blast radius yours.

  3. If it works, do we know the next step? Success without a path is just future pain.

If you can’t answer those in 30 seconds, your sandbox isn’t sand enough.

What This Looks Like for Real Orgs (i.e., Us)

  • Platform Team: Builds and owns the vibe platform: templates, ephemeral envs, guardrails code. Think “internal Heroku for experiments.”

  • Security/AppSec: Codifies “thou shalt not” as policies embedded in the template, reviews the promote stage, not every commit.

  • Engineers/Analysts/Whoever Codes: Use it guilt-free. The rule: if you’re outside the sandbox, you’re in grown-up land and the rules apply.

  • Leadership: Measures outcomes: # of experiments, # promoted, lead time from idea to prod, incidents caused by experiments (ideally zero). If the metrics go the wrong way, adjust guardrails—not the concept.

Why Bother?

Because the alternative is either (a) slow innovation or (b) silent innovation. Option (b) feels faster until the audit hits, or the clever Lambda starts costing $20k/month in egress fees.

We can have speed and safety. We just have to engineer it. (Shocking, I know.)


TL;DR (Because I Know You Skipped)

  • Vibe coding is happening. Engrave it in policy or pretend it isn’t—your choice.

  • Build a paved sandbox: Spark → Prove → Promote.

  • Guardrails = identity, data boundaries, ephemeral infra, policy-as-code, built-in observability, smooth promotion.

  • Success path must be easier than stealth path. That’s the only lever that actually works.

Keep the spark. Lose the shrapnel.

Friday, March 21, 2025

The need to be right?

Maybe you've been here too—that irresistible urge to be right. Whether in meetings, conversations, or even casual debates, there's something deeply satisfying about knowing the answer or how to do something. But here's the tricky part: the need to be right can become a serious obstacle in leadership.

Leaders can be driven to continuously prove themselves correct, often unintentionally silencing their teams. When we put being right above getting it right, we risk creating an environment where others hesitate to share ideas, challenge perspectives, or offer valuable insights. The result? Innovation stalls and creativity takes a back seat.

Think about the best leaders you've worked with. They probably didn't insist on always being the smartest person in the room. Instead, they focused on bringing out the best ideas, regardless of who proposed them. They were comfortable admitting they didn't have all the answers, opening the door to collaboration and collective problem-solving.

Here are the things I am working on to continuously improve in this area.

  1. Humility. Recognize that no one has a monopoly on good ideas—not even the boss. Being open to learning from your team creates trust and invites diverse viewpoints.
  2. Prioritize outcomes over egos. My goal is to achieve the best result rather than validate my personal viewpoint. I need my team to fulfill the vision and outcomes. Giving them room to do it their way allows them to learn from experience rather than being told.
  3. Listening. Slow my brain down and not plan my response. Just listen. Discuss without answering or directing. I have been rewarded by watching people arrive at the "right" thing in an entirely different way.

Remember, leadership isn't about proving you're right but empowering your team to find the best path forward together. So the next time you feel that urge to argue your point, pause and ask yourself: "Am I trying to be right, or am I trying to get it right?" 

P.S. This is good advice in a relationship.


Tuesday, February 25, 2025

The Way the Rest of the World Builds Stuff

Turns out that once you leave Amazon, there is a lot to learn about the way the rest of the world builds systems. That was part of the reason that I joined a relatively small company; building things at scale is different.


For context, we are a relatively new company formed when two smaller organizations merged. They each brought different technologies and the related skills with them. For instance, we have systems on GCP, Azure, and AWS. Some applications make use of Kubernetes; some rely on WordPress.

Today we discussed how our teams and services implement continuous delivery, and where one team's capabilities better align with our vision and roadmap. We walked through the uses and use cases for the following technologies. It was humbling and exciting to hear about a handful of products that I have little to no experience with.


The following are a few of the resources we discussed.

  1. Flux CD - https://fluxcd.io/
  2. A GitHub repository structure like https://fluxcd.io/flux/guides/repository-structure/
  3. Argo and/or Flux - https://ferrishall.dev/flux-cd-vs-argo-cd
  4. Rancher Desktop - https://rancherdesktop.io/
  5. Helm - https://helm.sh/
Learning new stuff is fun.

Tuesday, December 3, 2024

A Brief History of Time - AI Style

My wife and I are trying a new approach to a book club and incorporating AI.

There are some books that we both own that we have tried to read, read part of, or never really got going on. Each was a book of genuine interest, but we hit friction or a lack of motivation and never started or finished it. We are hoping this new approach will create a better understanding of the book than we would get from just reading it cover to cover.

We are "reading" them using the LLM of our choosing (ChatGPT, Copilot, etc.), and it goes something like this: "Hey ChatGPT, give me a summary of chapter 1 of 'A Brief History of Time' by Stephen Hawking." We read the output and then use our curiosity to explore further. For instance, some of my follow-up questions were...

  • How well did Ptolemy’s model work? Was it 100% accurate? Why did it take so long to get to the heliocentric model?
  • What is the notion of predictive power in scientific theory? Why is it important? What is its most influential or significant use?
  • How is the age of the universe calculated? How has that calculation evolved over time?
  • How can everything be moving away from everything else while galaxies still collide?
  • What is the force of gravity that our galaxy and the Andromeda galaxy exert on one another? Which exerts more force on the other? Is either gravitationally bound to other galaxies?
  • How can I envision dark matter? Is it clustered together in different places, like puffy clouds, or spread out like fog over vast areas? Is it moving? Can we indirectly see dark matter beyond its gravitational pull, like lensing?
After asking more questions I wrapped up with...

Is there any concept in the first chapter of brief history of time that we haven’t explored in our conversation here?

Which led me to ask a bunch of other questions. This continued for about an hour. I had already read the first half of the book, and it covers topics I have a little understanding of. This approach allowed me to explore nooks and crannies that I probably took at face value when I originally read the book. It feels somehow... better. I am not just reading the words but also thinking about them, trying to understand them more deeply and map them to my worldview.

The next step after both of us have wrapped up independently exploring a given chapter is that we discuss it together - like a book club. What did you find interesting? Surprising? Don't quite get yet? Did you explore areas that have little or nothing to do with the book (squirrels)? I expect our discussion to be "interesting" in that we come at things from different angles; I am more the analytical/science person, and she is much more the spirit/artist.

There is also the side benefit (not to be underestimated) of doing something together to keep things interesting. Stay tuned.

Saturday, November 30, 2024

Trying out AI - Rebalancing my Investments

I keep looking for ways to leverage AI both at work and in my personal life. Recently, I asked for help rebalancing my portfolio. The following is a series of prompts that I used to get there.

The first step was to document my holdings. I have accounts across several management companies and about 100 different assets. I tried to export them from each company, but none had an export feature. This is where I started using ChatGPT to parse my statements and export them as a table. 

Using the attached investment account statement, extract the current holdings from the document and give them back to me as a CSV file. 


I worked with ChatGPT to correct this, as it would initially only read one page of the data. We went back and forth several times before I got the results I needed, including having it look up the expense ratio for every mutual fund and ETF. Next, I described how I wanted my portfolio to be rebalanced.
   

Balance my portfolio rooted in Nobel Prize-winning economic theories, emphasizing diversification, broad market exposure, and risk management. Allocate across asset classes using the following:

Portfolio Allocation

  1. Equities (60%-70%)
    • Domestic Equity (35%-40%): Broad exposure to U.S. stocks via total market or S&P 500 index funds. Include a small-cap tilt (e.g., small-cap value funds) to capture the small-cap premium identified by Fama and French.
    • International Equity (20%-25%): Diversify across developed markets and emerging markets. Consider including small-cap international funds to enhance diversification and returns.
    • Value Tilt (10%-15%): Increase allocation to funds focusing on value stocks (high book-to-market ratios) to capture the value premium
  2. Fixed Income (30%-40%)
    • Domestic Bonds (20%-25%): Investment-grade government and corporate bonds for stability. A portion in inflation-protected securities (e.g., TIPS) to guard against inflation risks.
    • International Bonds (10%-15%): Include exposure to developed market and emerging market bonds to diversify currency and interest rate risks.
  3. Alternative Assets (Optional: 5%-10%)
    • Real estate investment trusts (REITs) for exposure to real estate.
    • Commodities or other diversifying assets if aligned with the investor’s goals.
  4. Cash and Cash Equivalents (5%-10%)
    • For liquidity and short-term needs, including high-yield savings, money market funds, or short-duration Treasury bonds.

Design Principles

  1. Market Efficiency: Avoid active stock picking or market timing, as the Efficient Market Hypothesis suggests these strategies rarely outperform
  2. Diversification: Use a fund-of-funds approach to spread investments across many asset classes, sectors, and geographic regions, reducing unsystematic risk.
  3. Small-Cap and Value Tilt: Emphasize small-cap and value stocks to capitalize on their historical outperformance as identified in the Fama-French Three-Factor Model.
  4. Risk Management: Use a mix of equity and fixed-income securities to align with the investor’s risk tolerance and investment horizon, reflecting principles of Modern Portfolio Theory.
  5. Low Costs: Prefer passive index funds or ETFs to minimize expenses and maximize net returns. The attached file lists my holdings across accounts, with the last column being the account name those holdings are in. Balance all my holdings listing unique transactions.
Below is a table with a list of my holdings across accounts. The first row contains the column names, and the last column is the account name those holdings are in.
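The percentage bands in the prompt above also make a good sanity check on whatever the model returns. A hypothetical sketch, with made-up current allocations, that flags any asset class sitting outside its target band:

```python
# Target bands from the prompt, as (low, high) fractions of the portfolio.
BANDS = {
    "Equities": (0.60, 0.70),
    "Fixed Income": (0.30, 0.40),
    "Alternatives": (0.05, 0.10),
    "Cash": (0.05, 0.10),
}

def out_of_band(alloc, bands):
    """Return the classes whose current fraction falls outside its band."""
    return {name: frac for name, frac in alloc.items()
            if not (bands[name][0] <= frac <= bands[name][1])}

# Made-up "before" portfolio: equity-heavy, light on bonds and alternatives.
current = {"Equities": 0.75, "Fixed Income": 0.18,
           "Alternatives": 0.02, "Cash": 0.05}

print(out_of_band(current, BANDS))
# Flags Equities (too high) plus Fixed Income and Alternatives (too low);
# Cash is inside its band.
```

Ten lines of arithmetic like this is a cheap cross-check before acting on anything an LLM proposes.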

It took a bit of coaxing to get a final portfolio. First, my holdings are spread across five separate accounts, and getting ChatGPT to balance across them took a bit of prompting. Some accounts had cash that would not get invested, so I had to remind it that the goal was to keep cash between 5-10%. Another prompt resulted in more mutual funds with high expense ratios, which I wanted to minimize, so I added a prompt specifying that any mutual fund or ETF should have an expense ratio of 0.05% or less.

Once I had the portfolio rebalancing working the way I wanted, I asked for the list of transactions needed to achieve that portfolio. This had some weird side effects where there were too many trades, resulting in lots of transaction fees, so I added an instruction to minimize the number of trades. If I already owned Apple in one account, it shouldn't sell it there only to buy it again in another account.
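That last instruction amounts to netting trades per ticker across accounts. A sketch of the netting step (the trade format here is hypothetical; real rebalancing also has to respect which account actually holds each position):

```python
from collections import defaultdict

def net_trades(trades):
    """Collapse per-account buy/sell trades into one net trade per ticker.

    trades: list of (ticker, dollars) where positive = buy, negative = sell.
    Returns {ticker: net_dollars}, dropping tickers that net to zero,
    i.e., positions that would be sold in one account and rebought in another.
    """
    net = defaultdict(float)
    for ticker, dollars in trades:
        net[ticker] += dollars
    return {t: d for t, d in net.items() if abs(d) > 1e-9}

# Selling $5k of AAPL in an IRA while buying $5k in a brokerage nets to
# zero: skip both trades and save two transaction fees.
raw = [("AAPL", -5000), ("AAPL", 5000), ("FSKAX", 3000)]
print(net_trades(raw))  # {'FSKAX': 3000.0}
```

Netting first removes the redundant sell/rebuy pairs; a second pass would then place the surviving trades in specific accounts.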



I ended up with a list of trades to perform across each account. I don't know about you, but I have taken notice of the warning at the bottom of the screen recommending that any output be double-checked. So the next set of prompts began.

I asked for the details of why each security was chosen, what rules or criteria were used, and why the specific amount was chosen. The output was very informative and is what I used to check the BUY transactions specified. I didn't find anything wrong. I further probed by asking which other securities were considered for each buy and why the recommended one was selected over the others. The following is one security from those details.

Fund Selected: Fidelity Total Market Index Fund (FSKAX)


• Expense Ratio: 0.015%


Selection Logic:


• Broad Market Exposure: FSKAX tracks the Dow Jones U.S. Total Stock Market Index, providing comprehensive exposure to the entire U.S. equity market, including large-cap, mid-cap, and small-cap stocks.

• Low Expense Ratio: At 0.015%, it is one of the lowest-cost options available, aligning with your preference for minimizing expenses.

• Fund Size and Liquidity: With significant AUM, it offers high liquidity and stability.

• Alignment with Strategy: Provides a foundation for the Domestic Equity portion of your portfolio, supporting diversification and market efficiency principles.


Other Assets Considered:


• Vanguard Total Stock Market Index Fund (VTSAX): Expense Ratio of 0.04%. Not selected due to a higher expense ratio compared to FSKAX.

• Schwab Total Stock Market Index Fund (SWTSX): Expense Ratio of 0.03%. FSKAX was preferred due to the existing relationship with Fidelity and slightly lower expense ratio.


I found this all very helpful and it helped me to trust (or not) the trades recommended. 

What have I done with all this information? Nothing, yet. It's not clear to me if this was anything more than an intellectual exercise. It was very cool, and I learned a couple of things, patience being a big one. It took me four days to complete all of this.

Monday, November 11, 2024

It's all Ball Bearings (AI) These Days


I saw recently that Google and Amazon announced how much AI they use to generate code, Google in percent, and Amazon in hours saved. There has been a lot of hype that it is all AI these days. Over the summer, I played around with different AIs to get my head in this space to see how much is hype.

I built a few Python apps in Google Colab to detect license plates in a video stream from my bike rides. Colab uses Codey and Gemini, and they did an excellent job of getting me started as someone who didn't know Python or the libraries available. I am a trial-and-error learner who tries something and, when it doesn't work, refers back to references to figure out why. At one point, I had a working app and asked how I could change the implementation to be multi-threaded, which it did. The result didn't work, but that was a limitation of Colab, not of the generated code.

In my new job, I make a point to use ChatGPT, GitHub Copilot, and/or Claude a few times a week. Last week, I asked for two apps: a random people picker as a spinning wheel and an out-of-office (OOO) helper. The spinner took me about 15 minutes to get right, as I repeatedly asked Copilot to modify the implementation until it was the way I wanted. In the end, it was close, and I made minor changes to the CSS. The exciting thing about this approach was that I iterated with Copilot to tune the implementation instead of finding completed implementations and Frankensteining them together.

Where it has yet to work well for me at work is melding technology with the specifics of my organization. For instance, what does an executive dashboard look like for our ecosystem? Or what is the best way to configure the on-call paging system across our teams? Sure, I asked if there were any best practices or pitfalls others have run into, and the answers were very generic, which is what I would expect.

My experience and intuition suggest that we can benefit from leveraging AI. As I build the 2025 roadmap, I am encouraging the team to consider how AI or AI-powered products can help us achieve more and faster.

Tuesday, November 5, 2024

Exploring DORA


We are considering implementing DORA metrics to help track and improve our outcomes. I had never heard the term DORA, but I have a history with most of the metrics rolled into the moniker. As part of the discussion around DORA, we met with a few vendors whose products integrate with GitHub, Jira, etc., to surface metrics. We didn't end up purchasing a product, but not because we failed to see the value of DORA. On the contrary, we value the outcomes that DORA metrics (et al.) are after; it is more about timing. Through acquisitions, mergers, and growth, we have a diverse group of teams, each at a different maturity level and with different mechanisms around the SDLC. Starting to track DORA-esque metrics while a team may still be climbing the maturity curve seems premature for a few reasons. First, we should focus on helping teams up-level their SDLC. For instance, CI/CD is good, so everyone should do that. Second, most mechanisms already produce metrics we can leverage. If you're doing Scrum, are you tracking the key metrics, or just meeting every two weeks to plan work? Lastly, we need to be careful when comparing teams that don't do the same thing or have similar needs. The apples-and-oranges analogy exists because it works.

As a team leader looking to uplevel the entire engineering organization, DORA and products that capture related metrics sound attractive from a centralized visibility perspective. Being new to the organization, I know relatively little about each team's maturity level, but many feel there is room to improve. Does that mean we are not going to track any DORA metrics? Of course not; we will track the ones with shared mechanisms and leverage that data. Measure, identify, improve, repeat.
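For the metrics with shared mechanisms, two of the four DORA metrics fall out of deployment records most teams already have. A sketch computing deployment frequency and median lead time for changes from hypothetical (commit_time, deploy_time) pairs:

```python
from datetime import datetime, timedelta
from statistics import median

def dora_snapshot(deploys, window_days=30):
    """Deployment frequency (deploys/day) and median lead time (hours),
    computed from (commit_time, deploy_time) pairs within the window."""
    lead_hours = [(d - c).total_seconds() / 3600 for c, d in deploys]
    return len(deploys) / window_days, median(lead_hours)

# Made-up deploy history: three deploys in a 30-day window.
t0 = datetime(2024, 11, 1)
deploys = [
    (t0, t0 + timedelta(hours=4)),
    (t0 + timedelta(days=3), t0 + timedelta(days=3, hours=10)),
    (t0 + timedelta(days=9), t0 + timedelta(days=9, hours=4)),
]
freq, lead = dora_snapshot(deploys)
print(f"{freq:.1f} deploys/day, median lead time {lead:.0f}h")
# 0.1 deploys/day, median lead time 4h
```

The other two metrics (change failure rate and time to restore) need incident data, which is exactly the kind of shared mechanism worth standardizing before buying a dashboard.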

P.S. I was pre-Dora The Explorer. But my kids weren't. Geography interests me, so this Dora is fabulous in my book.

