Tuesday, December 3, 2024

A Brief History of Time - AI Style

My wife and I are trying a new approach to a book club and incorporating AI.

There are some books that we both own and have tried to read, read part of, or never really got going on. Each book is something of interest to us, but friction or a lack of motivation meant we never started or finished it. We are hoping that this new approach will give us a better understanding of the book than we would get from just reading it cover to cover.

We are "reading" them using the LLM of our choosing, ChatGPT, Copilot, etc. and it goes something like this. Hey ChatGPT, give me a summary of chapter 1 of "A Brief History of Time" by Stephen Hawking. We read the output and then we use our curiosity to explore further. For instance, some of my follow up questions were...

  • How well did Ptolemy’s model work? Was it 100% accurate? Why did it take so long to get to the heliocentric model?
  • What is the notion of predictive power in scientific theory? Why is it important? What is its most influential / significant use?
  • How is the age of the universe calculated? How has it evolved over time?
  • How can everything be moving away from each other but still galaxies collide?
  • What is the force of gravity that our galaxy and the Andromeda galaxy exert on one another? Which exerts more force on the other? Is either gravitationally bound to other galaxies?
  • How can I envision dark matter? Is it clustered together in different places, like puffy clouds, or is it spread out like fog over vast areas? Is it moving? Can we indirectly see dark matter outside of its gravitational pull, like lensing?

After asking more questions, I wrapped up with...

Is there any concept in the first chapter of brief history of time that we haven’t explored in our conversation here?

Which led me to ask a bunch of other questions. This continued for about an hour. I had already read the first half of the book, and it covers topics I already have a little understanding of. This approach let me explore different nooks and crannies that I probably just took at face value when I originally read the book. It feels somehow...better. I am not just reading the words but also thinking about them, trying to understand them more deeply and/or map them to my worldview.

The next step, after we have each wrapped up independently exploring a given chapter, is to discuss it together - like a book club. What did you find interesting? Surprising? What don't you quite get yet? Did you explore areas that have little or nothing to do with the book (squirrels)? I expect our discussions to be "interesting" in that we come at things from different angles; I am more the analytical/science person, and she is much more the spirit/artist.

There is also the side benefit (not to be underestimated) of doing something together to keep things interesting. Stay tuned.

Saturday, November 30, 2024

Trying out AI - Rebalancing my Investments

I keep looking for ways to leverage AI both at work and in my personal life. Recently, I asked for help rebalancing my portfolio. The following is a series of prompts that I used to get there.

The first step was to document my holdings. I have accounts across several management companies and about 100 different assets. I tried to export them from each company, but none had an export feature. This is where I started using ChatGPT to parse my statements and export them as a table. 

Using the attached investment account statement, extract the current holdings from the document and give them back to me as a CSV file. 


I worked with ChatGPT to correct this, as it initially read only one page of the data. We went back and forth several times before I got the results I needed, including looking up the expense ratio for every mutual fund and ETF. Next, I described how I wanted my portfolio to be rebalanced.
   

Balance my portfolio using an approach rooted in Nobel Prize-winning economic theories, emphasizing diversification, broad market exposure, and risk management. Allocate across asset classes using the following:

Portfolio Allocation

  1. Equities (60%-70%)
    • Domestic Equity (35%-40%): Broad exposure to U.S. stocks via total market or S&P 500 index funds. Include a small-cap tilt (e.g., small-cap value funds) to capture the small-cap premium identified by Fama and French.
    • International Equity (20%-25%): Diversify across developed markets and emerging markets. Consider including small-cap international funds to enhance diversification and returns.
    • Value Tilt (10%-15%): Increase allocation to funds focusing on value stocks (high book-to-market ratios) to capture the value premium.
  2. Fixed Income (30%-40%)
    • Domestic Bonds (20%-25%): Investment-grade government and corporate bonds for stability. A portion in inflation-protected securities (e.g., TIPS) to guard against inflation risks.
    • International Bonds (10%-15%): Include exposure to developed market and emerging market bonds to diversify currency and interest rate risks.
  3. Alternative Assets (Optional: 5%-10%)
    • Real estate investment trusts (REITs) for exposure to real estate.
    • Commodities or other diversifying assets if aligned with the investor’s goals.
  4. Cash and Cash Equivalents (5%-10%)
    • For liquidity and short-term needs, including high-yield savings, money market funds, or short-duration Treasury bonds.

Design Principles

  1. Market Efficiency: Avoid active stock picking or market timing, as the Efficient Market Hypothesis suggests these strategies rarely outperform.
  2. Diversification: Use a fund-of-funds approach to spread investments across many asset classes, sectors, and geographic regions, reducing unsystematic risk.
  3. Small-Cap and Value Tilt: Emphasize small-cap and value stocks to capitalize on their historical outperformance as identified in the Fama-French Three-Factor Model.
  4. Risk Management: Use a mix of equity and fixed-income securities to align with the investor’s risk tolerance and investment horizon, reflecting principles of Modern Portfolio Theory.
  5. Low Costs: Prefer passive index funds or ETFs to minimize expenses and maximize net returns.

Below is a table listing my holdings across accounts. The first row contains the column names, and the last column is the name of the account each holding is in. Balance all my holdings, listing unique transactions.

It took a bit of coaxing to get a final portfolio. First, my holdings are spread across five separate accounts, and getting ChatGPT to balance across accounts took a bit of prompting. Some accounts had cash that would not get invested, so I had to remind it that the goal was to keep cash between 5% and 10%. Another prompt resulted in more mutual funds with high expense ratios, which I wanted to minimize, so I added a prompt specifying that any mutual fund or ETF should have an expense ratio of 0.05% or lower.

Once I had the portfolio rebalancing working the way I wanted, I asked for the list of transactions needed to achieve that portfolio. This had some weird side effects: there were too many trades, which meant lots of transaction fees, so I added an instruction to minimize the number of trades. If I already owned Apple in one account, it shouldn't sell it there only to buy it again in another account.
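
For the curious, the core arithmetic behind a rebalance is simple enough to sketch in a few lines of Python. This is not what ChatGPT actually ran, just an illustration with made-up holdings and target weights of how current values plus target allocations turn into buy/sell amounts; the harder part of the exercise was layering the constraints (cash range, expense ratios, minimal trades) on top through conversation.

# A rough sketch (not ChatGPT's actual method) of the rebalancing arithmetic:
# turn current holdings plus target weights into buy/sell amounts per asset class.
# Symbols, accounts, values, and target weights below are all made up.

holdings = [
    {"symbol": "FSKAX", "account": "Fidelity IRA",  "value": 40_000, "asset_class": "domestic_equity"},
    {"symbol": "VTIAX", "account": "Vanguard 401k", "value": 15_000, "asset_class": "international_equity"},
    {"symbol": "BND",   "account": "Vanguard 401k", "value": 30_000, "asset_class": "domestic_bonds"},
    {"symbol": "CASH",  "account": "Brokerage",     "value": 15_000, "asset_class": "cash"},
]

# Illustrative target weights (they must sum to 1.0).
targets = {
    "domestic_equity": 0.40,
    "international_equity": 0.25,
    "domestic_bonds": 0.25,
    "cash": 0.10,
}

total = sum(h["value"] for h in holdings)

# Current market value per asset class.
current = {}
for h in holdings:
    current[h["asset_class"]] = current.get(h["asset_class"], 0) + h["value"]

# Positive delta means buy that much of the asset class; negative means sell.
# Keeping cash in range, filtering by expense ratio, and minimizing the number
# of trades were additional constraints layered on via follow-up prompts.
for asset_class, weight in targets.items():
    delta = weight * total - current.get(asset_class, 0)
    action = "BUY" if delta >= 0 else "SELL"
    print(f"{asset_class:>22}: {action} ${abs(delta):,.0f}")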



I ended up with a list of trades to perform across each account. I don't know about you, but I have taken notice of that warning at the bottom of the screen recommending that any output be double-checked. So a new set of prompts began.

I asked for the details of why each security was chosen, what rule(s)/criteria were used, and why the specific amount was chosen. The output was very informative and is what I used to check the BUY transactions specified. I didn't find anything wrong. I probed further by asking which other securities were considered for each buy and why the recommended one was selected over the others. The following is the detail for one of those securities.

Fund Selected: Fidelity Total Market Index Fund (FSKAX)

  • Expense Ratio: 0.015%

Selection Logic:

  • Broad Market Exposure: FSKAX tracks the Dow Jones U.S. Total Stock Market Index, providing comprehensive exposure to the entire U.S. equity market, including large-cap, mid-cap, and small-cap stocks.
  • Low Expense Ratio: At 0.015%, it is one of the lowest-cost options available, aligning with your preference for minimizing expenses.
  • Fund Size and Liquidity: With significant AUM, it offers high liquidity and stability.
  • Alignment with Strategy: Provides a foundation for the Domestic Equity portion of your portfolio, supporting diversification and market efficiency principles.

Other Assets Considered:

  • Vanguard Total Stock Market Index Fund (VTSAX): Expense Ratio of 0.04%. Not selected due to a higher expense ratio compared to FSKAX.
  • Schwab Total Stock Market Index Fund (SWTSX): Expense Ratio of 0.03%. FSKAX was preferred due to the existing relationship with Fidelity and slightly lower expense ratio.

I found all of this very helpful, and it helped me decide how much to trust the recommended trades.

What have I done with all this information? Nothing, yet. It's not clear to me if this was anything more than an intellectual exercise. It was very cool, and I learned a couple of things, patience being a big one. It took me four days to complete all of this.

Monday, November 11, 2024

It's all Ball Bearings (AI) These Days


I recently saw that Google and Amazon announced how much AI they use to generate code, Google in percent and Amazon in hours saved. There has been a lot of hype that it is all AI these days. Over the summer, I played around with different AIs to get my head into this space and see how much of it is hype.

I built a few Python apps in Google Colab to detect license plates in a video stream from my bike rides. Colab uses Codey and Gemini, and they did an excellent job of getting me started as someone who didn't know Python or the libraries available. I am a trial-and-error learner: I try something and, when it doesn't work, go back to the references to figure out why. At one point, I had a working app and asked how I could change the implementation to be multi-threaded, which it did. That version didn't work, but the failure came from Colab and not the generated code.
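
If you are curious what those apps looked like, here is a stripped-down sketch of one way to do it, using the license plate Haar cascade that ships with opencv-python and a made-up video filename. My actual Colab apps had more plumbing (and the multi-threaded variant on top), but the core loop was along these lines.

# A minimal sketch (not my actual Colab code) of spotting license plates in a
# video with OpenCV, using the plate cascade bundled with opencv-python.
import cv2

plate_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_russian_plate_number.xml"
)

cap = cv2.VideoCapture("bike_ride.mp4")  # hypothetical input file
frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Candidate plate bounding boxes as (x, y, w, h) tuples.
    plates = plate_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in plates:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if len(plates) > 0:
        cv2.imwrite(f"plate_frame_{frame_index:06d}.jpg", frame)
    frame_index += 1
cap.release()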

In my new job, I make a point to use ChatGPT, GitHub Copilot, and/or Claude a few times a week. Last week, I asked for two apps: a random people picker styled as a spinning wheel and an out-of-office (OOO) helper. The spinner took me about 15 minutes to get right; I repeatedly asked Copilot to modify the implementation until it was the way I wanted. In the end, it was close, and I made minor changes to the CSS. The exciting thing about this approach was that I iterated with Copilot to tune the implementation instead of finding completed implementation(s) and Frankensteining them together.

Where it has yet to work well for me at work is melding technology with the specifics of my organization. For instance, what does an executive dashboard look like for our ecosystem? Or what is the best way to configure the on-call paging system across our teams? Sure, I asked if there were any best practices or pitfalls others have run into, and the answers were very generic, which is what I would expect.

My experience and intuition suggest that we can benefit from leveraging AI. As I build the 2025 roadmap, I am encouraging the team to consider how AI or AI-powered products can help us achieve more and faster.

Tuesday, November 5, 2024

Exploring DORA


We are considering implementing DORA metrics to help track and improve our outcomes. I had never heard of the term DORA, but I have a history with most of the metrics rolled into this moniker. As part of the discussion around DORA, we met with a few vendors whose products integrate with GitHub, Jira, etc., to surface metrics. While we didn't end up purchasing a product, it wasn't because we couldn't see the value of DORA. On the contrary, we value the outcomes that DORA metrics (et al.) are after; it is more about timing. Through acquisitions, mergers, and growth, we have a diverse group of teams, each at a different maturity level and with different mechanisms around the SDLC. Starting to track DORA-esque metrics while a team is still working to climb the maturity curve seems premature for a few reasons. First, we should focus on helping teams up-level their SDLC; for instance, CI/CD is good, so everyone should do that. Second, most mechanisms already produce some metrics as outputs that we can leverage. If you're doing Scrum, are you tracking the key metrics or just meeting every two weeks to plan work? Lastly, we need to be careful when comparing teams that don't do the same thing or have similar needs. The apples and oranges analogy exists because it works.

As a team leader looking to uplevel the entire engineering organization, DORA and products that capture related metrics sound attractive from a centralized visibility perspective. Being new to the organization, I know relatively little about each team's maturity level, but many feel there is room to improve. Does that mean we are not going to track any DORA metrics? Of course not; we will track the ones with shared mechanisms and leverage that data. Measure, identify, improve, repeat.
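
For anyone else new to the acronym: DORA metrics generally refers to deployment frequency, lead time for changes, change failure rate, and time to restore service. The first two fall out of deployment records almost for free, which is part of why leaning on the data our shared mechanisms already produce is attractive. Here is a toy Python sketch, with made-up records and field names, of computing them:

# Toy sketch of two DORA metrics from deployment records (data and fields are made up):
# deployment frequency and lead time for changes.
from datetime import datetime, timedelta

deployments = [
    # each record: when the change was committed and when it reached production
    {"committed": datetime(2024, 10, 1, 9, 0),  "deployed": datetime(2024, 10, 1, 15, 0)},
    {"committed": datetime(2024, 10, 3, 11, 0), "deployed": datetime(2024, 10, 4, 10, 0)},
    {"committed": datetime(2024, 10, 7, 14, 0), "deployed": datetime(2024, 10, 7, 16, 30)},
]

window_days = 30
deploy_frequency = len(deployments) / window_days  # deploys per day over the window

lead_times = [d["deployed"] - d["committed"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {deploy_frequency:.2f} per day")
print(f"Average lead time for changes: {avg_lead_time}")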

P.S. I was pre-Dora The Explorer. But my kids weren't. Geography interests me, so this Dora is fabulous in my book.


Friday, October 11, 2024

Suddenly I Need to Think About My Safety

Personal safety has changed for me. 

I have not posted much, if anything, about my experience as a transgender person. Mostly because I am a private person. If you want to know why I don't like condiments, no problem. But at work or on social media, I tend to separate my private life from my public life. I think it's no different than people not bringing up topics like their marriage or conceiving a child; it's about having boundaries.

Boundaries are about safety. At work, boundaries protect both me and the people I interact with. Who knows what is going on in someone's life that might trigger them and make them feel unsafe? That doesn't mean I suppress my private life or who I am, but I will not push it onto others. As I said in a previous post, I don't know what is happening in anyone else's life, and I respect that they may want to keep it that way.

Building on boundaries...some folks no longer have boundaries; they are willing to say almost anything, especially on social media.

I watched the documentary Will & Harper on Netflix and was troubled in a couple of ways by the part where they went into a Texas steakhouse together. If you're not familiar with the story, Harper is a friend of Will Ferrell's from Saturday Night Live and is transgender. Will, being who he is, attracted a lot of attention from the other patrons at the restaurant. Some of those patrons wrote terrible things about Harper and tagged Will on social media.

I relate to that story because safety is something I need to think about now. My transition is not something I talk about, and most people are respectful enough not to ask. I appreciate that since, as a transgender person, I often don't feel safe. I say this not to compete with any group, but because of the realization that I went from the top of the privilege hierarchy to a spot much lower. There are many states that I would be uncomfortable spending time in, Texas among them. I recently needed to travel to Dallas for work, so I took several precautions to ensure my safety. My former privilege told me that I was overreacting. But then there was my experience in Nashville a few years ago, when I was alone and physically/verbally confronted by some huge, drunk men. I was terrified and stunned; this was an entirely new experience for me. It worked out "OK" because I literally ran away, but it was one heck of a wake-up call.

It's this fear/concern for safety that exists for most transgender people and drives the generalized suspicion we feel. It is hard for me to be suspicious that way, but some things have transpired this year, and those suspicions explain many things. Too many things. This is where allies become essential to help with boundaries and safety. The next night in Nashville I asked a friend to walk me back to the hotel, but what about the less obvious things? I need your help there too. 

The revelation Will Ferrell had about what life is like for his friend is something I wish everyone could understand. It was clear from his reaction how big a revelation it was for him, how his actions led to the threats on social media. Harper and I are both white trans women who transitioned relatively late in life and so enjoyed the benefits of our privilege for longer than we will live without them. It is so much worse for people at the intersection of race and gender!

I need your help to remain safe, beyond just my physical safety. Stand up to locker-room talk, gossip, social media pile-ons, bias at work, and the like. It will require you to step outside your comfort zone. I hope you can find the courage.

Saturday, September 14, 2024

Try Noodling on That


The name of this blog is a reference to words uttered by a friend of mine. He also added the word "noodle" to my vocabulary, which in this context is synonymous with thinking. Not to be confused with Noodle the pug, who prognosticated on what type of day we could expect to have.

Whether to myself or someone on my team, I often use the phrase "noodle on that" when the answer is non-trivial and analysis and logic are not yielding a solution. It can be relatively straightforward—I need an answer to this problem—or something more abstract—like ideation. Either way, there are times when looking for an answer is more like chasing something elusive. Finding the answer is often further compounded by some time pressure - I need to solve this by XX.

Enter Noodling.

Several times, the answer presented itself to me - but only when I took the pressure off myself. For instance, I was cramming for a math final, and the answer to a particularly gnarly calculus problem came to me in my sleep. It woke me up, presumably so I could write it down (ha). Another time, the answer came to me in the shower. Someone once attributed this to all the positive ions stimulating my brain. Whatever the reason, it's a thing - Shower Epiphanies.

As a manager, I have often suggested that someone take a break from trying to solve a problem directly and noodle on it instead. Give yourself a break. Let the pressure off. Give your subconscious a chance to work on it. My anecdotal data is that it works the majority of the time. This has the additional benefit of letting people work things out for themselves. My belief is that things feel better when people figure something out on their own, as opposed to me giving them the answer.

See you on comfy mountain. IYKYK.

Monday, September 9, 2024

A Brief History of Integrating


I was recently asked how I would integrate systems with similar functionality but different code bases and infrastructure. It got me thinking about how I have been solving the integration problem throughout my career and how it (not surprisingly) keeps coming up. My answer was: it depends on what you have and where you want to land. I have many questions: How much of the data is the same? How reliable is the data? Is there an appetite to reduce any duplication? What is the desired recency (age) of the data? Is there a goal to become more homogenous? Regardless of the answers, I have employed several strategies/technologies over my career.

One of the first implementations I worked on was a practical solution for a PC-based application to leverage COBOL code on the mainframe. The company had invested years in the code and was not ready (or able) to rewrite it. We built an LU6.2 gateway to call the mainframe and pass data laid out according to the COBOL copybook. This was such a common need that Microsoft later wrote an LU6.2 connector that creates a code proxy from the copybook for you. Here we are 24 years after Y2K, and I am willing to bet that the banking and finance sectors still run a bunch of COBOL.

During the Object-Oriented boom, there was a big push to build an Enterprise Service Bus (ESB). We talked about technologies like MQSeries, CORBA, and RPC as ways to implement an ESB. I never saw an ESB I liked, probably because so many were never finished. Instead, I watched enterprise architects trying to reconcile the whole enterprise's requirements into something for everyone. It's hard enough to nail this down for a single business line in an enterprise, much less many or all of them.

To some extent, we used databases as the point of integration. One system would be declared the data owner, and copies of that data would be made for use by others. At the time, I saw this strategy fail more often than it succeeded because the copy would become stale and not meet our customers' expectations. Nowadays, we make copies, but it's less about integration and more about scale. To succeed, we had to figure out how to work in the world of eventual consistency.

As I shared these examples, I couldn't help but think of a few other topics. But these are the ones I wanted to share. If you've been in this industry for any time, I'm sure you have your own integration history. Integration is a persistent problem, a challenge that we all face, and it's safe to say that it's not going away anytime soon.
