Tag: software

Nov 29 2017

Apple Platform Layer Bugs

The word “platform” used to mean something in the technology industry. Like many other words, it has been applied to so many different things that it has become almost meaningless.

Yesterday, when I started seeing stuff about the macOS High Sierra blank root password bug, I took a deep breath and clicked on the first link I saw, hoping it was an Onion article. I read it, picked my jaw up off the floor, and then said out loud “Someone at Apple got fired today.”

Then I wondered if that was true and realized it probably wasn’t. And, that someone probably shouldn’t be fired, but that Apple should do a very deep root cause analysis on why a bug like this could get out in the wild as part of an OS release.

Later in the day, I pulled up FaceTime to make a call to Amy. My computer sat there and spun on contacts for about 30 seconds before FaceTime appeared. While I shrugged, I once again thought “someone at Apple should fix that once and for all.”

It happened again a few hours later. Over Thanksgiving, I gave up trying to get my photos and Amy’s photos co-managed, so I finally just gave all my photos to Apple and iCloud in a separate photo store from all of Amy’s photos (which include all of our 25,000 or so shared photos). I was uninstalling Mylio on my various office machines and opening up Photos so that the right photo store would be set up. I went into Photos to add a name to a Person that I noticed in my People view, and the pretty Apple rainbow spun for about 30 seconds after I typed the first name of the person’s name.

If you aren’t familiar with this problem: if you have a large address book (like mine, which has around 20,000 names), autocomplete of a name or email address in some (not all) native Mac apps is painfully slow.

I opened up my iPhone to see if the behavior with my contacts was similar, and it wasn’t. iOS Contacts perform as expected; macOS Contacts don’t. My guess is that totally different people (or teams) work on code that theoretically should be the same. And one is a lot better than the other.

At this point, I realized that Apple probably had a systemic platform layer engineering problem. It’s not an OS layer issue (like the blank root password bug) – it’s one level up. But it impacts a wide variety of applications that should be cleanly abstracted from it (anything on my Mac that uses Contacts). And this seems to be an appropriate use of the word platform.
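For a sense of scale, here’s a throwaway sketch – synthetic data, nothing to do with Apple’s actual code – suggesting that even a naive linear prefix scan over 20,000 contacts takes a millisecond or two on a modern laptop. Whatever the Mac’s contact autocomplete is doing, the size of the address book alone can’t explain a 30-second spin:

```python
import random
import string
import time

# Build a synthetic address book of 20,000 two-part names.
random.seed(42)

def fake_name() -> str:
    return "".join(random.choices(string.ascii_lowercase, k=8))

contacts = [f"{fake_name()} {fake_name()}" for _ in range(20_000)]

# Naive autocomplete: a linear scan for a typed prefix.
start = time.perf_counter()
matches = [c for c in contacts if c.startswith("ma")]
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{len(matches)} matches in {elapsed_ms:.2f} ms")  # low single-digit ms
```

And a real implementation would use a prefix index rather than a scan, which makes the gap even more puzzling.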

Software engineering at scale is really difficult and it’s getting even more, rather than less, challenging. And that’s fascinating to me.

Jun 6 2016

More Coders. More Diversity.

If there’s one consistent concern I hear from the companies I work with, it’s the shortage of qualified tech talent. But just like in so many other areas, a Boulder entrepreneur has come up with a great idea to address the problem – one that not only adds to the talent pipeline, but also brings in more diversity, a personal passion of mine.

Too often, aspiring engineers who lack the funds to pursue a computer science degree from a university or take part in a bootcamp find themselves locked out of technology jobs, despite often severe talent shortages. Think about it: if you need to pay rent and buy groceries, it’s pretty tough to quit, or work part time, and pay either tuition or bootcamp fees. To address this, Heather Terenzio, founder and CEO of Boulder’s Techtonic Group, developed Techtonic Academy, an innovative solution in the form of Colorado’s first technology apprenticeship federally recognized by the Department of Labor. Rather than paying thousands in tuition or fees, qualified individuals can get their foot in the door to a tech career while earning a salary from their very first day.

Techtonic Academy provides underprivileged youth, minorities, women, and veterans with both the technical training and the mentorship to become entry-level software engineers and pursue a career in the technology field. It works like this: the program looks for people with an interest in and aptitude for tech but little or no formal training – think gamers, self-taught hobbyists, and the like – and puts them to work as apprentices. They work with senior developers to gain coding experience on real client projects under careful guidance and supervision while earning a livable salary. They are required to earn a series of accreditation badges covering coding skills and are constantly mentored in “soft” skills – things like being on time or working effectively on a team.

After about six months, graduating apprentices are qualified junior developers, ready to work. Some choose to stay at Techtonic Group, where they join teams building custom software, mobile applications, and content-managed websites, while others move on to Techtonic Group clients. If a client hires an apprentice, Techtonic does not charge a conversion fee, which can run into the thousands for a junior developer hired through a traditional recruiter.

As Heather told me, “I have an Ivy League education, but that’s not where I learned to code. I learned to code doing it on the job.” I think many software developers share that sentiment.

Heather welcomes all technology hopefuls and works hard to bring diversity to the program, recruiting women, veterans and those who aren’t in a financial position to quit work to pursue a degree or certificate. The benefits are obvious. Apprentices earn a living salary on their first day, and we as a tech community can support a program that puts more coders in the market with a keen eye toward diversity and opportunity while getting work completed.

Heather’s got a great idea and it gives all of us the chance to both find help on projects and add new, diverse talent to our community. Reach out to Heather if you’d like more information.

Sep 3 2015

Email Conventions and Why Email Clients Suck

There are two common email conventions in my world that I use many times a day in Gmail. I don’t remember where either of them came from or how much I influenced their use in my little corner of the world, but I see them everywhere now.

The first is +Name. When I add someone to an email thread, I start the email with +Name. For example:

+Mary

Gang – happy to have a meeting. Mary will take care of scheduling it.

Now, why in the world can’t Gmail recognize that and automatically add Mary to my To: line? If I needed to do “+Mary Weingartner”, that would be fine. Gmail is supposed to be super smart – it should know my address book (ahem), or even my most recently added Marys, and just get it done.

The other is “to bcc:”. Whenever I want to drop someone from an email chain, I say “to bcc:”. For example:

Joe – thanks for the intro. To bcc:

Pauline – tell me more about what you are thinking.

Then, I have to click and drag on some stuff in the address field to move Joe from the To: line to the bcc: line.

Dear Developers Working On Email Clients Of The World: Would you please put a little effort into having the email client either (a) learn my behavior or (b) add in lots of little tricks that are common, but not standard, conventions?
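To show how little machinery this would take, here’s a minimal sketch of both conventions – a toy contact list and hypothetical function names, not any real Gmail API:

```python
import re

# Toy address book; a real client would rank matches by recency and frequency.
CONTACTS = [
    "Mary Weingartner <mary@example.com>",
    "Mary Smith <msmith@example.com>",
    "Joe Gomez <joe@example.com>",
]

def suggest_additions(body: str) -> list[str]:
    """Find leading +Name lines and suggest contacts to add to the To: line."""
    suggestions = []
    for match in re.finditer(r"^\+(\w[\w' -]*)", body, re.MULTILINE):
        name = match.group(1).strip().lower()
        suggestions.extend(c for c in CONTACTS if name in c.lower())
    return suggestions

def apply_to_bcc(body: str, to: list[str], bcc: list[str]) -> None:
    """If the body says 'to bcc:', move a recipient from To: to bcc:."""
    if re.search(r"\bto bcc:", body, re.IGNORECASE) and to:
        # A real client would ask which recipient to move; the toy takes the first.
        bcc.append(to.pop(0))

body = "+Mary\n\nGang - happy to have a meeting. Mary will take care of scheduling it."
print(suggest_additions(body))  # both Marys; the client would confirm which one

to, bcc = ["Joe Gomez <joe@example.com>"], []
apply_to_bcc("Joe - thanks for the intro. To bcc:", to, bcc)
print(bcc)  # Joe moved to the bcc: line
```

The parsing is trivial; the real work is the product decision to bless informal conventions like these, plus a confirmation step so the client never adds or drops the wrong person silently.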

Dec 31 2014

Fundamental Software Problems That Haven’t Been Solved Yet

I hate doing “reflections on the last year” type of stuff so I was delighted to read Fred Wilson’s post this morning titled What Just Happened? It’s his reflection on what happened in our tech world in 2014 and it’s a great summary. Go read it – this post will still be here when you return.

Since I don’t really celebrate Christmas, I end up playing around with software a lot over the holidays. This year my friends at FullContact and Mattermark got the brunt of me using their software, finding bugs, making suggestions, and playing around with competitive stuff. I hope they know that I wasn’t trying to ruin their holidays – I just couldn’t help myself.

I’ve been shifting to almost exclusively reading (a) science fiction and (b) biographies. It’s an interesting mix that, when combined with some of the investments I’m deep in, has started me thinking about the next 30 years of the innovation curve. Every day, when doing something on the computer, I think “this is way too fucking hard”, or “why isn’t the data immediately available”, or “why am I having to tell the software to do this”, or “man this is ridiculous how hard it is to make this work.”

But then I read William Hertling’s upcoming book The Turing Exception and remembered that The Singularity (an idea first attributed to John von Neumann in 1958, not to Ray Kurzweil, who has more recently made it very popular) is going to happen in 30 years. The AIs that I’m friends with don’t even have names or identities yet, but I expect some of them will within the next few years.

We have a long list of fundamental software problems that haven’t been solved. Identity is completely fucked, as is reputation. Data doesn’t move nicely between things and what we refer to as “big data” is actually going to be viewed as “microscopic data”, or better yet “sub-atomic data” by the time we get to the singularity. My machines all have different interfaces and don’t know how to talk to each other very well. We still haven’t solved the “store all your digital photos and share them without replicating them” problem. Voice recognition and language translation? Privacy and security – don’t even get me started.

Two of our Foundry Group themes – Glue and Protocol – have companies that are working on a wide range of what I’d call fundamental software problems. When I toss in a few of our HCI-themed investments, I realize that there’s a theme that might be missing: companies that are solving the next wave of fundamental software problems. These aren’t the ones readily identified today, but the ones that we anticipate will appear alongside the real emergence of the AIs.

It’s pretty easy to get stuck in the now. I don’t make predictions and try not to have a one-year view, so it’s useful to read what Fred thinks since I can use him as my proxy AI for the -1/+1 year window. I recognize that I’ve got to pay attention to the now, but my curiosity is all about a longer arc. I don’t know whether it’s five, ten, twenty, thirty, or more years, but I’m spending intellectual energy using these time apertures.

History is really helpful in understanding this time frame. Ben Franklin, John Adams, and George Washington in the late 1700s. Ada Lovelace and Charles Babbage in the mid 1800s. John Rockefeller in the early 1900s. The word software didn’t even exist.

We’ve got some doozies coming in the next 50 years. It’s going to be fun.

Nov 25 2013

It’s An Agile World

My post on How to Fix Obamacare generated plenty of feedback – some public and some via email. One of the emails reinforced the challenge of “traditional software development” vs. the new generation of “Agile software development.” I started experiencing, and understanding, agile in 2004 when I made an investment in Rally Software. At the time it was an idea in Ryan Martens’ brain; today it is a public company valued at around $600 million, employing around 400 people, and pacing the world of agile software development.

The email I received described the challenge of a large organization when confronted with the kind of legacy systems – and traditional software development processes – that Obamacare is saddled with. The solution – an agile one – just reinforces the power of “throw it away and start over” as an approach in these situations. Enjoy the story and contemplate whether it applies to your organization.

I just read your post on Fixing the Obamacare site.

It reminds me of my current project at my day job: the backend infrastructure that handles all the Internet connectivity and services for a worldwide distributed technology, built by a team of 150 engineers overseas. The infrastructure is extremely unreliable, and since there’s no good auditability of the services no one can say for sure, but estimates of the failure rate for jobs going through the system range from 5% to 25%. For three years management has been trying to fix the problem, and the fix is always “just around the corner”. It’s broken at every level: week-long deployment processes, a 50% failure rate for deploys, and an inability to scale the service.

I’ve been arguing for years to rebuild it from scratch using modern processes (agile), modern architecture (decoupled web services), and modern technology (Rails), and everyone has said “it’s impossible and it’ll cost too much.”

I finally convinced my manager to give me and one other engineer two months to work on a rearchitecture effort in secret, even though our group has nothing to do with the actual web services.

Starting from basic use cases, we architected a new, decoupled system and chose one component to implement from scratch. It corresponds roughly to 1/6 of the existing system.

In two months we were able to build a new service that:

  • scales to 3x the load with 1/4 the servers
  • operates at seven 9s reliability
  • deploys in 30 seconds
  • was implemented by 2 engineers, compared to an estimated 25 for the old system

Suddenly the impossible is not just possible, it’s the best path forward. We have management buy-in, and they want to do the same for the rest of the services.

But no amount of talking would have convinced them after three years of being entrenched in the same old ways of doing things. We just had to go build it to prove our point.

Aug 28 2012

A Brain Transplant For Your Robot

Orbotix just released a new version of the Sphero firmware. This is a fundamental part of our thesis around “software wrapped in plastic” – we love investing in physical products that have a huge, and ever-improving, software layer. The first version of the Sphero hardware just got a brain transplant, and the guys at Orbotix do a brilliant job of showing what the difference is.

Even if you aren’t into Sphero, this video is worth watching to understand what we mean as investors when we talk about software wrapped in plastic (like our investments in Fitbit, Sifteo, and Modular Robotics).

When I look at my little friend Sphero, I feel a special connection to him. It’s like my Fitbit – it feels like an extension of me. I have a physical connection with the Fitbit (it’s an organ that tracks and displays data I produce). I have an emotional connection with Sphero (it’s a friend I love to have around and play with). The crossover between human and machine is tangible with each of these products, and we are only at the very beginning of the arc with them.

I love this stuff. If you are working on a product that is software wrapped in plastic, tell me how to get my hands on it.

Apr 26 2012

A Logical AND With @ In A Mainstream World

Irony alert: A lot of this post will be incomprehensible. That’s part of the point.

I get asked to tweet out stuff multiple times a day. These requests generally fit in one of three categories:

  1. Something a company I’m an investor in wants me to tweet.
  2. Something a smart, respected person wants me to tweet.
  3. Something a random person, usually an entrepreneur, who is well intentioned but unknown to me wants me to tweet.

Unless I know something about the person behind #3 or am intrigued by the email, I almost never do anything with #3 (other than send a polite email reply that I’m not going to do anything because I don’t know the person). With #1 and #2, I usually try to do something. When it’s in the form of “here’s a link to a tweet to RT”, that’s super easy (and most desirable).

There must have been a social media online course somewhere that told people “email all the people you know with big Twitter followings and ask them to tweet something out for you. Send them examples for them to tweet, including a link to your product, site, or whatever you are promoting.”

Ok – that’s cool. I’m game to play as long as I think the content is interesting. But the social media online course (or consultant) forgot to explain that starting a tweet with an @ does a very significant thing. Specifically, it scopes the audience to be the logical AND clause of the two sets of Twitter followers. Yeah, I know – that’s not English, but that’s part of my point.

Yesterday, someone asked me to tweet out something that said “@ericries has a blah blah blah about https://linktomything.com that’s a powerful explanation”. Now, Eric has a lot of followers. And I do also. But by doing the tweet this way, the only people who would have seen this are the people who follow Eric AND follow me. Not OR. Not +. AND.

Here’s the fun part of the story. When I sent a short email to the very smart person who had asked me to tweet this out, explaining that he shouldn’t start a tweet like this since it would be the AND clause of my followers and Eric’s followers, he jokingly responded with “that’s great – that should cover the whole world.” He interpreted my comment not as a logical AND but as a grammatical AND. And there’s a big difference between the two.

As web apps go completely mainstream, I see this more and more. Minor syntactical things that make sense to nerds like me (e.g., putting an @reply at the beginning of a tweet causes the result set to be the AND clause of your followers and the @reply’s followers) make no sense to normal humans, or marketing people, or academics, or – well – most everyone other than computer scientists, engineers, and logicians.
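For the nerds, here’s the whole thing in a few lines of Python – made-up follower sets, and a simplified model of how Twitter scoped leading-@ tweets at the time:

```python
# Hypothetical follower sets standing in for the real (much larger) ones.
brad_followers = {"alice", "bob", "carol", "dave"}
eric_followers = {"carol", "dave", "erin", "frank"}

# A tweet starting with "@ericries ..." shows up only for people who follow
# both accounts: the intersection, i.e., the logical AND.
and_audience = brad_followers & eric_followers   # {'carol', 'dave'} -- 2 people

# The grammatical "and" people imagine is closer to the union (logical OR).
or_audience = brad_followers | eric_followers    # 6 people

print(len(and_audience), len(or_audience))       # 2 6
```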

The punch line, other than don’t start a broadcast tweet with an @ if you want to reach the widest audience, is that as software people, we have to keep working as hard as we can to make this stuff just work for everyone else. The machines are coming – let’s make sure we do the best possible job with their interface while we can still influence it.

Dec 22 2011

Resistance Is Futile

Marc Andreessen recently wrote a long article in the WSJ in which he asserted that “Software Is Eating The World.” I enjoyed reading it, but I don’t think it goes far enough.

I believe the machines have already taken over and resistance is futile. Regardless of your view of the idea of the singularity, we are now in a new phase of what has been referred to in different ways, but most commonly as the “information revolution.” I’ve never liked that phrase, but I presume it’s widely used because of the parallels to the shift from an agriculture-based society to an industrial one, commonly called the “industrial revolution.”

At the Defrag Conference I gave a keynote on this topic. For those of you who were there, please feel free to weigh in on whether the keynote was great or sucked, and on whether you agreed, disagreed, were confused, mystified, offended, amused, or anything else that humans are capable of having as stimulus-response reactions.

I believe the phase we are currently in began in the early 1990s with the invention of the World Wide Web and the subsequent emergence of the commercial Internet. Those of us who were involved in creating and funding technology companies in the mid-to-late 1990s had incredibly high hopes for where computers, the Web, and the Internet would lead. By 2002, we were wallowing around in the rubble of the dotcom bust, salvaging what we could while putting energy into the new ideas and businesses that emerged with a vengeance around 2005 and the idea of Web 2.0.

What we didn’t realize (or at least I didn’t realize) was that virtually all of the ideas from the late 1990s about what would happen to the traditional industries that the Internet would disrupt would actually happen, just a decade later. If you read Marc’s article carefully, you see the seeds of the current destruction of many traditional businesses in the pre-dotcom-bubble efforts. It just took a while – and one more cycle, during which the traditional companies relaxed and said “hah – once again we survived ‘technology’” – for them to be decimated.

Now, look forward twenty years. I believe that the notion of a biologically-enhanced computer, or a computer-enhanced human, will be commonplace. Today, it’s still an uncomfortable idea that lives mostly in university and government research labs and science fiction books and movies. But just let your brain take the leap that your iPhone is essentially making you a computer-enhanced human. Or even just a web browser and a Google search on your iPad. Sure – it’s not directly connected into your gray matter, but that’s just an issue of some work on the science side.

Extrapolating from how it’s working today and overlaying it with the innovation curve that we are on is mindblowing, if you let it be.

I expect this will be my intellectual obsession in 2012. I’m giving my Resistance is Futile talk at Fidelity in January to a bunch of execs. At some point I’ll record it and put it up on the web (assuming SOPA / PIPA doesn’t pass) but I’m happy to consider giving it to any group that is interested if it’s convenient for me – just email me.

May 3 2010

My Obsession With The Product

For some reason I’ve been doing a lot of interviews lately.  In many of them I get asked similar questions, including the inevitable “what makes a great entrepreneur?”  When I’m on a VC panel, I’m always amused by the answers from my co-panelists, as they are usually the same set of “VC cliches,” which makes it even more fun when I blurt out my answer:

“A complete and total obsession with the product.”

The great companies that I’ve been an investor in share a common trait – the founder/CEO is obsessed with the product.  Not interested, not aware of, not familiar with, but obsessed.  Every discussion trends back toward the product.  All of the conversations about customers are really about how the customer uses the product and the value the product brings the customer.  The majority of the early teams are focused entirely on the product, including the non-engineering people.  Product, product, product.

And these CEOs love to show their product to anyone who will listen.  They don’t explain the company to people with PowerPoint slides.  They don’t send out long executive summaries with mocked-up screen shots.  They don’t try to engage you in a phone conversation about the great market they are going after.  They start with the product.  And stay with the product.

When I step back and think about what motivates me early in a relationship with an entrepreneur, it’s the product.  I only invest in domains that I know well, so I don’t need fancy market studies (which are always wrong), financial models (which are always wrong), or customer needs analyses (which are always wrong).  I want to play with the product, touch the product, understand the product – and understand where the founder thinks the product is going.

I don’t create products anymore (I invest in companies that create them), but I’m a great alpha tester.  I’ve always been good at this for some reason – bugs just find me.  While my UX design skills are merely adequate, I’ve got a great feel for how to simplify things and make them cleaner.  Plus I’m happy to just grind and grind and grind on the product, offering both detailed and high level feedback indefinitely. 

How a founder/CEO reacts to this speaks volumes to me.  I probably first noticed it when I met Dick Costolo at FeedBurner.  I am FeedBurner publisher #699 and used it for my blog back when it was “pre-Alpha”.  I had an issue – sent support@feedburner.com a note – and instantly got a reply from Dick.  I had no idea who Dick was, but he helped me and I quickly realized he was the CEO.  Over the next six months we interacted regularly about the product, and when he was ready to start fundraising, I quickly made him an offer and we became the lead investor in the round.  My obsession with the product didn’t stop there (as Eric Lunt and many of the other FeedBurner gang can tell you – I still occasionally email SteveO bugs that I find).

I can give a bunch of other examples like FeedBurner, but I’ll wrap up by saying that I’m just as obsessed with the product as the founders are.  And as I realize what results in success in my world, I get even more obsessed.  Plus, I really like to play with software.

Mar 4 2010

Are Apple’s Competitors Stealing Its Patented Inventions?

The Apple patent suit against HTC really riled up my friend Sawyer.  I wasn’t planning on posting another missive from him until next week, but I thought this was particularly timely given the public statement from Apple, including a specific quote from Steve Jobs about its competitors stealing its patented inventions.  Sawyer explains why this is simply inflammatory rhetoric that has no basis in fact or in the way patent law works.  He also makes the case – using this as an example – that patents stifle, rather than promote, innovation.  Enjoy.  And, after you read this, if you want a little “doesn’t this sound familiar” action, take a look at the Wikipedia page on Apple Computer v. Microsoft with regard to the GUI – with a little Xerox tossed in as a side dish.  And now, my friend Sawyer.

The other day Apple announced that it is suing HTC for infringing several patents related to the iPhone, including patents on the UI, i.e., software patents.  As part of the press release, Steve Jobs said the following (emphasis mine):

“We can sit by and watch competitors steal our patented inventions, or we can do something about it. We’ve decided to do something about it. We think competition is healthy, but competitors should create their own original technology, not steal ours.”

The rhetoric of "stealing" and "theft" surrounding accusations of patent infringement is bothersome, both because substantive patent law doesn’t embrace the concept of theft, and because most patent cases don’t involve credible allegations of actual theft or even copying. 

Plaintiffs try to use "theft" to inject a moral element into patent suits, but there is no substantive moral element in patent law.  The point of a patent is to grant a monopoly in exchange for public disclosure, and patentees want people to use the ideas (in exchange for license fees), otherwise the public disclosure aspect is pointless.  The Constitution doesn’t authorize patent or copyright law for moral reasons either:  “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries…” 

The only doctrine in patent law that shades into morality is willful infringement.  The shifting law on willful infringement will be the subject of another post, but in any case, willfulness isn’t a morality doctrine; willful infringers aren’t bad people, they are just people who decided to continue possibly infringing because they didn’t think they infringed, thought the suit was frivolous, or thought they would lose more money by stopping, at least in the short term.  The doctrine is set up to penalize people who recklessly infringe by potentially trebling damages, and so acts as an incentive to settle suits and pay licensing fees.  This isn’t a moral calculus, it’s a utilitarian one.

Willfulness, however, acts as the main vehicle for plaintiffs to inject moral rhetoric and copying allegations into a patent suit.  “Copying” in a patent law sense means that an infringer either literally read the patent and copied what the claims said wholesale, or saw a product embodying the patent and copied the patented aspect of it.  Copying in patent law does not mean “theft.”  Theft of secret ideas is actionable under trade secret law, and I know of very few cases pairing the two.  Literal copying is often actionable under copyright law as well.  Isn’t it the case though that patentees want people to copy?  Doesn’t copying mean that their ideas are spreading and being used for follow-on innovation, which are good things?  The issue if anything is proper compensation, not the act of copying itself.

Unsurprisingly, we don’t usually even get into copying as a consideration.  A paper by Mark Lemley and a good blog post titled Patent defendants aren’t copycats show that the vast majority of patent cases don’t involve an assertion of copying (and we’ll have to see if the Apple case does).  Putting in place an independent invention defense to infringement, as suggested recently by Brad Burnham at Union Square Ventures, would potentially wipe out 90% of patent cases.

Setting all of that aside, in my experience, when plaintiffs do allege copying, particularly in software cases, the allegations are uniformly flimsy – bogus litigation tactics aimed at getting “black hat” stories about defendants told to juries.  And it’s a great tactic, because juries are people, and regardless of the merits they like to stick it to the bad guys, especially where the merits are boring patent law issues that no one understands anyway.

Now we have one of the biggest and most innovative companies out there, Apple, trying to sue one of its competitors out of the market with patents, and using the false rhetoric of theft to justify the suit.  This underscores that the patent problem isn’t just “trolls” versus “big companies” – it’s big companies using patents to sue others in the same market into oblivion, cutting off competition and destroying innovation.  Imagine: if HTC weren’t making great Android phones to compete with the iPhone, would Apple be incentivized to significantly improve its products?  Would we have no iPhone if patents didn’t exist?  I think it’s fairly obvious that in the absence of patents, we would have more competition and more innovation here, not less.

In any case, the takeaway for reform advocates is that we need to shift the rhetorical frame in discussions around patents from the moralizing of “stealing” and “theft” to what the issue actually is: a dry utilitarian calculus about what outcomes are better for innovation and competition.  When we think about the issues in that frame, it sort of takes the wind out of Steve Jobs’ sails, doesn’t it?
