Matthew Prince’s Internet

Most days, I’m a fan of CloudFlare. The company provides much-needed protection against denial-of-service attacks and helps keep the internet functioning and responsive. But I found the actions of CloudFlare CEO Matthew Prince after the events in Charlottesville downright bizarre.

Cutting off his company’s support for the website Daily Stormer was odd. Not because I support the repulsive views of that neo-Nazi website, but because those opinions must have been known to Prince before last week. Perhaps Prince missed that day in school where they taught history. But Nazis are, by definition, violent and racist. After CloudFlare withdrew its protection, the site was taken down by hackers, an outcome Prince insists he anticipated. Finally, Prince declared his own actions to be “arbitrary” and “dangerous,” which they surely were.

Why does this matter? Governments around the world are starting to debate the need to regulate the internet like a utility. The internet is the public square of our era. Most political speech is now routed through routers, servers, and services provided by private companies. Under current law, those companies have the right to deny service to anyone at any time at their sole discretion. But a case could certainly be made that, given the crucial role of the internet in public debate, it should be open and available to all people regardless of the distastefulness of their views, just as your electricity provider can’t cut off service to a customer because it dislikes her political views.

Being regulated like a utility would certainly be bad for all the big tech players: utility regulation is more restrictive and puts limits on profitability. By drawing attention to his dangerous and arbitrary decision to kick Daily Stormer off the internet, Prince’s needless act of self-promotion reminds regulators that his company probably has too much control over people’s freedom of speech.
By his own words, Prince seems to agree. Like Cleavon Little taking himself hostage in Blazing Saddles, he begs us to protect the internet from dangerous men like himself.

In a nutshell: Attention-seeking CEOs should not be making decisions about the limits of democratic freedoms.

Design’s Network Effects

Most designers of digital interfaces still use Adobe’s Photoshop, despite a deep loathing of Adobe and its extortionate prices and monthly fees. Recently, new design tools like Sketch and Figma have emerged to challenge this dominance. In terms of features and utility, you can think of these tools as Google Docs and Adobe as Microsoft Office. Photoshop is a bloated and inefficient product tied to a business model that charges far too much for a software product. But Photoshop, much like Office, is the industry standard. Since most designers use Photoshop, most designers have to use Photoshop, or they will be unable to collaborate or work on existing design files. This is the definition of a network effect. But network effects can break both ways. If enough designers switch over to Sketch or Figma due to either cost or utility, the Photoshop monopoly will be broken and its market will disappear.

Why does this matter? Photoshop was always a crappy fit for user interface design. It was created for manipulating photos and was only repurposed by digital designers due to the absence of alternatives. But I think this time Adobe really is in a lot of trouble. Not only are Sketch and Figma natively built for digital design, but they both offer features better suited to this era of collaborative work and hybrid designer/developers. Adobe has spent far too long as the only game in town, and it is institutionally incapable of responding to the lower price points of these products.
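That break-both-ways dynamic is easy to make concrete. Here is a toy model of my own (all numbers illustrative, nothing Adobe- or Figma-specific): each designer values a tool at the share of peers using it minus its cost, so a cheaper challenger only starts snowballing once its share crosses a tipping threshold.

```python
def tipping_threshold(cost_incumbent, cost_challenger):
    # A designer prefers the challenger when its share s satisfies
    # s - cost_challenger > (1 - s) - cost_incumbent,
    # i.e. s > (1 + cost_challenger - cost_incumbent) / 2.
    return (1 + cost_challenger - cost_incumbent) / 2

def simulate(share, cost_incumbent, cost_challenger, rate=0.1, steps=50):
    # Each step, a fraction `rate` of designers drifts toward the
    # tool that currently offers them more utility.
    threshold = tipping_threshold(cost_incumbent, cost_challenger)
    for _ in range(steps):
        if share > threshold:
            share += rate * (1 - share)  # challenger snowballs
        else:
            share -= rate * share        # incumbent lock-in reasserts itself
    return share
```

With a pricey incumbent (cost 0.5) and a cheap challenger (cost 0.1), the threshold works out to a 30% share: start the challenger below that and it withers back toward zero; start it above and the incumbent’s lock-in unravels completely. The point of the sketch is only that network-effect markets don’t erode gradually; they hold and then tip.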
Inevitably, they’ll try to wring the last drop of profitability out of their current business model, relying on hidebound designers and large advertising agencies loath to retool and learn new programs. They’ll still make a lot of money. But now they have an expiration date.

In a nutshell: The network effects that protected Adobe’s profit margins are about to disappear.

Fake news begone!

No one likes fake news. Everyone agrees we should get rid of it. But no one seems to agree on a shared definition of what constitutes fake news. Functionally, it’s any news article you personally disagree with. Of course, all (most) of us can agree on the particularly egregious examples: Hillary Clinton was not running a child prostitution ring out of the basement of a pizza parlor that did not have a basement. So how do we eliminate these spurious non-stories before they start affecting things like presidential elections?

A group at BCG Digital Ventures has suggested a possible model for eliminating fake news in a proof of concept they call Geppetto. (Get it?) I confess I found the vagueness of their outline somewhat difficult to evaluate. Essentially, a “veracity engine” powered by machine learning evaluates whether an article is true or not, using continuous learning to get ever more precise evaluations of “true” or “not true.” Such a veracity engine would undoubtedly be wonderful, only it doesn’t exist, and they provide no framework for how they would train such an engine. Is it based on semantic quirks of untruths? Past articles by the same author/publication? A well-labeled infinite database of known facts? No detail is forthcoming.

In addition, they suggest that a human set of evaluators would also rank articles in a shared ledger using the blockchain. To which I have to wonder: why? If the veracity engine has already established veracity, what are these people doing? And what is preventing their biases from warping the rankings?
And why, apart from the buzzword value, are we invoking the blockchain?

Why does this matter? Not to get all First Amendment about this, but freedom of speech is kinda important. Yes, it is certainly bad if people knowingly spread untruths. But they do have a constitutional right to do so. And any time you are going to evaluate and potentially censor speech, including news articles, you had better have a transparent, fair, and even-handed model for doing so. People criticize Facebook for moving too slowly and too late on this issue. Heck, I’ve criticized them. But fake news is a fiendishly difficult problem, and their incremental and experimental approach is probably appropriate. Machine learning and the blockchain are wonderful technologies, but invoking them here leaves people with the impression that they can magically evaluate truth and falsehood. Not so.

In a nutshell: Fake news is going to be with us for a long, long time.
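For readers wondering what “training such an engine” would even involve, here is a deliberately naive sketch: a bag-of-words naive Bayes classifier fitted on a hand-labeled corpus. Every headline and label below is hypothetical, and this is my illustration, not Geppetto’s design; the point is that a “veracity engine” can only echo back whatever labels someone chose to feed it.

```python
import math
from collections import Counter

def train(examples):
    # examples: list of (text, label) pairs, label in {"true", "fake"}.
    # The hand-labeling itself is the hard, bias-prone part.
    counts = {"true": Counter(), "fake": Counter()}
    totals = Counter()
    for text, label in examples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    # Naive Bayes with add-one smoothing: pick the label whose
    # word statistics make the text most probable.
    vocab = set(counts["true"]) | set(counts["fake"])
    best, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical labeled corpus -- a real system would need thousands
# of examples and, more importantly, a defensible labeling process.
corpus = [
    ("official report confirms budget figures", "true"),
    ("senate passes budget bill", "true"),
    ("shocking secret they dont want you to know", "fake"),
    ("miracle cure doctors hate", "fake"),
]
```

Note what this makes plain: whoever curates the corpus decides what counts as “true,” which is exactly where the bias question raised above comes in. Nothing in the machinery evaluates truth; it evaluates resemblance to past labels.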