Downwingers and dilettante-ism: Bryan Appleyard on Futurism


Appleyard is a British journalist and contributor at The New Statesman, a generally well-respected if left-leaning political and cultural magazine that’s been around since 1913 and has hosted the musings of such luminaries as John Maynard Keynes, Richard Dawkins, and Christopher Hitchens. But this is the nature of modern journalism, so it’s not surprising that The New Statesman recently gave us yet another lesson that no matter where, or from whom, you imbibe your news and information, it’s a process best done with a critical eye. That’s why history, despite the best efforts of arts and humanities deans, school boards, and football coaches, is not going anywhere soon. It is the discipline, the most successful discipline, I think, that teaches evidence-based inquiry, critical thinking, and the big picture. Because without it, you might read this new article by Bryan Appleyard and think he knows what he’s talking about. Caveat: because this is the internet, I’d like to head off the professional offense-takers by saying I don’t think Appleyard himself is stupid. Many of his pieces are good. But smart, generally thoughtful people can say stupid things. Except me. I never say stupid things.

Check out the article here. In a nutshell, he says futurists are all naïve utopians and we should never listen to them, because doing so robs us of our humanity:

 http://www.newstatesman.com/culture/2014/04/why-futurologists-are-always-wrong-and-why-we-should-be-sceptical-techno-utopians

To begin: there are plenty of legitimate criticisms to be leveled at technoprogressives (a far more accurate, descriptive, and less disingenuous term than Appleyard’s deliberate “techno-utopian”), so there’s little point in muddying the waters by inventing illegitimate ones through poor thinking and bad writing. Insufficient consideration of the sociocultural implications of the posthuman future, occasional prophetic tendencies, a reliance on trite, meaningless phraseology (“the future is now”), a willingness to pin down a date past which “everything” will change: these are quality criticisms to be made, and yet don’t all movements have such elements within them? So shouldn’t we be careful about generalizations?

Appleyard’s polemic is really a mess of logical fallacies, bad analogies, and clumsy attempts at ad hominem. To take but a few:

1)      He equates Ray Kurzweil and Michio Kaku, two well-educated and thoughtful individuals, with Malcolm Gladwell (whom no one with legitimate experience in any of the areas Gladwell ventures into takes too seriously, though he is fun to read and no doubt a smart guy), with caricatured versions of Larry Page and Peter Thiel (the former of whom occupies probably the most compromised position of anyone in the discourse around technoprogressivism, and the latter of whom is, plainly and simply, a bombastic, self-important dilettante), and with some anthropomorphized version of the TED Talks (which Appleyard and I will agree are mostly white noise obfuscating the real signal). Such easy comparisons betray a lack of nuanced consideration of the vast differences between these individuals, and do the unfamiliar reader no favors at a time when science is already under attack by ignorance and misdirection on so many fronts.

2)      Appleyard also attempts to equate futurism with a religion, and the singularity with the Rapture (using the 2045 date as a straw man). It’s completely laughable if you go even one step beyond the most superficial “structural” similarities Appleyard trots out as hard proof. Plus, anything that smacks of religion has to be bad, right?

3)      After criticizing Michio Kaku for uncritical use of language regarding DARPA’s mission, Appleyard spends the rest of the essay calling Kaku and Kurzweil manic, foaming-at-the-mouth, and poppy (because anything the public likes can’t be intelligent, apparently), along with a host of other less-than-subtle pejoratives designed to get you on his team.

And then there are statements like this:

“Neuroscientists now routinely make claims that are far beyond their competence, often prefaced by the words “We have found that . . .” The two most common of these claims are that the conscious self is an illusion and there is no such thing as free will . . . The first of these claims is easily dismissed – if the self is an illusion, who is being deluded? The second has not been established scientifically – all the evidence on which the claim is made is either dubious or misinterpreted – nor could it be established, because none of the scientists seems to be fully aware of the complexities of definition involved. In any case, the self and free will are foundational elements of all our discourse and that includes science. Eliminate them from your life if you like but, by doing so, you place yourself outside human society. You will, if you are serious about this displacement, not be understood. You will, in short, be a zombie.”

And all of a sudden neuroscientists are a monolithic entity who are, en masse, incapable of recognizing astonishing logical non sequiturs that render everything they do idiotic.

So what’s really going on here? What’s with the tone and substance of this piece? I think Appleyard is afraid. He’s afraid of the future (though he may not want to admit it), and as such is looking to the past to calm himself down. Ehrlich (whose Population Bomb came out in 1968, before the full implications of Borlaug’s dwarf wheat, then taking shape in the early and mid-1960s, could be realized) and Somer and all the others were wrong, and so the current generation of futurists has to be wrong too, right? Check out this statement from the piece: “We are, it is said, on the verge of mapping, modelling and even replicating the human brain and, once we have done that, the mechanistic foundations of the mind will be exposed. Then we will be able to enhance, empower or (more likely) control the human world in its entirety. This way, I need hardly point out, madness lies.”

The fact is this entire piece is really just a regurgitation of Max Dublin’s twenty-five-year-old Futurehype, which was a far better critique of the worst elements of the futurist tendency. Appleyard’s piece, in fact, reads like that of a downwinger. But I share Appleyard’s frustration with the general unhelpfulness of “technological chatter,” the kind that’s heavy on fluff and language and light on hard evidence. It’s why I wonder how books like his own Aliens: Why They Are Here and How to Live Forever or Die Trying, the latter of which is promoted on the dust jacket with claims that it is “Funny, thought-provoking and often profound, it manages to grapple with the big issues of existence without blinding the reader with science,” get published. Because thanks, Appleyard. I wouldn’t want to be blinded with science.


Wherein I try some Math (Flamestower edition)


Thank you to everyone who has in one way or another found themselves at the slowlorisblog during its two-week launch extravaganza. I hope you continue to find it interesting enough to visit on a weekly basis as we settle into a rhythm here. To start our regular programming, I’d like to do a little math. Not really my wheelhouse, but occasionally something piques my interest and provides a fun diversion until I realize I’m out of my depth.

To business: a few months ago I learned about the Kickstarter for Flamestower, an off-the-grid way to charge your cell phone using a plain vanilla fire:

http://www.kickstarter.com/projects/flamestower/flamestower-charge-your-gear-with-fire?ref=live

This is certainly not the first MacGyver-style charger on the market, nor is it the first or best to use thermal energy to generate electricity. But, in passing it along to the group of friends with which I game each week, one happened to ask: “How many of these would we need to run 3 desktop computers playing Sid Meier’s Civilization V?” Excellent question, I thought. I figured that with my high-school-level math skills, I could put away this question in twenty minutes. Two hours later, far more humbled, and finally finished, I concluded I needed more math in my life. In any case, below is my working of the problem, because maybe someone out there has once thought something like “How many solar backpack chargers would I have to daisy-chain together to keep PrimeGrid running after the apocalypse?”

________________________________________

The Flamestower charges via USB 3.0 (thank god it’s not 2.0). At 5 volts and 2 amps, the most USB 3.0 is rated for is 10 watts, so the short answer is you’re looking at 65 of them per computer, if that computer is running a 650-watt PSU. So my two friends and I would need 195 between us. The next logical, and far more interesting, question is how much wood you would need to power it all. This is where we go down the rabbit hole.
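(That short answer is simple enough to script. Here’s a minimal sketch in Python, using the figures assumed above: a 10-watt ceiling per unit, a 650-watt PSU, and three machines.)

# Units per computer, using the post's assumed figures.
WATTS_PER_UNIT = 5 * 2   # USB 3.0 ceiling assumed here: 5 V x 2 A = 10 W
PSU_WATTS = 650          # one desktop's power supply rating
COMPUTERS = 3

units_per_pc = PSU_WATTS / WATTS_PER_UNIT
print(units_per_pc)                 # 65.0 units per machine
print(units_per_pc * COMPUTERS)     # 195.0 units between the three of us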

The kickstarter doesn’t give exact specs, but here’s what we know.

1 calorie = 4.1868 joules (so 1 calorie/sec = 4.1868 watts)

1000 g (1 kg of oak, a standardized amount of wood one might collect) * 0.00048 (specific heat of oak, in cal/g·°C) * 482 (combustion temperature of oak, in °C):

= 231.4 calories contained in the wood

231.4 calories * 4.1868 J/cal ≈ 969 joules, which works out to 969 seconds of power at 1 W. But this also assumes 100% efficiency.

Assuming a thermal efficiency of 8% (this is the top end of efficiencies for a thermoelectric generator):

= 77.5 seconds of power at 1 watt, or 25.8 seconds at 3 watts (since we want to run each unit at full output and minimize the number we’d have to buy).

So, for every 1 kg of wood we collect, we can power this thing for 25.8 seconds. Weak. But it gets worse.

 

We need 1,650 watts for a 10-hour game of Civ 5 (presumably 550 W of actual draw per machine, a bit under the PSU rating).

10 hours = 36000 seconds.

We need 1650 watts continuously.

550 units (the minimum number) pumping out 3 watts continuously, each burning 1 kg of wood, last 25.8 seconds per refueling. So we’d consume:

550 kg of wood per refueling cycle * roughly 1,394 cycles (36,000 seconds / 25.8 seconds per cycle):

≈ 766,500 kg of wood. Off the grid, indeed.
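(If you’d rather not chase the arithmetic through prose, here’s the whole chain as a minimal Python sketch. The oak figures, the 3-watt per-unit output, and the 8% efficiency are the working assumptions from above, not vetted values.)

# The full wood-consumption chain, using the constants assumed above.
CAL_TO_J = 4.1868            # joules per calorie

mass_g = 1000                # 1 kg of oak per refueling
c_oak = 0.00048              # assumed specific heat of oak, cal/(g*C)
t_combust = 482              # assumed combustion temperature of oak, C
energy_j = mass_g * c_oak * t_combust * CAL_TO_J   # ~969 J per kg

usable_j = energy_j * 0.08   # 8% thermoelectric efficiency -> ~77.5 J per kg
secs_per_kg = usable_j / 3   # each unit runs at 3 W -> ~25.8 s per kg

units = 1650 / 3             # 550 units to supply 1,650 W
game_secs = 10 * 3600        # a 10-hour session = 36,000 s

wood_kg = units * game_secs / secs_per_kg
print(f"{wood_kg:,.0f} kg of wood")                # ~766,500 kg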

________________________________

The only problem I ran into that I can’t resolve is that, while the thermoelectric generator runs at 8% efficiency, this doesn’t include the heat lost to the surrounding air, which I assume is a lot. So this experiment is assuming that if you can find a way to burn a certain mass of wood at an exactly controlled rate, you could also devise a way to minimize heat loss to the convecting air. Otherwise, you’d probably have to multiply that number by 100 or something on the assumption that only 1% of a fire’s thermal energy gets trapped by the Flamestower.

And that’s that. I’m exhausted.

Feel free to point out any mistakes you see, but be warned I reserve the right to incorporate your corrections, delete your comment, and pretend I knew what I was doing all along.

Morality and Chimeras in a Posthuman World


Two scenarios from which to begin this discussion:

1)      Someone straps a computer onto the brainstem of Merriweather the Chimp in an experiment to translate her brainwaves to speech, and develops sophisticated software for interpretation. It turns Merriweather into a chimp-borg: she develops the ability to enter a discursive space not just with trainers who’ve learned ASL (interaction the public has largely declined to recognize as legitimate exchange on equal footing), but with humanity at large, and in her own voice. And she tells humanity of her thoughts, and fears, and dreams. She hopes, she laughs, she wonders, and she cries. She is, by all the measures we administer, a moral person. Right? Or no?

2)      Or how about one that, while less immediately clear, will probably happen first: it looks like chimps are going to be granted “personhood” status in the next 10 years or so. This will mean that, legally (and ethically, as far as the law goes), they have to be treated as humans (sidebar: this doesn’t mean they will have to be treated as equal to humans in all capacities. Rather, it will be an instantiation of law, informed by science, which “fills” chimps as “legal vessels” with rights). At the same time, it will be the first definitive act acknowledging that humanity doesn’t have a monopoly on moral instantiation. So: chimps are granted personhood status and are the moral equals of humans. Then someone takes stem cells from the brain of a chimp and implants them into a dog fetus. The dog doesn’t develop any morally relevant capabilities (cognition, etc.), but the cells came from a moral being. And we’ve said a chimp is a legal, ethical, and moral person, just like a human. And in the past, moral philosophy (which directs juridical philosophy) has said that if it comes in part from a moral being, it’s morally equal. So what is this dog, then? A moral being? Or not?

Unless you’re someone incapable of thinking rationally, soberly, and with self-reflection, it’s clear that morality and moral frameworks are going to be increasingly contested spaces during the twenty-first century, especially as genetics continues its foray into splicing and transfection and we enter fully the era of the posthuman. The creation of chimeras is a rich, exciting field of inquiry and therapeutics. It is, without qualification, one of the next frontiers of genetics as well as the philosophy of science.

Until now, the standard operating procedure for deciding whether an other-than-human animal is morally relevant has relied on anthropocentric cell-origin arguments, i.e., if it came from a human, the chimera attains morally relevant status (“morally relevant status” just means we have to treat it like it’s human when it comes to questions of morality; the operative word is “relevant”). So, at this juncture, if human cells were used, the new animal is a chimera and the moral equivalent of a human being. If no human cells were used, it is not.

But that position is becoming increasingly nebulous thanks to advances in genetics and experimental technique, and thus increasingly difficult to defend. See the two examples above. Moral philosophers are, because of this, running into an increasingly difficult problem to parse: how do we treat chimeras whose cells originate from more than one species?

It has become clear, in other words, that we need a more nuanced framework for defining moral relevancy, or we run the very real risk of violating not only philosophical boundaries but, as any good lawyer will tell you, legal ones as well. After all, jurisprudence has been in the past, and remains today, informed and even directed by political and moral philosophy. The exciting thing for historians of science is that, in a post-Enlightenment world, moral and political philosophy has itself seen the replacement of previous vocabularies and epistemologies of religion with vocabularies and epistemologies of science.

One solution on offer gets around the cell-origin problem by considering capacity in a more complex way instead. Monika Piotrowska of Florida International University suggests a two-fold approach. If you take brain stem cells from a human in one case and inject them into a mouse, and in a second case take brain stem cells from a chimp and inject them into a mouse, you’ve (arguably) got chimeras with indistinguishable morally relevant capacities (because they are both capable of, for instance, rationality or sentience, and thus we need to treat them as moral equals).

But what if, she asks, the cell transfer doesn’t result in the acquisition of a distinguishable morally relevant capacity (if you didn’t transfer brain stem cells, or the experiment was not concerned with sentience or rationality)? You still need to consider moral capacity. So how do you do it?

This is where Piotrowska suggests cell origin can still play a role. If the cells came from a phylogenetically morally relevant origin (like humans), then you can still grant moral relevance to the chimera.
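(Piotrowska’s two-fold approach is, at bottom, a small decision procedure. Here’s a toy encoding in Python; the function name, labels, and categories are mine for illustration, not hers.)

# Toy sketch of the two-step test: capacity first, then cell origin.
MORALLY_RELEVANT_ORIGINS = {"human"}   # the post suggests "chimp" may soon join

def morally_relevant(chimera: dict) -> bool:
    # Step 1 (capacity): if the cell transfer produced a distinguishable
    # morally relevant capacity, that alone settles the question.
    if chimera["acquired_capacity"]:
        return True
    # Step 2 (origin): otherwise fall back on cell origin; cells from a
    # phylogenetically morally relevant source confer moral relevance.
    return chimera["cell_origin"] in MORALLY_RELEVANT_ORIGINS

# Scenario 2 from above: chimp brain cells in a dog, no new capacities.
dog = {"acquired_capacity": False, "cell_origin": "chimp"}
print(morally_relevant(dog))   # False today; True once chimps count as persons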

Some philosophers have a problem with this approach because it retains an anthropocentrism and relies on vague definitions of “easy-to-determine” and “difficult-to-determine.”

I agree with this criticism, not least because the approach falls apart completely when you consider non-organic intelligences, like AI. The larger reality when it comes to other-than-human animals is that there will be very little reason, outside the subdisciplines that make up moral philosophy, to construct any kind of hierarchy or dichotomy at all once we are no longer measuring other-than-human animals for our dinner plates and work harnesses. In a world of synthetic protein and cheap, universal, open-source robotics, all other-than-human animals will enjoy “protected from” status, and history, as the general public understands it, will regard us in this particular instance as having discussed the symptoms rather than the source of a larger problem.

Monika Piotrowska, “Transferring Morality to Human–Nonhuman Chimeras,” The American Journal of Bioethics 14(2): 4–12, 2014.

DOI: 10.1080/15265161.2013.868951

*image credit “Young Family” by Patricia Piccinini