Category Archives: technology

Should everyone learn to program?

A few weeks ago I read a blog post by Jeff Atwood about this topic. You can probably guess from the title of the post, "Please Don't Learn to Code", that Atwood would answer "no" to the question I've posed above. I've thought about Atwood's article for a few weeks, and I've come to the conclusion that I disagree: I think everyone should know how to program. Some programming experience is a valuable tool in our modern world, where computers are ubiquitous.

Atwood starts by ridiculing NYC Mayor Mike Bloomberg's 2012 New Year's resolution to learn to code. Atwood is absolutely right here; this is a dumb resolution. But that's not because programming would be useless for Bloomberg to learn. It's because there are only 24 hours in a day, and the best use Bloomberg can make of them is to learn things that will help him do his job better. If Bloomberg were, say, 15 years old again and had a ton of free time on his hands, it would probably be worthwhile for him to learn programming. Not right now, though.

Atwood then says, "I would no more urge everyone to learn programming than I would urge everyone to learn plumbing. That'd be ridiculous, right?" Plumbing is an interesting choice for comparison. Just like computers, we interact with plumbing every day, although perhaps not to the same extent. But you know what? It's not ridiculous. I think everyone should know some plumbing: enough that we're not completely helpless if the sink is blocked. I'm not saying we all need to know enough to become professional plumbers; there's nothing wrong with calling a plumber if you have a big plumbing job, even if it's something you could fix yourself with sufficient expertise. I probably could replace the motor on my sump pump if it fails, but I'd much rather pay someone to stick their hands into smelly pits, and I'd much rather be absolutely sure the job is done right. With something simple like plunging a blocked sink, though, you should know how to do that yourself. Furthermore, it's a good idea to have a basic idea of how the plumbing in your house works; this helps prevent you from doing dumb things like trying to flush your garbage down the toilet.

With programming, it’s the same sort of thing. I don’t necessarily think that everyone should major in computer science in college, but if they have a basic understanding of programming, it should help them feel more empowered when they need to figure out how to do something, and it might help them understand why some things don’t work the way they think they should.

Atwood writes, “Don’t celebrate the creation of code, celebrate the creation of solutions.” I agree completely. Teaching someone how to program should not be a matter of getting them to memorize keywords and syntax, just like teaching someone mathematics should not be a matter of getting them to memorize formulae and tables. An introductory programming course should be primarily about teaching students how to solve problems, teaching them how to think logically, with the language syntax being secondary. (This is what mathematics classes should be about too. All too often they aren’t, though, but that’s an issue for another day…)
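To make this concrete, here is a minimal sketch of the kind of exercise I have in mind (my own example, in Python; Atwood's post contains nothing like it). The point is the chain of reasoning, not the keywords: take the largest coin as many times as it fits, then repeat with whatever is left over.

    # Count out change using the fewest standard coins.
    def make_change(amount_cents):
        coins = [25, 10, 5, 1]  # quarters, dimes, nickels, pennies
        counts = {}
        for coin in coins:
            # how many of this coin fit, and what remains afterwards
            counts[coin], amount_cents = divmod(amount_cents, coin)
        return counts

    print(make_change(87))  # {25: 3, 10: 1, 5: 0, 1: 2}

A student who can explain why the largest-coin-first strategy works here has learned something about problem-solving that has nothing to do with Python syntax.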

In a world where computers are becoming ever more pervasive, it is important to have a basic understanding of how people get them to work, and I think the best way of getting that understanding is to do a bit of programming. Not everyone should become a programmer for a living; far from it. But having an idea of how it's done will be invaluable in the world of tomorrow.

Call of Duty: Modern Warfare 3

Call of Duty: Modern Warfare 3 was released earlier this week. Based on the sales figures (the Xbox 360 and PS3 versions of the game are the #1 and #2 video game bestsellers on Amazon.com, and were well before the game was released), it appears that fans have forgiven and forgotten the buggy multiplayer mode of Call of Duty: Modern Warfare 2 (although a better-quality Call of Duty: Black Ops was released in the intervening time).

There are two interesting trends here. The first is the idea of the video game as phenomenon. The game was promoted as heavily as a blockbuster movie, at least to its target audience. Thousands of people lined up on Monday night to buy the game when it went on sale at midnight Tuesday. Launch parties were held at over 13,000 retailers worldwide, according to Activision. This isn't the only game released in this manner; Battlefield 3 was released last month, also to wide acclaim.

The other interesting trend is the move towards multiplayer games. Certainly that’s nothing new, but it’s hard to imagine that a lot of people buy CoD:MW3 for the single-player game, which is actually pretty boring. It’s the ability to play multiplayer that really interests the people that buy this game. Originally, video games were social games. You would go to the arcade with your friends and play there. The video game console made games a little less social; you would sit at home by yourself for hours. Now, with online play, these games are becoming more social again.

These two trends are related; the common theme is being able to share experiences, whether it's the experience of waiting in line to buy the game or the experience of playing it. Will we see more games that essentially sell their buyers experiences? Only time will tell.

Dennis Ritchie 1941–2011


Dennis Ritchie, the creator of UNIX and C, died last week at age 70, only a few weeks after the death of Steve Jobs. I found it quite curious, however, that Jobs' death was front-page news in most newspapers, while six days on I have yet to see Ritchie's death mentioned in any of the newspapers I tend to read. Did anyone happen to see it in their local newspaper?

It's very curious that the mainstream media pounced on one death but were silent on the other. Ritchie had a much more significant impact on modern computing than Jobs did. Almost every modern non-Microsoft OS is a direct descendant of UNIX. These descendants can be found in routers and servers all across the Internet, in Android phones, in Mac computers, even in your TiVo. C is one of the most widely used programming languages ever. So why not pay more attention to Ritchie? One could argue that, if he hadn't invented C and UNIX, someone else would have come along and invented, say, PL/2 and MULTOS, and the world would be more or less as it is now. Probably true, but on the other hand I'm sure someone would have invented something similar to the iPod and iPhone and iPad even without Jobs.

So maybe it’s just the “cool” factor and it has nothing to do with substance. It’s interesting where the priorities of the mainstream media are.

Earth: Population 7 Billion

The Earth, shown as a globe, from space.

I saw a news article recently stating that the population of the Earth is going to hit 7 billion in two weeks' time. The emphasis of the article, however, was not so much on interesting statistics about population as on birth control in developing countries, which seems to be a big thing now. This emphasis on birth control strikes me as at least a teeny bit imperialistic and hypocritical. How come there was no emphasis on birth control when the populations of developed countries were skyrocketing? If Europeans have more kids than they have space for, they can kick the natives off other continents and live there, but you guys, no, you'll just have to start acting responsibly.

Certainly attempts at population control aren't the only imperialist activities that developed countries engage in. If you live in the developed world, you probably get your electricity from coal or nuclear or hydroelectric power plants, but attempts to make lives better in the developing world by building those same kinds of plants are criticized as being too polluting, too risky, or spoiling virgin rivers. Us, we'll use whatever technology we want, but you, you'll just have to use technologies that don't work too well. Another example is the use of DDT. In the past, developed countries used it to eliminate their own malaria problems. What if a developing country wants to use it now for the same purpose? Tough, it pollutes.

Let's face it: most organizations, corporations, and individuals in developed countries are not really all that interested in bringing the standard of living in, say, Africa anywhere near what it is in developed countries. The developing countries will have to do it for themselves. And, since many of them are not all that rich in natural resources, they'll need all the human capital they can get their hands on. One illustration of what human capital can do can be found in the book The Boy Who Harnessed the Wind. It sounds like it might be a Stieg Larsson book, but it isn't; it's a true story about William Kamkwamba, who built a pair of windmills in his village in Malawi (to run a few lights and a pump for a well) from spare parts. Granted, this accomplishment didn't raise the standard of living in his village to developed-country status, but if Africa is going to become great, it needs more great minds like his. The only way to get more great minds is to get more minds, period. Africa needs all the people it can get.

If the developed countries don’t like that, they’ll need to start thinking in ways that benefit those in developing nations. Right now, there are real benefits for residents of developing countries to have children: They provide useful labour on farms, and they ensure a reasonably secure retirement. If we want people in developing nations to have fewer children, we’ll need to provide similar benefits. Are we going to start helping those people who responsibly only had a few children, and, due to their children having died, moved away, or whatever, are now indigent in their old age? Are there even any charities that support such people? Face it, pictures of old people don’t really tug at the ol’ heartstrings the same way that starving children do. However, this is the sort of thing that we need to think about.

If we don’t start thinking in those ways, the Earth’s population will rise to 8 billion, 9 billion, 10 billion, 11 billion, and so on. However, with the rise of human capital in developing countries, I’m sure that they’ll be able to solve the problems of rising population in ways that we in developed nations can’t. Certainly there are no other continents for them to move to. However, perhaps they’ll be able to find more efficient ways of using the space on the Earth that we already have, or of creating brand new places for people to live (maybe even in outer space? Who knows.), or of managing the problem in ways we can’t even think of yet. But we need those people.

Google+ and Real Names

Google+ has recently launched to rave reviews. One interesting thing about the TOS for Google+ is that it requires real names to be used. OK, no big deal. I wasn't aware this was actually a big problem, but apparently it is for some. My response would be that, if you don't like Google's TOS, don't use Google+. Google owns the site, and they can decide under what terms people can use it.

That’s just the way things are.  Over the past decade or so, the content on the Internet has been increasingly dominated by larger entities.  If you’re looking for information on, say, tigers, you’ll probably go to Wikipedia, instead of the homepage of some tiger fanatic or tiger researcher.  If you’re looking to connect with people, you might use Facebook or Google+ instead of decentralised newsgroups or mailing lists.  You might post on Twitter or Tumblr instead of on your own blog.  The people that run these large websites certainly have the right to insist on certain ways of behaving; they have their reputations and profits to protect.

Having said that, I think there are some advantages in independence and competition on the Internet. Take this blog, for example. It is not a page on Facebook or some other large Internet corporation's site, so I don't have to adhere to whatever terms of service such a site imposes. Obviously, this blog is hosted by an ISP, which certainly has the right to insist that I use my web space in a way that won't get it into legal trouble, but if it were to impose overly weird terms of service, I would probably move my blog to another ISP.

Obviously, this model isn’t useful for everything.  If you want to keep updated on what’s going on, you probably don’t want to check a thousand different sites for this information.  There’s also benefit in standardisation.  A social network isn’t very useful if everyone uses a different social network.  Hence enormous corporations are required to run these sites.

However, I think it’s useful to keep in mind that not everything may need to be put on mega-sites.  I think there’s a place for them (and a very big place at that), but I don’t think they should be the entirety of one’s Internet experience.

Thoughts on the Terrorist Attacks in Norway

Is it possible to prevent terrorist attacks such as the ones that happened in Norway on Friday?

I’m not going to explore the answer to that question in well-researched detail; rather, I’ll list some thoughts.  Preventing such an attack would require intervening at some point or another.  There are a few possibilities:

  1. Intervening in the perpetrator’s youth, to ensure that whatever experiences caused him to become a sociopath and lose respect for human life never happen
  2. Preventing the perpetrator from obtaining the weapons to be used in the attack
  3. Being on the scene to stop him just before he commits the crime

I don't think any of these three will work. #1 seems impractical. Psychology isn't an exact science, and it isn't known exactly what turns someone into a Timothy McVeigh or an Anders Behring Breivik. I don't think #2 will work either. First off, it's hard to identify these people (if it weren't, wouldn't people have tried to help them before?), so you'd need to take the weapons away from everyone, which isn't always practical. Second, I suspect that, if they didn't have one weapon, they'd use another. As technology advances, it becomes easier for people to obtain or create things that could be used as destructive weapons, and banning every single possibility would be a major infringement on civil rights. As for #3, how would anyone know where to be (unless you were in a police state where the police were everywhere anyway)? A lot of the folks who commit these sorts of acts leave some sort of message on the Internet explaining themselves, so possibly there might be the ability to look for this sort of material, but I suspect that, if this were in place, these folks' MO would change so that it wouldn't work anyway.

So, I think the answer to the question I posed in the first paragraph is "no". The risk of these sorts of terrorist attacks is unfortunately inevitable in a world that contains 7 billion people. This huge population makes it more likely that the right (or, more accurately, wrong) combination of circumstances will cause people like Anders Behring Breivik to go down the path that they do, and it makes it more likely that there will be a lot of innocent bystanders in the way when they do unleash their anger.

How significant a problem is this, though? Before I investigate this question further, I want to apologise for the tone of the following paragraph. I'm going to be taking a "big picture" look at these events, and that unfortunately excludes examining the individual suffering of the dead or wounded and their friends and family. Having said that, it seems that around 200 deaths is an upper limit on the number of people that one person can kill before getting caught. Timothy McVeigh managed to kill 168 people in Oklahoma City. The terrorists in the September 11, 2001 attacks managed to kill nearly 3,000 people, but there were 19 of them. I can't think of any incident in which a single person managed to kill more than 200 people. This is because there are so many things that could go wrong that it takes a rare combination of circumstances to ensure that they all go right.

Those with lurid imaginations could probably imagine how these people could have killed more, but I think it's likelier that other events would intervene and more people would be saved. The events we do hear about on the news are the ones that were (from the terrorist's perspective, not the world's) wildly successful.

Not to minimize the individual suffering of the victims and their relatives and friends, but even 200 deaths are not highly significant from a global perspective. Worldwide, around 267 people are born every minute, so the number of people killed in a hypothetical terrorist attack that takes 200 lives is equal to the number of people born worldwide every 45 seconds. One attack doesn't have a significant effect on the Earth's population, and attacks of this magnitude are rare.

Unfortunately this is the way it is because there are so many people on this planet and we currently can’t choose to go somewhere else.  “Animals can be driven crazy by placing too many in too small a pen.  Homo sapiens is the only animal that voluntarily does this to himself” (Robert A. Heinlein).   I hope that, one day, technology will make it possible to go somewhere else and that we don’t have to continue to crowd each other out in the same tiny pen that is the Earth.

Book review: In Defense of Flogging by Peter Moskos, part 2

So last week (actually, it’s getting closer to two weeks ago now) I wrote the first half of a book review of the book In Defense of Flogging by Peter Moskos. I concluded by asking whether punishing criminals for the sake of punishment was the best that we could do as a society. I’ll look into answering this question now.

As a society, we don't really spend a lot of time actually thinking about free will. Do we have free will? Are we completely responsible for our choices, or are our decisions caused by other factors, or is it a combination of the two? The concept of free will has never been scientifically explained. If you accept the scientific assumption that everything is caused, it follows that your thoughts are caused by something: possibly something external, possibly other thoughts, possibly the way your brain is put together, possibly some combination of all of those. If you were able to follow all of the causal chains back far enough, you would end up with a set of causes all of which are external to you. Therefore, your thoughts are caused by external factors, and there is no such thing as free will. (I have seen arguments suggesting that quantum effects in the brain produce things like consciousness and free will. I feel that these are based on a poor understanding of quantum mechanics; quantum fluctuations produce random behaviour, not rational, conscious behaviour.) So, if rational thought suggests that we don't have free will, why do we continue to implicitly assume that we do?

In the essay "Free Will, Determinism, and Self-Control" in The Philosophical Legacy of Behaviorism, Bruce Waller suggests that one reason is so that we can hold others morally responsible for their actions; they deserve the punishment they get. I'll look at a non-criminal example from the essay, that of a woman who doesn't leave her abusive husband. A psychologist might see her refusal to leave as learned behaviour ("learned helplessness"), acquired in childhood as a result of inescapable suffering. We can't blame someone for having a traumatic childhood, so how can we blame her in this scenario? Yet we often do blame the victim, seeing the woman as "weak" and so deserving to continue living a miserable existence. This also serves as a handy excuse to avoid trying to understand the victim and the causes of her behaviour better, which in turn makes it easier to avoid helping her and to ignore any role we, either individually or as members of society, might have in her predicament.

One can look at criminal behaviour in the same way. The antisocial tendencies that cause criminal behaviour can often be traced to a defective upbringing (in some cases, defective genes may also play a role), which is something that the criminal couldn’t control. So why do we insist that we should lock the criminal up and throw away the key?

This belief probably comes from our caveman heritage. We are hard-wired to be outraged when someone does something wrong and satisfied when that wrong is punished. Back when humans were “cavemen”, this was probably a trait that helped us to survive. A readable summary of this can be found in the book Risk: The Science and Politics of Fear by Dan Gardner.

In the days of hunting and gathering, when someone did something that hurt the tribe or its members, there wasn't any good way of trying to rehabilitate him, nor was there any good way of segregating him from the rest of the tribe if he didn't want to be segregated. The only way of hoping to achieve these aims was to punish the person. Perhaps for serious infractions the perpetrator would be killed, thus ensuring that he wouldn't commit any more crimes. Other punishments for less serious infractions might in some cases discourage future antisocial behaviour (and might not in others, but there wasn't any better alternative). One could also hope that the punishment would serve as a deterrent for the other members of the tribe. Nowadays, one would hope that we could put systems in place that would allow us to do better than cavemen.

Furthermore, claiming that people are “responsible” for their criminal behaviour allows us to ignore any role that we might have in creating this behaviour, so that we don’t have to think about how we, either individually or collectively, might be responsible for creating criminals, perhaps through our toleration of poverty and other social ills, unwillingness to help those in need, lack of moral standards, lack of support for incompetent parents, indifference or even cruelty toward those that we meet each day, lack of respect for the law in our own day-to-day behaviour, etc.

I think that, if we want to advance as a species, we need to leave these caveman notions behind. Hard as it will be, we need to suppress our instinct calling for wrongdoers to be punished. Instead of dragging criminals down, we need to make people’s lives better. This can be done by working to prevent crime in the first place, to rehabilitate the criminal, and to replace punishment by restitution—having the criminal try to fix what he broke as far as possible. “Don’t get mad, get even”.

So, I feel that Moskos' premise in In Defense of Flogging, while not without merit, is flawed. Instead of finding different ways to punish criminals, we should look at removing punishment from the criminal justice system, without neglecting deterrence, incapacitation, rehabilitation, and restitution. Corporal punishment doesn't accomplish any of these four aims, so it doesn't have a part in my view of the corrections system of the future.

In order to serve the purposes of incapacitation and/or deterrence, prison sentences may still be required in some cases. However, there is still the problem of the 2.3 million people currently in prison in the United States. Reducing sentences to only what is required to deter or incapacitate, while substituting activities aimed at rehabilitation and restitution for the rest of the sentence, may help. Perhaps a technological solution could help as well. We already have various monitoring devices worn by those under house arrest and/or probation. We might want to look into smarter devices that play a more active role in restricting the actions of the wearer. Perhaps a device could be created that causes physical pain when the wearer is doing something they shouldn't. Perhaps a device could be created that physically restricts the wearer in various situations in order to force compliance with the terms of sentencing. Perhaps even some sort of chip could be implanted in the brain (kind of like Spike in Buffy the Vampire Slayer) that would prevent the wearer from doing various off-limits activities. The big question is: will people think of these devices as immoral, either because they torture wearers or invade their brains, or will they be seen as a humane alternative to prison sentences? I don't know. This is something that's worthwhile to start talking about, though.

Hopefully these measures will save money. Some of the savings should be earmarked for crime prevention, but in a smart way. Too often we try to prevent crimes in ways that don't really prevent crime. Much of the "war on drugs" involves arresting drug dealers, but as soon as the authorities do that, someone else sets up shop and we're back to square one. It would be much more effective to put money toward treatment programmes for addicts and make it very easy for addicts to get treatment. This would likely prevent a lot of the thefts, prostitution, and other crimes that addicts commit to get money. We could undertake similar "smart" approaches with other crimes.

With luck, a combination of all these approaches would both significantly reduce crime and increase national productivity.

Superior Autobiographical Memory and Technology

Sorry for not writing for nearly a month, but I've been busy with another project. Anyway, last night I was flipping through channels on the TV. I briefly watched "60 Minutes" (sort of a rerun, I believe), specifically the segment about "Superior Autobiographical Memory". Now, if psychologists describe a phenomenon using a name like that, it suggests to me that they know nothing about it, but I digress. At any rate, the segment discussed a very small number of people who seem to be able to remember every last little detail of their lives. That's a very interesting talent. It's not one I'm sure I'd like to have (I already have a hard enough time forgetting all of the mistakes I make), but I certainly can see some advantages in it.

While I still remember what was going on in, for example, high school, the details have faded and I no longer remember them clearly. I certainly would like to be able to remember, for example, the music that was playing in the 10th grade art class I took in my final year of high school, or any of the details of the presidential elections dance that year, or what the dates of various events were, or zillions of other things.

What would be even better would be to be able to remember these things in detail without being pestered by unwanted memories. I think that technology could solve this problem, although it may take a couple of decades (say, 20 years) to advance to the point where it can be applied. The technology I have in mind is some sort of 3-D video recorder permanently attached to a person, coupled with an incredibly large storage space that would allow a video and audio feed of a person's entire life to be captured (it would be useful if it could record thoughts too, although I don't think that would be possible in the 20-year timeframe I suggested above). Combined with a really intuitive playback mechanism, it would be really cool to be able to look back on such video and bring any day from any time in your life back to life.
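As a rough sanity check on the storage requirement, here's a back-of-envelope calculation in Python (the bitrate and lifespan are my own assumptions, chosen only to get an order of magnitude):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million
    BITRATE_BPS = 50e6                      # assume 50 Mbit/s for stereo 3-D video plus audio
    LIFESPAN_YEARS = 100

    total_bits = BITRATE_BPS * SECONDS_PER_YEAR * LIFESPAN_YEARS
    total_petabytes = total_bits / 8 / 1e15
    print(f"about {total_petabytes:.0f} PB for a {LIFESPAN_YEARS}-year recording")
    # prints: about 20 PB

Twenty petabytes is enormous today, but if storage density keeps doubling every year or two, as it historically has, a couple of decades would bring it within reach.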

Claude Choules, the last WWI combat veteran.

I think that this technology could be used for things other than amusing ourselves about our high school exploits, though. Something else that happened last month was the death of the last remaining combat veteran of World War I, Claude Choules. With him died all of his memories of the war, all of his thoughts about it, and all of the other personal experiences that would help a historian bring the war to life. A recorder as described above would be an incredibly useful historical record for people who have played a part, even a minor one, in significant world events. Being able to see, as through their own eyes, what they saw in, for example, the Great War would be invaluable. Of course, we never really know who is going to play the interesting roles, so it's important that this technology be widespread enough that we have a good sample of people using it.

Like any other new technology, there would be issues with it. One big one would be privacy. What if someone got their hands on the recording? What if the police used it as evidence against you? Since this technology won't be ready for 20 years, I don't think it's necessary to solve these problems in detail now. To gloss over a possible solution, I would suggest that the recorders be integrated with the wearer in such a way that the video is unplayable by anyone else, with some sort of release in case of death that would allow others to play it afterwards. It would require a fair bit of thought to figure out how to do that, but like I said, there's no rush.

So, will the class of 2035 be the first people in the world to have their entire past accessible to them?  Only time will tell.


CBC's Vote Compass and Voter Apathy

It's been about a week since the federal election was called in Canada. As part of its election coverage, the CBC has published Vote Compass, designed to show Canadians which party's political views are closest to their own. Several issues with this tool have been pointed out. I think an interesting flaw was pointed out by Queen's University professor Kathy Brock. She tried the survey three times: the first time selecting "somewhat agree" for everything, the second "somewhat disagree" for everything, and the third "strongly agree" for everything. Each time the survey scored the results as being closest to the Liberal party (the article didn't say what the results were if you strongly disagreed with everything, but I tried it out myself and the result was also Liberal).

The people that designed the survey pointed out that the questions were split between the left and right of the political spectrum, so if you answer all of the questions the same you’ll answer half to the left and half to the right, which presumably averages out to the centre, which presumably corresponds to being a Liberal (only in Canada). Fair enough; if you are an independent thinker and have these half-left-wing, half-right-wing views, possibly you’d be happy with a party that’s in the middle rather than a party that drives you up the wall 50% of the time.
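To see why uniform answers land in the centre, here's a toy model in Python (my own sketch; I have no idea what Vote Compass's actual scoring algorithm looks like). Score each answer from -2 (strongly disagree) to +2 (strongly agree), flip the sign on right-leaning questions, and average:

    # +1 means agreeing pushes you right; -1 means agreeing pushes you left.
    LEANINGS = [+1, -1, +1, -1, +1, -1]  # half the questions lean each way

    def position(answers):
        # Average position on a left (-2) to right (+2) axis.
        return sum(a * lean for a, lean in zip(answers, LEANINGS)) / len(answers)

    print(position([+1] * 6))  # "somewhat agree" to everything: 0.0, dead centre
    print(position([-2] * 6))  # "strongly disagree" to everything: also 0.0

Any uniform set of answers cancels itself out, so a tool built this way will map it to whichever party sits closest to the centre.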

Let's assume that there are two hypothetical voters out there, one whose opinions are such that he/she would answer "strongly agree" to each of the CBC's questions, and another who would answer "strongly disagree" to each question. Further assume that they take the advice of the CBC's survey and both vote Liberal, and that the Liberals then win the election (which I doubt, but this is all hypothetical (-: ). The Liberals can only take one position on each issue. Since these two voters have opposite views on every issue, at least one of them is going to be unhappy with every action the Liberals take, even though both of them voted Liberal and even though the Liberal party is closest to both of their views.

This isn’t something that only happens in fantasyland; it happens in real life as well. No doubt, assuming you’re old enough and occasionally vote for political parties that win, you’ve had the disillusioning experience of voting for some party or another, the party getting elected, and then the party doing all kinds of things you don’t like. This isn’t because the party decided to break all of its election promises. It’s because there are so many issues and the party simply can’t have the same views as you do all of the time.

I think that this disillusionment is the cause of a fair bit of voter apathy, another topic that is often in the news around election time. Who wouldn't get tired of voting, election after election, and then not having the party in power do what you want?

I feel that this is a problem that technology can solve. Looking back to the dawn of democracy in ancient Greece, the people (well, at least those who were citizens) participated in the political process directly, instead of electing representatives to participate for them. This required that participants spend a significant amount of time away from their farms or professions, which was rather inconvenient. Thus was born the practice of electing political representatives, instead of people representing themselves. However, in a world where people vote for their favourites on Dancing With the Stars and American Idol every week, it seems far too infrequent for people to have only one chance to voice their political opinions every three or four or five years.

It doesn't seem likely that we can just immediately ditch the existing political machinery and replace it with one where people represent themselves electronically. On the other hand, I think that, with modern technology, it would be possible to create something where the general public can actually influence government decisions. The best way to start would be to create various votes along the lines of referendums (referenda?), which bind the government to taking some action based on the result of the vote. These should be held online, in a manner that is not prohibitively expensive. We should probably start with matters that aren't too significant, as I'm sure there will be flaws in the system at first. I suspect that giving people this sort of direct involvement will stimulate their interest in politics and turn "voter apathy" into a thing of the past.

Watson and Jeopardy!

Not exactly current news, Watson's appearance on Jeopardy! I've had exactly a month as of today to think about it, and here's what I think.

I used to watch Jeopardy! all the time back in the 1990s or thereabouts. I haven't really watched it much in the past 15 years or so, although I did tune in for special events like Ken Jennings winning a zillion games in a row a while back. I only watched parts of the Ken Jennings/Watson/Brad Rutter match back in February. I tuned in for the last part of Monday's episode and the first part of Tuesday's, but the show seemed more like an infomercial for IBM than a game show, and I'm not a big fan of IBM. I didn't watch on Wednesday. Of course, I didn't need to tune in to find out the result, namely that Watson won, since it was so well-publicized. Incidentally, it seems funny to me to name a computer after Thomas Watson, the same person who said in 1943 that "There is a world market for maybe five computers," but I won't pursue that idea further. What I am interested in looking at are the following points.

It's pretty obvious that the reason Watson won is not that it is "smarter," however you define that, than the competition, but rather that it's a lot faster on the signalling button. I'm not sure exactly how Watson's signalling button was configured, but regardless, the signalling path had to be a lot faster than it was for the two humans. Their brains aren't hooked up directly to the button; their brains have to register that the question has finished being read, and then send slow electrical impulses to the fingers, which then need to move to press the button. So this achievement just demonstrates that electronics are faster than the human brain (which everyone already knew), not that computers are better at answering questions than humans.
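To put rough numbers on that (my own assumptions, not anything IBM has published): a practiced human needs something like 150 to 250 ms to register the signal and press the button, while a machine wired directly to it needs only a few milliseconds. A quick simulation in Python:

    import random

    def human_buzz():
        # reaction time plus finger movement, in seconds
        return random.gauss(0.20, 0.04)

    MACHINE_BUZZ = 0.01  # effectively instant

    wins = sum(MACHINE_BUZZ < human_buzz() for _ in range(10_000))
    print(f"machine buzzes first in {wins / 100:.1f}% of trials")  # essentially 100%

Under those assumptions the machine wins the buzzer race virtually every time, no matter who actually knows the answer.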

Next point. To start, I believe it was Marvin Minsky who said something along the lines of "AI is anything that we haven't done yet". It's a relevant quote, since it illustrates that, once we understand how to do something, it no longer seems special. So, on the one hand, we probably shouldn't discount Watson's "achievement" solely because a machine managed to do it, but on the other hand I think there's a lot of room for improvement for machines. Take the Final Jeopardy! answer on Tuesday, for example. I don't remember exactly what the answer was anymore, but the category was "U.S. Cities" and the answer was something along the lines of "One of this city's two airports is named after a World War II flying ace; the other, a World War II battle". Watson answered (queried?) "What is Toronto?" To me, this shows a significant defect in the machine's semantic representation of the answers and questions. First of all, what human would provide that answer? I'm sure that anyone, no matter how smart or dumb, would at least provide an answer that is a U.S. city. If you had a highly advanced semantic map, you would realise that the answer has to be a really big city; even a city as large as Toronto has only one international airport (unless maybe you count Hamilton airport as serving Toronto). New York? No, its airports are named after a president and a mayor. Los Angeles? No. So you might get to Chicago, and stumble on the right answer that way. It appears that Watson answers questions in a way that is radically different from how humans do, and I think that could be a significant disadvantage for it.
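For what it's worth, here's a toy sketch in Python of the kind of hard constraint I mean (my own illustration; this is certainly not how Watson actually works): check every candidate answer against the category before choosing by confidence.

    US_CITIES = {"Chicago", "New York", "Los Angeles", "Houston"}  # tiny stand-in list

    def filter_by_category(candidates, category):
        # Drop candidate answers that violate a hard category constraint.
        if category == "U.S. Cities":
            return [(c, s) for c, s in candidates if c in US_CITIES]
        return candidates

    # ranked (answer, confidence) pairs
    candidates = [("Toronto", 0.30), ("Chicago", 0.25)]
    print(filter_by_category(candidates, "U.S. Cities"))
    # prints [('Chicago', 0.25)]: "Toronto" is ruled out before it can be chosen

If Watson treats the category only as soft evidence rather than as a hard filter like this, that would explain how a non-U.S. city floated to the top of its list.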

During part of the "infomercial", some IBMer suggested that people might want to use this technology for intelligent agents that answer people's questions online. I doubt it. First, how much will IBM want you to pay for this sort of technology? If history is any precedent, it won't come cheap; it's probably a lot cheaper to hire people in India to chat with a website's users. Second, it's a lot easier to get people to understand search engine syntax and semantics than it is to get machines to understand people's semantics. Why would anyone type "What's the best resource on the web about mathematical paradoxes?", or whatever subject they're interested in, when it's a lot easier, as most people know, to just type "mathematical paradoxes"?

One final point: I think that machines' accomplishments such as this one can't be considered equal to those of humans until they are intentional. In other words, until the computer chooses to show up to the Jeopardy! match, I don't think the accomplishment can be considered equal to a human's. We don't crown a pitching machine as Cy Young Award winner or a cheetah as a gold medallist in sprinting, and I think the significant thing is that these competitors cannot choose, or even appear to choose, to attend the sporting events. Similarly, Watson's accomplishment is not complete without Watson actually choosing to show up to Jeopardy!