Saturday, December 10, 2005

Dissecting the Anti-Terrorism Bill

After almost a month of no updates, here's some local matter for a change.

Dissecting the Anti-Terrorism Bill

from a local blogger, no less! It gives pretty useful insights into what the Anti-Terrorism Bill does and what exactly is wrong with it. An excerpt:

No anti-terrorism law can be Constitutional because, in any form, it will be a bill of attainder, something that the Constitution says the Congress shall never pass.

What is a bill of attainder? It is a law that punishes a person for a status or association rather than for the commission of any criminal act. A similar law outlawing the Communist Party of the Philippines was passed decades ago and was attacked exactly on that ground. Believing in communism is not a crime. Communism is a political ideology in the same way that parliamentarism, republicanism and federalism are. When, however, a communist commits an act that violates an existing law--rebellion, insurrection, sedition--then, the commission of the act is punishable. But that is because of the act, not because of his status as a believer in communism.

It is the same thing with the anti-terrorism law.

Wednesday, November 16, 2005

Religious Representation in the Armed Forces

No, this isn't a 'women/gays in the military' debate with a twist. This isn't about Muslim soldiers in the army, or Buddhist, or Jewish, or whatever. It goes without saying that in first-world liberal democracies, religion doesn't matter in signing up for the army. The issue in question here is the presence of chaplains.

The British Armed Forces recently appointed Buddhist, Hindu, Muslim, and Sikh Chaplains as part of the military.

The status quo for most first-world militaries (at least as far as I know) is to have a Christian/Catholic Chaplain.

Possible questions would be: should we even have chaplains in the military in the first place?
If so, should we have different chaplains according to the demographics of soldiers in the military?

The second question is kind of one-sided. You can worm your way out of it, but it's going to be sneaky and unfair.

Sunday, November 06, 2005

Internet matter

A series of articles in the Observer about Internet issues. Simple and direct, they're a good way to understand what exactly is happening on the Internet without necessarily getting bogged down by the technical aspects.



True enough, since people absolutely HATE Internet debates, there probably won't be many Internet debates where you're going to need this. But hey, they're fun to read.

Friday, October 21, 2005

Countries turn backs on Hollywood

Unesco member states have formally voted to support their own film and music industries against globalisation.

The United Nations cultural body voted in favour of a cultural diversity convention, backed by France, Canada and the UK.

The US had said the "deeply flawed" convention could be used to block the export of Hollywood films and other cultural exports.

The vote follows French moves to protect its film and music industries.

Strict quotas

France already awards large subsidies to its own film, music, theatre and opera industries to support its cultural heritage.

It also imposes strict quotas on the level of non-French material broadcast on radio and television.

The new convention on cultural diversity aims to recognise the distinctive nature of cultural goods and services.

It enables countries to take measures to protect what it describes as "cultural expressions" that may be under threat.

The majority of Unesco's 191 member states voted for the convention.

Britain's representative to Unesco, Timothy Craddock, said the wording was "clear, carefully balanced, consistent with the principles of international law and fundamental human rights".

But it was opposed by the US, which said the convention was unclear and open to wilful misinterpretation.

French culture minister Renaud Donnedieu de Vabres said nations had a right to set artistic quotas because 85% of the world's spending on cinema tickets went to Hollywood.

The US suggested 28 amendments to the convention, which were almost unanimously rejected by Unesco delegates.

It was feared that Thursday's vote could isolate the US, which rejoined Unesco in 2003 after a 19-year absence.

The convention will need to be ratified by 30 member states in order to take effect.
Story from BBC NEWS:
http://news.bbc.co.uk/go/pr/fr/-/2/hi/entertainment/4360496.stm

Monday, October 17, 2005

Stem cell strides may help resolve ethical dilemmas

New methods preserve viable embryos, but some opponents are skeptical of the tactics
- Carl T. Hall, Cornelia Stolze, Chronicle Staff Writers
Monday, October 17, 2005

Scientists are reporting two new ways of creating embryonic stem cells without killing viable embryos, potentially reshaping the biggest bioethical debate of the Bush administration.


In one case, embryonic stem cells were made from a genetically abnormal embryo designed to be incapable of developing. The other method was an attempt to fashion stem cells from an embryo without damaging it.

The new methods, detailed in separate research reports released online Sunday by the British journal Nature, are intended as laboratory answers to the moral questions raised by the destruction of human embryos. If the strategies work, one result could be the availability of more federal grants for one of the most promising fields of biomedical research.

A White House spokesman said it was premature to speculate on any potential change in administration policy. But William Hurlbut of Stanford University, a member of a White House bioethics advisory council, called it "a starting point for an important new dialogue" on possible "technological solutions for the moral problems surrounding human embryonic stem cell research."

The new techniques raise their own questions about just what sorts of laboratory creations deserve human status. The latest research is "right there on that boundary between what I would consider ethically permissible and potentially ethically troubling," said biochemist Fazale Rana at Reasons to Believe, a Christian group in Southern California opposed to human embryonic stem cell research.

Much of the debate centers on the precise definition of "embryo," because it is considered by some people to have the same moral status as a human being. In one of the new sets of experiments, researchers crafted stem cell lines from lab creations characterized as "nonviable" entities.

Others dismissed such arguments as semantic quibbling.

"This is an attempt to solve an ethical issue through a scientific redefinition that really doesn't solve the issue," said Jaydee Hanson, director of human genetics at the International Center for Technology Assessment, a Washington, D.C., nonprofit organization that opposes some kinds of cloning and stem cell research on moral grounds.

In August 2001, President Bush made the production of any new stem cell lines ineligible for federal grants because such work involves the destruction of human embryos. Bush also objects to cloning embryos, which scientists advocate as a way of creating specialized stem cell lines carrying disease genes or the DNA of an individual patient.

Those restrictions helped inspire California's $3 billion Proposition 71 initiative, which state voters approved in the 2004 general election specifically to pursue research banned from receiving federal support.


On Sunday, stem cell researchers Rudolf Jaenisch and Alexander Meissner of the Whitehead Institute at the Massachusetts Institute of Technology showed how embryonic stem cells -- the flexible early-stage cells that can mature into all the cell types of the body -- can be produced from a type of research cloning known as "altered nuclear transfer."

The researchers devised a way to block the activity of a gene from an adult cell that would have allowed the cell to develop into an embryo once in the uterus. With that activity blocked, the cell is nonviable because it lacks the ability to "establish the fetal-maternal connection" in the uterus. This abnormal DNA then was inserted into an egg whose own DNA had been removed.

The idea was to create something akin to a cloned embryo but that would be inherently incapable of developing beyond the pre-implantation stage. But the researchers showed they could still generate a specialized stem cell line, which would have the same DNA as that of the adult cell used to produce the cloned embryo. Thus, it could be considered a strategy to make "patient-specific" embryonic stem cells without destroying any potential life.

A separate team of researchers led by Robert Lanza and Young Ching of Advanced Cell Technology, a Massachusetts biotech company, used yet another method to obtain stem cells.

Experimenters used a single cell, known as a "blastomere," snipped from a developing embryo at the eight-cell stage. This is sometimes done as a type of biopsy in fertility clinics when would-be parents are concerned about implanting an embryo carrying a genetic disorder. Known as "pre-implantation genetic diagnosis," the goal is to screen out disease-carrying embryos. Previous clinical evidence suggests that removing one or sometimes even two cells for diagnosis leaves a viable embryo.

The latest study showed that a cell removed for diagnosis can be coaxed into replicating itself overnight. The copy then can be used to generate a line of stem cells, the researchers reported, while allowing the original cell to be subjected as usual to pre-implantation analysis.

The new approaches so far have only been tried in laboratory mice, and there's no guarantee similar results can be obtained in humans. They were described as proof of principle for ideas long championed by those opposed to embryo destruction.

Hurlbut insisted that an entity such as that produced in the MIT experiments has "no inherent principle of unity, no coherent drive in the direction of the mature human form."

But the new methods do not assuage all ethical concerns.

"The tinkering doesn't change the essential nature of the cloned entity," said Hanson, of the International Center for Technology Assessment. "The only reason it's not an embryo is definitional."

Douglas Melton, a renowned stem cell scientist at Harvard University, said he doubted critics of stem cell research will be placated by the alteration of a single gene. An altered embryo, he said, may still be considered an embryo.

As for using a biopsied cell, several studies in mice, rabbits, sheep, swine and primates have shown that single cells transplanted into the uterus of the respective species are capable of propagating viable offspring. Thus, even if removing a single cell doesn't interfere with the developmental potential of the embryo, the isolated cell itself could be considered capable of embryo status.

Jaenisch said this human potential argument shows how "absurd" the theoretical discussions can get in stem cell biology.

"If one used this argument to protect cells developed through nuclear transfer because with further manipulation they might become a living clone, then every cell of our body would deserve the chance to become a human being,"
he said. "In not cloning them, each of us would be barring millions of individuals from getting a chance to live."

Despite all the arguments, Jaenisch said it's still conceivable that special cloning or other techniques might be an acceptable compromise to allow expanding the federal role in stem cell research.

If so, he said, "We would have made a big step forward."

So far it is not clear if stem cells created by either of the new methods would qualify for federal grants, according to James Battey, head of a stem cell task force at the National Institutes of Health.

Bernard Lo, a prominent bioethics expert at UCSF who also advises the California Prop. 71 program, called on those who object to stem cell research to make their views on the alternative derivation methods clear at the outset.

"This work is really driven by a desire on the part of scientists to address the moral concerns some people have. So those people should say now if it doesn't settle the problem," to avoid a lot of wasted effort, he said.

E-mail the writers at chall@sfchronicle.com and cstolze@sfchronicle.com.

Thursday, October 13, 2005

The one and three-quarter-state solution?

So much for those who thought Israel's withdrawal from Gaza would trigger rapid progress toward peace.

By Jonathan Freedland

Oct. 12, 2005 | For Britons who managed to tear themselves away from the David Blunkett saga on TV Monday night, there was drama of a different kind on BBC2. "Elusive Peace" charted the story of Bill Clinton's failed attempt to resolve the conflict between Palestinians and Israelis, a struggle that reached its dismal climax at Camp David in 2000. This latest effort by remarkable filmmaker Norma Percy, who has created her own subgenre of TV diplomatic history, featured interviews with all the key players -- Ehud Barak, Yasser Arafat, Clinton himself -- telling the inside story of midnight talks, eavesdropped conversations, last-minute panics and, tragically, the inability to move that final inch toward what might have been a deal.

It was compelling television, but also instructive. For it showed just how much has changed in the intervening five years. Arafat is dead; Sharon is no longer the rabble-rouser whose walkabout on the Temple Mount did so much to derail the peace process, but prime minister; Clinton is the elder statesman, his former residence occupied by a man whose Middle East focus has not been peace in Israel-Palestine but war in Iraq.

It's not just the personalities who have changed. The past five years have also seen a wider shift, away from the across-the-table negotiations of the Clinton era toward a newer, more enigmatic model. The days of bilateral talks and mutuality have gone. Now we are in the age of unilateralism.

As if to underline the point, Sharon and Palestinian leader Mahmoud Abbas were due to meet Tuesday for a summit. For the second time in as many weeks, they called it off. So much for those who thought that Israel's August withdrawal from Gaza -- the prime example of the new unilateralism -- would trigger a return to the negotiating table and rapid progress toward a signed agreement.

That's not how it is anymore. Yes, Gazans are relieved to be rid of the Israelis at last. And yes, Israelis -- despite some persistent violence, with Palestinian rockets fired across the new "border" -- still believe the pullout was the right move. But that does not mean the two sides are about to reach across the divide and touch each other. Instead they are looking inward.

For the Israelis, that's a matter of politics. Sharon's concern now is not Abbas, but his Likud rival, Benjamin Netanyahu. A fortnight ago he successfully fought off a leadership challenge from Bibi, and he wants to preserve that advantage; he will do nothing that might hand his rival ammunition. He will not release Palestinian prisoners, nor bow to Abbas' request for more weapons for his security forces -- nothing, in other words, that would allow Bibi to accuse Sharon of treachery. That's why the summit with Abbas could not go ahead: There was nothing Sharon was willing to give his Palestinian counterpart.

Meanwhile, Abbas (or Abu Mazen) is in a strikingly similar hole. Challenged by Hamas, which pulled in a quarter of the vote in recent municipal elections on the West Bank -- a creditable score, given that their political base is Gaza -- Abbas could not afford to return from a summit empty-handed. He has a genuine fight on his hands with Hamas -- one that could explode into a civil war that his own threadbare forces could lose. The sense that the Palestinian Authority writ does not run in Gaza, that either anarchy or Hamas rules there, is proving deeply damaging, suggesting the Israeli withdrawal has not helped the Palestinian Authority but undermined it. The result is that Abbas too is devoting the post-Gaza lull to securing his own internal position, rather than hatching grand schemes for an accord with the enemy.

This phase of introspection reflects the broader trend. I spoke Tuesday with Eival Gilady, who served as a close advisor to the Israeli prime minister on the Gaza disengagement. His message was clear: The ball is now in the Palestinians' court. Under the internationally endorsed road map, the next step is for the Palestinians to put their own house in order, starting with a crackdown on terrorism.

If that were to happen, then Israel might make a further move. Revealingly, Gilady cites the unilateral disarmament steps taken by Mikhail Gorbachev, which paved the way for a mutually agreed arms pact later. "When you act unilaterally, it doesn't stay unilateral," he says. In other words, Israel moves first on Gaza. Then Abbas stabilizes the P.A. Then Israel will act again. Not a peace process exactly, but a series of one-sided moves: Call it sequential unilateralism.

Under that logic, what would Israel's next act be? In the past few days, the Israeli press has been bubbling with hints from key officials at further unilateral pullouts, this time from the West Bank. The scenario seems to be that Sharon sits tight for now, sees off Bibi, fights, wins an election next year -- and then stages a series of mini-disengagements. Gary Sussman, an analyst at Tel Aviv University, says the map for those withdrawals is already laid out. "The fence is the border," he says, confident that Israel would pull back, more or less, to the line traced by the wall, or security barrier, it has built through the West Bank. That would entail dismantling a few isolated settlements and keeping the large settlement blocs.

Such a move would see Israel out of, perhaps, 50 percent or 60 percent of the West Bank. Combined with Gaza that would represent the de facto Palestinian state, promised by the road map and now routinely demanded by George W. Bush, Tony Blair and everyone else.

The old guard of Palestinian leaders, including Abbas, are said to be deeply depressed at this prospect. For such an entity would leave them no access to Jerusalem and would represent substantially less territory than the Clinton parameters promised in December 2000. It would not be the two-state solution they sought for two decades but, says Sussman, something less: "A one and three-quarter state solution."

What's more, Sharon would make this move and win not just international acceptance but praise. The Gaza withdrawal won plaudits from the United Nations and the European Union; even Pakistan broke Muslim ranks to start a diplomatic engagement with Israel last month. If there were to be more pullouts in the West Bank, Sharon would be a hero once more. There would be no pressure on him; it would all be on the Palestinians, who would rapidly be cast as grudging and difficult for not receiving these chunks of the West Bank with gratitude.

No wonder the likes of onetime peace negotiators Saeb Erekat and Hanan Ashrawi are said to be glum. They must realize that in the new game of sequential unilateralism they are being outplayed by an Israeli prime minister who is proving a far cannier strategist than anyone expected. They should avoid watching "Elusive Peace"; it will only make their moods darker. There they will see how much better they might have fared under the old game.

Clinton recalls a proposal he made in late 2000 that would have split Jerusalem and given the Palestinians sovereignty over the upper Haram al-Sharif, with Israeli control over the lower Temple Mount. "Who could accept this?" says Arafat, from the grave. Now his people may have to brace themselves for accepting much less.

This article has been provided by the Guardian through a special arrangement with Salon. © Guardian Newspapers Limited 2005. Visit the Guardian's Web site at http://www.guardian.co.uk.

-- By Jonathan Freedland

Wednesday, October 12, 2005

Battle blogging for profit

By Xeni Jardin
Xeni Jardin is co-editor of the blog BoingBoing and a contributor to Wired magazine and National Public Radio.

October 9, 2005

AS BLOGS become big business, Internet giants have begun trying to profit from new forms of journalism, including war coverage. The results are not encouraging.

Yahoo's latest experiment reveals that it considers war news just another form of entertainment. This from an online giant that has already shown it is cavalier about press freedom and a friend of oppression.

Look back to 2004, when reporters at a Hunan province newspaper listened as their editorial director read a statement from the Communist Party's Propaganda Department about the upcoming 15th anniversary of the Tiananmen Square massacre. It warned that dissidents might use the Internet to spread "damaging information."

One reporter used an anonymous Yahoo e-mail account to ask a colleague in New York to post a report about the statement on pro-democracy website Minzhu Tongxun (Democracy Newsletter).

But as the 37-year-old married reporter behind the numeric pseudonym "198964" learned, he shouldn't have assumed that Yahoo defends press freedom. When Chinese security agents asked executives at Yahoo Holdings (Hong Kong) to identify the man, they did so. Police grabbed him on a street, searched his house and seized his computer and other belongings, according to documents filed in his defense.

Mr. "198964," whose real name is Shi Tao, is serving a 10-year jail sentence for "divulging state secrets abroad." Bloggers, human rights groups and journalism organizations, including PEN and Reporters Without Borders, condemned the action.

Yahoo co-founder Jerry Yang brushed off responsibility. At an Internet conference Sept. 10 in Hangzhou, China, Yang said Yahoo and other U.S.-based multinationals "have to comply with local law."

Or else what? They lose access, that's what, which means losing profits.

Shi Tao's attorney, Guo Guoting — who was detained, placed under house arrest and shut out of his office before his client's trial — argues that the company has a greater obligation to international law than to local law. "China is a signatory of the [U.N.] International Covenant on Economic, Social and Cultural Rights," Guo told the Hong Kong independent daily Epoch Times. "Shi Tao … was legitimately practicing his profession, not committing a crime. The legal entity of Yahoo Holdings [Hong Kong] is not in China, so it is not obligated to operate within the laws of China or to cooperate with Chinese police."

As morally repugnant as Yahoo's actions may be, other tech vendors before it have acted similarly. "Many big companies, such as Microsoft and Nortel, in their quest to gain shares of the large Internet market in China, transform China into an information prison by collaborating with the Chinese regime on questions of censorship," Guo said. "They should not forget all moral principles under the temptation of financial gain."

Yahoo's hypocrisy is even more shameful because it is also in the news business. The company recently opened a news production division with promises of hard-hitting stories that U.S. mainstream media are afraid to report.

Yahoo launched "Kevin Sites in the Hot Zone," pledging to send the former television reporter to "every armed conflict in the world within one year" and dispatch blog-sized "bites" of war.

Several years ago, I introduced Sites to the world of blogs, collaborating with geek friends to launch kevinsites.net. I helped him publish his firsthand impressions of the Iraq war as a not-for-profit project. But as the war heated up, Sites' employer, CNN, forced him to shut down the blog. Sites later joined NBC and videotaped the shooting by a Marine of an unarmed Iraqi. As a way to explain why that piece of truth mattered, he reopened his blog. (Last November, these pages excerpted his explanation of the shooting.)

Another "warblogger" is BBC news producer Stuart Hughes, who stepped on a landmine while covering the Iraq war. On his blog, he documented the amputation of his right leg and his recovery. Like me, he is troubled about "Hot Zone."

"It seems like the journalistic equivalent of a Simpson and Bruckheimer high-concept movie — all concept and very little content," Hughes said from London. "I've lost too many friends in war zones — and come too close myself — to have any time for this 'stamp-collecting' approach to conflict. The presentation is distasteful — war reporting comes with a strong public service agenda, and it's cheapened by this 'Geraldo Rivera' presentation. This goal of covering every armed conflict in the world — so what? At what cost? It leaves a very nasty taste in my mouth."

The launch of Yahoo's corporate-powered warblog, and its promise of more newsertainment to follow, raises anew the question of how to define journalism.

One obvious answer: Real journalists don't treat war as entertainment, and real news companies don't help imprison a man for reporting the truth — even if that would ensure profits.

Tuesday, October 11, 2005

Treating China's online addicts

By Daniel Griffiths
BBC News, Beijing

The internet is taking China by storm, with millions of people logging on in record numbers and web cafes busier than ever.

Rising personal wealth means more people are able to buy computers or pay to go online. The vast majority are young people using the net to chat or play games.

But behind the boom, there is a downside.

Wang Yiming, 21, is a self-confessed internet addict, one of a growing number in China. He used to spend hours online each day, often going without food or sleep. His face is drawn and sallow.

He said addiction changed his whole life:

"A month or two after I started surfing the internet, I failed some of my school tests, but I was too afraid to tell my parents. When my father found out, he was very angry.

"But I couldn't control my addiction. Friends were also telling me that I was on the net too long, but I thought: 'It's my life, I can do what I want.' I became a real loner, was withdrawn, and wouldn't listen to anyone."

For help, Wang Yiming went to China's first internet clinic, a low-rise, anonymous building in central Beijing.

All 15 patients when I visited were young men - the main social group affected by this problem - and they all told a similar story of how their addiction to the net destroyed their lives.

The clinic itself is part of a bigger addiction centre also treating those hooked on alcohol or drugs. The internet addicts go on a two-week course involving medical treatment, psychological therapy, and daily workouts.

The latter are a key part of the programme. Many of the men have spent every waking moment in front of a computer screen and have never experienced regular exercise.

Dr Tao Ran, head of the clinic, said the scale of the problem in China was enormous:

"Every day in China, more than 20 million youngsters go online to play games and hit the chat rooms, and that means that internet addiction among young people is becoming a major issue here.

"And it's only recently that the authorities have started to wake up to the seriousness of the problem with more articles in the papers highlighting the dangers of going online for too long," he said.

Rising demand

The clinic is getting an extra 200 beds next year to meet demand and new centres are due to open in other major cities like Shanghai and Guangzhou.

But the programme only lasts two weeks, followed by minimal after-care. Many have their doubts about the long term, like one patient's mother.

"The work of the doctors here at the centre has been very important, but of course I'm still worried," she said.

"I admire the doctors for what they have done so far and all we can do is follow their advice and knowledge to help our son," she said.

All the men know this centre is just the beginning. Now they must return to the outside world, where the real test for these computer addicts begins.

And with millions of Chinese logging on every day, it is likely that the country's first internet clinic is going to have its hands full.

Story from BBC NEWS:
http://news.bbc.co.uk/go/pr/fr/-/1/hi/world/asia-pacific/4327258.stm

Published: 2005/10/10 14:06:17 GMT

Just Enough Piracy

It's not news that the main reason the movie and television industries are wary of BitTorrent is that they're freaked out by the music industry's experience with piracy. Although they see the economic advantages of P2P distribution, they're concerned that once they put their stuff out there, even wrapped in triple layers of kryptonite DRM, it might be cracked and then circulate in unprotected form. For movies, that's lost revenues. For TV shows, that means ads could be stripped out, expiration routines could be removed and (gasp!) content could be modified or remixed.

All that counts as Very Scary Stuff to industry executives, and as a result they're looking for "strong" DRM before they consider letting their premier content circulate online. This is a mistake, for two reasons:

The first is about the user experience: Any protection technology that is really difficult to crack is probably too cumbersome to be accepted by consumers.

We've seen all sorts of failures of this sort before, from dongles to laborious and confusing registration schemes. Each seems better at annoying consumers than at building markets. The lesson from these examples is that zero-percent piracy is not only unattainable, it's economically suboptimal. If your content is uncrackable, it means you've probably locked the market down so tight that even honest consumers are being inconvenienced.

Instead, efficient software and entertainment markets should exhibit just enough piracy to suggest that the industry has got the balance of control about right: not too loose and not too tight. That number is not zero percent (which requires protection methods so invasive they kill demand), and it's not 100% (which kills the business). It's somewhere in-between.

The second reason the quest for zero piracy is a mistake is economic: piracy can actually let you raise your prices.

I'll give you a surprising example. I was chatting with a former Microsoft manager the other day and he revealed that after much analysis Microsoft had realized that some piracy is not only inevitable, but could actually be economically optimal. The reason is counterintuitive, but intriguing.

The usual price-setting method is to look at the entire potential market, from the many at the economic lower end to the few at the top, and set a price somewhere in between the top and bottom that will maximize total revenues. But if you cede the bottom to piracy, you can set a price between the top and the middle. The result: higher revenues per copy, and potentially higher revenues overall.

(This is, by the way, the opposite of the conventional economic approach to developing-world piracy, which is to lower the cost of a product closer to the pirate version, closing the pricing gap to try to win customers over to the official version. In practice, however, the pirate price is so low that it's rarely possible to close that gap enough to make much of a difference.)
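To make the arithmetic concrete, here's a minimal sketch in Python. Every number in it is invented for illustration (a uniform spread of willingness to pay and a $5 pirate-competitive price; none of them come from the piece):

# Toy model of the pricing argument above. Assumed numbers throughout:
# 1,000 potential customers whose willingness to pay is spread evenly
# from $0 to $100, and a pirated copy that effectively competes at $5.
market = [w / 10 for w in range(1000)]  # $0.00 .. $99.90

def revenue(price, customers):
    """Revenue if everyone willing to pay at least `price` buys a copy."""
    return price * sum(1 for w in customers if w >= price)

# Strategy A: chase the whole market by pricing near the pirate copy,
# so even the price-sensitive bottom prefers the legal version.
rev_low = revenue(5, market)

# Strategy B: cede the bottom to piracy and pick the revenue-maximizing
# price for the rest of the market.
rev_high, best_price = max((revenue(p, market), p) for p in range(1, 101))

print(f"compete with piracy at $5: ${rev_low:,} from {rev_low // 5:,} copies")
print(f"cede the bottom, price at ${best_price}: ${rev_high:,} from {rev_high // best_price:,} copies")

Under these toy assumptions, Strategy B takes $25,000 from 500 copies while Strategy A takes $4,750 from 950 copies: ten times the revenue per copy, and more revenue overall. The point is the shape of the argument, not the numbers.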

Add to this the familiar (if controversial) argument that piracy helps seed technology markets and can be a net benefit. Especially in fast-developing countries such as China and India, the ubiquity of pirated Windows and Office has made them de facto national standards. Few users could have paid for the retail versions at the start, but now that the spread of cheap technology, including free software, has led to an economic boom, Microsoft is finding a nice market for commercial software at the very top, in big companies and government offices.

When all these effects are considered, it appears that there actually is an optimal level of piracy. That right level would vary from industry to industry. Today the estimated piracy rates are 33% for CDs and 15% for DVDs. The industries say that's too high, but most anti-copying technologies they've brought in to lower that figure have proven unpopular. Would even tighter lock-downs help? Probably not. Maybe 15%-30% is simply the market saying that this is the optimal rate of piracy for those industries, and any effort to lower that significantly would either choke demand or push even more people to the dark side.

So the moral for video content holders and others considering DRM: be careful what you ask for, because you just might get it. "Uncrackable" DRM could make the P2P problem worse, by driving more users underground and depressing prices. Don't imagine that if you release content in a relatively weak DRM wrapper (like today's DVDs) and copies get out that the whole market will collapse. Instead, you may find that piracy stays constant at relatively low levels, leaving the rest of the market happier and more profitable.

The lesson is to find a good-enough approach to content protection that is easy, convenient and non-annoying to most people, and then accept that there will be some leakage. Most consumers see the value in paying for something of guaranteed quality and legality, as long as you don't treat them like potential criminals. And the minority of others, who are willing to take the risks and go to the trouble of finding the pirated versions? Well, they probably weren't your best market anyway.

http://www.thelongtail.com/the_long_tail/2005/08/just_enough_pir.html

Sunday, October 09, 2005

'Glorifying' terror plan revised

A proposed law banning the "glorifying" of terrorist acts has been revised, following criticism of the proposals.

People would have to "intend to incite" further acts of terror to be convicted, Home Secretary Charles Clarke has said.

Opponents had said the original proposal was unclear and could threaten civil liberties. Mr Clarke denied plans were being "watered down".

The plan to detain terror suspects for up to three months without charge would stay, the home secretary said.

Mr Clarke also published new plans to give police powers to temporarily close down places of worship being used by extremists.

Failure of the trustee or registered owner of the place of worship to take steps to stop such behaviour would be a criminal offence.

On the "glorifying" offence, Mr Clarke said: "We believe that glorification of terrorism is wrong and should be outlawed in law - we have made that clear all the way through.


THE NEW OFFENCE: To make a statement glorifying terrorism if the person making it believes, or has reasonable grounds for believing, that it is likely to be understood by its audience as an inducement to terrorism.

"But a number of people have made observations to the effect that there were difficulties in the wording we originally suggested, and so we thought bringing together the glorification and incitement clauses in the bill would be the best way to deal with that."

He said it was not a case of "watering down" the new Terrorism Bill.

It would, he said, make it an offence to "make a statement glorifying terrorism if the person making it believes, or has reasonable grounds for believing, that it is likely to be understood by its audience as an inducement to terrorism".

Kevin Martin, the head of the Law Society, which represents solicitors, said the government had been right to amend the "poorly drafted legislation which would have put free speech at risk".

But John Cooper, a criminal barrister, said the legislation remained "totally and utterly unworkable".

"The courts are going to have a great deal of difficulty in establishing what intent is," he said.

'Climb-down'

Dominic Grieve, the Conservative shadow attorney general, said: "It's a climb-down - common sense has finally prevailed."

He said it had been "immediately apparent" when the plans were published three weeks ago that the one relating to glorification of terrorism was "completely unworkable".

Liberal Democrat home affairs spokesman Mark Oaten said: "The new definition is a major improvement.

"It means that cases where people are deliberately trying to provoke terrorism are more likely to stand up in court."

But he said the "case still has not been made" for the detention without charge of suspects beyond 14 days.

'Moderation'

Mr Clarke said he remained convinced the maximum time limit for detention of terror suspects should be increased to three months.

He said that under existing laws, the police were using the maximum 14-day period only in exceptional circumstances - with the suspects charged in all cases.

"The police use their existing detention powers cautiously and in moderation, and I am confident that they would use an amended power in the same careful fashion," he said.

"There would also be proper judicial oversight of detention. Such powers already operate successfully in other European countries - in France and Spain suspects can be detained for up to four years before trial."

The Home Office issued a seven-page Metropolitan Police document defending the three-month detention plan - it contained details of three terror cases yet to come to court, including one described as the largest mounted in the UK.

Story from BBC NEWS:
http://news.bbc.co.uk/go/pr/fr/-/1/hi/uk_politics/4316326.stm

Published: 2005/10/08 14:53:27 GMT

Thursday, October 06, 2005

Schenck v. United States

from Wikipedia:

Schenck v. United States, 249 U.S. 47 (1919), was a United States Supreme Court decision concerning whether the defendant possessed a First Amendment right to speak out against the draft during World War I. The defendant, Charles Schenck, a Socialist, had circulated a flyer to recently drafted men. The flyer, which cited the Thirteenth Amendment's provision against "involuntary servitude," exhorted the men to "assert [their] opposition to the draft," which it described as a moral wrong driven by the capitalist system. The circulars proposed peaceful resistance, such as petitioning to repeal the Conscription Act.

Schenck was charged with conspiracy to violate the Espionage Act by attempting to cause insubordination in the military and to obstruct recruitment. The Court, in a unanimous opinion written by Justice Oliver Wendell Holmes, Jr., held that Schenck's conviction was constitutional. The First Amendment did not protect speech encouraging insubordination, since, "[w]hen a nation is at war many things that might be said in time of peace are such a hindrance to its effort that their utterance will not be endured so long as men fight." In other words, the court argued, the circumstances of wartime permit greater restrictions on free speech than would be allowable during peacetime.

In the opinion's most famous passage, Justice Holmes sets out the "clear and present danger" standard:

"The question in every case is whether the words used are used in such circumstances and are of such a nature as to create a clear and present danger that they will bring about the substantive evils that Congress has a right to prevent."

This case is also the source of the phrase "shouting fire in a crowded theater", a paraphrase of Holmes' view that "The most stringent protection of free speech would not protect a man in falsely shouting fire in a theater and causing a panic."

Critics of the decision argued that a more apt analogy for Schenck's actions would have been someone getting up between the acts and declaring that there were not enough fire exits, or shouting, not falsely but truly, that there was a raging inferno inside to people about to enter the theater.

As a result of the decision, Charles Schenck spent six months in prison. The "clear and present danger" test was later strengthened to the more inclusive "bad tendency" test in Whitney v. California. Justices Holmes and Brandeis shied away from this test but concurred with the final result. Both of these cases were later narrowed by Brandenburg v. Ohio (1969), which replaced the "bad tendency" test with the "imminent lawless action" test.

Thursday, September 29, 2005

Search and Rescue

Sebastopol, Calif.

AUTHORS struggle, mostly in vain, against their fated obscurity. According to Nielsen Bookscan, which tracks sales from major booksellers, only 2 percent of the 1.2 million unique titles sold in 2004 had sales of more than 5,000 copies. Against this backdrop, the recent Authors Guild suit against the Google Library Project is poignantly wrongheaded.

The Authors Guild claims that Google's plan to make the collections of five major libraries searchable online violates copyright law and thus harms authors' interests. As both an author and publisher, I find the Guild's position to be exactly backward. Google Library promises to be a boon to authors, publishers and readers if Google sticks to its stated goal of creating a tool that helps people discover (and potentially pay for) copyrighted works. (Disclosure: I am a member of the publisher advisory board for Google Print. As the name implies, it is simply an advisory group, and Google can take or leave its suggestions.)

What's causing all the fuss? Google has partnered with the University of Michigan, Harvard, Stanford, the New York Public Library and Oxford University. Google will scan and index their library collections, so that when a reader searches Google Print for, say, "author's rights," the results point to books that contain that term. In a format that resembles its current Web search results, Google will show snippets (typically, fewer than three sentences of text from each page of each book) that include the search term, plus information about the book and where to find it. Google asserts that displaying this limited amount of content is protected by the "fair use" doctrine under United States copyright law; the Authors Guild claims that it is infringement, because the underlying search technology requires a digitized copy of the entire work.
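The mechanics described here (index the full text, but answer queries with only a short window of context around each hit) can be sketched in a few lines of Python. This toy version is my own illustration of the idea, not Google's actual implementation; the titles and sentences are made up:

import re
from collections import defaultdict

def build_index(books):
    """Inverted index: map each word to the titles of the books containing it."""
    index = defaultdict(set)
    for title, text in books.items():
        for word in re.findall(r"[a-z']+", text.lower()):
            index[word].add(title)
    return index

def snippet(text, term, context=40):
    """Return a short window of text around the first occurrence of term."""
    pos = text.lower().find(term.lower())
    if pos < 0:
        return None
    return "..." + text[max(0, pos - context):pos + len(term) + context] + "..."

# A hypothetical two-book library.
books = {
    "On Authorship": "Every author worries about author's rights and obscurity.",
    "Field Guide": "A pocket guide to the birds of the Pacific coast.",
}

index = build_index(books)
for title in index.get("rights", ()):
    # Only the snippet is ever shown; the digitized full text stays private.
    print(title, "->", snippet(books[title], "rights"))

The legal question sits exactly in the gap the sketch exposes: building the index requires a digitized copy of the whole work, even though readers only ever see the snippets.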

I'm with Google on this one. It would certainly be considered fair use, if, for example, I circulated a catalog of my favorite books, including a handful of quotations from each book that helps people to decide whether to buy a copy. In my mind, providing such snippets algorithmically on demand, as Google does, doesn't change that dynamic. Google allows click-through to the entire book only if the book is in the public domain or if publishers have opted in to the program. If it's unclear who owns the rights to a book, only the snippets are displayed.

A search engine for books will be revolutionary in its benefits. Obscurity is a far greater threat to authors than copyright infringement, or even outright piracy. While publishers invest in each of their books, they depend on bestsellers to keep afloat. They typically throw their products into the market to see what sticks and cease supporting what doesn't, so an author has had just one chance to reach readers. Until now.

Google promises an alternative to the obscurity imposed on most books. It makes that great corpus of less-than-bestsellers accessible to all. By pointing to a huge body of print works online, Google will offer a way to promote books that publishers have thrown away, creating an opportunity for readers to track them down and buy them. Even online sellers like Amazon offer only a small fraction of the university libraries' titles. While there are many unanswered questions about how businesses will help consumers buy the books they've found through a search engine for printed materials that is as powerful as Google's current Web search, there's great likelihood that Google Print's Library Project will create new markets for forgotten content. In one bold stroke, Google will give new value to millions of orphaned works.

I'm sorry to see authors buy into the old-school protectionism of the Authors Guild, not realizing they're acting against their own self-interest. Their resistance can come only from a failure to understand the nature of the program. Google Library is intended to help readers discover copyrighted works, not to give copies away. It's a tremendous service to authors that will help them beat the dismal odds of publishing as usual.

Tim O'Reilly, a publisher of computer books, is the co-producer of the Web 2.0 conference.

Tuesday, September 13, 2005

Storm Warnings

The National Association of Insurance Commissioners, founded in 1871 and headquartered in Kansas City, Missouri, bills itself as the “oldest association of state officials” in the country. Every three months, its members, who include the chief insurance regulators of all fifty states plus the District of Columbia, hold a four-day meeting to discuss issues of common concern. The association’s fall, 2005, meeting was scheduled for this past weekend, and, in addition to seminars on such perennial favorites as “Property Casualty Reinsurance” and “Receivership and Insolvency,” the event’s planners had organized a session on a new topic: global warming. Given recent events in Louisiana and Mississippi, a session on weather-related disasters would surely have been well attended. Unfortunately for the association, the meeting was booked into the Sheraton in downtown New Orleans.

Katrina was so destructive—whole towns and cities devastated, and their traditions swept away—that anyone who would presume to comment on it has a heavy burden. A disaster of this magnitude seems to demand not dispassionate analysis but simple human empathy. To use it as an occasion to point out the folly of U.S. energy policy, as, for example, the German environmental minister, Jürgen Trittin, did, is to invite the charge of insensitivity, or even worse. “The American president shuts his eyes to the economic and human damage that the failure to protect the climate inflicts on his country and the world economy through natural catastrophes like Katrina,” Trittin wrote in the Frankfurter Rundschau. An editor for the London Times online accused Trittin of “intellectual looting,” while the Web version of Der Spiegel announced “another low point for transatlantic relations—and set off by a German minister. How pathetic.” But, callous as it may seem to say so, America’s consumption of fossil fuels and catastrophes like Katrina are indeed connected.

Though hurricanes are, in their details, extremely complicated, basically they all draw their energy from the same source: the warm surface waters of the ocean. This is why they form only in the tropics, and during the season when sea surface temperatures are highest. It follows that if sea surface temperatures increase—as they have been doing—then the amount of energy available to hurricanes will grow. In general, climate scientists predict that climbing CO2 levels will lead to an increase in the intensity of hurricanes, though not in hurricane frequency. (This increase will be superimposed on any natural cycles of hurricane activity.) Meanwhile, as sea levels rise—water expands as it warms—storm surges, like the one that breached the levees in New Orleans, will inevitably become more dangerous. In a paper published in Nature just a few weeks before Katrina struck, a researcher at the Massachusetts Institute of Technology reported that wind-speed measurements made by planes flying through tropical storms showed that the “potential destructiveness” of such storms had “increased markedly” since the nineteen-seventies, right in line with rising sea surface temperatures.

The fact that climbing CO2 levels are expected to produce more storms like Katrina doesn’t mean that Katrina itself was caused by global warming. No single storm, no matter how extreme, can be accounted for in this way; weather events are a function both of factors that can be identified, like the amount of solar radiation reaching the earth and the greenhouse-gas concentrations in the atmosphere, and of factors that are stochastic, or purely random. In response to the many confused claims that were being made about the hurricane, a group of prominent climatologists posted an essay on the Web site RealClimate that asked, “Could New Orleans be the first major U.S. city ravaged by human-caused climate change?” The correct answer, they pointed out, is that this is the wrong question. The science of global warming has nothing to say about any particular hurricane (or drought or heat wave or flood), only about the larger statistical pattern.

For obvious reasons, this larger pattern is also of deep interest to the insurance industry. In June, the Association of British Insurers issued a report forecasting that, owing to climate change, losses from hurricanes in the U.S., typhoons in Japan, and windstorms in Europe were likely to increase by more than sixty per cent in the coming decades. (The report calculated that insured losses from extreme storms—those expected to occur only once every hundred to two hundred and fifty years—could rise to as much as a hundred and fifty billion dollars.) The figures did not take into account the expected increase in the number and wealth of people living in storm-prone areas; correcting for such increases, the losses are likely to be several hundred per cent higher. A report issued last week, which was supposed to have been presented at the National Association of Insurance Commissioners’ meeting in New Orleans, noted that, even before Katrina, catastrophic weather-related losses in the U.S. had been rising “significantly faster than premiums, population, or economic growth.”

Since President Bush announced that the country was withdrawing from the Kyoto Protocol, in March, 2001, the Administration has offered a variety of excuses for why the U.S., which produces nearly a quarter of the world’s greenhouse-gas emissions, can’t be expected to cut back. On the one hand, Administration officials have insisted that the science of global warming is inconclusive; on the other, they’ve cited this same science to argue that the steps demanded by Kyoto are not rigorously enough thought out. As the rest of the world has adopted Kyoto—earlier this year, the treaty became binding on the hundred and forty nations that had ratified it—these arguments have become increasingly indefensible, and the President has fallen back on what one suspects was his real objection all along: complying with the agreement would be expensive. “The Kyoto treaty didn’t suit our needs,” Bush blurted out during a British-television interview a couple of months ago. As Katrina indicates, this argument, too, is empty. It’s not acting to curb greenhouse-gas emissions that’s likely to prove too costly; it’s doing nothing.

Monday, September 05, 2005

Apocalypse Soon

by Robert McNamara

Robert McNamara is worried. He knows how close we’ve come. His counsel helped the Kennedy administration avert nuclear catastrophe during the Cuban Missile Crisis. Today, he believes the United States must no longer rely on nuclear weapons as a foreign-policy tool. To do so is immoral, illegal, and dreadfully dangerous.

It is time—well past time, in my view—for the United States to cease its Cold War-style reliance on nuclear weapons as a foreign-policy tool. At the risk of appearing simplistic and provocative, I would characterize current U.S. nuclear weapons policy as immoral, illegal, militarily unnecessary, and dreadfully dangerous. The risk of an accidental or inadvertent nuclear launch is unacceptably high. Far from reducing these risks, the Bush administration has signaled that it is committed to keeping the U.S. nuclear arsenal as a mainstay of its military power—a commitment that is simultaneously eroding the international norms that have limited the spread of nuclear weapons and fissile materials for 50 years. Much of the current U.S. nuclear policy has been in place since before I was secretary of defense, and it has only grown more dangerous and diplomatically destructive in the intervening years.

Today, the United States has deployed approximately 4,500 strategic, offensive nuclear warheads. Russia has roughly 3,800. The strategic forces of Britain, France, and China are considerably smaller, with 200–400 nuclear weapons in each state’s arsenal. The new nuclear states of Pakistan and India have fewer than 100 weapons each. North Korea now claims to have developed nuclear weapons, and U.S. intelligence agencies estimate that Pyongyang has enough fissile material for 2–8 bombs.

How destructive are these weapons? The average U.S. warhead has a destructive power 20 times that of the Hiroshima bomb. Of the 8,000 active or operational U.S. warheads, 2,000 are on hair-trigger alert, ready to be launched on 15 minutes’ warning. How are these weapons to be used? The United States has never endorsed the policy of “no first use,” not during my seven years as secretary or since. We have been and remain prepared to initiate the use of nuclear weapons—by the decision of one person, the president—against either a nuclear or nonnuclear enemy whenever we believe it is in our interest to do so. For decades, U.S. nuclear forces have been sufficiently strong to absorb a first strike and then inflict “unacceptable” damage on an opponent. This has been and (so long as we face a nuclear-armed, potential adversary) must continue to be the foundation of our nuclear deterrent.

In my time as secretary of defense, the commander of the U.S. Strategic Air Command (SAC) carried with him a secure telephone, no matter where he went, 24 hours a day, seven days a week, 365 days a year. The telephone of the commander, whose headquarters were in Omaha, Nebraska, was linked to the underground command post of the North American Defense Command, deep inside Cheyenne Mountain, in Colorado, and to the U.S. president, wherever he happened to be. The president always had at hand nuclear release codes in the so-called football, a briefcase carried for the president at all times by a U.S. military officer.

The SAC commander’s orders were to answer the telephone by no later than the end of the third ring. If it rang, and he was informed that a nuclear attack by enemy ballistic missiles appeared to be under way, he was allowed 2 to 3 minutes to decide whether the warning was valid (over the years, the United States has received many false warnings), and if so, how the United States should respond. He was then given approximately 10 minutes to determine what to recommend, to locate and advise the president, permit the president to discuss the situation with two or three close advisors (presumably the secretary of defense and the chairman of the Joint Chiefs of Staff), and to receive the president’s decision and pass it immediately, along with the codes, to the launch sites. The president essentially had two options: He could decide to ride out the attack and defer until later any decision to launch a retaliatory strike. Or, he could order an immediate retaliatory strike, from a menu of options, thereby launching U.S. weapons that were targeted on the opponent’s military-industrial assets. Our opponents in Moscow presumably had and have similar arrangements.

The whole situation seems so bizarre as to be beyond belief. On any given day, as we go about our business, the president is prepared to make a decision within 20 minutes that could launch one of the most devastating weapons in the world. To declare war requires an act of Congress, but to launch a nuclear holocaust requires 20 minutes’ deliberation by the president and his advisors. But that is what we have lived with for 40 years. With very few changes, this system remains largely intact, including the “football,” the president’s constant companion.

I was able to change some of these dangerous policies and procedures. My colleagues and I started arms control talks; we installed safeguards to reduce the risk of unauthorized launches; we added options to the nuclear war plans so that the president did not face an all-or-nothing choice of response; and we eliminated the vulnerable and provocative nuclear missiles in Turkey. I wish I had done more, but we were in the midst of the Cold War, and our options were limited.

The United States and our NATO allies faced a strong Soviet and Warsaw Pact conventional threat. Many of the allies (and some in Washington as well) felt strongly that preserving the U.S. option of launching a first strike was necessary for the sake of keeping the Soviets at bay. What is shocking is that today, more than a decade after the end of the Cold War, the basic U.S. nuclear policy is unchanged. It has not adapted to the collapse of the Soviet Union. Plans and procedures have not been revised to make the United States or other countries less likely to push the button. At a minimum, we should remove all strategic nuclear weapons from “hair-trigger” alert, as others have recommended, including Gen. George Lee Butler, the last commander of SAC. That simple change would greatly reduce the risk of an accidental nuclear launch. It would also signal to other states that the United States is taking steps to end its reliance on nuclear weapons.

We pledged to work in good faith toward the eventual elimination of nuclear arsenals when we negotiated the Nuclear Non-Proliferation Treaty (NPT) in 1968. In May, diplomats from more than 180 nations are meeting in New York City to review the NPT and assess whether members are living up to the agreement. The United States is focused, for understandable reasons, on persuading North Korea to rejoin the treaty and on negotiating deeper constraints on Iran’s nuclear ambitions. Those states must be convinced to keep the promises they made when they originally signed the NPT—that they would not build nuclear weapons in return for access to peaceful uses of nuclear energy. But the attention of many nations, including some potential new nuclear weapons states, is also on the United States. Keeping such large numbers of weapons, and maintaining them on hair-trigger alert, are potent signs that the United States is not seriously working toward the elimination of its arsenal and raise troubling questions as to why any other state should restrain its nuclear ambitions.


A Preview of the Apocalypse
The destructive power of nuclear weapons is well known, but given the United States’ continued reliance on them, it’s worth remembering the danger they present. A 2000 report by the International Physicians for the Prevention of Nuclear War describes the likely effects of a single 1 megaton weapon—dozens of which are contained in the Russian and U.S. inventories. At ground zero, the explosion creates a crater 300 feet deep and 1,200 feet in diameter. Within one second, the atmosphere itself ignites into a fireball more than a half-mile in diameter. The surface of the fireball radiates nearly three times the light and heat of a comparable area of the surface of the sun, extinguishing in seconds all life below and radiating outward at the speed of light, causing instantaneous severe burns to people within one to three miles. A blast wave of compressed air reaches a distance of three miles in about 12 seconds, flattening factories and commercial buildings. Debris carried by winds of 250 mph inflicts lethal injuries throughout the area. At least 50 percent of people in the area die immediately, prior to any injuries from radiation or the developing firestorm.

Of course, our knowledge of these effects is not entirely hypothetical. Nuclear weapons, with roughly one seventieth of the power of the 1 megaton bomb just described, were twice used by the United States in August 1945. One atomic bomb was dropped on Hiroshima. Around 80,000 people died immediately; approximately 200,000 died eventually. Later, a similar-sized bomb was dropped on Nagasaki. On Nov. 7, 1995, the mayor of Nagasaki recalled his memory of the attack in testimony to the International Court of Justice:

Nagasaki became a city of death where not even the sound of insects could be heard. After a while, countless men, women and children began to gather for a drink of water at the banks of nearby Urakami River, their hair and clothing scorched and their burnt skin hanging off in sheets like rags. Begging for help they died one after another in the water or in heaps on the banks.… Four months after the atomic bombing, 74,000 people were dead, and 75,000 had suffered injuries, that is, two-thirds of the city population had fallen victim to this calamity that came upon Nagasaki like a preview of the Apocalypse.
Why did so many civilians have to die? Because the civilians, who made up nearly 100 percent of the victims of Hiroshima and Nagasaki, were unfortunately “co-located” with Japanese military and industrial targets. Their annihilation, though not the objective of those dropping the bombs, was an inevitable result of the choice of those targets. It is worth noting that during the Cold War, the United States reportedly had dozens of nuclear warheads targeted on Moscow alone, because it contained so many military targets and so much “industrial capacity.”

Presumably, the Soviets similarly targeted many U.S. cities. The statement that our nuclear weapons do not target populations per se was and remains totally misleading in the sense that the so-called collateral damage of large nuclear strikes would include tens of millions of innocent civilian dead.

This in a nutshell is what nuclear weapons do: They indiscriminately blast, burn, and irradiate with a speed and finality that are almost incomprehensible. This is exactly what countries like the United States and Russia, with nuclear weapons on hair-trigger alert, continue to threaten every minute of every day in this new 21st century.


No Way To Win
I have worked on issues relating to U.S. and NATO nuclear strategy and war plans for more than 40 years. During that time, I have never seen a piece of paper that outlined a plan for the United States or NATO to initiate the use of nuclear weapons with any benefit for the United States or NATO. I have made this statement in front of audiences, including NATO defense ministers and senior military leaders, many times. No one has ever refuted it. To launch weapons against a nuclear-equipped opponent would be suicidal. To do so against a nonnuclear enemy would be militarily unnecessary, morally repugnant, and politically indefensible.

I reached these conclusions very soon after becoming secretary of defense. Although I believe Presidents John F. Kennedy and Lyndon Johnson shared my view, it was impossible for any of us to make such statements publicly because they were totally contrary to established NATO policy. After leaving the Defense Department, I became president of the World Bank. During my 13-year tenure, from 1968 to 1981, I was prohibited, as an employee of an international institution, from commenting publicly on issues of U.S. national security. After my retirement from the bank, I began to reflect on how I, with seven years’ experience as secretary of defense, might contribute to an understanding of the issues with which I began my public service career.

At that time, much was being said and written regarding how the United States could, and why it should, be able to fight and win a nuclear war with the Soviets. This view implied, of course, that nuclear weapons did have military utility; that they could be used in battle with ultimate gain to whoever had the largest force or used them with the greatest acumen. Having studied these views, I decided to go public with some information that I knew would be controversial, but that I felt was needed to inject reality into these increasingly unreal discussions about the military utility of nuclear weapons. In articles and speeches, I criticized the fundamentally flawed assumption that nuclear weapons could be used in some limited way. There is no way to effectively contain a nuclear strike—to keep it from inflicting enormous destruction on civilian life and property, and there is no guarantee against unlimited escalation once the first nuclear strike occurs. We cannot avoid the serious and unacceptable risk of nuclear war until we recognize these facts and base our military plans and policies upon this recognition. I hold these views even more strongly today than I did when I first spoke out against the nuclear dangers our policies were creating. I know from direct experience that U.S. nuclear policy today creates unacceptable risks to other nations and to our own.


What Castro Taught Us
Among the costs of maintaining nuclear weapons is the risk—to me an unacceptable risk—of use of the weapons either by accident or as a result of misjudgment or miscalculation in times of crisis. The Cuban Missile Crisis demonstrated that the United States and the Soviet Union—and indeed the rest of the world—came within a hair’s breadth of nuclear disaster in October 1962.

Indeed, according to former Soviet military leaders, at the height of the crisis, Soviet forces in Cuba possessed 162 nuclear warheads, including at least 90 tactical warheads. At about the same time, Cuban President Fidel Castro asked the Soviet ambassador to Cuba to send a cable to Soviet Premier Nikita Khrushchev stating that Castro urged him to counter a U.S. attack with a nuclear response. Clearly, there was a high risk that in the face of a U.S. attack, which many in the U.S. government were prepared to recommend to President Kennedy, the Soviet forces in Cuba would have decided to use their nuclear weapons rather than lose them. Only a few years ago did we learn that the four Soviet submarines trailing the U.S. naval vessels near Cuba each carried torpedoes with nuclear warheads. Each of the sub commanders had the authority to launch his torpedoes. The situation was even more frightening because, as the lead commander recounted to me, the subs were out of communication with their Soviet bases, and they continued their patrols for four days after Khrushchev announced the withdrawal of the missiles from Cuba.

The lesson, if it had not been clear before, was made so at a conference on the crisis held in Havana in 1992, when we first began to learn from former Soviet officials about their preparations for nuclear war in the event of a U.S. invasion. Near the end of that meeting, I asked Castro whether he would have recommended that Khrushchev use the weapons in the face of a U.S. invasion, and if so, how he thought the United States would respond. “We started from the assumption that if there was an invasion of Cuba, nuclear war would erupt,” Castro replied. “We were certain of that…. [W]e would be forced to pay the price that we would disappear.” He continued, “Would I have been ready to use nuclear weapons? Yes, I would have agreed to the use of nuclear weapons.” And he added, “If Mr. McNamara or Mr. Kennedy had been in our place, and had their country been invaded, or their country was going to be occupied … I believe they would have used tactical nuclear weapons.”

I hope that President Kennedy and I would not have behaved as Castro suggested we would have. His decision would have destroyed his country. Had we responded in a similar way, the damage to the United States would have been unthinkable. But human beings are fallible. In conventional war, mistakes cost lives, sometimes thousands of lives. However, if mistakes were to affect decisions relating to the use of nuclear forces, there would be no learning curve. They would result in the destruction of nations. The indefinite combination of human fallibility and nuclear weapons carries a very high risk of nuclear catastrophe. There is no way to reduce the risk to acceptable levels, other than to first eliminate the hair-trigger alert policy and later to eliminate or nearly eliminate nuclear weapons. The United States should move immediately to institute these actions, in cooperation with Russia. That is the lesson of the Cuban Missile Crisis.


A Dangerous Obsession
On Nov. 13, 2001, President George W. Bush announced that he had told Russian President Vladimir Putin that the United States would reduce “operationally deployed nuclear warheads” from approximately 5,300 to a level between 1,700 and 2,200 over the next decade. This scaling back would approach the 1,500 to 2,200 range that Putin had proposed for Russia. However, the Bush administration’s Nuclear Posture Review, mandated by the U.S. Congress and issued in January 2002, presents quite a different story. It assumes that strategic offensive nuclear weapons in much larger numbers than 1,700 to 2,200 will be part of U.S. military forces for the next several decades. Although the number of deployed warheads will be reduced to 3,800 in 2007 and to between 1,700 and 2,200 by 2012, the warheads and many of the launch vehicles taken off deployment will be maintained in a “responsive” reserve from which they could be moved back to the operationally deployed force. The Nuclear Posture Review received little attention from the media. But its emphasis on strategic offensive nuclear weapons deserves vigorous public scrutiny. Although any proposed reduction is welcome, it is doubtful that survivors—if there were any—of an exchange of 3,200 warheads (the U.S. and Russian numbers projected for 2012), with a destructive power approximately 65,000 times that of the Hiroshima bomb, could detect a difference between the effects of such an exchange and one that would result from the launch of the current U.S. and Russian forces totaling about 12,000 warheads.
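
A quick way to grasp the scale of those figures is to back out the average yield each of those 3,200 warheads would have to carry, using the essay's own earlier estimate of Hiroshima as roughly one seventieth of a megaton. Here is a minimal sketch of that arithmetic in Python (the inputs are the article's; the calculation itself is added for illustration):

```python
# Average warhead yield implied by the article's figures: 3,200 warheads
# carrying roughly 65,000 times the destructive power of the Hiroshima
# bomb, with Hiroshima taken as ~1/70 of a megaton (per the essay above).
hiroshima_mt = 1 / 70
warheads = 3200
total_mt = 65_000 * hiroshima_mt            # roughly 930 megatons in total

print(f"total destructive power: {total_mt:.0f} MT")
print(f"implied average yield: {total_mt / warheads * 1000:.0f} kt per warhead")
```

That works out to roughly 290 kilotons per warhead, about 20 Hiroshimas apiece, broadly consistent with the yields commonly cited for deployed strategic warheads.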

In addition to projecting the deployment of large numbers of strategic nuclear weapons far into the future, the Bush administration is planning an extensive and expensive series of programs to sustain and modernize the existing nuclear force and to begin studies for new launch vehicles, as well as new warheads for all of the launch platforms. Some members of the administration have called for new nuclear weapons that could be used as bunker busters against underground shelters (such as the shelters Saddam Hussein used in Baghdad). New production facilities for fissile materials would need to be built to support the expanded force. The plans provide for integrating a national ballistic missile defense into the new triad of offensive weapons to enhance the nation’s ability to use its “power projection forces” by improving our ability to counterattack an enemy. The Bush administration also announced that it has no intention of asking Congress to ratify the Comprehensive Test Ban Treaty (CTBT), and, though no decision to test has been made, the administration has ordered the national laboratories to begin research on new nuclear weapons designs and to prepare the underground test sites in Nevada for nuclear tests if necessary in the future. Clearly, the Bush administration assumes that nuclear weapons will be part of U.S. military forces for at least the next several decades.

Good faith participation in international negotiation on nuclear disarmament—including participation in the CTBT—is a legal and political obligation of all parties to the NPT that entered into force in 1970 and was extended indefinitely in 1995. The Bush administration’s nuclear program, alongside its refusal to ratify the CTBT, will be viewed, with reason, by many nations as equivalent to a U.S. break from the treaty. It says to the nonnuclear weapons nations, “We, with the strongest conventional military force in the world, require nuclear weapons in perpetuity, but you, facing potentially well-armed opponents, are never to be allowed even one nuclear weapon.”

If the United States continues its current nuclear stance, over time, substantial proliferation of nuclear weapons will almost surely follow. Some, or all, of such nations as Egypt, Japan, Saudi Arabia, Syria, and Taiwan will very likely initiate nuclear weapons programs, increasing both the risk of use of the weapons and the diversion of weapons and fissile materials into the hands of rogue states or terrorists. Diplomats and intelligence agencies believe Osama bin Laden has made several attempts to acquire nuclear weapons or fissile materials. It has been widely reported that Sultan Bashiruddin Mahmood, former director of Pakistan’s nuclear reactor complex, met with bin Laden several times. Were al Qaeda to acquire fissile materials, especially enriched uranium, its ability to produce nuclear weapons would be great. The knowledge of how to construct a simple gun-type nuclear device, like the one we dropped on Hiroshima, is now widespread. Experts have little doubt that terrorists could construct such a primitive device if they acquired the requisite enriched uranium material. Indeed, just last summer, at a meeting of the National Academy of Sciences, former Secretary of Defense William J. Perry said, “I have never been more fearful of a nuclear detonation than now.… There is a greater than 50 percent probability of a nuclear strike on U.S. targets within a decade.” I share his fears.


A Moment of Decision
We are at a critical moment in human history—perhaps not as dramatic as that of the Cuban Missile Crisis, but a moment no less crucial. Neither the Bush administration, the Congress, the American people, nor the people of other nations have debated the merits of alternative, long-range nuclear weapons policies for their countries or the world. They have not examined the military utility of the weapons; the risk of inadvertent or accidental use; the moral and legal considerations relating to the use or threat of use of the weapons; or the impact of current policies on proliferation. Such debates are long overdue. If they are held, I believe they will conclude, as have I and an increasing number of senior military leaders, politicians, and civilian security experts: We must move promptly toward the elimination—or near elimination—of all nuclear weapons. For many, there is a strong temptation to cling to the strategies of the past 40 years. But to do so would be a serious mistake leading to unacceptable risks for all nations.

Robert S. McNamara was U.S. secretary of defense from 1961 to 1968 and president of the World Bank from 1968 to 1981.

A Rocket To Nowhere

The Space Shuttle Discovery is up in orbit, safely docked to the International Space Station, and for the next five days, astronauts will be busy figuring out whether it's safe for them to come home. In the meantime, the rest of the Shuttle fleet is grounded (confined to base, not allowed to play with its spacecraft friends) because that pesky foam on the fuel tank keeps falling off.

There are 28 Space Shuttle flights still scheduled, firmly or tentatively, through 2010, when the current orbiter is supposed to retire in favor of a yet-to-be-designed replacement (which will not fly until 2014). On the eve of this launch, NASA put the likelihood of losing an orbiter at 1 in 100, a somewhat stunning concession by an agency notorious for minimizing the risk of its prize program. Given the track record, and the unanticipated foam problems, it's probably reasonable to assume a failure rate approaching 2%, a number close to the observed failure rate (1 in 57) and one likely to fall on the conservative side as the orbiters age.

For all the talk of safety improvements, there really isn't a way to make the Shuttle much safer. The changes made with so much fanfare after the Columbia loss have been marginal, serving to prevent the psychologically untenable situation of watching damage occur at launch and being unable to do anything about it before re-entry, many days later. Actual safety improvements to the Shuttle - putting the orbiter on top of the launch stack, installing a crew escape system - would be so hideously expensive that they have been consistently vetoed.

With 28 launches to go, probability tells us that the chance of losing another orbiter before the program's scheduled retirement is about 50-50. But past experience suggests that NASA will continue flying these things until one of them blows up again (note that suspicious four-year gap in manned flight capability right around the time the Shuttle is supposed to retire). This seems like as good a time as any to ask: why are we doing this?
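
The coin-flip figure is simple arithmetic on the per-flight failure rates quoted above. A minimal sketch in Python, assuming each launch is an independent trial with a fixed risk (a simplification, since the risk presumably grows as the orbiters age):

```python
# Probability of losing at least one orbiter over the 28 remaining flights,
# treating each launch as an independent trial with a fixed failure risk.

def loss_probability(per_flight_risk: float, flights: int = 28) -> float:
    # P(at least one loss) = 1 - P(every flight succeeds)
    return 1.0 - (1.0 - per_flight_risk) ** flights

for label, risk in [("NASA's 1-in-100 estimate", 1 / 100),
                    ("observed rate, 1 in 57", 1 / 57),
                    ("2% assumed above", 0.02)]:
    print(f"{label}: {loss_probability(risk):.0%}")
```

At the assumed 2% rate the cumulative risk comes out near 43%, which rounds to the coin flip described above; even NASA's own 1-in-100 estimate puts the chance of another loss at about one in four.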

Future archaeologists trying to understand what the Shuttle was for are going to have a mess on their hands. Why was such a powerful rocket used only to reach very low orbits, where air resistance and debris would limit the useful lifetime of a satellite to a few years? Why was there both a big cargo bay and a big crew compartment? What kind of missions would require people to assist in deploying a large payload? Why was the Shuttle intentionally crippled so that it could not land on autopilot? 1 Why go through all the trouble to give the Shuttle large wings if it has no jet engines and the glide characteristics of a brick? Why build such complex, adjustable main engines and then rely on the equivalent of two giant firecrackers to provide most of the takeoff thrust? Why use a glass thermal protection system, rather than a low-tech ablative shield? And having chosen such a fragile method of heat protection, why on earth mount the orbiter on the side of the rocket, where things will fall on it during launch?

Taken on its own merits, the Shuttle gives the impression of a vehicle designed to be launched repeatedly to near-Earth orbit, tended by five to seven passengers with little concern for their personal safety, and requiring extravagant care and preparation before each flight, with an almost fetishistic emphasis on reuse. Clearly this primitive space plane must have been a sacred artifact, used in religious rituals to deliver sacrifice to a sky god.

As tempting as it is to picture a blood-spattered Canadarm flinging goat carcasses into the void, we know that the Shuttle is the fruit of what was supposed to be a rational decision-making process. That so much about the vehicle design is bizarre and confused is the direct result of the Shuttle's little-remembered role as a military vehicle during the Cold War.

By the time Shuttle development began, it was clear that the original vision of a Shuttle as part of a larger space transportation system was far too costly and ambitious to receive Congressional support. So NASA concentrated on building only the first component of its vision, a reusable manned spacecraft that could reach low earth orbit. NASA assumed it would be able to fly Shuttle missions with a turnaround time as low as two weeks, which left the vexing question of what to do with all that spare launch capacity. The tiny commercial launch market was in no shape to supply such a wealth of satellites, so NASA turned to the one agency that had an abundance of things requiring shooting into space - the Air Force - and asked it to abandon its unmanned rocket programs, instead committing all future satellite launches to the Shuttle.

The Air Force was only too happy to agree, but at a crippling price. What the Air Force wanted to launch was spy satellites - lots of them, bulky telescopes with heavy mirrors, the bigger the better - and it wanted to launch them in an orbit over the Earth's poles, so they could snoop over the maximum amount of Red territory. This meant NASA had to go back to the drawing board, since polar orbits would require a heavier orbiter than the Shuttle design had anticipated 2 , which in turn meant using a bigger rocket at launch, and dissipating more heat during re-entry.

Moreover, there was no way to launch a polar mission safely from Kennedy Space Center - it would mean either overflying heavily populated areas in the Carolinas or risking capture of a fuel tank by the wily Cubans. So the Air Force also demanded, and got, billions in funding to build a new Shuttle launch facility at Vandenberg Air Force base in California. And because some of the Air Force's military missions involved capturing a Soviet satellite on the sly and landing after one orbit, the Air Force demanded that the Shuttle be capable of gliding over a thousand miles cross-range during re-entry, so that it could catch up with the rapidly eastbound Air Force base underneath it. This meant bigger wings, which in turn meant more weight, an even more powerful rocket, and again a more complicated heat shield.

Most of the really wrong design decisions in the Shuttle system - the side-mounted orbiter, solid rocket boosters, lack of air-breathing engines, no escape system, fragile heat protection - were the direct fallout of this design phase, when tight budgets and onerous Air Force requirements forced engineers to improvise solutions to problems that had as much to do with the mechanics of Congressional funding as the mechanics of flight. In a pattern that would recur repeatedly in the years to come, NASA managers decided that they were better off making spending cuts on initial design even if they resulted in much higher operating costs over the lifetime of the program.

To further cut costs, and keep the weight from growing prohibitive, the Shuttle became the first manned spacecraft to fly without any kind of crew escape system, relying on certain components (solid rockets, wing tiles, landing gear) to function with complete reliability 3 . NASA also decided not to make the Shuttle capable of unmanned flight, so that the first test flight of the vehicle would have astronauts on board. This was a major departure for the traditionally conservative agency, which had relied on redundant systems wherever possible, and always tested unmanned prototypes of any new rocket. It showed how confident NASA had grown in its ability to correctly predict, simulate, and design for high reliability 4 .

The final Shuttle design, incorporating all of the budgetary and Air Force design constraints, was impressive but not particularly useful. Very soon after the start of the program, it became clear that Shuttle launches would not be routine events, that it would cost a great deal of money to repair each orbiter after its trip to space, and that estimates of launch cost and frequency had been wildly optimistic. At the same time, the Air Force proved unable to get the Vandenberg base ready for use, negating much of the reason for the extensive Shuttle redesign. After the Challenger explosion, the Vandenberg base was quietly mothballed. Not once did the Shuttle fly a mission to polar orbit.

Having failed at its stated goal, the Shuttle program proved adept at finding changing rationales for its existence. It was, after all, an awfully large spacecraft, and it was a bird in the hand, giving it an enormous advantage over any suggested replacement.

As the Strategic Defense Initiative took off, the Shuttle played a central role, envisioned both as a way of launching the complex components of SDI and snatching away whatever Soviet satellites might be sent up to interfere. The Shuttle even helped Reagan inadvertently bankrupt the Soviet Union, as the Soviets decided they needed a rival orbiter, and cloned the vehicle at terrific expense 5 .

When the Cold War fizzled out towards the end of the eighties, NASA rebranded the Shuttle as a way of jump-starting the leap of capitalism from the Earth's surface to outer space, offering a variety of heavily subsidized research platforms for the private sector (which proved remarkably resistant to the allure of a manufacturing environment where raw materials cost $40,000/kg). And it stressed the scientific value of manned space flight, with each Shuttle mission now bespangled in a dazzling assortment of scientific experiments, like so many talismans against budget reduction. Suddenly it seemed you could not change your socks in space without doing valuable scientific research that would contribute directly to improving the lives of the American taxpayer.

This period of Shuttle-as-cancer-cure found its apotheosis in the brilliantly cynical return of John Glenn to space. While legislators had been accelerated to orbital velocity before, Glenn was both a Senator and a sixties space hero, making him an ideal public relations cargo. Naturally, the slightest hint that the Senator had been launched into space for reasons other than the urgent demands of medical science was indignantly dismissed by the mission planners. At the now-usual cost of around a billion dollars 6 , STS-95 spent ten days engaged in the following experiments:

* Sent cockroaches up to see how microgravity would affect their growth at various stages of their life cycle
* Studied a "space rose" to see what kinds of essential oils it would produce in a weightless environment. (In a triumph of technology transfer, this was later developed into a perfume.)
* At the suggestion of elementary school children, monitored everyday objects such as soap, crayons, and string to see whether their inertial mass would change in a weightless environment. Preliminary results suggest that Newton was right.
* Monitored the growth of fish eggs and rice plants in space (orbital sushi?)
* Tested new space appliances, including a space camcorder and space freezer
* Checked to see whether melatonin would make the crew sleepy (it did not)



And of course, there was John Glenn, monitored inside and out, blood tested, urine sampled, entire organism analyzed for signs of accelerated aging. Close observation of the Senator suggested that there might not be any medical obstacles to launching the entire legislative branch into space, possibly the most encouraging scientific result of the mission.

Along with these craggy summits of basic research, the astronauts performed a raft of prepared experiments in metallurgy, medicine, fluid mechanics, embryology, and solar wind detection, all of which had one thing in common - they were designed to minimize crew interaction, in most cases requiring the astronauts to do little more than flip a switch 7 .

This brings up a delicate point about justifying manned missions with science. In order to make any straight-faced claims about being cost effective, you have to cart an awful lot of science with you into orbit, which in turn means you need to make the experiments as easy to operate as possible. But if the experiments are all automated, you remove the rationale for sending a manned mission in the first place. Apart from question-begging experiments on the physiology of space flight, there is little you can do to resolve this dilemma. In essence, each 'pure science' Shuttle science mission consists of several dozen automated experiments alongside an enormous, irrelevant, repeated experiment in keeping a group of primates alive and healthy outside the atmosphere.

Given this shaky ground, NASA has been understandably eager to put up its true brilliancy, part of the original STS plan that would not just create a need for Shuttle missions into the foreseeable future, but make it practically impossible to cancel the manned space program: the International Space Station.

The ISS was another child of the Cold War: originally intended to show the Russians up and provide a permanent American presence in space, then hastily amended as a way to keep the Russian space scientists busy while their economy was falling to pieces. Like the Shuttle, it has been redesigned and reduced in scope so many times that it bears no resemblance to its original conception. Launched in an oblique, low orbit that guarantees its permanent uselessness, it serves as yin to the shuttle's yang, justifying an endless stream of future Shuttle missions through the simple stratagem of being too expensive to abandon.

Of course, the ISS has also been preemptively armed with science, but NASA has found much more effective safeguards against potential budget cuts. The station's inordinately expensive modules have mainly come from foreign space agencies, ensuring that even a NASA administrator foolhardy enough to let the thing drop into the sea would contravene a fistful of international treaties. And the station requires a permanent crew, a trick NASA learned from the Shuttle, so that there can be no question of mothballing it or converting it into an unmanned research platform.

In the thirty years since the last Moon flight, we have succeeded in creating a perfectly self-contained manned space program, in which the Shuttle goes up to save the Space Station (undermanned, incomplete, breaking down, filled with garbage, and dropping at a hundred meters per day), and the Space Station offers the Shuttle a mission and a destination. The Columbia accident has added a beautiful finishing symmetry - the Shuttle is now required to fly to the ISS, which will serve as an inspection station for the fragile thermal tiles, and a lifeboat in case something goes seriously wrong.

This closed cycle is so perfect that the last NASA administrator even cancelled the only mission in which there was a compelling need for a manned space flight - the Hubble telescope repair and upgrade - on the grounds that it would be too dangerous to fly the Shuttle away from the ISS, thereby detaching the program from its last connection to reason and leaving it free to float off into its current absurdist theater of backflips, gap fillers, Canadarms and heroic expeditions to the bottom of the spacecraft.

There is no satisfactory answer for why all this commotion must take place in orbit. To the uneducated mind, it would seem we could accomplish our current manned space flight objectives more easily by not launching any astronauts into space at all - leaving the Shuttle and ISS on the ground would result in massive savings without the slightest impact on basic science, while also increasing mission safety by many orders of magnitude. It might even bring mission costs within the original 1970's estimates, and allow us to continue the Shuttle program well into the middle of the century.

But NASA dismisses such helpful suggestions as unworthy of its mission of 'exploration', likening critics of manned space flight to those Europeans in the 1500's who would have cancelled the great voyages of discovery rather than face the loss of one more ship.

Of course, the great explorers of the 1500's did not sail endlessly back and forth a hundred miles off the coast of Portugal, nor did they construct a massive artificial island they could repair to if their boat sprang a leak. And we must remember that space is called space for a reason - there is nothing in it, at least not where the Shuttle goes, save for a few fast-moving pieces of junk from the last few times we went up there, forty years ago. The interesting bits in space are all much further away, and we have not paid them a visit since 1972. In fact, despite an ambitious "Vision for Space Exploration", there seems to be no mandate or interest in pursuing this kind of exploration, and all the significant deadlines are pushed comfortably past the tenure of incumbent politicians.

Meanwhile, with the Shuttle up on blocks, a wealth of unmanned probes has been doing exactly the kind of exploration NASA considers so important, except without the encumbrance of big hairless monkeys on board. And therein lies another awkward fact for NASA. While half the NASA budget gets eaten by the manned space program, the other half is quietly spent on true aerospace work and a variety of robotic probes of immense scientific value. All of the actual exploration taking place at NASA is being done by unmanned vehicles. And when some of those unmanned craft fail, no one is killed, and the unmanned program is not halted for three years.

Over the past three years, while the manned program has been firing styrofoam out of cannons on the ground, unmanned NASA and ESA programs have been putting landers on Titan, shooting chunks of metal into an inbound comet, driving rovers around Mars and continuing to gather a variety of priceless observations from the many active unmanned orbital telescopes and space probes sprinkled through the Solar System. At the same time, the skeleton crew on the ISS has been fixing toilets, debugging laptops, changing batteries, and speaking to the occasional elementary school over ham radio 8 .

NASA is convinced that stopping the Shuttle program would mean an indefinite end to American manned space flight, and so it will go to almost any length to make sure there is a continuous manned presence in space. The arguments in its defense may be disingenuous, this reasoning goes, but the manned program is an irreplaceable asset in itself, as well as a high-profile mission that keeps funding flowing in for worthy but less glamorous NASA activities.

But this attitude is actually doing damage to the prospects of real manned space exploration. Sinking half the NASA budget into the Shuttle and ISS precludes the possibility of doing truly groundbreaking work on space flight. As the orbiters age, their upkeep and safety requirements are becoming an expensive antiquarian exercise, forcing engineers to spend their ingenuity repairing obsolete components and devising expensive maintenance techniques for sclerotic spacecraft, rather than applying their lessons to a new generation of rockets. The retardant effect the Shuttle has had on technology (like the two-decade-long freeze in expendable rocket development) outweighs any of its modest initial benefits to materials science, aerodynamics, and rocket design.

The Apollo program showed how successful the agency could be when given a clear technical objective and the budget required to meet it. But the Shuttle program has shown the flip side of NASA, as rational goals detach from reality under constantly changing political and funding pressures. NASA has learned valuable bureaucratic lessons - it knows to spread its work over as many jurisdictions as possible, it has learned that chronic funding is always better than acute funding, however much money a one-time outlay might save in the long run, and it has demonstrated that ineffectual projects can be sustained indefinitely if cancelling them is sufficiently awkward. But these are lessons we have already learned for far less on the ground, with Amtrak, and building a more photogenic, spaceborne version of the Sunset Limited in orbit hardly seems like a space policy for the 21st century.

The people who work at and run NASA are not cynical, but the charade of manned space flight is turning NASA into a cynical organization. For all the talk of building a culture of safety, no one has pointed out the inherent contradiction in requiring that a program justified on irrational grounds be run in a rational manner. In an atmosphere where special pleading and wishful thinking about the benefits of manned flights to low earth orbit are not just tolerated, but required of astronauts and engineers, how can one demand complete integrity and intellectual honesty on safety of flight issues? It makes no sense to expect NASA to maintain a standard of intellectual rigor in operations that it can magically ignore when it comes to policy and planning.

The goal cannot be to have a safe space program - rocket science is going to remain difficult and risky. But we have the right to demand that the space program have some purpose beyond trying to keep its participants alive. NASA needs to take a lesson in courage from its astronauts, and demand either a proper, funded mandate for manned exploration, or close down the program. By NASA's own arguments, the commercial, technological and intellectual allure of manned space exploration is so great that it will not be a hard case to make. But even if the worst happens and the Shuttles are mothballed, with the ISS left abandoned, the loss to science will have been negligible. That is the great tragedy of the current 'return to flight', and the sooner we force the agency to confront its failure, the greater our chances of salvaging a space program worth keeping out of the current mess.




Many source links for this article are available on my del.icio.us page.

1 The landing gear switch on the Shuttle is not connected to the flight computer by special request of the astronauts. This is the only impediment to fully automated landing. (up)

2 Launching a shuttle due east from the Kennedy Space Center steals a 900 mph boost from the Earth's rotation. Launching a shuttle over the poles requires cancelling out this eastward component before accelerating to the usual orbital velocity. (up)
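
The 900 mph figure is easy to reproduce. A rough sketch, with the Earth's radius and the launch site latitude taken as standard approximations rather than from the footnote:

```python
import math

# Eastward ground speed due to Earth's rotation at the Shuttle launch site.
EARTH_RADIUS_M = 6.378e6      # equatorial radius, approximate
SIDEREAL_DAY_S = 86164        # seconds for one full rotation of the Earth
KSC_LATITUDE_DEG = 28.5       # approximate latitude of Kennedy Space Center

equatorial_speed = 2 * math.pi * EARTH_RADIUS_M / SIDEREAL_DAY_S  # ~465 m/s
site_speed = equatorial_speed * math.cos(math.radians(KSC_LATITUDE_DEG))

print(f"{site_speed:.0f} m/s (~{site_speed * 2.23694:.0f} mph)")  # ~914 mph
```

The result lands within a couple of percent of the footnote's round number.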

3 The Columbia flew its first four missions with conventional ejection seats, which would have allowed the two pilots to bail out shortly before touchdown, though not during launch. The ejection seats were removed in later missions since it was impossible to provide them to the full crew. The current escape system on the shuttle comprises a set of parachutes, a hatch, and a long stick for the astronauts to slide along; it is only operable during sustained gliding flight below 40,000 feet, and is mainly there in case the crew needs to ditch the orbiter at sea during a launch abort. More elaborate escape systems have repeatedly been considered, but have proven prohibitively heavy and expensive. (up)

4 In a narrow sense, they succeeded. In both cases where a shuttle was lost, NASA had extensive warning of the failure mode in question, and had not addressed it for systemic and organizational reasons. But those organizational failures themselves represent a point of failure, one that lies outside the scope of an engineering analysis, which has to assume that procedures for checking critical components will work as reliably as the components whose reliability the procedures are supposed to safeguard. (up)

5 The Soviet Shuttle, the Buran (snowstorm), was an aerodynamic clone of the American orbiter, but incorporated many original features that had been considered and rejected for the American program, such as all-liquid rocket boosters, jet engines, ejection seats and an unmanned flight capability. You know you're in trouble when the Russians are adding safety features to your design. (up)

6 The original Shuttle plan called for launch costs in the range of $10 - $20 million. A commonly cited figure puts the actual cost at $400 million. (up)

7 The experiments could not be made fully automatic because NASA policy requires that experiments on manned missions involve the crew. (up)

8 The NASA obsession with elementary and middle school participation in space flight is curious, and demonstrates how low a status actual in-flight science has compared with orbital public relations. You are not likely to hear of CERN physicists colliding tin atoms sent to them by a primary school in Toulouse, or the Hubble space telescope being turned around to point at waving middle schoolers on a playground in Texas, yet even the minimal two-man ISS crew - one short of the stated minimum needed to run the station - regularly takes time to talk to schoolchildren. (up)

http://www.idlewords.com/2005/08/a_rocket_to_nowhere.htm