Category Archives: Technology

Thinking Outside the Box

Once you’ve secured the software and hardware, why you’re still vulnerable and how to address it.


This is a non-technical article, the aim of which is to raise awareness of the threats that often get overlooked when hardening software and hardware. In practice, you can only ever mitigate security threats. For example, Symantec’s Brian Dye has just told the Wall Street Journal that Symantec, the biggest anti-virus vendor, is getting out of the anti-virus business because the software stops only around 45% of attacks. He says the money is no longer in “protect”, but instead in “detect and respond”. Or consider the compromising of RSA’s SecurID. The theory at the time was that a nation state was trying to get access to secrets at a military aerospace vendor but was blocked by the vendor’s use of SecurID. So instead the attackers sent targeted email to RSA employees, which enabled them to breach SecurID security and get what they were really after. RSA took a lot of criticism for how it responded to this attack. As an aside, that’s why it’s important to train your staff to recognize phishing emails.

If you don’t have a disaster recovery plan, you’ll struggle when the worst happens. I’ll just call out the Heartbleed OpenSSL vulnerability as an example of the worst happening: even if you changed all your passwords immediately, you knew you’d have to change them again once all of the services you use had their new keys in place, and you’d still wonder whether anyone had managed to leave snooping software on those services while the keys were compromised.

As IT professionals, when we talk about security we’re mostly talking about confidentiality, integrity, and availability of data. We don’t want confidential data leaving the organization, so we enforce a trusted device policy to ensure all BYO devices have their data encrypted and can be remotely wiped. We block the use of file sharing applications like Dropbox that can lead to confidential data being stored in the public cloud and, because users really like Dropbox, we provide them with alternatives that keep the data within the corporate network. We lock down all the USB ports, because corporate spies have started sending out free mice with hidden malware to employees. I’m not making this up. And we use access controls to ensure people only have access to the information they need to do their job. We look after data integrity by making regular backups, and we do periodic restores to make sure those backups are working. And we make sure the data is available by doing system maintenance while the west coast of America is asleep. OK, so outside of California your mileage may vary. So assuming you’ve done everything you should to secure your software and hardware, what have you missed? Well, I’ll get to that later.


I’ve been interested in security since the late 1980s when I got my copy of Hugo Cornwall’s Hacker’s Handbook, where I discovered the existence of the Internet, or ARPAnet as it was then known. Prior to joining the security business I worked for a retail software company where I discovered all sorts of frightening things about how card payments are processed. For instance, did you know that when chip and PIN payment was originally introduced in the UK that there was no encryption between the mobile radio units and the base stations? Thankfully that’s now been resolved.

Or, and I’m not sure if this is still the case, but I suspect so, all the card payment transactions in high street stores are stored and sent unencrypted to the banks. The reason for this is that, as I’m sure you can imagine, there is a very large number of transactions throughout the day’s trading. Traditionally these were sent to the bank at the end of the day for overnight processing. You’ll be glad to know that they are sent over a dedicated line rather than the public Internet. But even so, they are still sitting on the host system without any encryption. That’s because each transaction would have to be individually encrypted and decrypted to work with the batch processing system at the banks, and the overhead of decrypting each one would add just enough delay that eventually the system wouldn’t be able to keep up: payments would be going into the queue faster than they could be processed.

Now you may have heard of PCI DSS: the Payment Card Industry Data Security Standard. Among other things, that standard says that organizations have to restrict who has access to the folder with the card payments in it. So already we’ve gone beyond the software and hardware: we’ve got a security policy, the PCI DSS, and that policy is based at least in part on trust. Now I could spend the rest of my allotted time talking about trust, but instead I’ll just recommend Bruce Schneier’s book Liars and Outliers.

But what I want to get across here is that software and hardware are just part of the security solution. All retailers in the UK are supposed to be audited for compliance with PCI DSS. Yet according to Financial Fraud Action UK, card fraud losses in the UK for 2013 totaled £450.4 million. Now that sounds bad, but to put it another way it’s equal to 7.4 pence for every £100 spent. And the things we have to consider here are the risk and the cost of mitigating that risk.

The payment card industry wants to keep fraud down, but if putting in place a solution that eliminates fraud costs more than the cost of the fraud itself then it will look for a cheaper solution. So actually, even before you secure the box, you really need a security policy. Because if there’s nothing of value in the box, then you don’t really need it to be that secure. But if what’s in the box is the most valuable thing you have, then you really need to be able to deal with a situation where all of your security measures failed.


So although that was a bit of a roundabout way to get to my point, what I’m advocating is that organizations need a security policy, and vendors of security solutions need to help their customers to think about security in this way. So what makes a good security policy? Well, first of all you need someone with responsibility for the policy: the chief security officer. And one of their most important responsibilities is to keep the policy under review, because the environment is changing all the time, and a static policy can’t address that.

So how do you come up with a good security policy? Well there are various things you need to take into account. But primarily it’s about working out the risk: How likely is it that someone will walk out of this facility with all this government data on a USB pen drive? And the cost: What will be the effect if this confidential information about everyone we’re spying on gets into the public domain?

So for each risk, you work out the associated cost and then come up with a solution proportionate to the risk. Let’s go back to the early days of hacking. I’m not sure anyone ever calculated the risk of hackers going dumpster diving for telephone engineer manuals. But I’m reasonably confident that the cost of shredding all those manuals, set against the risk of someone typing the whole thing into a computer and uploading it to a bulletin board system, was fairly high. Now this was in the days before cheap scanners, good optical character recognition and widespread access to the Internet, which is why everyone now securely disposes of confidential documents, don’t they?
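This risk-times-cost reasoning is often formalized as annualized loss expectancy (ALE): the expected single loss multiplied by how often it occurs per year. Here is a minimal sketch of that back-of-the-envelope calculation; the figures and the scenario are invented for illustration, not taken from any real analysis.

```python
# Back-of-the-envelope risk analysis using the standard annualized
# loss expectancy formula: ALE = SLE x ARO.
# All figures below are invented for illustration only.

def annualized_loss_expectancy(single_loss, annual_rate):
    """Expected yearly loss from one risk: cost of a single
    incident times expected occurrences per year."""
    return single_loss * annual_rate

# Hypothetical risk: a data leak costing 200,000, expected once every four years.
ale = annualized_loss_expectancy(single_loss=200_000, annual_rate=0.25)

# Hypothetical control: a product costing 60,000 per year to run.
mitigation_cost = 60_000

print(f"Annualized loss expectancy: {ale:,.0f}")
if mitigation_cost < ale:
    print("Mitigation is proportionate to the risk")
else:
    print("Look for a cheaper solution")
```

This mirrors the payment card industry’s logic above: a control that costs more per year than the fraud it prevents fails the test, and the industry looks for a cheaper solution.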

Now in the Snowden case there were a couple of things that surprised me. First, that the NSA wasn’t using mandatory access control. Or in other words they weren’t using a trusted computing solution. They were using the same operating systems as the rest of us. I think partly that can be explained by the fact that it’s expensive to get support for Trusted Solaris and similar operating systems, because almost no-one besides governments use them. And often the applications that governments want to run aren’t available on those platforms so the cost of using them may exceed their benefit in mitigating risk. But the other thing that surprised me is the practice of password sharing.

And that brings me to the main vulnerability you face if your hardware and software are secure. Your users. Kevin Mitnick, I’m assuming you’ve heard of him, if not look him up. He asserts, and I don’t disagree with him, that humans are the weakest link in security. In fact I recommend his book “The Art of Deception” if you want to know exactly how predictable and easy to manipulate people are.

So let’s look at the password sharing issue. If you put up a big enough roadblock between your users and getting work done, they will find a detour around it. Is it easier to tell someone your password than to jump through hoops to get that one file they need? Cisco’s own password policy states that passwords must contain at least eight characters, both upper and lower case letters, at least one number, and at least one special character. A password also can’t be one of the previous three passwords. So what do users do? They pick dictionary words with substitutions. And then users have to change their password every six months, or quarterly if it’s an administrative password. This leads to one of two things: they write the passwords down, or they repeatedly change their password until they cycle back to their original password. It’s pretty easy to get a valid Cisco username; they’re in all of our email addresses. If you can actually get on to a Cisco site and physically connect to the network, you can just keep trying until you brute-force the password.
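As a sketch of why such rules backfire, here is a checker implementing a policy like the one described above. The function name and structure are my own invention, not Cisco’s actual implementation. The point is that a predictable dictionary word with substitutions sails straight through every rule.

```python
import re

# Sketch of a password-policy checker: eight or more characters,
# upper and lower case, at least one digit, at least one special
# character, and not one of the previous three passwords.
# (Hypothetical implementation for illustration.)
def meets_policy(password, previous_passwords=()):
    if len(password) < 8:
        return False
    if not re.search(r"[a-z]", password):
        return False
    if not re.search(r"[A-Z]", password):
        return False
    if not re.search(r"\d", password):
        return False
    if not re.search(r"[^a-zA-Z0-9]", password):
        return False
    # Reject reuse of the three most recent passwords.
    if password in previous_passwords[-3:]:
        return False
    return True

# The predictable substitutions users actually make pass every rule:
print(meets_policy("P@ssw0rd"))  # True: a dictionary word with substitutions
print(meets_policy("password"))  # False: no upper case, digit or special char
```

Complexity rules of this kind measure character classes, not guessability, which is exactly the gap users exploit when the policy gets in the way of their work.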

So how do you get on site? Well, this touches on the other main vulnerability: physical security. At Cisco we use our employee badges for building access, and various areas are restricted to specific groups of employees. We have a policy of not holding the door open for people we don’t recognize. Unfortunately it is in most people’s nature to be helpful. If I smile at someone as they go through a door and I’m dressed appropriately, they’re less likely to question whether they should have just let me follow them. Mitnick’s book is full of these kinds of social engineering techniques. But actually the easiest way to get on site at Cisco is to sign up for a training course. You might have read in the news earlier this year about the gang of crooks who stole £1.25 million by going into bank branches and attaching KVM (keyboard/video/mouse) switches. Reports haven’t detailed how they got into the buildings, but it’s safe to assume it was low tech and they didn’t break in.

So you need to educate staff about threats. Phishing email, social engineering, not picking up USB pen drives that you find lying around and connecting them to your corporate PC. We’re short on time so I’m not even going to cover BYOD. That’s “Bring Your Own Device”, although some have called it “Bring Your Own Disaster” because of the additional risks and management headaches it entails. Ok, well I will say that the mitigation is to require BYO devices to meet a minimum level of protection: a secure password, encrypted storage, the ability to do a remote wipe. But basically, the message is that it’s all very well having a security policy, but it isn’t much use if your staff don’t know about it.

Once you’ve got a policy in place then you need to stress test it. This is where the “red team” comes in. This can be an internal group, or an externally hired group, the job of which is to attempt to penetrate your security, for instance by leaving USB pen drives lying around or sending test phishing emails. Penetration testing needs to be conducted on a regular basis, the frequency of which will depend on the risk and cost analysis, and the security policy updated following the findings.

But let’s come back to physical security, or location, location, location. In the aftermath of Hurricane Sandy it seems fairly obvious to state that if you’re doing offsite backup to multiple data centers, at the very least you don’t want them co-located in the same flood plain. Of course, since then everyone has looked at where their critical services are and ensured sufficient redundancy to deal with a major disaster. Haven’t they? Well, actually I can think of one Cisco cloud service that has a single point of failure in that its primary data centers are located in the same city, one which has historically been vulnerable to terrorist attacks.

But assuming you’ve got the location sorted out and you’re outside the 500-year flood plain, you’re going to want to consider alternate power sources, given the increasing demands being placed on the power grid. And when you’ve got your failover power supply in place, it helps to test that it actually works. Your backups are only as good as your ability to recover from them, so it’s important to perform regular test restores to make sure that’s the case. Physical access can be controlled by physical barriers, locks and guards, but it can also be monitored by video cameras. Servers get hot, so you need to consider fire suppression systems, ideally ones that will leave the data in a recoverable state.


I’m afraid I haven’t had the space to go much below the surface, but hopefully I’ve given you some things to think about. So, to sum up: you want a security policy that is under continual review and covers:

• Human Nature
• Disaster Recovery
• Physical Location
• Penetration Testing
• Social Engineering

And really the most important thing is to raise security awareness.


Posted by on October 30, 2014 in Technology


Minds, Brains and Science

“It’s a lot easier to see, at least in some cases, what the long-term limits of the possible will be, because they depend on natural law. But it’s much harder to see just what path we will follow in heading toward those limits.” —K. Eric Drexler

“As computers become more and more complicated, it becomes harder and harder to understand what goes on inside them. With gigabytes of RAM and access to databases comprising almost the whole of human knowledge, it isn’t inconceivable that a research project could go berserk… a virus could blossom beyond its creator’s wildest imagination… a program designed to unify information might begin to learn from the information that it has compiled.”

This view of machine intelligence has gained wide appeal in an audience unaware of the realities of technology, fed on cyberpunk science fiction and paranoid about computers taking over the world. In Minds, Brains and Science Searle is primarily concerned with disproving this notion of thinking digital computers (as opposed to other types of computers, possibly yet to be invented). Part of the problem of deciding whether or not computers are capable of thought lies in our lack of understanding of what thought actually is. Thus his first theme is “how little we know of the functioning of the brain, and how much the pretension of certain theories depends on this ignorance.”

In a roundabout way Searle does in fact include other types of computers: he has asserted that even a perfect copy of the brain that looks and works the same way is only a simulation. Searle admits that the minds of the title cannot very well be defined, that it is still not completely clear how brains work, and that any discipline with the word science in its name probably isn’t one. Notably, one discipline that to all intents and purposes is a science is Computer Science.

Searle begins by defining the mind-body problem. Penrose defines it thus: “In discussions of the mind-body problem, there are two separate issues on which attention is commonly focused: ‘How is it that a material object (a brain) can actually evoke consciousness?’; and conversely; ‘How is it that a consciousness, by the action of its will, can actually influence the (apparently physically determined) motion of material objects?’” Searle adds the question, “How should we interpret recent work in computer science and artificial intelligence – work aimed at making intelligent machines? Specifically, does the digital computer give us the right picture of the human mind?”

Searle rejects the dualist view of the physical world, governed by the laws of nature, as separate from the mental world, governed by free will. He defines four mental phenomena that make up the processes of the brain: consciousness, intentionality, subjectivity, and causation. He says, “All mental phenomena… are caused by processes going on in the brain,” adding, “Pains and other mental phenomena are just features of the brain”. By this he means that all experience takes place in the mind, as distinct from the external, physical event. He gives the example of a patient who undergoes an operation. Because he is under anaesthetic, in his view of reality he suffers no pain from the surgeon’s knife. The physical action is there but the mental consequence is suppressed.

Searle deals with the four requirements for an analysis of the mind in turn. He explains that consciousness involves “certain specific electro-chemical activities going on among neurons or neuron-modules”. He explains that intentionality, drives and desires, can to a degree be proven to exist (in the case of thirst, for example). He shows that subjective mental states exist “because I am now in one and so are you”. And he shows that in mental causation thoughts give rise to actions. This brings us back to Penrose’s second question.

If the mind is software to the body’s hardware then it is not so difficult to see how mental causality works. Neither the mind nor computer software has a tangible presence. The mind is held in the matrix of the brain as software is held in the ‘memory’ of a computer. Yet a thought can result in an action, such as raising an arm, because physical signals are sent, just as software would send physical signals through a computer to raise the arm of a robot. An extension of this thinking gives rise to the notion of the possibility of computer intelligence.

The running theme of the book is Searle’s attempt to counter the claims of proponents of Strong Artificial Intelligence (AI) that computers can be taught to think. Strong AI can be easily refuted if you accept that humans are not taught to think, but advocates of machine intelligence say they are teaching computers to learn. AI is seen by them as the next step in the evolution of the computer towards the ultimate goal of consciousness. Searle says that the digital computer as we know it will never be able to think, no matter how fast it gets or how much ‘memory’ it has.

In the summer of 1956, a group of academics met at Dartmouth College to explore the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” The conjecture was formulated by John McCarthy, and the field of enquiry it engendered came to be known as artificial intelligence. The problem as Searle sees it is that the features of intelligence cannot be precisely defined by computer experts in the AI field, who end up ignoring at least one of the four mental states in order to get their programs to work. These programs then gain credibility through psychology’s use of them to describe the behaviour of the human mind. Searle points out that when the telegraph was the latest technology it was thought that the brain worked like that.

“There are two ways of thinking about how a machine might approximate human intelligence. In the first model, the goal is to achieve the same result as a human would, although the process followed by machine and man need not be similar. (This model of shared result but distinct process describes how computers do arithmetic computation.) In the second model, the goal is not only for the computer to achieve the same result as a person but to do it following the same procedure.”

Advocates of Strong AI believe that given the same information as a human, a computer, following a set process, could derive the same meaning. However, Searle argues there is no distinct process in the human mind for a computer to simulate, and therefore a computer cannot have meaning, one of the five components of human language. “In the case of human beings, whenever we follow a rule we are being guided by the actual content or the meaning of the rule.”

“Language isn’t so much a thing as it is a relationship. It makes no sense to talk about words or sentences unless the words and sentences mean something. For sentences to mean something, their components must be linked together in an orderly way. A linguistic expression must be encoded in some medium—such as speech or writing—for us to know it is there. And there must be people involved in all this to produce and receive linguistic messages.

Thus there are five interrelated components that go to make up human language: meaning (or semantics), linkage (or syntax), medium, expression, and participants.” Searle argues that there can be no meaning in a computer’s understanding of language since it relies on some form of judgement or opinion. It could further be argued that what the computer receives is not in fact a linguistic message but an instruction on how to react.

To some degree you can play around with syntax: ‘Grammar rules break you can; understood still be will you.’ The ‘meaning’ is conveyed, and perhaps a computer that looks at individual words could still derive the ‘meaning’. However, if we change the sentence to: ‘Can you break grammar rules; will you still be understood?’ then the ‘meaning’ has changed. The words are the same but the order, and the meaning, is different. The first is a statement, and there are languages, such as Tuvan, which have this grammatical form, while the second is clearly a question. How can a computer derive the ‘meaning’ if we do not include the question mark?

Searle has previously put this idea forward in his Chinese room argument. Chinese characters are passed into a room where a non-Chinese speaker follows a set of rules and comes up with a response that is passed out. Searle argues there is no mind in the room. Advocates of strong AI claim the mind works like the room. This does not seem to be the case if we stick to Searle’s four mental states. If it were, then Charles Babbage’s Analytical Engine (a mechanical computer) would, if constructed, have been capable of thought. The argument about the digital computer is that the technology doesn’t matter: as with a car, you can change the engine but it still works the same way.

This can be seen in William Gibson and Bruce Sterling’s novel The Difference Engine where the computer age arrives a century ahead of time. In this version of history Babbage perfects his Analytical Engine and the steam driven computers of the industrial revolution include “calculating-cannons, steam dreadnoughts, machine guns and information technology”.

As Searle points out, computers follow rules. We don’t have any rules that we follow. If we did then every human being would look and function the same. One brain would be a carbon copy of another. As all brains are unique (especially in the case of identical twins) it is clear that there is no hard and fast rule for building them. They grow. How can you grow a digital computer?

John McCarthy believes that even a thermostat can have beliefs. If you look at the human body as a machine and the mind as part of that machine then it seems clear that machines cannot have feelings, as they have no ‘sense receptors’. They may have input devices, including audio, video, heat-spectrum or whatever, but the data is converted into something usable at the point it is taken in. In the human brain input comes directly to the brain and is interpreted there. As Searle points out, if someone punches you in the eye then you ‘see stars’: the information is processed in a visual way. No machine is rigged like us, nor does Searle believe one ever will be. And even if it were, he still believes it would be a simulation, not a true consciousness. After all, someone has to tell the computer that if it bumps into a wall it hurts, because actually when it bumps into a wall it doesn’t hurt it at all. However, you could conceivably build a mobile computer that would feel a ‘pain’ response if it suffered damage. We already have self-analysing and self-repairing computers. This could logically be extended, but self-awareness is only one aspect of consciousness.

Self-described postmodernist feminist author Kathy Acker says: “When reality—the meanings associated with reality—is up for grabs, which is certainly Wittgenstein’s main theme and one of the central problems in philosophy and art ever since the end of the nineteenth century, then the body itself becomes the only thing you can return to. You can talk about any intellectual concept and it is up for grabs, because anything can mean anything, any thought can lead into another thought and thus be completely perverted. But when you get to the actual physical act of sexuality, or of bodily disease, there’s an undeniable materiality which isn’t up for grabs. So it’s the body which finally can’t be touched by all our scepticism and ambiguous systems of belief. The body is the only place where any basis for real values exists anymore.”

Perhaps this is the summation of the mind-as-a-computer/body problem. In Japan a great deal of work has been done to achieve a fifth generation of computers with natural languages and heuristic learning capabilities. However, in the main they have been unsuccessful. “Japan has no advantage in software, and nothing short of a total change of national character on their part is going to change that significantly. One really remarkable thing about Japan is the achievement of its craftsmen, who are really artists, trying to produce perfect goods without concern for time or expense. This effect shows, too, in many large-scale Japanese computer programming projects, like their work on fifth-generation knowledge processing. The team becomes so involved in the grandeur of their concept that they never finish the program.”

This is not just a problem for Japan but for computer scientists in general, who are approaching the problem from the wrong direction. You cannot craft a brain. Children learn to walk and talk without being taught, so any computer that computer scientists hope to bestow with intelligence must have a natural learning program. Unfortunately it is still not clear how the concepts of speech are learned, and so the future of machine intelligence looks unpromising.

People often ascribe an ‘intelligence’ to (mostly) inanimate objects. The car is a classic example, but ships have been called ‘she’ for years. It is easy to see why: when writing this essay, for instance, and suffering a ‘crash’ that lost two pages and many hours of work, one is likely to consider the machine ‘evil’ or ‘out to get one’. This is clearly not the case. However, because people generally do not understand what happens inside the ‘box of tricks’, as with the Chinese room, it is possible for them to ascribe an intelligence when there is not one there. It is also because mankind has become so dependent on technology that the paranoia of machines taking over the world has come about.

It does not help the situation when it is fuelled by cyberpunk fiction. The classic text is Gibson’s Neuromancer, where a network of computers achieves a single intelligence. This idea is purely fictional, but the other public image is more interesting. It derives from the film Blade Runner, which is of course based on the book Do Androids Dream of Electric Sheep? One of the themes of the book not wholly represented in the film is the importance of owning an animal. However, since it is a post-apocalyptic society, most of the animals are dead. While there is a premium on owning a real animal, most people make do with artificial ones. These happily simulate the actions of sheep, cats, frogs or whatever, but that is all. They do not think.

An interesting question is raised by the film. At the end, when the replicant Batty is dying, he says, “I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain.” This raises a philosophical question: if an android with a computer brain can have ‘experiences’ that he can relate to others and yet is mortal, does he not have inalienable human rights?

One area where computer simulations of intelligence consistently fail is in producing new creative data; in the area of humour, for instance. It is probably impossible to program a computer to make random connections. This is, in part, because there is no such thing as true randomness: every thought in the human brain can be seen as having derived in some way from a previous thought or external stimulus. Let us take an example where a computer would have difficulty: the sick joke. Computers do not know what is in poor taste because they do not have ‘taste’. A computer may be told that a joke about ethnic minorities is in poor taste, but it would not be able to come up with a new joke of its own in similar poor taste. Socially this is a ‘good’ thing, but in terms of simulating the human mind it is very ‘bad’.

Finally we return to Wittgenstein who sums up Searle’s overall argument when he says: “Meaning is not a process which accompanies a word. For no process could have the consequences of meaning.”


Posted by on June 18, 2011 in Society, Technology


Profile: Teresa Maughan

“Why am I a magazine publisher? Is it because I love magazines? No. It’s because I had a tiny success back in 1967 selling a hippy magazine on London’s fashionable King’s Road.” —Felix Dennis

T'zer: The YS years

Long before Felix Dennis struck magazine gold with Maxim, Dennis Publishing was known as Sportscene Specialist Press, and as the kung-fu fad of the 1970s passed, home computers were going to be the next big thing. One of the many titles Dennis launched in that period was called Your Spectrum, a magazine dedicated to an 8-bit computer designed in Cambridge, England and made in Scotland.

Possibly the longest-serving YS staffer, Teresa Maughan, known as T’zer, rose through the ranks to become production editor, then deputy editor under Kevin Cox. Kevin had taken over the editorship from Roger Munford in 1985 and oversaw the relaunch as Your Sinclair.

“He’s a transvestite and likes to be known as Kylie to his friends,” she alleges.

In 1987 she took over from Kevin and remained editor until 1989 when she became YS publisher.

“In reality I did anything and everything,” she says.

Her abiding memories of YS are “laughing like a drain for four years solid, listening to Snouty and Berkmann swap jokes continuously—some of them were actually funny—dressing up in ridiculous outfits in the name of work, young boys asking me to sign their T-shirts (and other things!) at the Earl’s Court games shows—I could never understand why, as I didn’t feel famous—wondering whether Duncan MacDonald was going to show up for work or whether he was out on one of his ‘jaunts’, and Hold My Hand Very Tightly—nobody croons like David Wilson.”

Since leaving YS, she has had three children, born in 1993, 1995 and 2000, and continued her career in journalism. This has included editing Dennis’s Muhammad Ali: The Glory Years, a stint of production on Linux User magazine, and launching and packaging the now forgotten Star Pets Magazine.

“It was aimed at girls and all about celebrities and their pets and pop,” she recalls.

She has written extensively for the teen market, from a series of unofficial pop biographies to more serious titles for Channel 4 Books, including Model Behaviour, and four self-help books to accompany the award-winning Wise Up Sunday morning show for teens. Her favourite ZX Spectrum game of all time is the unreleased Prince of Persia. “I loved the way he moved. Otherwise it has to be Advanced Lawnmower Simulator designed by Duncan MacDonald.”


Posted by on March 27, 2011 in Entertainment, Profile, Technology


Back to BASIC

“The less fortunate BASICs picked up bad habits and vulgar language. We would no sooner think of using street BASIC than we would think of using FORTRAN.” —Kemeny & Kurtz

Detail from the cover of Zilog's ZDS-1 manual

Sinclair BASIC is a popular version of the BASIC (Beginner’s All-purpose Symbolic Instruction Code) programming language. Originally written for the ZX80, which celebrated its 30th anniversary last year, it is now available for a wide range of computers, either in native versions or via emulation. This is the history of its evolution.

In July 1975, Micro-Soft, as it was then called, shipped BASIC version 2.0 for the MITS Altair 8800 hobbyist computer. This was the first commercial version of the language, originally developed in 1964 by Hungarian-American John George Kemeny and Thomas Eugene Kurtz at Dartmouth College in the United States.

By then Kemeny and Kurtz had addressed the main criticisms of BASIC, namely that it lacked structure and encouraged bad programming habits, but the 4K and 8K versions for the Altair, written by Paul Allen and Bill Gates, were based on the original Dartmouth BASIC.

Microsoft BASIC became so popular that it made Gates and Allen their first fortune and was subsequently supplied with the majority of 8-bit computers. So not surprisingly, when the ANSI Standard for Minimal BASIC (X3.60-1978) was launched, it was based mainly on the Microsoft version.

In May 1979, a team of Clive Sinclair’s engineers in Cambridge, England, headed by Jim Westwood, began work on the machine that would become the ZX80. Sinclair was inspired to create the machine after seeing how much his son enjoyed using a TRS-80 but guessing that many people would be put off buying one because of the high price — just under £500.

Unlike Sinclair’s previous foray into the computer hobbyist market, the MK14, this machine would ship with BASIC, based on the ANSI standard. At Commodore, Jack Tramiel managed to negotiate a permanent licence from Gates for a fixed one-time fee, one which did not require Microsoft to be given an on-screen credit. To this day many people are unaware that Commodore BASIC is Microsoft BASIC.

But at Sinclair the aim was to keep costs to a minimum, and that precluded paying even a one off fee to Microsoft. To this end, Sinclair had already met with John Grant of Nine Tiles in April to discuss the software requirements of the ZX80.

Given the tiny research and development budget, Nine Tiles stood to make hardly any money out of the deal, but the feeling was that the project was exciting and worthwhile, and one the company would benefit from being associated with.

To achieve the launch price of £79.95 in kit form, RAM was limited to 1K and the integer BASIC had to be crammed into a 4K ROM. Grant wrote the bulk of the ROM between June and July, but the resulting program was 5K in length, so he spent that August trimming the code.

It was written using Zilog’s own assembler on the ZDS1 development system.

“Most of the source was written in pencil and then typed in,” says John.

“24 lines of 80 characters didn’t let you see enough to write code directly on the screen.”

This was not long after the era when code was typed onto cards or paper tape and programmers got one or two runs a day.

“If you left out a semicolon, say, you couldn’t just put it in and recompile,” says John.

“It meant we were quite careful to make sure everything was right.”

The ROM program was debugged on the Zilog system using the VDU to emulate the screen and keyboard. The only hardware development aid was an oscilloscope.

“It was only the hardware related parts that had to be debugged on the prototype,” says John.

“The first few EPROMs we put into the prototype stuck in a tight enough loop that we could just read and note down each of the address and data signals to find out what was going on.

“Once we got a picture on the television we could see what was happening and the rest was easy. If you didn’t get a picture you knew it was because of a change you’d just made.”

According to Cambridge mathematician Steve Vickers, who wrote the subsequent versions of Sinclair BASIC: “The ZX80 integer BASIC, written by John Grant, was in Z80 assembly code pure and simple, though it did use the usual stack based techniques for interpreting expressions.”

The lack of support for floating-point numbers overshadows Grant’s achievement. He laid the path for things to come, introducing many unique features of Sinclair BASIC, such as the way it refuses to allow most syntax errors to be entered into the program: it points out where the error is in the line before the line is accepted, making it much easier to learn and use than any other version of BASIC of the time.
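That check-before-entry editing model can be sketched in a few lines of Python. This is a toy illustration only: the function names and tiny keyword list are invented for the example, and the real ROM implemented the check with a syntax table in Z80 assembly, nothing like this.

```python
# Toy illustration of Sinclair BASIC's "check before entry" editing model:
# a line is only stored in the program if it parses; otherwise the editor
# marks the position of the first error, as the ROM did with its error cursor.

KEYWORDS = {"PRINT", "LET", "GOTO", "IF", "REM"}  # invented subset for the demo

def first_error(line: str):
    """Return the index of the first syntax error, or None if the line is OK."""
    parts = line.split(maxsplit=1)
    if not parts or not parts[0].isdigit():
        return 0                      # every stored line needs a line number
    if len(parts) == 1:
        return len(line)              # line number with no statement
    stmt = parts[1]
    if stmt.split()[0].upper() not in KEYWORDS:
        return line.index(stmt)       # unknown keyword: point at it
    return None

def enter_line(program: dict, line: str) -> bool:
    """Add a line to the program only if it is syntactically valid."""
    pos = first_error(line)
    if pos is not None:
        print(f"{line}\n{' ' * pos}?")   # mark where the error is
        return False
    number, stmt = line.split(maxsplit=1)
    program[int(number)] = stmt
    return True
```

Because the check happens before the line is accepted, a typo such as `20 PRNT "OOPS"` is rejected on the spot rather than surfacing as a runtime error later, which is the property that made Sinclair BASIC so forgiving for beginners.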

The kit was launched at a computer fair in the first week of February 1980. While not a massive success by comparison with the ZX Spectrum, it turned Sinclair’s fortunes around, eventually earning him a knighthood, and it sold well enough to persuade him to make a new computer: the ZX81.

Work on the hardware had begun in September 1979, even before the launch of the ZX80, but it was the development of the uncommitted logic array, or ULA, which allowed the machine to go into production. The ULA, produced by Ferranti for Sinclair, reduced the total chip count to just four (ROM, RAM, ULA, Z80) and brought the retail cost of the machine, in kit-form, down to £49.95. Clive Sinclair recently remarked that of the ZX machines he was most proud of the ZX81. It was elegant inside and out, and while the Spectrum was a bigger success it should be seen as a development of the amazing work that went into the ZX81.

Again, Nine Tiles was called on to provide the New BASIC, but this time there was 8K to play with. Vickers, who had joined Nine Tiles in January 1980, wrote a new set of floating point arithmetic routines, and modified Grant’s work extensively, while retaining much of the ZX80 code.

“As far as Clive was concerned, it wasn’t a question of what the machine ought to be able to do, but more what could be crammed into the machine given the component budget he’d set his mind on,” said Vickers in an interview on July 23, 1985. “The only firm brief for the ZX81 was that the ZX80’s math package must be improved.”

The ROM was almost complete by the end of autumn 1980, but support still had to be added for the ZX Printer. Somewhere between this time and the launch, a bug crept in which caused the square root of 0.25 to be 1.3591409. Vickers quickly fixed the bug, but Sinclair was somewhat tardy in making this version available to people who had already bought the machine.

Despite this problem, the ZX81 was well received and became a massive success. Buoyed by the public’s reaction, and partly in an attempt to win the contract to design a computer for the British Broadcasting Corporation, which eventually went to Acorn, Sinclair decided to develop a colour computer.

The ZX80 and ZX81 hardware had been primarily the work of Jim Westwood, but he had been moved to the flat-screen television department, so the hardware design job on the machine which became the ZX Spectrum was given to Richard Altwasser. Rick Dickinson again provided the industrial design, while at Nine Tiles, Vickers provided the BASIC.

The ZX Spectrum ROM retains almost the entire ZX81 program but further improves the arithmetic and adds support for sound, colour, and hi-res graphics.

Sinclair wanted as few changes to the ZX81 code as possible but at Nine Tiles the feeling was that software designed for a machine with 1K was inappropriate for a machine with 16K and that problems would occur later on. They were right.

“Certainly with the Spectrum we wanted to rewrite the code, but there wasn’t the time and there definitely weren’t the resources,” says Grant. “At every point Clive wanted the maximum new facilities for the minimum money.”

After the best part of a year’s work the BASIC was almost finished. While it was greatly enhanced, it was also depressingly slow, and more problems were to follow. The main one was providing support for the planned peripherals, because no working prototypes were available to Vickers until near the end of 1981. Then, in February 1982, Nine Tiles began to have financial disagreements with Sinclair over royalties, which it became apparent would not be forthcoming. To make matters worse, Vickers and Altwasser both handed in their resignations to form their own company, Cantab, which went on to produce the Jupiter Ace, essentially a ZX80 with the Forth language built in, in place of BASIC. The result of these delays was that when Sinclair launched the machine, it did so with an incomplete ROM. Nine Tiles continued working on the ROM for three months after the launch in April 1982, but by then too many units had been sold and the program was never finished.

The original plan was to issue only a limited number of Spectrums with the incomplete ROM and provide an upgrade, much in the way the bug in the ZX81 ROM had been handled, but by the time Sinclair got its act together, around 75,000 units had been sold and the plan became unworkable.

This is the reason why the microdrive commands don’t work in the standard ROM, and why Ian Logan developed the shadow ROM in the Interface 1 to handle peripherals which should have been supported directly by BASIC.

Various ‘enhancements’ were made to the BASIC over the years, including the extra syntax of the shadow ROM introduced with the Sinclair Interface 1, and, in America in 1983, an attempt by Timex to overhaul the BASIC when it launched the TS2068. But again, the version of the ROM launched with the machine was incomplete, and the TS2068 was unable to run the majority of Spectrum software because of hard-coded calls to locations in the ROM which differed from the Spectrum’s.

In 1985, in a joint venture with its Spanish distributor Investronica, Sinclair launched the Spectrum 128, codenamed Derby, with a new editor bolted on to the original BASIC. This was slightly more compatible than the Timex effort, but the editor was bug-ridden, and some software refused to work, even in 48K mode, because the empty space at the end of the original ROM, used as a table by some programs, was now overwritten with extra code.

It did introduce some useful new commands and a built-in text editor, although inexplicably these were replaced with a less functional menu system in the English version of the machine launched the following year. However, criticism of the 128 Editor must be put in context. The programmers were relying on the Logan & O’Hara disassembly of the original ROM published by Melbourne House, since if Sinclair had ever had a copy of the original source, by now it had been lost, and they were working on a network of VAX machines running CP/M.

Fortunately, tracing the development of the 128 Editor is made easier by the fact that the programmers’ initials are stored in the ROMs: at the beginning of the Spanish ROM and at the end of the English ROM (MB, KM, and AT). The team comprised Martin Brennan, Steve Berry, Andrew Cummins, Rupert Goodwins and Kevin Males.

According to Rupert Goodwins, editor of the +2 manual and the person responsible for the Spectrum logo on the menu system, the Sinclair programmers didn’t realise that the unused bytes in the original ROM were being used as a table by games programmers.

“The television test screen and other ancillary code was in there for production testing,” he says.

“As Spectrums came off the production line, they got checked and set up for keyboard, tape, ports, colour, and sound.”

Goodwins recalls there being an Interface 2-style cartridge system at one point but that most of the test code ended up in the ROM.

“We had the space and it’s obviously cheaper and more efficient that way.”

There were also some strange features planned for the 128 which were removed before production as they couldn’t be made to work properly.

“There were certainly plans to do more with the keypad. What a bizarre idea that was,” he says.

“It was originally supposed to have been a mouse as well. Can you imagine?”

Kevin Males worked on both versions of the 128 Editor ROM.

“I wrote the music string interpreter for the 128, plus various other bits and pieces that never made it into the ROM,” he says.

“I also did a lot of work on microdrives, but it’s a long time since I wrote any Z80 code!”

He may also be the author of the text editor in the original Spanish Editor.

“I recall working on various text editors for the 128 that didn’t make it into the ROM,” he says.

In addition, he worked on automated test and diagnostic software for both Spectrum & QL microdrives. He was also involved in the notorious Loki project.

“Towards the end I started looking at software to control a proposed digital synth for the new games machine but the company was sold before that could be realised,” he says.

Martin Brennan, who worked on no end of projects at Sinclair, wrote the editor with contributions from Steve Berry, and Andrew Cummins probably wrote the tricky number-handling code.

Amazingly, Sinclair never owned the rights to the ROM. Amstrad had to acquire them separately from Nine Tiles in 1986 when it bought out Sinclair.

When Spectrum clones began appearing in late 1984, Sinclair Research boss Nigel Searle found he was powerless to do anything about them, because the only really unique part of the Spectrum was the ROM, and in the disagreements following the Spectrum’s launch Sinclair had failed to acquire the rights, for which it had originally offered Grant £5,000. By then the Spectrum had sold more than 2.5 million units.

Amstrad obtained only the rights to the Spectrum and the QL, which it later sold on. It permits the distribution of the Spectrum ROM in software only.

Nine Tiles Networks retains the rights to the ZX80 and ZX81 ROMs and has permitted their use under the GPL open source license.

Sinclair Research retains the rights to the Interface 1 ROM. Indeed, the developers of the SAM Coupe, a powerful Z80-based machine with a Sinclair-compatible BASIC, approached Nine Tiles with a view to licensing the floating-point routines from the ZX81 ROM, but at the time the asking price was too high.

Towards the end of 1986, when Amstrad wanted to create a Spectrum with a built-in disk drive, it simply took the DOS from its PCW machine and patched the 128 Editor to provide simple disk access. The operating system, written by Cliff Lawson, was a very good one, although its full power remained untapped by +3 BASIC. It is also at the heart of ResiDOS.

Unfortunately, none of the old bugs were fixed in the first version of the +3 and new ones were introduced, but perhaps this is understandable: there was little documentation at the Sinclair Computers division, and development had moved from a VAX network running CP/M to a room full of PCWs running CP/M, which was less than ideal.

Amstrad stopped selling the last Spectrum model, the +2B, in the early 1990s. For a time it looked as if the SAM Coupe might offer an upgrade path to Sinclair BASIC users, but after two false starts the machine disappeared into obscurity.

However, thanks mainly to Paul Dunn’s BASin (an integrated development environment for Sinclair BASIC) the language has been undergoing something of a renaissance. Although it is designed for Windows it also runs on Linux under Wine.

As for the future, Dunn is now working on a project called SpecOS which will enable you to use the full power of the host machine from Sinclair BASIC.


Posted by on March 27, 2011 in Technology


Timex Computer

“The things that made the most impact, I suppose, that I’m most fond of, were the home computers; the ZX80, the ZX81, and the Spectrum. The neatest really was the ZX81. The ZX80 had to be done on a very low budget so it had a lot of chips in it. The ZX81 only had four chips in the entire machine at a time when the world’s best competitor had 42.” —Sir Clive Sinclair

The 1970s saw the advent of cheap, reliable digital watches made in the Far East, which caused the demise of all but one American watchmaker: Timex. Throughout the 1980s the company phased out mechanical watches in favour of digital ones, but it also had a sideline in making computers, which came about as a result of Timex being selected as a manufacturing partner by Sinclair Research, a company based in Cambridge, England, for its new ZX81 computer.

Timex went on to sell its own domestic version of the machine as the TS1000. Meanwhile Sinclair was working on a successor to the ZX81, called the ZX Spectrum. In 1982, Lou Galie left the Burroughs Corporation to join Timex Computer and lead the design and development of the Spectrum-based TS2000.

“I was convinced Burroughs had no clue about what would be coming in the area of personal computers; Timex were on the leading edge of this revolution.”

By 1983 Timex had already sold over 600,000 TS1000s. In fact, long after it had left the computer market, Timex continued to sell TS1000 boards to a European commercial refrigeration manufacturer that used them as dedicated controllers.

“I don’t think the boards were manufactured after 1983 though, the orders were filled from inventory.”

The key decision makers were Timex Computer Corporation president Danny Ross, Timex executive vice-president Kirk Pond, and Timex vice-president of research and design Rex Naden who had joined from Texas Instruments.

“Rex argued that the opportunity in the personal computer market was huge and that Timex had product and software advantages over Coleco, Commodore, Atari, Tandy and TI.”

The Spectrum failed to meet FCC part 15 regulations on interference. Timex realized it had to modify the machine and began hiring a team of engineers, starting with Lou.

“Based on the products introduced at CES in early 1983, we decided we’d need something better than a patched-up Spectrum.”

The team was made up mainly of former Burroughs staff, recruited from the closing Danbury, Connecticut engineering group. The rest were local college graduates and a few staff from Timex’s Cupertino, California research facility. Two machines were to be developed; the TS1500, an update of the TS1000, and the TS2000, a complete redesign of the Spectrum.

“The TS1500 was finished in seven months. The TS2000 took eleven.”

Both machines were given a better keyboard, more memory, and a new operating system. The TS2000 also got joystick and cartridge ports, a feature more commonly found on consoles, an extended version of BASIC, and a sound chip. They were also slightly faster, to better synchronise with the NTSC television standard. Although designed to be 100% backwards compatible, the TS2000 could in fact run only a fraction of Spectrum software.

“We made a special board which let us download a British tape program into memory on one of our ‘plug-in’ modules – and we tested a bunch of packages that way.”

The inclusion of cartridge support was entirely down to the problems the team encountered with loading from cassette.

“It never worked properly. We felt a simple ROM cartridge would solve the 1,001 problems we’d been having.”

The TS2068 also replaced the Spectrum’s logic chip, fixing various errors and adding new screen modes, including high-resolution and multi-colour. However, software rarely took advantage of these modes as they consumed twice as much memory as the standard one.

“These modes were planned for future follow-ons to the 2068. We had lots of plans for faster speed, better colour, real disk drives, and so on.”

A whole series of add-ons were planned that were never completed, including a ‘Bus Expansion Unit’ that would dramatically extend the machine.

“We had drawings and a simple prototype, nothing solid. But we had plans for a new chip to enable it to access 16MB of RAM.”

Timex bowed out of the US computer market due to the price war started by Commodore. Rumours persist that the unsold stock was dumped on the Argentinean market, and that somewhere there is a warehouse full of unopened original equipment.

“No. The situation was more complex than that. When we decided to exit the market we had hundreds of thousands of unsold units. We disposed of them in various ways, using many different outlets.”

Timex sold about 100,000 TS1500s and about 350,000 TS2068s, but this wasn’t the end of the machine. A modified version of the TS2068 was sold as the TC2068 by TMX Portugal, which went on to produce the FDD and FDD3000 disk interfaces, the TC2048, a cut-down TC2068 with better Spectrum compatibility, and the Unipolbrit 2086 for export to Poland. There was even a third generation prototype called the TC3256.

“We had a really smart Portuguese engineer called Al who was involved with our Timex-Sinclair hardware and software work. When we shut the Portuguese factory in the mid-1980s he went ‘on his own’ and continued work on the 3256. I think he actually built a few.”

Timex dabbled with computers again in 1994 with the Data Link watch that carried scheduling, phone numbers, and other personal information, teaming up with Microsoft to create the communications software. But the final fling was a joint venture with Motorola in 1998 called Beepwear, a wrist pager. After that the company went back to making watches.


Posted by on March 25, 2011 in Technology


Escapism Online

“People talk about escapism as though it’s something nasty but escapism is wonderful.” —Margaret Forster

A pilot, sporting an original series style uniform, on Galactica's flight deck with Viper and Raptor in background

The word escapism dates to the 1930s, when it was chiefly used in conjunction with the films of that period. It was the time of the Great Depression in America, and the only industry that was booming was the movies. Studios would regularly turn out 50 pictures a year, and huge audiences went to see them. The word described this phenomenon of escaping from the troubles of everyday life into a fantasy for an hour or two every week, or even every day, this being before the advent of television.

Today people still go to see films as a retreat from life; however, there have been, and still are, films made that pander directly to this desire to leave the world behind. Movies of this genre, escapist films, tend to be fantastic in nature, with the action taking place in a screen world that is normally not accessible to the viewer. All films are escapist on one level by their very nature.

An obvious example is the Star Wars series, where the theme, and the means of escape, is a simple fairy tale. In this case the audience knows how to relate to the central theme of the film because it has been familiar to them since childhood. They can empathise with the central characters and see themselves as heroes in their own concurrent fantasy.

Jack Zipes talks about the fourth episode of the series, A New Hope: “The film which was naturally made into a book to capitalise on the cinematic success can be interpreted as a science fiction fairy tale about the evils of totalitarianism…The most obvious symbol of the republican [democratic] virtues is our snow-white princess Leia… It is obvious that the alliance or forces she represents are true-blooded Americans: they are clothed in the traditional American khaki uniforms and behave loosely and good naturedly in contrast to the members of the Empire who are clothed in dark olive resembling the uniforms of the Nazis…Their manner is austere and authoritarian, and, of course, Lord Darth Vader, the dark force behind the throne, is clad in black.”

Another key factor in escapist films is the mood they set: larger than life, as opposed to naturalistic or realistic. In the pictures of the 1930s this was characterised by the influx of European filmmakers, especially from Germany, who used shadows to great effect or filmed mostly at night. In these films the screen envelops the audience and draws them into a world that becomes real for the duration of the film.

In Double Indemnity, a number of factors are at work which make the film escapist. It was taken from a novel by James M. Cain, and the screenplay was written by Raymond Chandler, one of the best escapist writers of the time, whose detective, Philip Marlowe, has become a symbol of the films of that era. Let us attempt to dissect the film to demonstrate how it allows the audience to enter its world.

The film’s title sequence opens with sinister music and a menacing shadow figure walking on crutches. The music is important because it can make us, almost involuntarily, feel a mood, and already we begin to be drawn into the scene. The use of light is important too: the shadow is suggestive of many things, including death. We are tempted in further.

The opening shot is at night. We see a ‘Los Angeles Railway Corp’ sign for a moment, long enough to remember it. Then a car careens out of control along railroad tracks, running a stop sign. This is a subliminal implantation, made possible by the audience’s acceptance that the film is in some way real.

The film centres on an insurance agent called Neff. Chandler uses him to great effect by having him play the role of the audience. In other words, the individual watching the film takes the place of Neff and sees the world of Double Indemnity through Neff’s eyes. It is a standard device, but interesting in this case since not only is Neff ultimately an anti-hero, he is also quite probably gay. His boss keeps telling him: “I love you”, and at the end of the film, as he lies dying, Neff replies: “I love you too.”

There is a psychoanalytical view that Double Indemnity is highly symbolic, the clipped-style neckties implying impotence or castration. Correct or not, it is clear that there are things in the film there primarily to help the audience get their bearings. There is the ‘match-snick’ device, where a character lights a match on his thumbnail, which occurs at significant points in the movie and helps the viewer stay on top of it without getting lost. If the audience did become lost, the spell would be broken, the illusion shattered, and it would not work as an escapist film. Contrast this with Jean-Luc Godard, whose films compel you to realise you are watching a film in every shot.

The film sees Neff making a telephone call to sort out a renewal for a man whose wife wants to get rid of him. Neff realises this when she enquires about taking out life insurance on him without his knowledge. He conspires with her, but he knows he can’t get away with it: “I couldn’t hear my footsteps. I was walking the walk of a dead man.”

As part of Chandler’s wider work, the film can be seen as an extension of his metaphor of the city for modern life. What makes the film escapist, however, is its ultra-reality: an intensified reality of the kind Umberto Eco cites as the reason why Disneyland feels more real than a wax museum. It works because of the audience’s interaction and the induced willing suspension of disbelief.

Massively-multiplayer online role-playing games (MMORPGs) let people live out an interactive fantasy existence where virtual riches are there for the taking and death is a mere inconvenience. But are there any fringe benefits to games that could see you spending years of your life clicking rocks?

Runescape, the most popular free MMORPG, has been the subject of much discussion about its effects on children. The game was developed by Jagex (Java games experts), a company based in Cambridge in the UK, and is played directly in the web browser without the need to download a standalone client application. In it you play a human adventurer in a Tolkienesque world with a very British sense of humour. For example, all the names of the white knights are puns, such as Sir Amik Varze, and the game references old British television shows from The Adventure Game to Monty Python’s Flying Circus.

Until recently, Jagex went to great pains to deal with parents’ concerns about the content of the game, even to the point of censoring mild curses such as ‘hell’ from in-game chat. One of the effects of this ‘swear filter’ was to make it difficult for players to exchange personal information, blocking out words such as ‘address’ and ‘telephone’ to allay parental fears of online predators. However, as its community of players grows up, Jagex is attempting to make the game appeal to an older audience, and almost a decade after its release, players under 13 are now banned and the ‘swear filter’ is optional. Younger children who signed up before November 2010 can continue to play, but the filter is always enabled for them. That said, it’s not very difficult to lie about your age.
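A word-blocklist filter of the kind described can be sketched in a few lines of Python. This is purely a hypothetical illustration: Jagex’s actual implementation is not public, and the blocked-word list here is invented from the examples above.

```python
# Toy sketch of a blocklist-style chat filter like the one described above.
# The BLOCKED set is invented for illustration; the real filter's word list
# and matching rules are not public.
BLOCKED = {"hell", "address", "telephone"}

def filter_chat(message: str, filter_enabled: bool = True) -> str:
    """Replace blocked words with asterisks when the filter is on."""
    if not filter_enabled:
        return message          # older players can opt out of the filter
    words = []
    for word in message.split():
        # strip trailing punctuation for the comparison, censor the whole word
        if word.strip(".,!?").lower() in BLOCKED:
            words.append("*" * len(word))
        else:
            words.append(word)
    return " ".join(words)
```

A simple whole-word blocklist like this also shows why such filters frustrate legitimate chat: any message containing ‘telephone’ is censored regardless of intent, which is exactly the trade-off the article describes.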

The main criticisms of Runescape are that the game encourages stealing, is overly violent, is addictive, pits players against each other, is filled with abuse and negative behaviour, encourages gambling, and leads to depression. Interestingly, none of the articles criticising the game seems to take issue with the morality of killing town guards for experience points, although, as previously mentioned, very few characters stay dead. On the other hand, the game has been praised for teaching children about economics, encouraging co-operation, and even for encouraging self-discipline in limiting the amount of time spent playing.

Many MMORPGs have their own unique backstory, but an alternative approach is to create a game based on an existing franchise, Star Wars and Lord of the Rings being two of the most successful. These two examples both require Microsoft Windows, but Bigpoint’s Battlestar Galactica Online is another browser-based game, making it platform independent. What these games have in common is a well-established universe with which the fans are already familiar. Both Tolkien and Lucas created epic fantasies where good and bad are relatively clear cut, but Moore’s reimagined Battlestar Galactica is far more nuanced.

So how well do the themes of the series translate into what is essentially a space shoot-em-up? The answer is surprisingly well, for now. Whichever faction you choose to join, it’s a battle for survival, with limited resources, the ever-present danger of attack, and only your friends to rely on. The experience system feels refreshingly unobtrusive, and while not the focus of the game, the interactions with characters from the series feel right. There’s just one problem. When the game comes out of beta you’re going to be able to spend real cash on resources, at which point the whole illusion will be ruined by people spending hundreds of dollars on upgraded ships and recouping the cost by racking up kills against players who don’t pay to play.


Posted by on February 27, 2011 in Society, Technology


WiReD != M2k

“The techno-elite are perhaps the only group advantaged by the new economy. They will be the new lords of the terrain in a Dickensian world of beggars and servants. Just because they think of themselves as hipsters doesn’t mean we should expect them to share the wealth.” — R. U. Sirius

I am currently trying to collect the complete set of Mondo 2000. The successor to High Frontiers and Reality Hackers, M2k was an independently financed magazine published in San Francisco from 1989 to 1998. There were 17 issues in all and a book, The User’s Guide To The New Edge, which Albert Finney can be seen reading in Dennis Potter’s Karaoke. It was published sporadically during much of its life, and whenever I happened to be in Forbidden Planet in Cardiff and they happened to have a new issue in stock, I bought it. You may not have heard of it.

But you have heard of WiReD. Also started in San Francisco, in 1993, WiReD is M2k without the heart. I was going to go into a lot of detail on why I feel that way but what it comes down to is that, to the best of my knowledge, M2k never carried a three page fold-out advert for Lexus.



Posted by on February 15, 2011 in Media, Technology