
Blog address: https://blog.dnevnik.hr/illuminati


Chipping away from freedom

11 September 2005 – THE FICTION

By 2005, the war on terror had spread throughout the world. The definition of a terrorist had widened. Outspoken academics had their funding withdrawn. Journalists reporting controversial aspects of the war were chastised. Peaceful demonstrators were cut down with capsicum spray and stun batons. Anti-globalisation activists were probed by intelligence agencies. Outspoken politicians were besieged with costly court cases and other legal actions.


The independent media and Internet sites had been gobsmacked by UN declaration WORLD PATRIOT (World Order Ruling to Leverage Democracy by Providing Appropriate Tools Required To Intercept and Obstruct Terrorism). The nightly news was riddled with fear and paranoia. A terrorist suspect detained outside a local kindergarten. A man with shoe bombs in a parliamentary lift. White powder envelopes delivered to media executives. A major food tampering scare. Grainy videos purporting to show terrorist training camps.

Millions lived a prison-like existence, secured inside their gated communities, knowing intuitively that the terrorists were everywhere, like ticking time bombs waiting to go off. Thankfully, most citizens were on the lookout for evildoers and could dial 1300TERRORIST. Suspects were automatically arrested and detained without legal counsel for 48 hours.

Human tracking and surveillance had become universal by 2005. Most people in the cities had been microchipped. It was just logical – anyone not microchipped was obviously part of a terrorist network. Only terrorists – people who wanted to roam around freely without being tracked and monitored by the government – rejected the Countering Homeland Insurgency and Terrorism Implant (CHITI). If you were caught without a microchip, you were taken in for questioning, behaviour modification and routine implantation at the local CENTRELINK (Centre for Enrolment of National Terrorists, Radicals, Extremists, Loonies, Insurgents, Nonconformists and Kidnappers).

THE FACTS

In the early days after 11 September, there were many warnings about police state measures and emerging totalitarianism. Critics referred to the linking of technology and the police state which had been perfected by Adolf Hitler’s Nazi regime in the 1940s. In those days, Hitler developed a strategic alliance with IBM, then America’s most powerful corporation. The partnership began in 1933, in the first weeks after Hitler came to power, and continued well into World War II.

As the Third Reich embarked upon its plan of conquest and genocide, IBM and its subsidiaries helped create enabling technologies, step-by-step, from the identification and cataloguing programs of the 1930s to the selections of the 1940s. Only after Jews were identified – a massive and complex task that Hitler wanted done immediately – could they be targeted for efficient asset confiscation, ghettoisation, deportation, slave labour and ultimately, annihilation.

Hitler’s cross-tabulation and organisational challenge was so monumental it called for a computer. In the 1930s no computer existed, but IBM’s Hollerith punch card technology did exist. Aided by the company’s custom-designed and constantly updated Hollerith systems, Hitler was able to automate his roundup of Jews. IBM technology was used to organise much of Germany and then Nazi Europe, from the identification of Jews in censuses, registrations, and ancestral tracing programs, to the running of railroads and the organisation of concentration camps.

IBM and its German subsidiary custom-designed complex solutions, one by one, anticipating the Reich’s needs. They did not merely sell the machines and walk away. Instead, IBM leased these machines for high fees and became the sole source of the billions of punch cards that the Reich needed.

Hitler would have been salivating over many of the late 20th century technological breakthroughs, especially implantable microchips. They had been on the breadboard for more than a decade before the terrorist strike on 11 September, 2001.

People first got used to the idea of microchips when governments and local councils started to introduce regulations for control of domestic animals. In Australia, the New South Wales Companion Animals Act required microchipping and registration for both cats and dogs. The argument went that puppies and kittens could be injected with a microchip so that if they became lost, they could simply be scanned like a can of coke at a checkout. A unique number would be displayed on the scanner and matched with an identification database containing pet owner’s names and addresses. The hapless puppy and happy owner could then be reunited. It was not only companion animals that literally copped it in the neck. Millions of farm and research animals – even fish and canaries – throughout the ‘civilised world’ became victims of vast microchip experimentation.

While microchips were being developed for animals, with the ultimate goal of human implantation, humans themselves were being softened up with new identification regimes. Originally, the system was devised so that PINs gave way to smart cards with digitised photos. But with smart card technology too expensive and uptake by the banking industry low, the entire smart card phase was leap-frogged by 9/11, with biometrics heralding the new security revolution.

The biometric companies were quick to capitalise on the terrorist threat. Numerous schemes were rolled out just days after 9/11, while the World Trade Center was still a smoking ruin. Airport security, driver’s licences, and national identity schemes were all touted to make a citizen’s life more secure and terrorist-proof.

Many corporations had previously engaged in trials of biometric technology. In 1999 Sensar introduced the first iris scanning ATM in the US, with others to follow throughout Europe and Japan. A trial of biometrics (INSPASS) at major US airports used hand geometry to verify the identity of overseas travellers. Hundreds of government departments and corporations used biometric fingerprint recognition for employees to access secure areas. The US defence forces undertook major biometric research in the field, and the National Security Agency headed up the Biometric Consortium of companies interested in the technology. Even McDonald’s customers in California could authorise payment for a Big Mac using a fingerprint instead of a credit card. Maccas called it “Pay by Touch”. Customers used a ‘finger image’ because McDonald’s thought the term ‘fingerprint’ was fraught with negative connotations. The new system was promoted as “faster, easier and more secure than cash, paper checks and plastic cards”. Customers were invited to “Sign up today to touch to pay. Prizes awarded every day.”

In many respects, the post-9/11 environment was a police state with a Hollywood face. A kind of slick, sickly propaganda washed over citizens and ebbed into their brains. Technology corporations offered up surveillance technology for free – out of the goodness of their hearts. Newspaper articles urged a new role for the icon of American cultural imperialism, Coca-Cola. “An al-Qa’ida type gets his box cutter out. No worries, just chuck a can of Coke at the bad guy, whip his nose off with your belt buckle and garrotte him,” the Weekend Australian urged.

Snappy, jingoistic Presidential speeches urging a widening of the war on terror were given standing ovations. A lot of people knew the real war was an information war but, feeling powerless against the machine, they just stuck their heads back in the sand. The media lapped it up. Finally something big had come along to make their life even easier. The Pentagon and the State Department, with their respective allies across the world, churned out information 24 hours a day. In the age of desktop journalism the media releases flowed freely via email and fax. A spokesperson was always available for a glib TV or radio grab, adding just enough realism to give the report an air of authenticity.

The call for human microchipping came just days after the World Trade Center was pulverised. Applied Digital Solutions (ADS) implanted its first human chip when a New Jersey surgeon, Richard Seelig, injected two of the chips into himself. He placed one chip in his left forearm and the other near the artificial hip in his right leg. He was motivated after he saw firefighters at the World Trade Center writing their Social Security numbers on their forearms with Magic Markers. He thought that there had to be a more sophisticated way of doing identification.

The implantation procedure had been proven safe by Kevin Warwick, a cybernetics professor from the University of Reading in the UK. Warwick once told reporters: “I was born human. But this was an accident of fate – a condition merely of time and place. I believe it’s something we have the power to change.”

Warwick’s first experiment, Cyborg 1, began in August 1998 when a silicon chip was implanted in his left arm, allowing a computer to monitor him as he moved through the halls and offices of the Department of Cybernetics. His implant communicated via radio waves with a network of antennas throughout the department that in turn transmitted the signals to a computer programmed to respond to his actions. At the main entrance, a voice box operated by the computer said “Hello” when he entered; the computer detected his progress through the building, opening the door to his lab as he approached it and switching on the lights. The implant was in place for nine days.

The next phase of the project, Cyborg 2, planned to move a step further by placing implants in two people at the same time. The goal was to send movement and emotion signals from one person to the other, possibly via the Internet. Irena, Kevin Warwick’s wife, bravely volunteered to go ahead with his-and-hers implants.

Much of Warwick’s groundbreaking work focused on the need to help people with disabilities, an area known as rehabilitation robotics. Ultimately microchips would allow blind people to navigate around and give amputees and people in wheelchairs greater mobility. While Warwick and his team concentrated on the humanitarian angle, across the Atlantic, Applied Digital Solutions (ADS) was ready to roll out the microchip as a means of identification.

ADS was marketing two products. The first, the VeriChip, was similar to the devices implanted in millions of pets in the United States and Australia. The chip was a 12mm by 2.1mm radio frequency device, about the size of a grain of rice, containing a unique identification number and other data. When an external scanner was used, radio frequency energy passed through the skin, energising the dormant VeriChip, which then emitted a radio frequency signal containing the identification number. The number was displayed by the scanner up to four feet away and transmitted by authorised personnel to a secure data storage site via telephone or Internet. The identity of a person could be checked and their data linked up with any other relevant information. As the technology developed, the VeriChip insertion procedure could be performed in an office setting, requiring only local anaesthesia, a tiny incision and a small adhesive bandage.
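The read-and-lookup flow just described (a dormant tag energised by a scanner's radio field, returning only an ID number that is then matched against a back-end database) can be sketched in a few lines of Python. This is a purely illustrative sketch; every class, function and field name below is hypothetical, not ADS's actual system.

```python
from typing import Optional

# Illustrative sketch of the passive-RFID read flow described above.
# All names are hypothetical; this is not ADS code.

RANGE_FEET = 4.0  # the stated maximum read distance


class PassiveTag:
    """A dormant implant: no battery, it answers only when energised."""

    def __init__(self, uid: str):
        self._uid = uid

    def respond(self, field_present: bool) -> Optional[str]:
        # Without the scanner's RF energy, the tag stays silent.
        return self._uid if field_present else None


def scan(tag: PassiveTag, distance_feet: float) -> Optional[str]:
    """Return the tag's ID if the scanner is within range, else None."""
    return tag.respond(field_present=distance_feet <= RANGE_FEET)


# The chip itself carries only an ID; the personal data lives in a
# back-end registry (the article's "secure data storage site").
registry = {"045-617-122": {"name": "J. Citizen", "blood_type": "O+"}}

uid = scan(PassiveTag("045-617-122"), distance_feet=3.0)
record = registry.get(uid) if uid else None
```

The key design point is that the implant holds nothing but a number: everything sensitive sits in the remote database, which is also what makes the ID a tracking key once many databases link to it.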

The second ADS product, Digital Angel, was more elaborate. It combined a global satellite positioning system and monitoring service. The system combined a watch and a device the size of a pack of cigarettes that clipped onto a waistband or a belt like a pager. At first it was suggested Digital Angel could be helpful to monitor Alzheimer’s patients and parolees. Later, it was mooted as a device for people who were potential kidnap victims. The company agreed to distribute its product first in South America (needing no FDA approval), where the GPS system could help locate kidnapping victims, and the VeriChip could identify them if they were drugged, or in a worst case scenario, killed.

The potential for microchipping humans was enormous, only limited by imagination. The chips could carry huge amounts of data on an individual, such as health insurance details, blood type and blood pressure allowing information to be communicated to online doctors over the Internet. The chips helped business – individuals with implants could be clocked in and out of their office automatically. The exact location of any employee and whom they were with was known at all times. It was then easier to contact them for an urgent meeting or in the event of a terrorist attack. Human microchips were extremely useful for car security. For example, unless a car recognised the unique signal from its owner, it would remain immobilised.

Other uses were managing livestock and other farm-related animals; pinpointing the location of valuable stolen property; managing the commodity supply chain; preventing the unauthorised use of firearms; and providing a tamper-proof means of identification for enhanced e-commerce security. The list just grew exponentially.

THE FICTION

In an environment of heightened suspicion, terror and fear, the media glorified human microchipping for its security – there was little danger in losing an implant or having it stolen and it made life so much safer. The microchip television advertising campaign was necessarily emotional. One ad featured grandpa saved by the chip after a stroke. Another depicted a lost child reunited with an emotional mother. In 2004, a new chip, the SafeTChip came on the market. SafeTChip could send and receive data in an instant. It became indispensable for high flyers and political types who could send a distress call to a satellite if they were in trouble. Response by a UTTF (Universal Terror Task Force) was guaranteed within thirty seconds. Peace of mind became a big selling point. The ad for SafeTChip won a Golden Globe award.

The Personal Safeguard Technology market boomed as society embraced the idea of using microchips for human identification. By 2005, the market for implantable chips reached $70 billion per year. Although the whole system was voluntary, the chip became a de facto global ID card. At first, the chips were used widely on prisoners and members of the military who couldn’t really refuse. Then, as the war on terror turned to the home front, it became a matter of voluntary compliance. People without the chip were investigated, ostracised and discriminated against. Many had their assets confiscated because of their alleged terrorist links. Law-abiding citizens who were chipped could access services provided by the State. No chip, no service. Taxpayers without the chip paid the highest marginal rate. And so it went on.

The crux came after the 2003 terrorist food-tampering alert. Although not a single person was hospitalised as a result of the scare, millions lined up for the implant procedure. In a global televised hook-up, the food multinationals joined with world leaders to declare that supermarkets and shopping malls were the new frontier for terror attacks. Only those who could prove they were not terrorists, by virtue of having the implant, were allowed to purchase food. In the capital cities, supermarkets were shut down because of the threat and food was rationed out, again, only to those who were chipped. People who grew their own food were obviously terrorists and were relocated to the AWs (Agricultural Wastelands), where they were forced to work for the multinationals growing genetically engineered supercrops.

Mobile Chip Implantation Clinics travelled the country, stopping at local schools and sports grounds. People receiving the chip got special rewards points, a Chips-R-US bumper sticker and a one-off government payment in return for helping to stamp out terrorism. By 2004, 94% of the population had been chipped. The other 6% were loose cannons. Many had returned to the land, banding together in remote communities to make a new start. Life was harsh and they lived in constant fear of being spotted by the Big Bird spy satellites. A mini ionospheric heater on board the Big Bird simply irradiated their plot of land, withering their crops and killing their animals. The people themselves died a slow and painful death from cancerous lesions.

In the cities, life became a manufactured utopia. The chip enabled the ultimate fusion of human and machine. The rich had many of their human parts replaced with chip-based spares. It became fashionable to have a robotic hand instead of a real human hand. The lower classes marvelled at their new virtual apartments, with walls and appliances that interacted with the implanted chip, turning on the TV, chilling beer and heating up instant dinners in the microwave. Some signed on for the microchip brain implant IntelEct, enabling negative thoughts to be zapped away by a remote computer. Others picked custom-made implants to perform certain functions. A popular implant was the ShagChip for male impotency. It was supplied with a software package enabling users to choose from several levels of performance.

POSTSCRIPT

Need we go on? Thankfully, it’s not 2005. New Dawn magazine hasn’t been shut down by WORLD PATRIOT and human microchipping hasn’t spread very far – yet! What is happening, though, is a quickening of the sort of police state control and surveillance systems that New Dawn has been reporting on for more than a decade. Much of the enabling technology described in this article is already a reality. Microchips are being sold by Applied Digital Solutions and humans are lining up to be implanted. It’s just a matter of time before that Mobile Chip Implantation Clinic becomes a reality. We can continue to live like robots in this emerging police state utopia, or do something about it now while the chips are still down. Once the chips are up and functioning, there will be nowhere to run and nowhere to hide.


The post was published on 6 October 2005 at 16:51.