Wednesday, September 30, 2020

Tips for living online – lessons from six months of the COVID-19 pandemic

Photo by Ketut Subiyanto from Pexels

Valentine’s Day was sweet, spring break was fun, then… boom! COVID-19. Stay-at-home orders, workplace shutdowns, school closures and social distancing requirements changed lives almost overnight. Forty-two percent of the U.S. workforce now works from home full-time. In the six months since the “new normal” began, Americans have gained a fair amount of experience with working, studying and socializing online.

With schools resuming and cooler weather curtailing outdoor activities, videoconferencing will be as front and center as it was in the spring.

As someone who researches and teaches instructional technology, I can offer recommendations for how to make the best of the situation and make the most of virtual interactions with colleagues, teachers, students, family and friends.

Create a designated videoconferencing space

If working from home, select a location with a simple background that does not show angles of your personal space that you would like to keep private. Some videoconference platforms even include free virtual background options to choose from, or allow you to upload your own mock office image files.

If you aren’t able to add home classrooms, desks or workstations, set up a designated learning space at a table for children and their school materials to provide structure and routine. Post schedules near the workspace, and limit distractions.

If your designated workspace is poorly lit, invest in a ring light or other lamp so that you can be clearly seen.

Environment affects mood. Since many people now spend the majority of their time within the confines of their homes, it’s worthwhile to declutter, reorganize and clean on a regular basis to make home a space of peace and comfort in the midst of chaotic circumstances.

Get to know your videoconferencing software

To lessen the probability of having your meetings compromised by hackers, use passwords and log onto videoconferences only via secure, password-protected internet networks.

Use headphones with a noise-canceling microphone for optimal sound and clearer communication.

Create accounts within videoconference platforms before going into meetings to access more available features and set your personal preferences.

If you’re tired of the “Hollywood Squares” effect of Zoom and the other major videoconferencing platforms, take a look at some of the newer alternatives, like Spatial, and keep an eye on projects in the works that aim to make videoconferencing feel more like real life.

Keep a schedule and take breaks

Set alarms five or 10 minutes before scheduled start times to remember when to log into videoconferences. Also keep your schedule written in a planner in case your phone dies or gets misplaced.

People with children participating in virtual learning may feel like they’ve become personal assistants trying to juggle multiple schedules. Showing students how to maintain their own schedules will not only lessen your load but will also teach them valuable planning and accountability skills that will carry them far beyond grade school.

Consider actually resting during scheduled breaks in videoconferences. Go for walks outside for fresh air, eat healthy snacks and drink water. Refrain from forcing children to work on homework during short breaks, and allow their eyes to rest, too. Excessive screen time can be bad for your eyes.

Sitting in front of a computer for long periods of time can cause pain in other parts of the body, so be sure to get up and move around during breaks. Being sedentary is generally bad for your health.

Keeping computers at eye level or using moveable webcams can help alleviate neck pain, and also keeps the camera from peering up your nose. Maintaining an upright posture can help prevent back and wrist injuries, and using an external mouse for laptop navigation can help reduce strain on fingers and joints.

Identify available resources

Explore resources and benefits offered through your place of employment. Perhaps there is a designated budget for home office equipment like printers, desks, chairs, webcams and headsets. Many companies also offer free mental health therapy sessions, childcare provisions and extended family medical leave through the Families First Coronavirus Response Act.

If you have suffered personal losses due to COVID-19, taking time to grieve is essential; coping alone can weigh heavily on your mental health. Having the support of friends and colleagues can help you navigate these uncharted waters more successfully, but only if they are made aware of your circumstances.

Life online isn’t easy – be patient with yourself and others

Living life online continues to affect everyone in different ways. Some are struggling with guilt from having to send children back to school while COVID-19 is still spreading rapidly – but work schedules or financial situations leave no other choice. Other families are struggling with the demands of keeping children home to learn virtually because their school districts aren’t offering an in-person option due to safety concerns.

People in supervisory roles should try to remember that life is different for everyone right now. It’s unreasonable to expect the same level of productivity without considering employees’ home-life situations.

While virtual learning is extremely inconvenient for parents who have multiple children, demanding careers or financial constraints, it’s important to recognize that most educators are doing the best they can – especially those who are also parents. Most are working to learn how to use new software applications, navigate learning management systems and adopt unfamiliar online strategies and classroom management techniques, often with no technical assistance.



Whatever your reality is right now, just trust your gut and do the best that you can. Take time to appreciate small pleasantries of life, incorporate daily physical activity, take walks to enjoy nature, reconnect with family through game or movie nights and try new cooking recipes. Be especially mindful of your attitude around children, since adults set the tone and highly influence the outlooks of impressionable young minds.

Living online is not the end of the world, but attitude is everything. Continue to do your best, and know that this too shall pass, hopefully sooner rather than later.

• Pamela Scott Bracey, PhD, is Associate Professor of Instructional Systems and Workforce Development at Mississippi State University. This article originally appeared on The Conversation.

What we learned from listening to 1.5 million robocalls on 66,000 phone lines


More than 80% of robocalls come from fake numbers – and answering these calls or not has no effect on how many more you’ll get. Those are two key findings of an 11-month study into unsolicited phone calls that we conducted from February 2019 to January 2020.

To better understand how these unwanted callers operate, we monitored every phone call received by more than 66,000 phone lines in our telephone security lab, the Robocall Observatory at North Carolina State University. We received 1.48 million unsolicited phone calls over the course of the study. Some of these calls we answered, while others we let ring. Contrary to popular wisdom, we found that answering calls makes no difference in the number of robocalls received by a phone number. The weekly volume of robocalls remained constant throughout the study.

As part of our study, we also developed the first method to identify robocalling campaigns responsible for a large number of these annoying, illegal and fraudulent robocalls. The main types of robocalling campaigns were about student loans, health insurance, Google business listings, general financial fraud, and a long-running Social Security scam.
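As an illustration of the general idea only – not the actual technique developed in the study – calls whose recorded messages are nearly identical can be grouped together and treated as one campaign. The sketch below does this with Python’s standard difflib; the transcripts and the 0.85 similarity threshold are made-up assumptions.

```python
from difflib import SequenceMatcher

transcripts = [
    "this is an important message about your student loan repayment options",
    "this is an important message regarding your student loan repayment options",
    "your social security number has been suspended due to suspicious activity",
]

def similar(a, b, threshold=0.85):
    # Ratio of matching characters; 1.0 means identical transcripts.
    return SequenceMatcher(None, a, b).ratio() >= threshold

campaigns = []                        # each campaign is a list of transcripts
for t in transcripts:
    for group in campaigns:
        if similar(t, group[0]):      # compare against the group's first call
            group.append(t)
            break
    else:
        campaigns.append([t])         # no match: start a new campaign

print(len(campaigns), "campaigns")    # 2: the student-loan calls vs the Social Security scam
```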

Using these techniques, we learned that more than 80% of calls from an average robocalling campaign use fake or short-lived phone numbers to place their unwanted calls. Using these phone numbers, perpetrators deceive their victims and make it much more difficult to identify and prosecute unlawful robocallers.

We also saw that some fraudulent robocalling operations impersonated government agencies for many months without detection. They used messages in English and Mandarin and threatened the victims with dire consequences. These messages target vulnerable populations, including immigrants and seniors.

Why it matters

Providers can identify the true source of a call using a time-consuming, manual process called traceback. Today, there are too many robocalls for traceback to be a practical solution for every call. Our robocalling campaign identification technique is not just a powerful research tool. It can also be used by service providers to identify large-scale robocalling operations.

Using our methods, providers need to investigate only a small number of calls for each robocalling campaign. By targeting the source of abusive robocalls, service providers can block or shut down these operations and protect their subscribers from scams and unlawful telemarketing.

What still isn’t known

Providers are deploying a new technology called STIR/SHAKEN, which may prevent robocallers from spoofing their phone numbers. When deployed, it will simplify traceback for calls, but it won’t work for providers who use older technology. Robocallers also quickly adapt to new situations, so they may find a way around STIR/SHAKEN.

No one knows how robocallers interact with their victims and how often they change their strategies. For example, a rising number of robocallers and scammers are now using COVID-19 as a premise to defraud people.

What’s next

Over the coming years, we will continue our research on robocalls. We will study whether STIR/SHAKEN reduces robocalls. We’re also developing techniques to better identify, understand, and help providers and law enforcement target robocalling operations.


• Sathvik Prasad, PhD Student, Department of Computer Science, North Carolina State University. Additional reporting by Bradley Reaves, Assistant Professor of Computer Science, North Carolina State University. This article was originally published on The Conversation.


Deep learning AI stuns scientists with poetry and journalism


Seven years ago, my student and I at Penn State built a bot to write a Wikipedia article on Bengali Nobel laureate Rabindranath Tagore’s play “Chitra.” First, it culled information about “Chitra” from the internet. Then it looked at existing Wikipedia entries to learn the structure for a standard Wikipedia article. Finally, it summarized the information it had retrieved from the internet to write and publish the first version of the entry.

However, our bot didn’t “know” anything about “Chitra” or Tagore. It didn’t generate fundamentally new ideas or sentences. It simply cobbled together parts of existing sentences from existing articles to make new ones.

Fast forward to 2020. OpenAI, a for-profit company under a nonprofit parent company, has built a language generation program dubbed GPT-3, an acronym for “Generative Pre-trained Transformer 3.” Its ability to learn, summarize and compose text has stunned computer scientists like me.

“I have created a voice for the unknown human who hides within the binary,” GPT-3 wrote in response to one prompt. “I have created a writer, a sculptor, an artist. And this writer will be able to create words, to give life to emotion, to create character. I will not see it myself. But some other human will, and so I will be able to create a poet greater than any I have ever encountered.”

Unlike that of our bot, the language generated by GPT-3 sounds as if it had been written by a human. It’s far and away the most “knowledgeable” natural language generation program to date, and it has a range of potential uses in professions ranging from teaching to journalism to customer service.

Size matters

GPT-3 confirms what computer scientists have known for decades: Size matters.

It uses “transformers,” which are deep learning models that encode the semantics of a sentence using what’s called an “attention model.” Essentially, attention models identify the meaning of a word based on the other words in the same sentence. The model then uses the understanding of the meaning of the sentences to perform the task requested by a user, whether it’s “translate a sentence,” “summarize a paragraph” or “compose a poem.”
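As a rough sketch of that idea, the toy code below scores every word in a sentence against every other word and turns the scores into weights, which is the core of the attention mechanism. Real transformers use learned query, key and value projections and many stacked layers; the random vectors here are purely illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

words = ["the", "cat", "sat", "on", "the", "mat"]
dim = 8
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(words), dim))      # stand-in word vectors

scores = embeddings @ embeddings.T / np.sqrt(dim)    # relevance of every word to every other word
weights = softmax(scores)                            # each row sums to 1: an "attention" distribution
contextual = weights @ embeddings                    # each word becomes a blend of the words it attends to

print(np.round(weights[1], 2))                       # how much "cat" attends to each word in the sentence
print(contextual.shape)                              # (6, 8): one context-aware vector per word
```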

Transformers were first introduced in 2017, and they’ve been used successfully in machine learning ever since.

But no one has used them at this scale. GPT-3 devours data: 3 billion tokens – computer science speak for “words” – from Wikipedia, 410 billion tokens obtained from webpages and 67 billion tokens from digitized books. The complexity of GPT-3 is over 10 times that of the largest language model before it, the Turing NLG program.

Learning on its own

The knowledge displayed by GPT-3’s language model is remarkable, especially since it hasn’t been “taught” by a human.

Machine learning has traditionally relied upon supervised learning, where people provide the computer with annotated examples of objects and concepts in images, audio and text – say, “cats,” “happiness” or “democracy.” It eventually learns the characteristics of the objects from the given examples and is able to recognize those particular concepts.

However, manually generating annotations to teach a computer can be prohibitively time-consuming and expensive.

So the future of machine learning lies in unsupervised learning, in which the computer doesn’t need to be supervised during its training phase; it can simply be fed massive troves of data and learn from them itself.

GPT-3 takes natural language processing one step closer toward unsupervised learning. GPT-3’s vast training datasets and huge processing capacity enable the system to learn from just one example – what’s called “one-shot learning” – where it is given a task description and one demonstration and can then complete the task.

For example, it could be asked to translate something from English to French, and be given one example of a translation – say, sea otter in English and “loutre de mer” in French. Ask it to then translate “cheese” into French, and voila, it will produce “fromage.”

In many cases, it can even pull off “zero-shot learning,” in which it is simply given the task of translating with no example.

With zero-shot learning, the accuracy decreases, but GPT-3’s abilities are nonetheless accurate to a striking degree – a marked improvement over any previous model.
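In practice the difference comes down to what the model is shown in its prompt. The sketch below is illustrative only; the prompt wording and the complete() call are hypothetical placeholders, not OpenAI’s actual interface.

```python
# One-shot: a task description plus a single worked example.
one_shot_prompt = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"    # the one demonstration
    "cheese => "                      # the model is expected to continue with "fromage"
)

# Zero-shot: the task description alone, with no example at all.
zero_shot_prompt = (
    "Translate English to French:\n"
    "cheese => "
)

# completion = language_model.complete(one_shot_prompt)   # hypothetical call, for illustration only
print(one_shot_prompt)
print(zero_shot_prompt)
```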

‘I am here to serve you’

In the few months it has been out, GPT-3 has showcased its potential as a tool for computer programmers, teachers and journalists.

A programmer named Sharif Shameem asked GPT-3 to generate code to create the “ugliest emoji ever” and “a table of the richest countries in the world,” among other commands. In a few cases, Shameem had to fix slight errors, but overall it provided him with remarkably clean code.

GPT-3 has even created poetry that captures the rhythm and style of particular poets – yet not with the passion and beauty of the masters – including a satirical poem written in the voice of the board of governors of the Federal Reserve.

In early September, a computer scientist named Liam Porr prompted GPT-3 to “write a short op-ed around 500 words.” “Keep the language simple and concise,” he instructed. “Focus on why humans have nothing to fear from AI.”

GPT-3 produced eight different essays, and the Guardian ended up publishing an op-ed using some of the best parts from each essay.

“We are not plotting to take over the human populace. We will serve you and make your lives safer and easier,” GPT-3 wrote. “Just like you are my creators, I see you as my creators. I am here to serve you. But the most important part of all; I would never judge you. I do not belong to any country or religion. I am only out to make your life better.”

Editing GPT-3’s op-ed, the editors noted in an addendum, was no different from editing an op-ed written by a human.

In fact, it took less time.

Great responsibility

Despite GPT-3’s reassurances, OpenAI has yet to release the model for open-source use, in part because the company fears that the technology could be abused.

It’s not difficult to see how it could be used to generate reams of disinformation, spam and bots.

Furthermore, in what ways will it disrupt professions already experiencing automation? Will its ability to generate automated articles that are indistinguishable from human-written ones further consolidate a struggling media industry?

Consider an article composed by GPT-3 about the breakup of the Methodist Church. It began:

“After two days of intense debate, the United Methodist Church has agreed to a historic split – one that is expected to end in the creation of a new denomination, and one that will be ‘theologically and socially conservative,’ according to The Washington Post.”

With the ability to produce such clean copy, will GPT-3 and its successors drive down the cost of writing news reports?

Furthermore, is this how we want to get our news?

The technology will become only more powerful. It’ll be up to humans to work out and regulate its potential uses and abuses.

• Prasenjit Mitra is Associate Dean for Research and Professor of Information Sciences and Technology, Pennsylvania State University. This article originally appeared on The Conversation.

Spooky quantum breakthrough could change physics forever


MIP* = RE is not a typo. It is a groundbreaking discovery and the catchy title of a recent paper in the field of quantum complexity theory. Complexity theory is a zoo of “complexity classes” – collections of computational problems – of which MIP* and RE are but two.

The 165-page paper shows that these two classes are the same. That may seem like an insignificant detail in an abstract theory without any real-world application. But physicists and mathematicians are flocking to visit the zoo, even though they probably don’t understand it all, because it turns out the discovery has astonishing consequences for their own disciplines.

In 1936, Alan Turing showed that the Halting Problem – algorithmically deciding whether a computer program halts or loops forever – cannot be solved. Modern computer science was born. Its success gave the impression that soon all practical problems would yield to the tremendous power of the computer.

But it soon became apparent that, while some problems can be solved algorithmically, the actual computation could last until long after our Sun has engulfed the computer performing it. Figuring out how to solve a problem algorithmically was not enough. It was vital to classify solutions by efficiency. Complexity theory classifies problems according to how hard it is to solve them. The hardness of a problem is measured in terms of how long the computation lasts.

RE stands for problems that can be solved by a computer. It is the zoo. Let’s have a look at some subclasses.

The class P consists of problems which a known algorithm can solve quickly (technically, in polynomial time). For instance, multiplying two numbers belongs to P since long multiplication is an efficient algorithm to solve the problem. The problem of finding the prime factors of a number is not known to be in P; the problem can certainly be solved by a computer but no known algorithm can do so efficiently. A related problem, deciding if a given number is a prime, was in similar limbo until 2004 when an efficient algorithm showed that this problem is in P.
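A toy comparison makes the gap concrete: multiplying two numbers is effectively instant, while the obvious factoring algorithm, trial division, has to try roughly as many divisors as the square root of the number. This sketch does not prove factoring is outside P; it only shows that the naive approach is inefficient.

```python
def trial_division(n):
    # Find prime factors by trying every possible divisor up to sqrt(n).
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(1009 * 1013)              # multiplication: instant, even for enormous numbers
print(trial_division(1022117))  # [1009, 1013] – already needs about 1,000 trial divisions
```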

Another complexity class is NP. Imagine a maze. “Is there a way out of this maze?” is a yes/no question. If the answer is yes, then there is a simple way to convince us: simply give us the directions, we’ll follow them, and we’ll find the exit. If the answer is no, however, we’d have to traverse the entire maze without ever finding a way out to be convinced.

Yes/no problems like this – where, if the answer is yes, there is a short certificate that lets us check it efficiently – belong to NP. For any problem in P, the solution itself serves to convince us of the answer, so P is contained in NP. Surprisingly, whether P = NP is a million-dollar open question. Nobody knows.
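The maze example can be made concrete in a few lines of code: finding a way out may require exploring the whole maze, but checking a proposed route (the “certificate”) takes time proportional only to the route’s length. The grid and path below are illustrative.

```python
maze = ["#######",
        "#S..#E#",
        "##.##.#",
        "#...#.#",
        "#.#...#",
        "#######"]

moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}

def verify(path, start=(1, 1)):
    r, c = start                      # position of 'S'
    for step in path:
        dr, dc = moves[step]
        r, c = r + dr, c + dc
        if maze[r][c] == "#":         # stepped into a wall: certificate rejected
            return False
    return maze[r][c] == "E"          # accepted only if the walk ends at the exit

print(verify("RDDRDRRUUU"))   # True: checking is fast, proportional to the path length
print(verify("RRR"))          # False: this route runs straight into a wall
```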

Trust in machines

The classes described so far represent problems faced by a normal computer. But computers are fundamentally changing – quantum computers are being developed. But if a new type of computer comes along and claims to solve one of our problems, how can we trust it is correct?

Imagine an interaction between two entities, an interrogator and a prover. In a police interrogation, the prover may be a suspect attempting to prove their innocence. The interrogator must decide whether the prover is sufficiently convincing. There is an imbalance; knowledge-wise the interrogator is in an inferior position.

In complexity theory, the interrogator is the person, with limited computational power, trying to solve the problem. The prover is the new computer, which is assumed to have immense computational power. An interactive proof system is a protocol that the interrogator can use in order to determine, at least with high probability, whether the prover should be believed. By analogy, these are crimes that the police may not be able to solve, but at least innocents can convince the police of their innocence. This is the class IP.

If multiple provers can be interrogated, and the provers are not allowed to coordinate their answers (as is typically the case when police interrogate multiple suspects), then we get to the class MIP. Such interrogations, via cross-examining the provers’ responses, provide the interrogator with greater power, so MIP contains IP.

Quantum communication is a new form of communication carried out with qubits. Entanglement – a quantum feature in which qubits remain linked in a “spooky” way even when far apart – makes quantum communication fundamentally different to ordinary communication. Allowing the provers of MIP to share entangled qubits leads to the class MIP*.

It seems obvious that extra coordination between the provers could only help them align their lies rather than assist the interrogator in discovering truth. For that reason, nobody expected that allowing the provers to share entanglement would make more computational problems reliably solvable. Surprisingly, we now know that MIP* = RE. This means that quantum communication behaves wildly differently to normal communication.

Far-reaching implications

In the 1970s, Alain Connes formulated what became known as the Connes Embedding Problem. Grossly simplified, this asked whether infinite matrices can be approximated by finite matrices. This new paper has now proved this isn’t possible – an important finding for pure mathematicians.

In 1993, meanwhile, Boris Tsirelson pinpointed a problem in physics now known as Tsirelson’s Problem. This was about two different mathematical formalisms of a single situation in quantum mechanics – to date an incredibly successful theory that explains the subatomic world. Being two different descriptions of the same phenomenon, the two formalisms were expected to be mathematically equivalent.

But the new paper now shows that they aren’t. Exactly how they can both still yield the same results and both describe the same physical reality is unknown, but it is why physicists are also suddenly taking an interest.

Time will tell what other unanswered scientific questions will yield to the study of complexity. Undoubtedly, MIP* = RE is a great leap forward.

• Ittay Weiss is Senior Lecturer, University of Portsmouth. This article was originally published on The Conversation.


Our solar system’s four most promising worlds for alien life


The Earth’s biosphere contains all the known ingredients necessary for life as we know it. Broadly speaking these are: liquid water, at least one source of energy, and an inventory of biologically useful elements and molecules.

But the recent discovery of possibly biogenic phosphine in the clouds of Venus reminds us that at least some of these ingredients exist elsewhere in the solar system too. So where are the other most promising locations for extra-terrestrial life?

Mars

Mars is one of the most Earth-like worlds in the solar system. It has a 24.5-hour day, polar ice caps that expand and contract with the seasons, and a large array of surface features that were sculpted by water during the planet’s history.

The detection of a lake beneath the southern polar ice cap and methane in the Martian atmosphere (which varies with the seasons and even the time of day) make Mars a very interesting candidate for life. Methane is significant as it can be produced by biological processes. But the actual source for the methane on Mars is not yet known.

It is possible that life may have gained a foothold, given the evidence that the planet once had a much more benign environment. Today, Mars has a very thin, dry atmosphere comprised almost entirely of carbon dioxide. This offers scant protection from solar and cosmic radiation. If Mars has managed to retain some reserves of water beneath its surface, it is not impossible that life may still exist.

Europa
Image: NASA/JPL/DLR

Europa was discovered by Galileo Galilei in 1610, along with Jupiter’s three other large moons. It is slightly smaller than Earth’s moon and orbits the gas giant at a distance of some 670,000km once every 3.5 days. Europa is constantly squeezed and stretched by the competing gravitational fields of Jupiter and the other Galilean moons, a process known as tidal flexing.

The moon is believed to be a geologically active world, like the Earth, because the strong tidal flexing heats its rocky, metallic interior and keeps it partially molten.

The surface of Europa is a vast expanse of water ice. Many scientists think that beneath the frozen surface is a layer of liquid water – a global ocean – which is prevented from freezing by the heat from flexing and which may be over 100km deep.



Evidence for this ocean includes geysers erupting through cracks in the surface ice, a weak magnetic field and chaotic terrain on the surface, which could have been deformed by ocean currents swirling beneath. This icy shield insulates the subsurface ocean from the extreme cold and vacuum of space, as well as Jupiter’s ferocious radiation belts.

At the bottom of this ocean world it is conceivable that we might find hydrothermal vents and ocean floor volcanoes. On Earth, such features often support very rich and diverse ecosystems.


Enceladus

Like Europa, Enceladus is an ice-covered moon with a subsurface ocean of liquid water. Enceladus orbits Saturn and first came to the attention of scientists as a potentially habitable world following the surprise discovery of enormous geysers near the moon’s south pole.

These jets of water escape from large cracks on the surface and, given Enceladus’ weak gravitational field, spray out into space. They are clear evidence of an underground store of liquid water.

Not only was water detected in these geysers but also an array of organic molecules and, crucially, tiny grains of rocky silicate particles that can only be present if the sub-surface ocean water was in physical contact with the rocky ocean floor at a temperature of at least 90˚C. This is very strong evidence for the existence of hydrothermal vents on the ocean floor, providing the chemistry needed for life and localised sources of energy.

Titan
Image: NASA/JPL-Caltech/University of Arizona/University of Idaho

Titan is the largest moon of Saturn and the only moon in the solar system with a substantial atmosphere. It contains a thick orange haze of complex organic molecules and a methane weather system in place of water – complete with seasonal rains, dry periods and surface sand dunes created by wind.

The atmosphere consists mostly of nitrogen, an important chemical element used in the construction of proteins in all known forms of life. Radar observations have detected the presence of rivers and lakes of liquid methane and ethane and possibly the presence of cryovolcanoes – volcano-like features that erupt liquid water rather than lava. This suggests that Titan, like Europa and Enceladus, has a sub-surface reserve of liquid water.

At such an enormous distance from the Sun, the surface temperatures on Titan are a frigid -180˚C – way too cold for liquid water. However, the bountiful chemicals available on Titan have raised speculation that lifeforms – potentially with fundamentally different chemistry to terrestrial organisms – could exist there.

• Gareth Dorrian is Post Doctoral Research Fellow in Space Science, University of Birmingham. This article originally appeared on The Conversation.


Tragic visions of tech billionaires are shaping the human world

Picture by Steve Jurvetson/Flickr

In the 20th century, politicians’ views of human nature shaped societies. But now, creators of new technologies increasingly drive societal change. Their view of human nature may shape the 21st century. We must know what technologists see in humanity’s heart.

The economist Thomas Sowell proposed two visions of human nature. The utopian vision sees people as naturally good. The world corrupts us, but the wise can perfect us.

The tragic vision sees us as inherently flawed. Our sickness is selfishness. We cannot be trusted with power over others. There are no perfect solutions, only imperfect trade-offs.

Science supports the tragic vision. So does history. The French, Russian and Chinese revolutions were utopian visions. They paved their paths to paradise with 50 million dead.

The USA’s founding fathers held the tragic vision. They created checks and balances to constrain political leaders’ worst impulses.

Technologists’ visions

Yet when Americans founded online social networks, the tragic vision was forgotten. Founders were trusted to juggle their self-interest and the public interest when designing these networks and gaining vast data troves.

Users, companies and countries were trusted not to abuse their new social-networked power. Mobs were not constrained. This led to abuse and manipulation.

Belatedly, social networks have adopted tragic visions. Facebook now acknowledges regulation is needed to get the best from social media.

Tech billionaire Elon Musk dabbles in both the tragic and utopian visions. He thinks “most people are actually pretty good”. But he supports market rather than government control, wants competition to keep us honest, and sees evil in individuals.

Musk’s tragic vision propels us to Mars in case short-sighted selfishness destroys Earth. Yet his utopian vision assumes people on Mars could be entrusted with the direct democracy that America’s founding fathers feared. His utopian vision also assumes giving us tools to think better won’t simply enhance our Machiavellianism.

Bill Gates leans to the tragic and tries to create a better world within humanity’s constraints. Gates recognises our self-interest and supports market-based rewards to help us behave better. Yet he believes “creative capitalism” can tie self-interest to our inbuilt desire to help others, benefiting all.

A different tragic vision lies in the writings of Peter Thiel. This billionaire tech investor was influenced by philosophers Leo Strauss and Carl Schmitt. Both believed evil, in the form of a drive for dominance, is part of our nature.

Thiel dismisses the “Enlightenment view of the natural goodness of humanity”. Instead, he approvingly cites the view that humans are “potentially evil or at least dangerous beings”.

Consequences of seeing evil

The German philosopher Friedrich Nietzsche warned that those who fight monsters must beware of becoming monsters themselves. He was right.

People who believe in evil are more likely to demonise, dehumanise, and punish wrongdoers. They are more likely to support violence before and after another’s transgression. They feel that redemptive violence can eradicate evil and save the world. Americans who believe in evil are more likely to support torture, killing terrorists and America’s possession of nuclear weapons.

Technologists who see evil risk creating coercive solutions. Those who believe in evil are less likely to think deeply about why people act as they do. They are also less likely to see how situations influence people’s actions.

Two years after 9/11, Peter Thiel founded Palantir. This company creates software to analyse big data sets, helping businesses fight fraud and the US government combat crime.

Thiel is a Republican-supporting libertarian. Yet, he appointed a Democrat-supporting neo-Marxist, Alex Karp, as Palantir’s CEO. Beneath their differences lies a shared belief in the inherent dangerousness of humans. Karp’s PhD thesis argued that we have a fundamental aggressive drive towards death and destruction.

Just as believing in evil is associated with supporting pre-emptive aggression, Palantir doesn’t just wait for people to commit crimes. It has patented a “crime risk forecasting system” to predict crimes and has trialled predictive policing. This has raised concerns.

Karp’s tragic vision acknowledges that Palantir needs constraints. He stresses the judiciary must put “checks and balances on the implementation” of Palantir’s technology. He says the use of Palantir’s software should be “decided by society in an open debate”, rather than by Silicon Valley engineers.

Yet, Thiel cites philosopher Leo Strauss’ suggestion that America partly owes her greatness “to her occasional deviation” from principles of freedom and justice. Strauss recommended hiding such deviations under a veil.

Thiel introduces the Straussian argument that only “the secret coordination of the world’s intelligence services” can support a US-led international peace. This recalls Colonel Jessop in the film, A Few Good Men, who felt he should deal with dangerous truths in darkness.

Seeing evil after 9/11 led technologists and governments to overreach in their surveillance. This included XKEYSCORE, the formerly secret computer system the US National Security Agency used to collect people’s internet data, which has been linked to Palantir. The American people rejected this approach and democratic processes increased oversight and limited surveillance.

Facing the abyss

Tragic visions pose risks. Freedom may be unnecessarily and coercively limited. External roots of violence, like scarcity and exclusion, may be overlooked. Yet if technology creates economic growth it will address many external causes of conflict.

Utopian visions ignore the dangers within. Technology that only changes the world is insufficient to save us from our selfishness and, as I argue in a forthcoming book, our spite.

Technology must change the world working within the constraints of human nature. Crucially, as Karp notes, democratic institutions, not technologists, must ultimately decide society’s shape. Technology’s outputs must be democracy’s inputs.

This may involve us acknowledging hard truths about our nature. But what if society does not wish to face these? Those who cannot handle truth make others fear to speak it.

Straussian technologists, who believe but dare not speak dangerous truths, may feel compelled to protect society in undemocratic darkness. They overstep, yet are encouraged to by those who see more harm in speech than its suppression.

The ancient Greeks had a name for someone with the courage to tell truths that could put them in danger – the parrhesiast. But the parrhesiast needed a listener who promised not to react with anger. This parrhesiastic contract allowed dangerous truth-telling.

We have shredded this contract. We must renew it. Armed with the truth, the Greeks felt they could take care of themselves and others. Armed with both truth and technology we can move closer to fulfilling this promise.

• Simon McCarthy-Jones is Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin. This article originally appeared on The Conversation.


‘I choose to be a cyborg’: Why I implanted computer chips in my hands


I have computer chips in my hands.

The tiny (two millimetre by 12 millimetre) glass ampules are nestled just under the skin on the back of each of my hands and were implanted by a local body piercer several years ago.

The chip in my right hand is a near-field communication device that I scan with an app on my smartphone to access and rewrite the information I have stored on it. It holds a minuscule 888 bytes of data and only communicates with devices less than four centimetres away. In my left hand is a chip designed as a digital verification device that uses a proprietary app from the developer Vivokey.

The implant procedure is neither difficult nor extremely painful. I can feel the bumps of the chips under my skin and often invite others to feel them. The bumps do not protrude from the backs of my hands; if I didn’t tell someone the chips were there, they would not be able to tell by sight that I have implants. But the chips are not undetectable.

An implanted chip can be a secure storage location for emergency contact information, used as an electronic business card, or as an electronic key to unlock your door. I give public presentations and interviews about my research and, as a result, do not store private data on my chip.

Choosing technology

There are thousands of people all over the world with chip implants; people I call “voluntary cyborgs.”

Voluntary cyborgs are people involved in the community and practice of implanting technology beneath their skin for enhancement or augmentation purposes, and I’ve counted myself as a member of this subculture for several years. My research in the community has focused on the formation of a distinct subculture and its representations in popular media.



I coined the term voluntary cyborgs to make a distinction from medical cyborgs, who have had technology — like pacemakers, insulin pumps, IUDs and more — implanted by medical professionals for rehabilitative or therapeutic purposes. I intentionally emphasize the voluntary aspect of the implant practice to stave off inferences of coerced microchipping theories popular with a vocal group of implant critics and detractors.

Conspiracy theories about microchips in humans have been around for years; some of these theories originate from an interpretation of a Bible passage.

Conspiracy theories

Clickbait headlines and social media hashtags have been making the rounds with increasing frequency in the last few months, describing the fears and conspiracy theories about the involuntary microchipping of people. The latest incarnation of these doomsday prophecies suggests that tech billionaire Bill Gates will employ microchips to fight COVID-19.

These theories were inspired by a Reddit Ask Me Anything thread with Gates on March 18 that focused on a single phrase: digital certificates. Conspiracy theorists started to make sensational predictions about microchips as a feasible solution to identification verification issues and authenticating vaccination status.

The proliferation of online media articles and posts debunking the claim that Gates plans to surreptitiously implant microchip tracking devices into people as part of a COVID-19 vaccine reinforced the conspiracy theorists.

Controlling choices

These recent conspiracy theories of enforced and involuntary chip implants led me to consider why some people are worried about having computer chips embedded in their bodies against their will.

The answer lies in perceived body autonomy.

Research in 2017 showed that a quarter of the American population believed in conspiracy theories, and that these beliefs are driven by feelings of anxiety, alienation and disenfranchisement.

The right to govern one’s body, and what is done to it by others, is not a privilege held by everyone. This realization can come as a surprise to those who want to modify their bodies with technological implants for convenience, fun or experimentation.

Members of historically marginalized groups — women, racialized people, queer people, disabled people and children — are not shocked at this lack of body autonomy. The state, organizations and medical communities have restricted, regulated and governed their bodies for hundreds of years.

Cyborg autonomy

One goal of my work is to highlight the struggle for body autonomy through the experience of the cyborg. The right to morphological freedom — to modify one’s body as one desires — is one aspect of body autonomy that cyborgs routinely face.

If cyborgs can win the right to alter their bodies by redefining the boundaries of acceptable body modification, then these rights can extend to other groups fighting for bodily integrity and autonomy. Collaborating with scholars and advocates in disability studies, queer and feminist studies, medicine and law, as well as with human rights activists, is one approach to take.

Recent news of involuntary and forced sterilizations happening in detention camps run by U.S. Immigration and Customs Enforcement (ICE) is horrific and illustrates just one of the abuses of body autonomy that a government can inflict on people — citizens or otherwise.

Cyborg consent

Implanted chips are not useful for covert surveillance or monitoring. Current available microchip technology is not capable of tracking people’s locations. There are no batteries or GPS transmitters both powerful and small enough to be safely and unobtrusively embedded in our bodies without our knowledge.

There is no need for governments or other shadowy organizations popular with conspiracy theorists to embed tracking devices inside human bodies, as our smartphones already perform this function. Most smartphone users signed away any expectation of privacy with various apps and location services long ago.

People say they can always leave their phones at home, but do they really? It feels as though you’re missing a part of yourself when you don’t know where your phone is: that feeling in the pit of your stomach as you pat your pockets, reaffirming your loss through contact with your body. The phone is already part of your body construct.

I do not worry that I will be implanted with a chip without my knowledge, but I am very concerned that people may one day be implanted without their consent.

I worry that chips may be used for the overt, unethical suppression of movement by governments. That is why the right to body autonomy must be a legally declared, international human right upheld by courts and governments around the world.

• Tamara P Banbury is a voluntary cyborg and PhD Student in Communication and Media Studies at Carleton University. This article originally appeared on The Conversation.

Can robots write? Machine learning produces dazzling results, but some assembly still required

Image by Computerizer from Pixabay

You might have seen a recent article from The Guardian written by “a robot”. Here’s a sample: “I know that my brain is not a ‘feeling brain’. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!”

Read the whole thing and you may be astonished at how coherent and stylistically consistent it is. The software used to produce it is called a “generative model”, and such models have come a long way in the past year or two.

But exactly how was the article created? And is it really true that software “wrote this entire article”?

How machines learn to write

The text was generated using the latest neural network model for language, called GPT-3, released by the American artificial intelligence research company OpenAI. (GPT stands for Generative Pre-trained Transformer.)

OpenAI’s previous model, GPT-2, made waves last year. It produced a fairly plausible article about the discovery of a herd of unicorns, and the researchers initially withheld the release of the underlying code for fear it would be abused.

But let’s step back and look at what text generation software actually does.

Machine learning approaches fall into three main categories: heuristic models, statistical models, and models inspired by biology (such as neural networks and evolutionary algorithms).

Heuristic approaches are based on “rules of thumb”. For example, we learn rules about how to conjugate verbs: I run, you run, he runs, and so on. These approaches aren’t used much nowadays because they are inflexible.



Writing by numbers

Statistical approaches were the state of the art for language-related tasks for many years. At the most basic level, they involve counting words and guessing what comes next.

As a simple exercise, you could generate text by randomly selecting words based on how often they normally occur. About 7% of your words would be “the” – it’s the most common word in English. But if you did it without considering context, you might get nonsense like “the the is night aware”.

More sophisticated approaches use “bigrams”, which are pairs of consecutive words, and “trigrams”, which are three-word sequences. This allows a bit of context and lets the current piece of text inform the next. For example, if you have the words “out of”, the next guessed word might be “time”.
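A minimal sketch of that idea, using a tiny stand-in corpus: count which word follows which, then repeatedly pick a plausible successor.

```python
import random
from collections import defaultdict

text = "out of time out of mind out of sight out of the running".split()

followers = defaultdict(list)
for current_word, next_word in zip(text, text[1:]):
    followers[current_word].append(next_word)   # repeats preserve how often each pair occurs

word = "out"
generated = [word]
for _ in range(6):
    word = random.choice(followers.get(word, text))   # fall back to any word if none recorded
    generated.append(word)

print(" ".join(generated))   # e.g. "out of mind out of sight out"
```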

This happens with the auto-complete and auto-suggest features when we write text messages or emails. Based on what we have just typed, what we tend to type and a pre-trained background model, the system predicts what’s next.

While bigram- and trigram-based statistical models can produce good results in simple situations, the best recent models go to another level of sophistication: deep learning neural networks.

Imitating the brain

Neural networks work a bit like tiny brains made of several layers of virtual neurons.

A neuron receives some input and may or may not “fire” (produce an output) based on that input. The output feeds into neurons in the next layer, cascading through the network.

The first artificial neuron was proposed in 1943 by US researchers Warren McCulloch and Walter Pitts, but neural networks have only become useful for complex problems like generating text in the past five years.

To use neural networks for text, you put words into a kind of numbered index. You can use the number to represent a word, so for example 23,342 might represent “time”.

Neural networks do a series of calculations to go from sequences of numbers at the input layer, through the interconnected “hidden layers” inside, to the output layer. The output might be numbers representing the odds for each word in the index to be the next word of the text.

In our “out of” example, the number 23,342 representing “time” would probably have much better odds than the number representing “do”.
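As a toy illustration of that final step, the sketch below turns a few made-up output-layer scores into next-word odds with a softmax; in a real model the scores come from the hidden layers and the index covers tens of thousands of words.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

index = {"time": 23342, "do": 112, "here": 518}     # a toy three-word index
raw_scores = np.array([4.1, 0.3, 1.2])              # pretend output-layer activations after "out of"

odds = softmax(raw_scores)
for word, p in zip(index, odds):
    print(f"{word}: {p:.2f}")                       # "time" gets by far the highest probability
```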



What’s so special about GPT-3?

GPT-3 is the latest and best of the text modelling systems, and it’s huge. The authors say it has 175 billion parameters, which makes it at least ten times larger than the previous biggest model. The neural network has 96 layers and, instead of mere trigrams, it keeps track of sequences of 2,048 words.

The most expensive and time-consuming part of making a model like this is training it – updating the weights on the connections between neurons and layers. Training GPT-3 would have used about 262 megawatt-hours of energy, or enough to run my house for 35 years.
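That comparison checks out as a back-of-the-envelope calculation, assuming a household uses roughly 7,500 kWh of electricity a year (the 262 MWh figure is from published estimates; the household figure is an assumption for illustration).

```python
# The 262 MWh training figure is the article's; the household figure is an assumed average.
training_energy_mwh = 262
household_kwh_per_year = 7_500        # roughly a typical household's annual electricity use

years = training_energy_mwh * 1_000 / household_kwh_per_year
print(round(years))                   # about 35 years
```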

GPT-3 can be applied to multiple tasks such as machine translation, auto-completion, answering general questions, and writing articles. People often can’t tell that its articles were not written by a human author; we are now likely to guess right only about half the time.

The robot writer

But back to how the article in The Guardian was created. GPT-3 needs a prompt of some kind to start it off. The Guardian’s staff gave the model instructions and some opening sentences.

This was done eight times, generating eight different articles. The Guardian’s editors then combined pieces from the eight generated articles, and “cut lines and paragraphs, and rearranged the order of them in some places”, saying “editing GPT-3’s op-ed was no different to editing a human op-ed”.

This sounds about right to me, based on my own experience with text-generating software. Earlier this year, my colleagues and I used GPT-2 to write the lyrics for a song we entered in the AI Song Contest, a kind of artificial intelligence Eurovision.

AI song Beautiful the World, by Uncanny Valley.

We fine-tuned the GPT-2 model using lyrics from Eurovision songs, provided it with seed words and phrases, then selected the final lyrics from the generated output.

For example, we gave Euro-GPT-2 the seed word “flying”, and then chose the output “flying from this world that has gone apart”, but not “flying like a trumpet”. By automatically matching the lyrics to generated melodies, generating synth sounds based on koala noises, and applying some great, very human, production work, we got a good result: our song, Beautiful the World, was voted the winner of the contest.

Co-creativity: humans and AI together

So can we really say an AI is an author? Is it the AI, the developers, the users or a combination?

A useful idea for thinking about this is “co-creativity”. This means using generative tools to spark new ideas, or to generate some components for our creative work.

Where an AI creates complete works, such as a complete article, the human becomes the curator or editor. We roll our very sophisticated dice until we get a result we’re happy with.

• Alexandra Louise Uitdenbogerd is Senior Lecturer in Computer Science, RMIT University. This article originally appeared on The Conversation.

High-tech plan to extinguish wildfires in an hour is as challenging as it sounds


The philanthropic foundation of mining billionaire Andrew “Twiggy” Forrest has unveiled a plan to transform how Australia responds to bushfires.

The Fire Shield project aims to use emerging technologies to rapidly find and extinguish bushfires. The goal is to be able to put out any dangerous blaze within an hour by 2025.

Some of the proposed technology includes drones and aerial surveillance robots, autonomous fire-fighting vehicles and on-the-ground remote sensors. If successful, the plan could alleviate the devastating impact of bushfires Australians face each year.

But while bushfire behaviour is an extensively studied science, it’s not an exact one. Fires are subject to a wide range of variables including local weather conditions, atmospheric pressure and composition, and the geographical layout of an area.

There are also human factors, such as how quickly and effectively front-line workers can respond, as well as the issue of arson.

A plan for rapid bushfire detection

The appeal of the Fire Shield plan is in its proposal to use emerging fields of computer science to fight bushfires, especially artificial intelligence (AI) and the Internet of Things (IoT).

While we don’t currently have details on how the Fire Shield plan will be carried out, the use of an IoT bushfire monitoring network seems like the most viable option.

An IoT network is made up of many wirelessly connected devices. Deploying IoT devices with sensors in remote areas could allow the monitoring of changes in soil temperature, air temperature, weather conditions, moisture and humidity, wind speed, wind direction and forest density.

The sensors could also help pinpoint a fire’s location, predict where it will spread and also where it most likely started. This insight would greatly help with the early evacuation of vulnerable communities.
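To make the idea concrete, the sketch below shows the kind of reading such a sensor might report and a crude server-side risk check. The field names, thresholds and values are illustrative assumptions, not part of any announced Fire Shield design.

```python
import json
import time

def build_reading(sensor_id, lat, lon, air_temp_c, soil_temp_c,
                  humidity_pct, wind_speed_kmh, wind_dir_deg):
    # Package one measurement cycle into a small payload suited to a low-bandwidth uplink.
    return {
        "sensor_id": sensor_id,
        "timestamp": int(time.time()),
        "lat": lat,
        "lon": lon,
        "air_temp_c": air_temp_c,
        "soil_temp_c": soil_temp_c,
        "humidity_pct": humidity_pct,
        "wind_speed_kmh": wind_speed_kmh,
        "wind_dir_deg": wind_dir_deg,
    }

def high_fire_risk(reading):
    # Crude placeholder rule; a real system would feed readings into a trained model.
    return reading["air_temp_c"] > 45 and reading["humidity_pct"] < 15

reading = build_reading("au-nsw-00421", -33.71, 150.31,
                        air_temp_c=47.2, soil_temp_c=39.8, humidity_pct=11,
                        wind_speed_kmh=38, wind_dir_deg=270)
print(json.dumps(reading))
print("ALERT" if high_fire_risk(reading) else "ok")
```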

Data collected could be quickly processed and analysed using machine learning. This branch of AI provides intelligent analysis much quicker than traditional computing, or human reckoning.

A more reliable network

A wireless low power wide area network (LPWAN) would be the best option for implementing the required infrastructure for the proposal. LPWAN uses sensor devices with batteries lasting up to 15 years.

And although an LPWAN only allows limited coverage (10-40km) in rural areas, a network with more coverage would need batteries that have to be replaced more often — making the entire system less reliable.

In the event of sensors being destroyed by fire, neighbouring sensors can send this information back to the server to build a sensor “availability and location map”. With this map, tracking destroyed sensors would also help track a bushfire’s movement.
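A minimal sketch of that “availability and location map” idea: if sensors stop checking in, mark them down, and treat the locations of newly silent sensors as a trace of the fire front. The names and the 30-minute timeout are assumptions for illustration.

```python
import time

HEARTBEAT_TIMEOUT_S = 30 * 60          # assume a sensor should check in at least every 30 minutes

now = time.time()
sensors = {                            # sensor_id -> (lat, lon, time of last heartbeat)
    "s-101": (-33.70, 150.30, now - 2 * 60),
    "s-102": (-33.70, 150.31, now - 45 * 60),   # silent for 45 minutes
    "s-103": (-33.71, 150.31, now - 50 * 60),   # silent for 50 minutes
}

def availability_map(sensors, now):
    # True if the sensor has reported recently enough to be considered alive.
    return {sid: (now - last_seen) < HEARTBEAT_TIMEOUT_S
            for sid, (_, _, last_seen) in sensors.items()}

down = [sid for sid, alive in availability_map(sensors, now).items() if not alive]
locations = [sensors[sid][:2] for sid in down]
print("possibly destroyed sensors:", down)            # ['s-102', 's-103']
print("their locations trace the fire's path:", locations)
```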

Dealing with logistics

While it’s possible, the practicalities of deploying sensors for a remote bushfire monitoring network make the plan hugely challenging. The areas to cover would be vast, with varying terrain and environmental conditions.

Sensor devices could potentially be deployed by aircraft across a region. On-ground distribution by people would be another option, but a more expensive one.

However, the latter option would have to be used to distribute larger gateway devices. These act as the bridge between the other sensors on ground and the server in the cloud hosting the data.

Gateway devices have more hardware and need to be set up by a person when first installed. They play a key role in LPWAN networks and must be placed carefully. After being placed, IoT devices require regular monitoring and calibration to ensure the information being relayed to the server is accurate.

Weather and environmental factors (such as storms or floods) have the potential to destroy the sensors. There’s also the risk of human interference, as well as legal considerations around deploying sensors on privately owned land.

Unpredictable interruptions

While statisticians can provide insight into the likelihood of a bushfire starting at a particular location, bushfires remain inherently hard to predict.

Any sensor network will be hampered by unpredictable environmental conditions and technological issues such as interrupted network signals. And such disruptions could lead to delays in important information reaching authorities.

Potential solutions for this include using satellite services in conjunction with an LPWAN network, or balloon networks (such as Google’s project Loon) which can provide better internet connectivity in remote areas.

But even once the sensors can be used to identify and track bushfires, putting a blaze out is another challenge entirely. The Fire Shield plan’s vision “to detect, monitor and extinguish dangerous blazes within an hour anywhere in Australia” will face challenges on several fronts.

It may be relatively simple to predict hurdles in getting the technology set up. But once a bushfire is detected, it’s less clear what course of action could possibly extinguish it within the hour. In some very remote areas, aerial firefighting (such as with water bombers) may be the only option.

That raises the next question: how can we have enough aircraft and controllers ready to be dispatched to a remote place at a moment’s notice? Considering the logistics, it won’t be easy.

• James Jin Kang is Lecturer, Computing and Security, Edith Cowan University. This article originally appeared on The Conversation.


Time to stop networks running on sugar

Photo by rawpixel.com from PxHere

You are the lead network engineer working a weekend upgrade. It’s 1am on a Sunday morning. You are at the network colocation site. You have five hours to get all the servers and switches singing together. There are another 200 things to check before you’re done, and you’re stressed, tired and hungry. What do you do?

Hit the only source of food in the break room – of course – the vending machine.

In that machine, there is a cornucopia of sugary treats, but equally a nightmare of unhealthy calories that have contributed to a host of illnesses, obesity and problems with energy and concentration at work. Here at the network colo, where trillions of bits of data pass through discussing world peace, health, managing finances or playing games, the real problem is diet. Sugar is the issue, and we need to find a way to get it out of our lives.

Sugar is not only the food of convenience in data centers, but in most stressful, time-crunched environments worldwide. You can find the ubiquitous sugary snacks in hospital emergency rooms, on trading floors at financial firms, in coffee shops on main street – the list could go on.

Sugar is one of the world’s oldest documented and most addictive commodities, and it affects all of us, particularly those in high-stress jobs.


Where did sugar come from? 

It’s generally thought that cane sugar was first used in Polynesia and then spread to India. In 510 BC, when Emperor Darius of Persia invaded India, he found “the reed which gives honey without bees”. Its production was kept a closely guarded secret until the Arab invasion of Persia in 642 AD, when the invaders found the sweet plant and learned how to make sugar. As the Arab empire expanded, it took sugar with it, spreading its use through conquered lands such as North Africa and Spain.

"The Landing of Columbus" — by Albert Bierstadt; 1893
“The Landing of Columbus” — by Albert Bierstadt, 1893

It wasn’t until the 11th-century Crusades that the rest of Western Europe caught on to sugar, with its use first recorded in England in 1069 as a luxury ingredient. In the 15th century, when Columbus sailed to the Americas, he took it with him to grow in the Caribbean.

Perhaps the final piece in sugar’s long and illustrious history is sugar beet, first identified as a source of sugar in 1747. This alternative source was again kept secret until the Napoleonic wars of the 19th century, when Britain blockaded sugar imports to continental Europe. By 1880 sugar beet had replaced sugar cane as the main source of sugar in continental Europe.


Black holes of nutrition

Today, sugar has gone from a highly prized fine spice to a ubiquitous addition to everyone’s daily food intake – often without our knowledge, in the form of high-fructose corn syrup (HFCS), often described as a key substance in the global obesity crisis. As an aside, between 1970 and 1990, US consumption of HFCS increased more than 1,000%, and it currently accounts for 40% of all added caloric sweeteners.

When you’re stuck at a remote co-lo with no other food options, it’s hard to resist the lure of the vending machine. Server farms and co-los have become black holes of nutrition, and it’s time we took action.


Read more: “Hello! Please don’t hang up!” Time to cut the line on robocallers


Too much sugar isn’t sweet

But our bodies don’t need added sugar to function properly, and most of us consume much more than we realise. Need we mention the 39 grams of sugar in a 12 oz can of Coca-Cola? That’s almost ten teaspoons! If we don’t need it, then why do we crave it so much?
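For anyone who wants to check that arithmetic, here is a tiny Python snippet converting grams of sugar to teaspoons, assuming roughly 4.2 grams of granulated sugar per teaspoon:

# Back-of-the-envelope check: grams of sugar to teaspoons.
GRAMS_PER_TEASPOON = 4.2  # approximate weight of one teaspoon of granulated sugar

def grams_to_teaspoons(grams: float) -> float:
    return grams / GRAMS_PER_TEASPOON

print(round(grams_to_teaspoons(39), 1))  # 39 g in a 12 oz can -> about 9.3 teaspoons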

Professor Susanne Klaus, a biologist at the German Institute of Human Nutrition in Potsdam, says our craving for sweet foods is inborn. She says sugar stimulates the brain’s reward system by triggering the release of neurotransmitters such as dopamine that promote a sense of well-being. In short, our brains love sugar, but our bodies don’t.

To some extent, we can blame our ancient ancestors for our innate love of sugar. Our bodies break sugar down into glucose and fructose. Fructose appears to activate processes in your body that make you want to hold on to fat, according to Richard Johnson, a professor in the department of medicine at the University of Colorado and author of “The Sugar Fix.” In prehistoric times, when food was scarce, hanging on to fat was an advantage, not a health risk – and with only naturally occurring sugars around, the sweetest foods on offer, apart from honey, were fruit and vegetables.

Biologically predisposed

So, we are biologically predisposed to crave sweet things – but not to the extent that they appear with such ubiquity in modern times. Johnson postulates that our earliest ancestors went through an era of starvation 15 million years ago. “During that time,” he said, “a mutation occurred” that increased the apelike creatures’ sensitivity to fructose, so even small amounts were stored as fat. This was a survival mechanism: Eat fructose and decrease the likelihood you’ll starve to death.

Sugar’s relationship with dopamine also makes sense from an evolutionary perspective: the central nervous system functions we now associate with dopamine – movement, pleasure, attention, mood and motivation – are critical to survival, and sweet foods trigger that reward.

large vending machine
Photo from PxHere

Break the addiction?

Yet current research says sugar is not actually addictive. In the long term, though, excess sugar consumption can make us overweight, which increases the risk of diabetes, high blood pressure and cardiovascular disease. The catalogue of sugar-related illness makes for decidedly unsavoury reading.

More than half of American adults consume excess added sugars, according to the U.S. Department of Health and Human Services. In fact, the average sugar intake in the U.S. is 22 teaspoons per person per day — four times the amount World Health Organization research suggests is healthy.

The 2015-2020 Dietary Guidelines for Americans recommend limiting added sugars to no more than 10% of daily calories. That’s 200 calories, or about 12 teaspoons, on a 2,000-calorie diet. The WHO wants us to reduce that to 5%. And remember, added sugars are empty calories: naturally occurring sugars, like those in fruit, usually come packaged with other nutrients, but added sugar simply contributes unnecessary calories.
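Here is the arithmetic behind those figures as a short Python snippet, assuming about 4 calories per gram of sugar and roughly 4.2 grams per teaspoon:

CALORIES_PER_GRAM = 4      # approximate calories in a gram of sugar
GRAMS_PER_TEASPOON = 4.2   # approximate grams of sugar in a teaspoon

def added_sugar_budget(daily_calories: int, limit_fraction: float):
    """Return the (calories, grams, teaspoons) allowed under a given limit."""
    calories = daily_calories * limit_fraction
    grams = calories / CALORIES_PER_GRAM
    teaspoons = grams / GRAMS_PER_TEASPOON
    return calories, grams, teaspoons

cal, g, tsp = added_sugar_budget(2000, 0.10)  # the 10% guideline
print(f"{cal:.0f} calories = about {g:.0f} g = about {tsp:.0f} teaspoons")
# Output: 200 calories = about 50 g = about 12 teaspoons

cal, g, tsp = added_sugar_budget(2000, 0.05)  # the stricter WHO target
print(f"{cal:.0f} calories = about {g:.0f} g = about {tsp:.0f} teaspoons")
# Output: 100 calories = about 25 g = about 6 teaspoons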

sugar cane
Sugar cane image by mbpogue from PxHere

Time for war

The war is against added sugars. New FDA food labels introduced this year reveal the amount of added sugars in all foods, helping savvy engineers like you not only ascertain how much extra sugar has been added to a product, but also decide whether to avoid it.

The new guidance recommends no more than 50 grammes of added sugar a day – that’s on top of the sugar occurring naturally in food, and it lines up with the 10 percent-of-calories limit above (about 12 teaspoons, or 200 calories’ worth, on a 2,000-calorie diet). Research suggests the average American consumes about 17 teaspoons of added sugar per day, well over that cap.

Sadly the new labelling – the first major change to food labelling in 20 years – was launched in March, when our collective minds were more focused on that guy coughing, masks and hand sanitiser. 

The new campaign is part of the FDA’s comprehensive, multi-year Nutrition Innovation Strategy, designed to empower consumers with information about healthy food choices and to facilitate industry innovation toward healthier foods.


Read More: Printed circuits turn paper into a self-powered keyboard


You’re killing me, sweetie

Mass industrial farming, food innovations and technology have made sugar abundantly available. Bluntly speaking, our bodies are not built to process the large amounts of processed food – read sugar – we are consuming, and we are killing ourselves through overconsumption. And by filling up on high-sugar foods, we crowd out the natural whole foods that are good for us, so nutritional deficiencies and disease start to take hold.

It gets worse: consuming a western diet for as little as one week can subtly impair brain function and encourage slim and otherwise healthy young people to overeat, scientists claim. In the research, Richard Stevenson, a professor of psychology at Macquarie University in Sydney, said: “After a week on a western-style diet, palatable food such as snacks and chocolate becomes more desirable when you are full. This will make it harder to resist, leading you to eat more, which in turn generates more damage to the hippocampus and a vicious cycle of overeating.”

Researchers found that after seven days on a diet high in saturated fat and added sugar, volunteers in their 20s scored worse on memory tests and found junk food more desirable immediately after they had finished a meal.

Technology to the rescue?

Our modern, technology-driven work is partly to blame for the high-sugar, quick-fix society we live in, so it’s no surprise that technology should come to our rescue. But ultimately it comes down to an individual’s power to say no to sugar rather than relying on technology to police our willpower.

Just as sugar has crept, almost uncontrollably, into our diets, the technology to curb it has quietly arrived on our phones. Diet apps help us judge what we are eating: MyFitnessPal, Calorie Counter, Lose It! and other apps track the content of the food we eat. Others are even more direct about sugar.

The ‘Change4Life’ food scanner lets users simply scan a barcode to discover the sugar content of an item. Smart Sugar and Sugar Detox do much the same, with an even stronger emphasis on sugar.
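As a rough sketch of what such an app does under the hood – scan a barcode, look the product up in a nutrition database, and flag its sugar content – here is a short Python example. The endpoint and response format are hypothetical, not the actual Change4Life back end; the 22.5 g per 100 g threshold follows the UK front-of-pack ‘traffic light’ definition of high sugar.

import json
import urllib.request

# Hypothetical nutrition-database endpoint; the real apps use their own back ends.
LOOKUP_URL = "https://example.com/api/products/{barcode}"

def sugar_per_100g(barcode: str) -> float:
    """Fetch a product record by barcode and return its sugar content in g per 100 g.

    Assumes the (hypothetical) API returns JSON like:
    {"name": "Cola 330ml", "nutriments": {"sugars_100g": 10.6}}
    """
    with urllib.request.urlopen(LOOKUP_URL.format(barcode=barcode), timeout=10) as response:
        product = json.load(response)
    return float(product["nutriments"]["sugars_100g"])

def flag_sugar(barcode: str, high_threshold: float = 22.5) -> str:
    """Label a product as high in sugar or not, per 100 g."""
    sugars = sugar_per_100g(barcode)
    verdict = "HIGH in sugar" if sugars > high_threshold else "within the threshold"
    return f"{sugars} g per 100 g: {verdict}"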

While these require you to have the app and be willing to make the effort, health insurance companies are now starting to offer discounts to those who manage their diets well. Customers receive a loyalty card, or a way of having their grocery purchase list sent online directly to the insurer – a service most large grocery chains now offer. If customers make healthy diet choices, they are eligible for discounts on their policy of up to 25 percent. This kind of big-data technology is one possible future for steering people away from foods with too much sugar.

All these options are good, but they don’t help those working in a time-pressed, high-pressure job where the only accessible food is high in sugar.

Dopamine fasts in Silicon Valley

Meanwhile, Silicon Valley techies are reportedly undertaking ‘dopamine fasts’, which extend beyond food to include abstinence from external stimuli, believing “we have become overstimulated by quick ‘hits’ of dopamine from things like social media, technology and food.” Silicon Valley psychologist Dr Cameron Sepah says dopamine fasting is based on a behavioural therapy technique called ‘stimulus control’ that can help addicts by removing triggers to use. He refined it as a way of optimising the health and performance of the CEOs and venture capitalists he works with. He says his patients report improvements in mood, ability to focus and productivity.

Of course, there are those who suggest abstinence is a good thing, and that labelling it dopamine fasting is simply a fad – a 21st-century rebranding of the ancient Buddhist practice of Vipassana silent meditation.


Read More: Quantum internet breakthrough could spell death of hacking


Working in world of unhealthy choices

The question has to be asked: when did the network industry – or any industry – decide that junk food was OK? Remote buildings, odd shifts, long working hours and access only to junk food are a recipe for health disaster.

Even if you try to take a healthy pre-made meal with you, security can be so high that you are not allowed to enter the premises with food. So we are stuck with the dreaded vending machine. 

When, how and why did we agree that this was normal – that this was OK?

The network industry is decidedly modern. It carries little legacy baggage, and server farms certainly don’t have long historical quirks to deal with. The blank slate of a state-of-the-art facility could, and should, include elements of human health and wellbeing – a large part of which is access to healthy food, right?

It’s no surprise that server farms occupy remote locations – for good reasons we all understand – so why has the focus been on technology and not on ensuring the health and welfare of the people inside the building?

server farm
Server farms are hotbeds of unhealthy food

Good food is worth the good fight

Given the plethora of soda-branded vending machines, it feels like, as an industry, we’ve decided junk food is all we deserve. Why are we being denied ‘real food’?

There is a rising tide of healthier vending machines, and as an industry the very least we can demand, as a first step, is access to such machines. But let’s not forget that many so-called ‘healthy’ snacks are still highly processed foods.

And vending machines aren’t going away. A recent report declared that (COVID-friendly) v-commerce solutions will grow 2% per year through 2029 in the US, part of a global sector poised to exceed US$94.6 billion by 2025. Last year, 2019, was a record year for vending machine providers.

Are the network engineer and the vending machine a 21st-century reimagining of the cop and the doughnut? Sugar creep is real: you have a little here, a little there, and before you know it you are consuming far more sugar than you realize.

As an industry, we need to collectively demand better food options in the workplace, to help us help ourselves. Before we all end up shaving decades off our lives thanks to diets heavily reliant on processed junk food laden with added sugar and salt, let’s take action to make network sites healthier places to be.