Musk’s Engineering Philosophy

Musk outlined his five-step engineering process, which must be completed in order:

  1. Make the requirements less dumb. The requirements are definitely dumb; it does not matter who gave them to you. He notes that it’s particularly dangerous if an intelligent person gives you the requirements, as you may not question the requirements enough. “Everyone’s wrong. No matter who you are, everyone is wrong some of the time.” He further notes that “all designs are wrong, it’s just a matter of how wrong.”
  2. Try very hard to delete the part or process. If parts are not being added back into the design at least 10% of the time, not enough parts are being deleted. Musk noted that the bias tends to be very strongly toward “let’s add this part or process step in case we need it.” Additionally, each required part and process must come from a name, not a department, as a department cannot be asked why a requirement exists, but a person can.
  3. Simplify and optimize the design. This is step three as the most common error of a smart engineer is to optimize something that should not exist.
  4. Accelerate cycle time. Musk states “you’re moving too slowly, go faster! But don’t go faster until you’ve worked on the other three things first.”
  5. Automate. An important part of this is to remove in-process testing after the problems have been diagnosed; if a product is reaching the end of a production line with a high acceptance rate, there is no need for in-process testing.

From: https://everydayastronaut.com/starbase-tour-and-interview-with-elon-musk/

Entanglement

A poem by Jane Hirshfield

A librarian in Calcutta and an entomologist in Prague
sign their moon-faced illicit emails,
“ton entanglée.”

No one can explain it.
The strange charm between border collie and sheep,
leaf and wind, the two distant electrons.

There is, too, the matter of a horse race.
Each person shouts for his own horse louder,
confident in the rising din
past whip, past mud,
the horse will hear his own name in his own quickened ear.

Desire is different:
desire is the moment before the race is run.

Has an electron never refused
the invitation to change direction,
sent in no knowable envelope, with no knowable ring?

A story told often: after the lecture, the widow
insisting the universe rests on the back of a turtle.
And what, the physicist
asks, does the turtle rest on?

Very clever, young man, she replies, very clever,
but it’s turtles all the way down.

And so a woman in Beijing buys for her love,
who practices turtle geometry in Boston, a metal trinket
from a night-market street stall.

On the back of a turtle, at rest on its shell,
a turtle.
Inside that green-painted shell, another, still smaller.

This continues for many turtles,
until finally, too small to see
or to lift up by its curious, preacherly head
a single un-green electron
waits the width of a world for some weightless message
sent into the din of existence for it alone.

Murmur of all that is claspable, clabberable, clamberable,
against all that is not:

You are there. I am here. I remember.

Jane Hirshfield, a current chancellor of the Academy of American Poets, is the author of The Beauty, a book of poems, and Ten Windows, a book of essays.

http://discovermagazine.com/2016/jul-aug/entanglement

Ghost in the Shell

A series of tweets by Jon Tsuei:

I’ve been seeing a lot of defenses for the ScarJo casting that seem to lack a nuanced understanding of Ghost In The Shell as a story.

The manga came out in 1989, the first film 1995. An era when Japan was considered the world leader in technology.

Everything hot in that era came out of Japan. Cars, video games, walkmans, all of that. Japan was setting a standard.

This is a country that went from poised to conquer the Pacific to forcibly disarmed. They poured their resources into their economy.

And as a country that was unable to defend themselves, but was a world leader in tech, it created a relationship to tech that is unique.

Ghost In The Shell plays off all of these themes. It is inherently a Japanese story, not a universal one.

This casting is not only the erasure of Asian faces but a removal of the story from its core themes.

You can “Westernize” the story if you want, but at that point it is no longer Ghost In The Shell because the story is simply not Western.

Understand that media from Asia holds a dear place in the hearts of many Asians in the west, simply because western media doesn’t show us.

Ghost In The Shell, while just one film, is a pillar in Asian media. It’s not simply a scifi thriller. Not to me, not to many others.

Respect the work for what it is and don’t bastardize it into what you want it to be.

The Real Thing

It’s to do with knowing and being known. I remember how it stopped seeming odd that in biblical Greek, knowing was used for making love. Whosit knew so-and-so. Carnal knowledge. It’s what lovers trust each other with. Knowledge of each other, not of the flesh but through the flesh, knowledge of self, the real him, the real her, in extremis, the mask slipped from the face.

Every other version of oneself is on offer to the public.

We share our vivacity, grief, sulks, anger, joy… we hand it out to anybody who happens to be standing around, to friends and family with a momentary sense of indecency perhaps, to strangers without hesitation. Our lovers share us with the passing trade. But in pairs we insist that we give ourselves to each other. What selves? What’s left? What else is there that hasn’t been dealt out like a deck of cards?

Carnal knowledge.

Personal, final, uncompromised. Knowing, being known. I revere that. Having that is being rich, you can be generous about what’s shared — she walks, she talks, she laughs, she lends a sympathetic ear, she kicks off her shoes and dances on the tables, she’s everybody’s and it don’t mean a thing, let them eat cake; knowledge is something else, the undealt card, and while it’s held it makes you free-and-easy and nice to know, and when it’s gone everything is pain. Every single thing. Every object that meets the eye, a pencil, a tangerine, a travel poster. As if the physical world has been wired up to pass a current back to the part of your brain where imagination glows like a filament in a lobe no bigger than a torch bulb.

Pain.

Source: Tom Stoppard’s 1982 play, The Real Thing.

A Robot That Takes Walks and Plays Tennis

“A robot can’t decide to go for a walk on its own,” said Rodney Brooks, an artificial intelligence pioneer and founder of Rethink Robotics. “It doesn’t have the intent a dog has.” (Rethink makes factory robots that don’t need cages, and can detect big changes in their work environment. “Is that scientifically hard? No. People in labs would have done that 20 years ago,” said Brooks. “But it’s gotta work 100 percent of the time.”)

Giving a machine intention is a difficult challenge. Software programmers can simulate the problem they’re trying to solve on computers, and progress doesn’t depend on physical movement; it’s about how fast a computer can simulate those movements.

Google’s DeepMind AI software played hundreds of thousands of rounds of the board game Go in a matter of months. It would take a lot longer to test robots on hundreds of thousands of walks in the woods.

To develop robots, you have two options. You can either simulate the environment and the robot in software and hope the results are accurate enough that, once loaded into a machine, it actually walks. Or you can skip the simulation and tinker directly on a robot, hoping to learn things from the real world, but that’s awfully slow.

Google faces this problem with its self-driving cars, and it tests them both ways. It has real cars drive a few thousand miles a week on real roads, and at the same time it simulates millions of miles a week driven by virtual cars on virtual roads. The idea is that the simulator can test out different scenarios to see how the cars react, and the real world can give Google data — and problems — that virtual cars don’t encounter. One time, a car confronted a man in a wheelchair chasing a turkey with a broom. This was not something Google had simulated.

The problem with robots is that they tend to be more advanced than cars. Instead of wheels, you have legs, along with arms, necks, knee joints, and fingers. Simulating all of that accurately can be extremely difficult, but testing out all the different ways you can move the machine in flesh-and-blood reality takes years.

“Rosie the robot, you can’t have it knock over your furniture a hundred thousand times to learn,” said Gary Marcus, chief executive officer of a startup AI company called Geometric Intelligence.

Sergey Levine recently worked on a project to tackle this problem at Google. The company programmed 14 robotic arms to spend 3,000 hours learning to pick up different items, teaching each other as they went. The project was a success, but it took months, and it used robot arms rather than an entire body.

“In order to make AI work in the real world and handle all the diversity and complexity of realistic environments, we will need to think about how to get robots to learn continuously and for a long time, perhaps in cooperation with other robots,” said Levine. That’s probably the only way to get robots that can handle the randomness of everyday tasks.

Source: Bloomberg.

What is mentioned in the article above is called parallel learning. A couple of weeks ago, Sergey Levine wrote in his post:

However, by linking learning with continuous feedback and control, we might begin to bridge that gap, and in so doing make it possible for robots to intelligently and reliably handle the complexities of the real world.

Consider for example this robot from KAIST, which won last year’s DARPA robotics challenge. The remarkably precise and deliberate motions are deeply impressive. But they are also quite… robotic. Why is that? What makes robot behavior so distinctly robotic compared to human behavior? At a high level, current robots typically follow a sense-plan-act paradigm, where the robot observes the world around it, formulates an internal model, constructs a plan of action, and then executes this plan. This approach is modular and often effective, but tends to break down in the kinds of cluttered natural environments that are typical of the real world. Here, perception is imprecise, all models are wrong in some way, and no plan survives first contact with reality.

In contrast, humans and animals move quickly, reflexively, and often with remarkably little advance planning, by relying on highly developed and intelligent feedback mechanisms that use sensory cues to correct mistakes and compensate for perturbations. For example, when serving a tennis ball, the player continually observes the ball and the racket, adjusting the motion of his hand so that they meet in the air. This kind of feedback is fast, efficient, and, crucially, can correct for mistakes or unexpected perturbations. Can we train robots to reliably handle complex real-world situations by using similar feedback mechanisms to handle perturbations and correct mistakes?
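To make that contrast concrete, here is a minimal sketch in Python. It is not taken from Levine’s post; every function, gain, and noise value is invented for illustration. The open-loop routine observes once, commits to a precomputed trajectory, and executes it blindly, while the closed-loop controller re-reads a noisy sensor at every step and corrects toward the latest cue, so it can absorb a drifting target that the open-loop plan never sees.

import random

def sense(target, noise=0.05):
    # Noisy observation of where the target (say, the incoming ball) is right now.
    return target + random.uniform(-noise, noise)

def sense_plan_act(effector, target, drift=0.01, steps=20):
    # Open loop: observe once, compute a fixed trajectory, execute it blindly.
    goal = sense(target)                  # single, imprecise observation
    step = (goal - effector) / steps      # the precomputed "plan"
    for _ in range(steps):
        target += drift                   # the world keeps changing...
        effector += step                  # ...but the plan never adjusts
    return target - effector              # final miss distance

def feedback_control(effector, target, drift=0.01, steps=20, gain=0.4):
    # Closed loop: re-observe every step and correct toward the latest cue.
    for _ in range(steps):
        target += drift                   # the same perturbation as above
        effector += gain * (sense(target) - effector)   # proportional correction
    return target - effector              # final miss distance

random.seed(0)
print("open-loop miss  :", round(sense_plan_act(0.0, 1.0), 3))
print("closed-loop miss:", round(feedback_control(0.0, 1.0), 3))

The numbers do not matter; the structure does. The second controller keeps consuming fresh sensory cues, which is exactly the kind of fast, corrective feedback the tennis example describes.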

While servoing and feedback control have been studied extensively in robotics, the question of how to define the right sensory cue remains exceptionally challenging, especially for rich modalities such as vision. So instead of choosing the cues by hand, we can program a robot to acquire them on its own from scratch, by learning from extensive experience in the real world. In our first experiments with real physical robots, we decided to tackle robotic grasping in clutter.

A human child is able to reliably grasp objects after one year, and takes around four years to acquire more sophisticated precision grasps. However, networked robots can instantaneously share their experience with one another, so if we dedicate 14 separate robots to the job of learning grasping in parallel, we can acquire the necessary experience much faster.
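As a rough illustration of why pooled experience speeds things up, here is another hypothetical Python sketch, not the actual Google setup: several simulated arms attempt grasps independently, every attempt is appended to one shared dataset, and a single policy is updated from that pooled data, so each arm immediately benefits from what the others have tried.

import random

SHARED_EXPERIENCE = []    # (grasp_angle, success) pooled across every arm
BEST_ANGLE = 0.7          # pretend optimum the arms must discover

def attempt_grasp(angle):
    # One grasp attempt: more likely to succeed the closer we are to BEST_ANGLE.
    success = random.random() < max(0.0, 1.0 - 2.0 * abs(angle - BEST_ANGLE))
    SHARED_EXPERIENCE.append((angle, success))
    return success

def update_policy(current_angle, lr=0.1, window=200):
    # Nudge the shared policy toward recent angles that succeeded.
    wins = [a for a, ok in SHARED_EXPERIENCE[-window:] if ok]
    if not wins:
        return current_angle + random.uniform(-0.2, 0.2)   # keep exploring
    return current_angle + lr * (sum(wins) / len(wins) - current_angle)

random.seed(0)
policy_angle = 0.0
for _ in range(40):                      # rounds of collection plus learning
    for arm in range(14):                # 14 arms collecting in parallel
        attempt_grasp(policy_angle + random.uniform(-0.3, 0.3))
    policy_angle = update_policy(policy_angle)

print("attempts:", len(SHARED_EXPERIENCE), "learned angle:", round(policy_angle, 2))

In the real project the shared object is a learned model (a deep network trained on camera images) rather than a single number, but the mechanism is the same: experience gathered by any one arm is immediately available to improve the policy that all of them use.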

The vision modality, or sight intelligence, is a different breed from the kind of intelligence used for playing chess or Go, and it is much more difficult to design. Unlike humans, though, robots can share whatever they have learned fully and instantaneously, which should allow for exponential progress over time.

Maybe in my lifetime I will get to see two robots playing a tennis match against each other. Wouldn’t that be cool? Or a team of robot basketball or football players. I would pay good money to see that too.

Giorgione

But Giorgione is still visible behind all this and your eyes are now sufficiently attuned to see him. That is one of the show’s gifts: it invites the viewer to look at many paintings made in the same place at around the same time and recognise Giorgione’s outstanding originality. For that is surely the telling trait. Here was a man who painted himself as David holding the severed head of Goliath that inspired Caravaggio a century later. Here was a man who could paint the most beguiling of all puzzle paintings and produce the portrait that deservedly appears as the climax of this show: of a most vivacious old lady. Her hair is greying, her cheeks wrinkled, the hand at her breast is supposedly saying I am old, as you will one day be, too. Or at least that is the warning written on the slip of paper tucked in her cuff: “With time”. But look at her profoundly subtle face, in all its charm and intelligence, and the message (surely another late addition) is undermined. This is not the kind of allegory of age everyone else was painting but a portrait of a real woman, endearing, humorous, with the light of experience bright in her eye.

http://www.theguardian.com/artanddesign/2016/mar/13/age-of-giorgione-exhibition-royal-academy-london-review

AlphaGo Shows True Strength

After being compelled to flex its muscles for a short time and gaining the upper hand, AlphaGo began to play leisurely moves. By now, most observers know that this is a feature of the ruthlessly efficient algorithm which guides AlphaGo’s play. Unlike humans, AlphaGo doesn’t try to maximize its advantage. Its only concern is its probability of winning.

The machine is content to win by half a point, as long as it is following the most certain path to success. So when AlphaGo plays a slack-looking move, we may regard it as a mistake, but perhaps it is more accurately viewed as a declaration of victory?

[…]

Lee soldiered on with commendable fighting spirit, probing the computer’s weaknesses here and there. He tried a clever indirect attack against White’s center dragon with Black 77, but AlphaGo’s responses made it feel like it knew exactly what Lee’s plan was.

Next he tried a cunning probe inside White’s territory, with move 115, attempting to break a ladder in sente or live inside White’s territory, but White responded firmly. He attempted to make good on his probe by living inside White’s territory with sharp tactics, but White was unperturbed. Finally, he even tried forcing a complicated ko. At this point, AlphaGo once again showed just how strong and detached its play is by ignoring the ko fight to play honte at White 148.

This move also removed any possibility of a double ko after Black at 148 (which may have been what Lee was planning). Having answered many questions about AlphaGo’s strengths and weaknesses, and having exhausted every reasonable possibility of reversing the game, Lee was tired and defeated.

He resigned after 176 moves.

More here.