Wednesday, December 28, 2016

Machine learning ND 3, reinforcement learning

Reinforcement learning

This project is easier than I expected. It uses only a very simplified version of the Bellman equation, and using a dictionary to implement the Q-table greatly reduces the complexity.

Smartcab

cd Desktop/Udacity/MLND/course_material/projects/smartcab/

pygame

pip install pygame
python smartcab/agent.py
warning: don’t use conda install! It took me 2 hours to figure out that this was the problem.
With the GUI open, a single trial has about 120 steps and takes about 4.5 minutes.

code structure

agent.py{
  class LearningAgent(env.Agent):{
    __init__(env,learning,epsilon,alpha)
    reset()
    build_state()
    get_maxQ(state)
    createQ(state)
    choose_action(state)
    learn(state,action,reward)
    update()
  }
  run()
}

simulator.py{
  class Simulator(){
      __init__(env,size,update_delay,
      display,log_metrics,optimized){
      # lines 90-110: write the csv header
      }
      run(tolerance,n_test){
      # line 133: "total_trials > 20" controls the number of training trials
      # lines 229-245: write data to the csv file
      }
      }
      render_text(trial,testing){}
      render(trial,testing){}
      pause(){}

  }
}

environment.py{
  class TrafficLight(){}
  class Environment(){
    __init__(verbose,num_dummies,grid_size){}
    create_agent(agent_class,*args,**kwargs){}
    set_primary_agent(agent,enforce_deadline){}
    reset(testing){}
    step(){}
    sense(agent){}
    get_deadline(agent){}
    act(agent,action){}
    compute_dist(a,b){}
  }
  class Agent(){}
  class DummyAgent(Agent){}
}

key implementation:

With the GUI off, this runs super fast.
Revise run() as:
env = Environment(verbose = False, num_dummies = 100, grid_size = (8, 6))
agent = env.create_agent(LearningAgent, learning = True, epsilon = 1,  alpha = 0.3)
env.set_primary_agent(agent, enforce_deadline = True)
sim = Simulator(env, size = None, update_delay = 0.01, display = False, log_metrics = True, optimized = True)
sim.run(tolerance = 0.05, n_test = 10)
Implement the agent functions as:
def reset(self, destination=None, testing=False):
    self.planner.route_to(destination)
    #self.epsilon -= 0.05  # decaying function for question 6
    self.epsilon *= 0.95 # for question 7
    if testing:
        self.epsilon, self.alpha = 0, 0
    return None
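Side check on the decay above: with epsilon starting at 1 and multiplied by 0.95 each trial, the tolerance of 0.05 passed to sim.run() fixes the number of training trials:

```python
# How many training trials until epsilon = 1.0 * 0.95**n drops below
# the tolerance of 0.05 (the point where the simulator stops training
# and switches to testing)?
epsilon, trials = 1.0, 0
while epsilon > 0.05:
    epsilon *= 0.95
    trials += 1
print(trials)  # 59 training trials
```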

def build_state(self):
    waypoint = self.planner.next_waypoint() 
    inputs = self.env.sense(self)          
    deadline = self.env.get_deadline(self) 
    state = (waypoint,tuple([inputs[item] for item in inputs]))
    return state

def get_maxQ(self, state):
    # highest Q-value among all actions for this state
    return max(self.Q[state].values())

def createQ(self, state):
    if state not in self.Q:
        # give idling (None) a slight priority
        self.Q[state] = {None: 0.01, 'left': 0.0, 'right': 0.0, 'forward': 0.0}
    return

def choose_action(self, state):
    import random  # better placed at the top of the file
    self.state = state
    self.next_waypoint = self.planner.next_waypoint()
    waypoint = self.next_waypoint
    actions = [None, 'left', 'right', 'forward']

    if not self.learning:
        return random.choice(actions)

    highest = self.get_maxQ(state)
    action_dict = self.Q[state]

    if random.random() <= self.epsilon:
        # explore: prefer the untried waypoint action, otherwise pick at random
        if action_dict[waypoint] == 0.0:
            return waypoint
        return random.choice(actions)
    # exploit: return the first action whose Q-value equals the maximum
    for key, value in action_dict.items():
        if value == highest:
            return key

def learn(self, state, action, reward):
    if self.learning:
        # simplified update: only the immediate reward, no discounted future term
        self.Q[state][action] += self.alpha * reward
    return
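For comparison, learn() above keeps only the immediate reward. Full tabular Q-learning also bootstraps from the next state's best value. A minimal dictionary-based sketch (hypothetical class and names, not the project's code):

```python
# Minimal dictionary-based Q-learning sketch (hypothetical, not the
# project's code). Unlike the simplified learn() above, the update
# bootstraps: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a)).
import random

class TinyQ:
    def __init__(self, actions, alpha=0.3, gamma=0.9, epsilon=0.1):
        self.Q = {}  # state -> {action: Q-value}
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def createQ(self, state):
        # add an all-zero entry the first time a state is seen
        self.Q.setdefault(state, {a: 0.0 for a in self.actions})

    def choose_action(self, state):
        self.createQ(state)
        if random.random() < self.epsilon:
            return random.choice(self.actions)      # explore
        q = self.Q[state]
        best = max(q.values())                      # exploit, break ties randomly
        return random.choice([a for a, v in q.items() if v == best])

    def learn(self, state, action, reward, next_state):
        self.createQ(state)
        self.createQ(next_state)
        target = reward + self.gamma * max(self.Q[next_state].values())
        self.Q[state][action] += self.alpha * (target - self.Q[state][action])
```

With gamma = 0 the target is just the immediate reward, which is close to the simplified rule used in the project (the project version also omits the -Q(s,a) term).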

Sunday, December 25, 2016

Machine learning ND 1, supervised learning, material collection


the original post has merged into github: https://github.com/jychstar/NanoDegreeProject/tree/master/MachineND/p2_Finding%20Donors

Monday, December 12, 2016

Religion for the nonreligious

Original article by Tim Urban. Here are my excerpts.
Society at large focuses so much on shallow things that it doesn’t stress the need to take real growth seriously. Religions make salvation the end goal instead of self-improvement.
Hundreds of millions of years of evolution has created a zoo of small-minded emotions and motivations in our heads: fear, pettiness, jealousy, greed, instant-gratification.
Over the past 6 million years, humans have taken a big step up the consciousness staircase, to what we can call the “Higher Being”. He’s brilliant, big-thinking and rational, but he’s a very new resident in our heads.
So the human brain is a strange world: a combination of the Higher being and the low-level animals.
It was not until the Higher Being realized “we’re going to die“ that he could take control over the other animals.
The collective force of the animals is what I call “the fog“. The more the animals are running the show and making us deaf and blind to the thoughts and insights of the Higher Being, the thicker the fog around our heads, letting us see only a few inches in front of us.
So our problem: the battle of the Higher Being against the animals is the core internal human struggle, trying to see through the fog to clarity.
Typical struggles:
  • Rational Decision Maker vs. Instant Gratification Monkey
  • Authentic Voice vs. Social Survival Mammoth
The difficult thing is: when you are in the fog, you don’t know you are in the fog. The thicker the fog, the less you are aware of its existence. So the key is being aware that the fog exists and learning how to recognize it.
consciousness staircase

step 1: our lives in the fog

Higher being: high-minded, love-based, advanced emotions
brain animals: small-minded, fear-based, primitive emotions
tribalism makes us hate people different than us

The power of fog

  • bend and loosen your integrity for tiny, insignificant gains which affect nothing in the long term
  • let the fear of what others might think dictate the way you live, when actually everyone is buried in their own lives without really thinking about you anyway
  • keep us in the wrong relationship, job, city, apartment, friendship
  • promise fake future happiness which fades away quickly due to the Hedonic Treadmill; the fog itself is the source of unhappiness

step 2: thinning the fog to reveal context

  • broaden perspectives: education, travel, life experience
  • active reflection: journal, therapy, ask questions like “what would I do if money were no problem”, “how would I advise someone else on this”, “will I regret not doing this when I’m 80?“ These questions ask your Higher Being’s opinion on something without the animals realizing what’s going on. Such tricks help the animals stay calm so the Higher Being can actually talk.
  • Activities that help quiet brain’s unconscious chatter: meditation, exercise, yoga
The easiest way to thin out the fog is to be aware of it. Looking at the whole context keeps you conscious, aware of reality.
examples:
  • The cashier is rude to me. → This dude’s in a dark place. Maybe his day has sucked.
  • Life is so unfair. → Look at how many things I have.
  • Everything is amazing forever. → It’s part of a rocking curve.
  • Life is bad from here forward. → It’s part of a rocking curve.
  • Everything is scary. Why did I say that? Why am I so embarrassing? → Sometimes human brains freak out and think these shitty things. I can feel my brain doing that right now. Oh, brains.
  • I don’t know what’s going on. → I will get rid of these weird things. I see the consequences and I will take action.
When we’re on step 2, this broader scope and increased clarity make us feel calmer and less fearful of things that aren’t actually scary, and the animals, who gain strength from fear, become ridiculous.
It’s extremely hard to stay on step 2 for long. But you can get better at noticing when the fog is thick and develop effective strategies for thinning it out.

step 3 shocking reality

Our brain can’t handle the vastness of space, eternity of time, or the tininess of atoms. You can do it if you focus, but it’s a strain and you can’t hold it for very long.
A whoa moment is like being at the Grand Canyon. It’s difficult to maintain for very long, but only in a whoa moment does your brain actually wrap itself around true reality.
They make me feel ridiculously, profoundly humble. In those moments, all those words religious people use — awe, worship, miracle, eternal connection— make perfect sense. I want to get on my knees and surrender. This is when I feel spiritual.
In those fleeting moments, there’s no fog. My Higher being is in full flow and can see everything in perfect clarity. The animals become the sad little creatures with no fog to obscure things.
Each time you humiliate the animals, a little bit of their future power over you is diminished.
From the Step 1 view of an atheist, life on Earth is taken for granted.
From the Step 3 view, life itself is more than enough to make me excited, lucky and loving. How cool is it that I’m a group of atoms that can think about atoms?

Step 4, the great unknown

Carl Sagan:
science is not only compatible with spirituality, it is a profound source of spirituality.
If we ever reach the point where we think we thoroughly understand who we are and where we came from, we will have failed.
What if a more intelligent species tried its hardest to explain something to us? We may be not able to grasp anything.
Remember the powerful humility mentioned in step 3? Multiply by 100. That’s step 4. If I’m just a molecule floating around an ocean I can’t understand, I might as well just enjoy it.

what next?

nothing clears fog like a deathbed.
There are plenty of good things in the religious world, but it’s something happening in spite of religion and not because of it.
A Truthist knows:
  • where to put my focus
  • what to be wary of
  • how to evaluate my progress, which will help me make sure I’m actually improving and lead to quicker growth
several questions:
  • what’s the goal that you want to evolve towards? why that goal?
  • what does the path look like that gets you there?
  • what’s in your way? How do you overcome those obstacles?
  • what are your practices on day-to-day level?
  • what should your progress look like year-to-year?
  • Most importantly, how do you stay strong and maintain the practice for years and years, not four days?
Articulating it helps clarify it in your head.

Thursday, December 8, 2016

Machine Learning ND 0, Model Evaluation, Kaggle, machine intelligence 3.0

Udacity also provides a free “Intro to Machine Learning” course, see my previous post
a quick feel for machine learning algorithms:
  • Decision Tree
  • Naive Bayes
  • Gradient Descent
  • Linear Regression
  • support vector machines
  • neural network
  • k-means clustering
  • hierarchical clustering
Tips:
  1. stick to your schedule. work regularly.
  2. be relentless in searching for an answer on your own. The struggle and search is where you learn the most. If you come across a term you don’t get, spend time reading up on it.
  3. be an active member of your community.

Model Evaluation and Validation

(some course materials are from “introduction to machine learning”)

statistical analysis

Interquartile range (IQR) = Q3 - Q1
outlier: x < Q1 - 1.5*IQR
outlier: x > Q3 + 1.5*IQR
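A quick sketch of the 1.5*IQR rule (note that quartile conventions differ slightly between tools; this version uses Tukey’s hinges, the medians of the sorted halves):

```python
# Flag outliers with the 1.5 * IQR rule, computing Q1/Q3 as the
# medians of the lower/upper halves of the sorted data (Tukey hinges).
from statistics import median

def iqr_outliers(data):
    s = sorted(data)
    half = len(s) // 2
    q1 = median(s[:half])        # lower half (middle value excluded if n is odd)
    q3 = median(s[-half:])       # upper half
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in s if x < lo or x > hi]

print(iqr_outliers([1, 2, 3, 4, 5, 6, 7, 8, 9, 100]))  # [100]
```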

Evaluation Metrics

from sklearn.metrics import mean_absolute_error
from sklearn.metrics import f1_score
from sklearn.metrics import mean_squared_error
In many cases, such as disease diagnosis, we don’t care so much about the true negatives because they are too common. We care more about the true positives. This concern can be further divided into positive predictive value (precision) and sensitivity (recall).
e.g. when a search engine returns 30 pages, only 20 of which are relevant, while failing to return 40 additional relevant pages, its precision is 20/30 = 2/3 while its recall is 20/60 = 1/3. In this case, precision is how useful the result is, and recall is how complete the result is.
F_\beta = \frac{(1+\beta^2) \times \text{precision} \times \text{recall}}{\beta^2 \times \text{precision} + \text{recall}}
F_1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}
F1 is criticized because recall and precision are evenly weighted. F0.5 weights precision more heavily, while F2 weights recall more heavily.
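Plugging the search-engine numbers above into the F-beta formula (a small hand-rolled helper, not the sklearn call):

```python
# Precision, recall, and F-beta for the search-engine example:
# 30 pages returned, 20 relevant (true positives), 40 relevant pages missed.
def f_beta(precision, recall, beta):
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

tp, fp, fn = 20, 10, 40
precision = tp / (tp + fp)   # 20/30 = 2/3
recall = tp / (tp + fn)      # 20/60 = 1/3
print(round(f_beta(precision, recall, 1), 3))    # 0.444  (F1 = 4/9)
print(round(f_beta(precision, recall, 0.5), 3))  # 0.556  (F0.5 leans toward the higher precision)
print(round(f_beta(precision, recall, 2), 3))    # 0.37   (F2 leans toward the lower recall)
```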

Causes of Error

Bias-variance dilemma:

high bias

a model being unable to represent the complexity of the underlying data
pays too little attention to the data, oversimplifies, high error on the training set

high variance

a model being overly sensitive to the limited data it has been trained on
pays too much attention to the data, overfits, higher error on the test set than on the training set

Kaggle

Ben Hammer:
  • I learn best through creative projects, not lectures.
  • I learn R/Python analytics API’s best through well-crafted examples, not docs
  • Joining a competition can be quite addictive. When you make your first submission, you see everyone above you, and that makes you ask: what are these guys above me doing? How can I do better than them? That question keeps driving you to do better and better at these problems, and it really forces you to explore the scope of supervised machine learning and the different methodologies and approaches you can use. That means you are not trying one or two ways; you are trying a thousand ways to figure out which ones make a really big difference in model performance.
  • There are 2 categories of people approaching problems. One type spends 2 months trying to develop a great idea, implements it in code and sees how it performs. It doesn’t work out.
  • The correct mindset is: I have a lot of different ideas that I think might work out. I want to experiment with them and explore how they work. I want to get through as many of these ideas as possible, to find the couple that really matter and make a big difference. One common pattern among winners is that they make 100 to 1000 submissions: they get through the iterative loop of producing a new result and learning from how it performed very quickly. They optimize their environments and workflows to get through that loop as fast as possible.
The gaming world offers a perfect place to start machine intelligence work (e.g., constrained environments, explicit rewards, easy-to-compare results, looks impressive)—especially for reinforcement learning.
Moral of the story: use a simple model you can understand. Only then move onto something more complex, and only if you need to.
An applied science lab is a big commitment. Data can often be quite threatening to people who prefer to trust their instincts. R&D has a high risk of failure and unusually high levels of perseverance are table stakes. Do some soul searching - will your company really accept this culture?
Data is not a commodity, it needs to be transformed into a product before it’s valuable. Many respondents told me of projects which started without any idea of who their customer is or how they are going to use this “valuable data”. The answer came too late: “nobody” and “they aren’t”

Warm-up communication before the AI nanodegree

Sebastian Thrun: Ask me anything!
2016-12-8
host: Lisbeth Ortega
monolith education vs. microservices education: decouple the content from the degree and evolve each part individually
Ben Kamphaus: “ML and AI algorithms have a role in structuring the way the internet shapes people’s behavior. At present, a lot of this behavior shaping is negative. We see powerful reinforcement for the consumption of false information, distractedness, short-term rewards and less ability to delay gratification. As AI engineers and researchers, what can we do to address this and make technology a better influence on people’s behavior?”
I don’t really agree with the premise of this question. Perhaps because I am an optimist. Take Google, for example. Google search machine is a huge powerful AI system. It has given everyone with online access the ability to access an amazing amount of information. While some of this information is clearly false and misleading, I would not want to live in a world without Google.
The use of intentionally misleading fake news has been discussed a lot after the most recent presidential election in the US. But I am an optimist. When we build new technologies, we often don’t get everything right in the beginning. But then we improve. FB for example is now working on leveraging AI to filter out false news items.
“Where do you see ND graduates fitting into the AI industry, which currently seems to be filled with people holding advanced degrees?”
The AI Industry - so to speak - is very large and growing exponentially. Clearly with a Udacity Nanodegree certificate you won’t be as skilled as - say - someone with a PhD from Stanford or MIT. I think of the Nanodegree program much more like a Master’s degree. You get a specific skill set that is highly sought after in Silicon Valley today.
The nice thing about the Nanodegree program is that it is much more up-to-date and relevant than most Master’s programs. This is because the content is directly built with the companies who seek to hire in these areas, like Amazon or IBM Watson.
I firmly believe that AI will be able to make any office worker more efficient. 75% of working Americans work in offices. I would say most of that office work is highly repetitive. Like lawyers when they draft legal documents, or physicians when they diagnose skin diseases.
Examples are Go (AlphaGo), driving cars (Google Self Driving Car), and some of my own Stanford work on finding skin cancer with an iPhone app. (will come out in Nature early next year).
In agriculture, we used to farm fields manually. Hundreds of years ago, nearly all Europeans worked on farms. One farmer could feed around 4 people max. Today a modern farmer can feed hundreds of people, thanks to efficiency gains.
“For someone interested in specializing in computer vision, between AIND and SDCND, which is the right path? In particular, how would you compare the differences in the computer vision aspect of the AIND and SDCND?”
the self driving car Nanodegree is really focused on self driving cars. The computer vision projects all deal with camera data obtained from a car. Very specific, but also highly desired. In AI we take a broader perspective. Computer vision is used in so many other settings, like document scanning, scene analysis, face finding in Snapchat!
There is of course overlap. There is also overlap with our Machine Learning ND, in that machine learning is such a critical component of AI and of self-driving cars.
“I find it hard to believe AI will ever be able to contain creativity or inspiration. Do you think AI will ever develop humour? Do you think this is actually a possibility? Or is AI really just going to be intelligent decision making?”
More broadly: The breakthrough I really want is an AI that sits in my brain and watches me going about my daily activities. And not just me, but perhaps a million other people. And then after watching me for a while, I want AI to simulate me and see if this simulation can fool - say - my family or my coworkers. That would be amazing.
The link between supervised and unsupervised learning remains vastly underexplored. Many of my Stanford students prime supervised learning tools with massive amounts of unlabeled data. Sometimes they use the algorithms to self-label data, or they pull out different labels only weakly related to the task at hand. In my opinion, most data is kind of unsupervised. But there are still amazing things to be learned from unlabeled data. Look for example at your screen and then move your head around. You are getting training examples on how perspective works. Right there. I wish the field of machine learning spent more time on integrating unlabeled data into supervised learning.
I broadly believe coding will be replaced by training. We will still need people designing architecture, but fewer of us need to be down in the weeds. Instead we will have rich sets of tools. Programming AI will be more like putting those tools together in a meaningful way, and sourcing vast data sets to train these tools.
AI used to be very dogmatic. Researchers in AI would subscribe to one style of AI (eg, non-monotonic logic) and then spend their entire professional career defending it.
This is changing. AI is becoming much more ==pragmatic==. The methods themselves are not as important any more. More important is the architecture and the leverage of data. It’s terrific to see.
Many years ago I re-created the then-defunct Stanford AI Lab. At that time, Stanford’s AI faculty was divided into subgroups, each of which defended their specific style of AI (eg, knowledge-based systems versus probabilistic networks). This is now a matter of the past. Today the faculty is united and focuses on solving big, important societal problems. That’s so great to see.
“In the 20 years since your paper on NeuroChess what leap forward in ML/AI has been the most enlightening to you, for example, Backpropagation, Deep Neural Networks, the link from NNs to Boltzmann - Hamiltonian in physics, Autoencoders or Generative Adversarial Networks?”
First, don’t read my paper on NeuroChess. It’s not my strongest paper :slightly_smiling_face: But more seriously….
I wrote my undergrad thesis (Diplomarbeit) on back-propagation and neural networks. I feel the field has come full circle. Neural networks were “out” for a decade. We used to joke at NIPS (the Neural Information Processing Systems conference - the best one out there among the scientific conferences) that the use of “Neural Network” in the title of a submission was highly correlated with a rejection decision. Now it’s hot again. What changed? More data, faster machines. So amazing seeing Yann LeCun’s work receiving so much attention.
I think the original vision - that machines should be trained instead of being programmed - is finally becoming reality. It took 20+ years!!!
If you want to start a strong company: Pick your favorite office job and see if you can leverage AI to make people 10x as efficient. Radiology for example. Totally ripe for disruption.
Impact in small and medium business: Big times IMHO. In the past only very large companies - like Amazon - had the resources to leverage AI. But this is changing now. I remember meeting the founder of Victoria Secret (Les Wexner) and he told me for years he has used machine learning to learn what sells and what doesn’t. He showed me two under garments with a slight different tone in color, and told me one sells at twice the rate of the other. Small and medium business owners often utilize the data they have. But YOU, our Udacity AI students, can change all this!!!!
In general I recommend Russell/Norvig’s AI book. It’s expensive, but it’s the text every university uses.
Jarek Hirniak: Got it, read it, good, but the code is outdated. There was a great project on Google Summer of Code to rewrite it in a modern language - not sure if it happened.
code for this book: https://github.com/aimacode
Agree. This is one of the many reasons why we work with top companies in creating our Nanodegree programs.
I think the biggest revolution will indeed be AI. Most of us do highly repetitive, mindless work all day, for most parts. With the right AI, we can delegate the boring work to the machines, and spend more time being creative.
overlap between Self-Driving car and AI nano degree?
Actually surprisingly little overlap, since the self-driving car ND program is developed by a separate team and focused on the application. The biggest overlap will be machine learning/deep learning and specifically image understanding.
As a potential student of the AI nanodegree, should I read that book before? After? Or it would be just a reference manual at that point?
Don’t read the book before. It’s really just a good reference.
The new version of the book?
Yes he is. I don’t know when it’s coming out. I hear it will be a substantial revision, not just a minor one. With the recent success of machine learning, much of the other content has become less relevant.
It’s mentioned that IBM Watson and Didi Chuxing are your hiring partners. Could you elaborate more on that? How you make hiring decisions?
We told a few “friends” about this program, and pretty much every company we talk to is eager to hire. Hiring partners are willing to let us say so publicly, and use their logo etc. In return they get preferred access to our student CVs (when students opt in). No company guarantees jobs here - but our partners do pay attention to the Nanodegree credential. To be totally honest here: Udacity’s intent here is to create a new pipeline for individuals to find a job in the tech industry. Not everyone is admitted into MIT or CMU. Since we started working with hiring partners and partner companies, hundreds of students have found jobs already. Over 20 now at Google! So it seems to be working …. knock on wood … still a long way to go ….
But to be clear: our hiring partners make the final hiring decisions.
“Ideally in the future, you’ve envisioned nanodegrees supplanting traditional university models by supplying targeted, specialized skillsets oriented for application. How do you envision university models changing in your vision for the future if nanodegrees take off?”
I believe NDs are complementary to college education. The typical ND student is a lifelong learner (although we also have many college students who want contemporary skills). I believe higher education in general should be with us for our entire professional lives, not just a short initial phase. It turns out that, beyond a college degree, Udacity is perhaps the best way to get contemporary education and start a new career - well - I might be a bit biased here….