Sunday, 9 April 2017

Sending data from Arduino over the ESP-13 shield (Win/Linux/Mac)

I was working on a new project that includes the ESP-13 Wi-Fi shield, but it turned out that I, like many people, couldn't simply "turn it on and expect it to work". So, to make the task simpler for you, I have written this post.

To work with the ESP-13 Wi-Fi shield, follow these steps.


  • Connect the Wi-Fi shield to the Arduino.

                    Arduino Uno and ESP-13 Wi-Fi shield.

               Arduino Mega and ESP-13 Wi-Fi shield.
  • Now connect to the shield's Wi-Fi network from your laptop or mobile.

     

     If the Wi-Fi network does not appear or fails to connect, press and hold the KEY button on the ESP-13 for about 2 seconds, then try connecting again.

     

     

  • Then open any web browser, type 192.168.4.1 in the address bar, and hit Enter.


  • Then configure the Wi-Fi module:


    • Wi-Fi security: leave it open for now
    • IP: 192.168.4.1
    • Baud rate: 9600
    • Data bits: 8
    • Parity: none
    • Stop bits: 1
    • Protocol: TCP
    • Mode: Server

When all the settings are done, click the Submit button; the module will restart after about 5 seconds. Note that the baud rate here must match the Serial.begin(9600) call in the Arduino sketch below.
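If you want to confirm that the module came back up in TCP-server mode, here is a minimal check you can run from the laptop once you are connected to the shield's Wi-Fi (it assumes the shield's server port is 9000, the port used by the receiver script later in this post):

    import socket

    # Try to open a TCP connection to the ESP-13 shield (port 9000 assumed).
    with socket.create_connection(('192.168.4.1', 9000), timeout=5):
        print('ESP-13 TCP server is reachable')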

  


  • Now upload a test program to the Arduino.

     

    void setup()
    {
      Serial.begin(9600);
    }

    void loop()
    {
      delay(1000);
      Serial.println(" - hello ESP8266 WiFi"); //output the serial data
    }


                  Note: the SW switch must be OFF while uploading the program to the Arduino; otherwise the upload will fail, because the Arduino uses the TX/RX pins (shared with the shield) to upload the sketch.

  • Now switch SW back ON and connect to the shield's Wi-Fi network.

  • Now write a Python program to receive the data from the TCP socket.

     

    import socket

    # IP address and TCP port of the ESP-13 shield's server
    server_address = ('192.168.4.1', 9000)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting to %s port %s' % server_address)
    sock.connect(server_address)
    print("Connection made!")

    try:
        while True:
            data = sock.recv(16)
            if not data:
                break
            print(data.decode(errors='replace'), end='')
    finally:
        print('closing socket')
        sock.close()
 
 
 
  • Run the script while connected to the shield's Wi-Fi network; the output will look something like this.
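     Since the Arduino sketch prints the same line once per second, the receiver's console output should look roughly like the following (exact line breaks depend on how the 16-byte reads split the stream):

    connecting to 192.168.4.1 port 9000
    Connection made!
     - hello ESP8266 WiFi
     - hello ESP8266 WiFi
     - hello ESP8266 WiFi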

     

Wednesday, 11 January 2017

How to make an image classifier in 5 minutes




In this ML lab, we will be using transfer learning, which means we start with a model that has already been trained on another problem and then retrain it on a similar problem. Training a deep network from scratch can take days, but transfer learning can be done in short order.
We are going to use the Inception v3 network. Inception v3 was trained for the ImageNet Large Scale Visual Recognition Challenge using data from 2012, and it can differentiate between 1,000 different classes, like "Dalmatian" or "dishwasher". We will use this same network, but retrain it to tell apart a small number of classes based on our own examples.




Things we need:-
  1. Linux desktop / VM
  2. Docker
  3. Tensorflow
  4. Image Dataset (large)
What you will learn
  1. How to install and run TensorFlow Docker images
  2. How to use Python to train an image classifier
  3. How to classify images with your trained classifier

Get Started

  In this ML lab, you will learn how to install and run TensorFlow on a single machine, and you will train a simple classifier to classify images of flowers. First you will need a large set of labelled images to train on. The flower dataset has five directories, each containing photos of a different type of flower. You can download it from here, or fetch it from the command line:

cd $HOME
mkdir tf_files
cd tf_files
curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
tar xzf flower_photos.tgz

# On OS X, see what's in the folder:
open flower_photos

Each directory contains images of one type of flower, and the directory names are the labels used for classification. Put all of these directories inside one main directory; in my case it is "flower_photos".
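As a quick sanity check, you can list each label and how many images it contains. This is a minimal sketch; the path assumes you extracted the archive into $HOME/tf_files as shown above:

    import os

    # Directory where flower_photos.tgz was extracted (see the commands above).
    base = os.path.expanduser('~/tf_files/flower_photos')

    # Each subdirectory name is a label; count the image files under it.
    for label in sorted(os.listdir(base)):
        path = os.path.join(base, label)
        if os.path.isdir(path):
            print(label, '->', len(os.listdir(path)), 'images')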

Install Docker in Linux.
  1. Log into your Ubuntu installation as a user with sudo privileges.
  2. Update your APT package index.
    $ sudo apt-get update
     
    
  3. Install Docker.
    $ sudo apt-get install docker-engine
     
    
  4. Start the docker daemon.
    $ sudo service docker start 
     
  5.  Verify that docker is installed correctly by running the hello-world image.
    $ sudo docker run hello-world
    

Install Tensorflow in Docker.


  1. Start a terminal and run the TensorFlow Docker image:

          docker run -it gcr.io/tensorflow/tensorflow:latest-devel

        Check that TensorFlow works by invoking Python from the container's command line (you'll see a prompt like "root@xxxxxxx#"):
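        A minimal sanity check, using the classic TensorFlow 1.x "hello world" (the 1.x API is what this latest-devel image shipped at the time):

    # At the container's Python prompt:
    import tensorflow as tf
    hello = tf.constant('Hello, TensorFlow!')
    sess = tf.Session()
    print(sess.run(hello))   # should print something like: Hello, TensorFlow!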

  2. Exit Docker with Ctrl+D.

  3. Start Docker with local files available.

The TensorFlow Docker image doesn't contain the flower data, so we'll make it available by mounting the tf_files directory into the container:
  
docker run -it -v $HOME/tf_files:/tf_files  gcr.io/tensorflow/tensorflow:latest-devel

At the Docker prompt, you can see it's linked as a top level directory.
  • ls /tf_files/
    # Should see: flower_photos  flower_photos.tgz   
    
    

    Retrieving the training code

    The Docker image you are using contains the latest GitHub TensorFlow tools, but not every last sample. You need to retrieve the full sample set this way.
  • cd /tensorflow
  • git pull  

  (Re)training Inception

    At this point, we have a trainer, we have data, so let's train! We will train the Inception v3 network.
    As noted in the introduction, Inception is a huge image classification model with millions of parameters that can differentiate a large number of kinds of images. We're only training the final layer of that network, so training will end in a reasonable amount of time.
    Start your image retraining with one big command:
     
    # In Docker
    python tensorflow/examples/image_retraining/retrain.py \
    --bottleneck_dir=/tf_files/bottlenecks \
    --how_many_training_steps 500 \
    --model_dir=/tf_files/inception \
    --output_graph=/tf_files/retrained_graph.pb \
    --output_labels=/tf_files/retrained_labels.txt \
    --image_dir /tf_files/flower_photos
     

    If you have plenty of time, remove the --how_many_training_steps 500 flag and let the script run its (longer) default number of training steps, which generally gives better accuracy.
     
     

    Using the Retrained Model

    The retraining script will write out a version of the Inception v3 network with a final layer retrained to your categories to /tf_files/retrained_graph.pb, and a text file containing the labels to /tf_files/retrained_labels.txt.
    These files are both in a format that the C++ and Python image classification examples can use, so you can start using your new model immediately.

    Classifying an image

    Here is a Python script that loads your new graph file and uses it to classify an image.

    label_image.py

    import sys
    import tensorflow as tf

    image_path = sys.argv[1]

    # Read in the image_data
    image_data = tf.gfile.FastGFile(image_path, 'rb').read()

    # Loads label file, strips off carriage return
    label_lines = [line.rstrip() for line
                       in tf.gfile.GFile("/tf_files/retrained_labels.txt")]

    # Unpersists graph from file
    with tf.gfile.FastGFile("/tf_files/retrained_graph.pb", 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')

    with tf.Session() as sess:
        # Feed the image_data as input to the graph and get first prediction
        softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
       
        predictions = sess.run(softmax_tensor, \
                 {'DecodeJpeg/contents:0': image_data})
       
        # Sort to show labels of first prediction in order of confidence
        top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
       
        for node_id in top_k:
            human_string = label_lines[node_id]
            score = predictions[0][node_id]
            print('%s (score = %.5f)' % (human_string, score))

 Or you can download the script directly:

  • curl -L https://goo.gl/tx3dqg > $HOME/tf_files/label_image.py 
    
    
    
    

Restart your Docker image:
docker run -it -v $HOME/tf_files:/tf_files  gcr.io/tensorflow/tensorflow:latest-devel 
 
Now, run the Python file you created, first on a daisy:
# In Docker
python /tf_files/label_image.py /tf_files/flower_photos/daisy/21652746_cc379e0eea_m.jpg
 
 
 
 
  • Thank you, guys! In my next post I will show how to use TensorFlow with webcam images using OpenCV.