Friday, October 14, 2011

Alles auf Anfang, or the song to start your life

I've been wanting to share this song for months. It's a German song by one of my favorite bands: Wir sind Helden (We Are Heroes!). I love the melody; it's one of those songs that puts me in a good mood when I listen to it and makes me want to dance. But the best thing about this song is the lyrics. The message I take from it is that you must take action! If there are things in your life you don't like, it's in your hands to change them. Get moving. Give it your all every day! Alles auf Anfang! Give your all in this new beginning that's coming! Come on, Pumas, let's go!

Here's the song. Enjoy!


German version:


23.55: Alles auf Anfang
Du wirst zahnlos geboren und ohne Zähne gewogen
Kriegst sie bis Mitte zwanzig, schon wieder gezogen
Bist oh so verschüchtert, verzagt und vernagelt
Kein Licht dringt zu dir, so geplagt bist du, sternhageldicht
Was dich runterzieht, sind deine schweren Arme
Wer schleicht, dem wird leicht kalt, darum schleichst du ins Warme
Du nennst es Weltschmerz, ich nenn' es Attitüde
Es ist erst fünf vor zwölf und du bist schon so müde

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"

Nimm deine Zähne, leg sie unter dein Kissen
Und sag der Fee du möchtest folgendes wissen:
"Warum sinkt mir mein Herz in meine schweren Beine?
Ich kann kein Ende sehen von meiner langen Leine"
Das was dich so beschwert, das sind die dicken Bären
die sie dir aufbinden, du könntest dich beschweren
Ob das von Bein haut, das wäre nun zu klären
Wenn die kleinlauten, kleinen Leute im Kleinen deutlich lauter wären

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"

Wer "A" sagt muss auch "B" sagen
Nach dem ganzen "ABC" fragen
Wer "ach" sagt muss auch wehklagen
Wer "ja" sagt auch "ach nee" sagen

Fühlst du dich mutlos? Fass endlich Mut, los!
Fühlst du dich hilflos? Geh' raus und hilf, los!
Fühlst du dich machtlos? Geh' raus und mach, los!
Fühlst du dich haltlos? Such Halt und lass los!

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"

Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Fünf vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Vier vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Drei vor zwölf, alles auf Anfang"
Ihr sagt: "Kein Ende in Sicht"
Wir sagen: "Zwei, eins, auf die Zwölf"



English version!

23:55: Ready to start!
You were born toothless and weighed without teeth,
you get them by your mid-twenties,
only to have them pulled out again.
You're so intimidated, so disheartened and boxed in;
no ray of light reaches you, you're so tormented.

What drags you down are your heavy arms;
whoever sneaks around soon gets cold, and so you sneak into the warmth.
You call it "world-weariness"; I call it "attitude".
It's already five to twelve, and you're already tired.

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

Take your teeth and put them under your pillow,
and ask the tooth fairy everything you want to know:
"Why does my heart sink into my heavy legs?
I can't see the end of my long leash."
What weighs on you is that they're pulling your leg; you could go and complain.
Whether that would do any good remains to be seen,
for when the meek little people speak up together, they are much stronger.

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

Whoever says "A" must also say "B",
must ask for the whole "ABC".
Whoever says "ah" must also lament.
Whoever says "yes" must also say "duh, obviously!"

Feeling down? Come on, cheer up!
Feeling helpless? Come on, go out and help!
Feeling powerless? Come on, go out and take charge!
Feeling disoriented? Come on, go out and get your bearings!

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"
You say: "No end in sight"
We say: "Five to twelve; let's give our all in this new day"

You say: "No end in sight"
We say: "Two, one... right at twelve!"




Something I enjoyed about doing this translation was learning a new German phrase:
"jmd. einen Bären aufbinden". It means something like pulling someone's leg; I took it as teasing someone, making a fool of them, etc. It strikes me as an odd phrase, because "Bären" means bear. O.o

Thursday, October 06, 2011

The art of negotiating, and the difference between genders




Today my university hosted Sara Laschever, author of Ask For It: How Women Can Use the Power of Negotiation To Get What They Really Want and Women Don't Ask: The High Cost of Avoiding Negotiation and Positive Strategies for Change. I thought I'd write a short post about what I learned in this talk and share it with my readers, because I believe that for many people (not just women), asking for things is difficult.

During the talk, Sara discussed how men view negotiating very differently from how women do. For men, negotiating is pleasant, like a baseball game where you need strategy. For women, negotiating is tedious, something horrible, like going to the dentist. The author said this difference in perception is rooted in the difference in how boys and girls are raised. Girls are usually given games that involve caring for others: toy babies, kitchen sets, etc. Boys, meanwhile, are given toys that make them use their own ingenuity to get ahead: train sets where they must build routes and figure out how to get around obstacles, etc. Girls are also usually given different chores than boys. Girls get chores related to looking after babies or younger siblings, or helping in the kitchen. Usually an adult supervises all the chores girls are involved in, while the chores boys are assigned at home involve washing the car, shoveling snow off the sidewalk, tending the garden, taking out the trash, etc. Boys receive less supervision than girls in their chores, and in many cases boys are paid for the work they will do: hey, I'll give you 10 pesos if you wash the car. From childhood, boys learn to negotiate, because they start telling their parents: Only 10 pesos? But it's a big car, and I'll vacuum it too; make it 15! Girls, meanwhile, get used to doing their chores out of love: "Out of love I will take care of my siblings."
On top of that, society looks down on women who are demanding, bossy, or interested in money.
These things mean that by the time they grow up, men and women have very different feelings about the act of negotiating. This explains why 65% of men ask for a salary increase, while only 12% of women do.
Not negotiating or not asking for things leads to big losses, because the person who asked now has a better CV than the person who asked for nothing. The author spoke of cases where men asked their university for money to attend conferences. The university gave them the money, and the men made great connections by being able to attend. The women, who never asked whether the university could pay for the trip, lost the opportunity to attend the conference and expand their horizons.

The author mentioned several points for improving your negotiating. Some of them are:
  • Assume that EVERYTHING is negotiable
  • Think of the world as your oyster (your treasure). Everything is an opportunity.
  • Become more of a badass (build connections with people in power, study a second degree to improve your CV, get several advisors, people who can give you advice)
  • Do your research (find out how much you can ask for; there are online resources that show average salaries at different companies, and you can ask your friends. Being well informed is important.)

Finally, something the author said that I liked: accepting a bad salary is accepting that you're shoddy goods. It's like wine: if you see a cheap 20-peso wine, you don't expect much quality from it, whereas if you see a 200-peso wine, you'll probably assume it is of much better quality and taste. So when you accept a low salary, you're communicating something about yourself: you're saying you're the low-quality 20-peso wine. Which is not what you want! Always accept good, high-quality deals. You're worth it! (Ha, L'Oréal commercial ;)

Something curious the author also said is that in order to persuade people, it's important for a woman to be friendly. (This is for the same reason mentioned earlier: society frowns upon women being bossy and aggressive.)

Lastly, I'd like to write down the advice one of my friends gave me about asking for things: the person you'll be asking for X is an adult who knows how to say NO. So if they can't give it to you, they'll say NO; you don't have to worry, it's not an uncomfortable situation for the other person. And if you ask, you're better off than if you don't, because in the worst case you're right where you started.

Friday, July 01, 2011

Getting images laid (pause) over a video in Android

This post is a short tutorial on how to overlay images on video in Android. I created it after making an Android application that plays a video and, on certain user interactions, displays images on top of the video. This image-on-video effect can also be achieved with ActionScript, but in this tutorial we avoid extra programming tools and stick to working with the Android API.

Before we begin, we need to give a quick overview of some concepts:

Most of the user interface components on Android are Views. A View represents a rectangular area on the screen and is responsible for drawing and event handling. The ImageView class displays an arbitrary image, such as an icon. The VideoView class displays a video file. The ViewGroup class is a special view that can contain other views (called children); it is the base class for layouts.

In Android, a layout holds all the elements that appear to the user and defines where they will be placed. A layout can be declared in an XML file or defined programmatically by creating View objects. One particular type of layout is RelativeLayout, in which each component of the interface is described in relation to another component or to its parent.

The overall idea is that the image-on-video overlay can be accomplished by using a RelativeLayout and placing the VideoView as its first child and the ImageView as its second child. This way, the ImageView will appear to be "on top of" the VideoView during playback.

The step-by-step instructions are as follows:

1. In Eclipse, create a simple Android project from scratch (make sure it has a main activity).

2. Under the res folder in your project, go to the drawable folder (if res has no folder titled "drawable", create it) and add all of the images you plan on working with.

3. Add the video or videos you plan on working with to the SD card of your Android phone. (This can be done by connecting your phone to your PC via USB and, on the phone, selecting "Notifications", then "USB connected", and in the window that appears tapping "Turn on USB storage". After a few seconds a new removable disk should appear on your PC; copy your videos to it. My videos were in mp4 format.)

4. Back in your Android project, in your layout folder, add a new XML file with a name of your choice (for example video_over_image.xml). In this XML file we define the elements and the layout of our application. For this particular application, we want the following layout:
A text box in the upper part of the window, where the user types the number of the video they wish to play. The space below the text box is where the video will be displayed.
The image overlaid on the video will appear in the upper portion of the video (though it is possible to place it wherever one desires).
The XML file that accomplishes this layout is as follows:


<?xml version="1.0" encoding="utf-8"?>

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent">

<TextView android:id="@+id/label"
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:text="Type video name here:" />
<EditText
android:id="@+id/edittext"
android:layout_width="fill_parent"
android:layout_height="wrap_content"/>

<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent">

<Button android:id="@+id/topBtn"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Top"
android:layout_centerHorizontal="true">
</Button>

<VideoView android:id="@+id/surface_view"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
/>

<ImageView android:id="@+id/overlayImage"
android:layout_width="wrap_content"
android:layout_height="wrap_content"

android:layout_below="@+id/topBtn"
/>

</RelativeLayout>

</LinearLayout>



An interesting point about this layout is the dummy button. Because the video is declared right after this dummy button, the video completely "covers" it, so the button never appears on the interface. The button's job is to help position our image: the image is laid out relative to it. In this case, because we wanted the image to appear in the mid-top portion of the video, the image's layout was set to be below this button. Note also that the image is declared after the video, which is what permits the image to be displayed "on top of" the video.


5. In the onCreate method of our Java activity file, we now need to establish that we will be using this layout. We also need a listener on the text box which, after the user has typed the number of the video to play and pressed "enter", starts the desired video. Furthermore, we must establish which images will be overlaid and when.
To keep this example simple, when the user types 1, video1.mp4 (which should already be on the phone's SD card) is played and drawable b0 is overlaid on the video. Similarly, when the user types 2, video2.mp4 is played and drawable b1 is overlaid instead. We also add an effect to the image, specifically alpha blending.
The full code to accomplish this is presented below:



package com.example.android.videooverimage;

import android.app.Activity;
import android.content.res.Resources;
import android.media.MediaPlayer;
import android.media.MediaPlayer.OnCompletionListener;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;
import android.view.KeyEvent;
import android.view.View;
import android.view.View.OnKeyListener;
import android.widget.EditText;
import android.widget.ImageView;
import android.widget.MediaController;
import android.widget.VideoView;

public class VideoOverImageActivity extends Activity
{
    @Override
    public void onCreate(Bundle icicle)
    {
        super.onCreate(icicle);
        setContentView(R.layout.video_over_image);

        final EditText edittext = (EditText) findViewById(R.id.edittext);

        // Listen for the "enter" key: the number typed selects the video/image pair.
        edittext.setOnKeyListener(new OnKeyListener()
        {
            public boolean onKey(View v, int keyCode, KeyEvent event)
            {
                if ((event.getAction() == KeyEvent.ACTION_DOWN)
                        && (keyCode == KeyEvent.KEYCODE_ENTER))
                {
                    int aInt = 0;
                    try
                    {
                        aInt = Integer.parseInt(edittext.getText().toString());
                    }
                    catch (NumberFormatException e)
                    {
                        Log.e("debug", "Error parsing input: " + e.getMessage());
                    }

                    VideoView videoHolder = (VideoView) findViewById(R.id.surface_view);
                    MediaController mc = new MediaController(VideoOverImageActivity.this);
                    boolean handled = true;

                    switch (aInt)
                    {
                        case 1:
                            // Play video1.mp4 from the SD card and overlay drawable b0.
                            startPlaying(videoHolder, mc, "file:///sdcard/video1.mp4", 0);
                            break;
                        case 2:
                            // Play video2.mp4 from the SD card and overlay drawable b1.
                            startPlaying(videoHolder, mc, "file:///sdcard/video2.mp4", 1);
                            break;
                        default:
                            handled = false;
                            break;
                    }
                    return handled;
                }
                return false;
            }
        });
    }

    public void startPlaying(VideoView videoHolder, MediaController mc,
                             String videoUri, int imageIndex)
    {
        Resources res = getResources();
        ImageView image = (ImageView) findViewById(R.id.overlayImage);

        try
        {
            // Look up the drawable named "b<imageIndex>" (e.g. b0, b1) by reflection.
            int drawableId = R.drawable.class.getField("b" + imageIndex).getInt(null);
            image.setImageDrawable(res.getDrawable(drawableId));
            // Make the overlay semi-transparent (0 = invisible, 255 = opaque).
            image.getDrawable().setAlpha(55);
        }
        catch (Exception e)
        {
            Log.e("debug", "Error finding image: " + e.getMessage());
        }

        videoHolder.setMediaController(mc);
        videoHolder.setVideoURI(Uri.parse(videoUri));
        videoHolder.requestFocus();
        videoHolder.start();

        // Loop the video: when it completes, seek back to the start and play again.
        videoHolder.setOnCompletionListener(new OnCompletionListener()
        {
            public void onCompletion(MediaPlayer mp)
            {
                try
                {
                    mp.seekTo(0);
                    mp.start();
                }
                catch (Exception ex)
                {
                    Log.e("debug", "MediaPlayer error: " + ex.toString());
                }
            }
        });
    }
}

Thursday, June 30, 2011

Android SDK on Windows for Dummies: A focus on the debugging part


I have recently started developing Android apps on Windows 7 (life sucks, every day I wish I were on Linux, but it is what it is). Today, for testing my apps, I was given a very unusual smartphone from a dubious manufacturer. Since it was not a typical Android phone, the normal procedure from http://developer.android.com/sdk/win-usb.html#WinUsbDriver did not seem to work :( I followed all of Google's instructions, but when I tried to update the driver I ran into problems: Windows asked me where to search for the driver software, and I selected "Browse my computer for driver software", clicked "Browse", and navigated to C:\Android\usb_driver. I also checked "Include subfolders", clicked Next and... got the following error message: "Windows was unable to install your android phone".

After hours and hours of working around it, I finally found a solution to the problem and thought I'd share it, so people can avoid some of the pitfalls I encountered and the installation will hopefully not be as challenging as it was for me.

Basically, what worked for me was to download PdaNet for Android from http://www.junefabrics.com/android/download.php. I installed PdaNet with the phone connected to the PC and with Android up and running (and not suspended).

PdaNet is technically a tool for supplying Internet access to an unconnected device from a device (such as a mobile phone) that does have Internet access. I believe PdaNet is useful in this case because it automatically sets up the whole environment needed for communication between the computer and the phone.

Once PdaNet was successfully installed, I ran "adb.exe" and "fastboot.exe" from a Windows command prompt. When I ran the latter, I received a message stating that a .dll file was not found; I searched for that file and added its location to my path.
It may be worth stating here that adb.exe is the Android Debug Bridge, a tool for talking to the emulator or a device. Fastboot, on the other hand, is a diagnostic protocol used primarily to modify the flash file system of Android smartphones from another computer over a USB connection.

With this, I had communication with my Android phone!

You can test it by typing adb devices in a command prompt and obtaining a list of connected devices, including the phone. With that, communication between our device and our computer is achieved, so running and testing our application is now a cinch.
Happy hacking :)

Monday, May 16, 2011

LDA is not Ladies Ditching Apes

I have recently been working on Topic Modelling and thought I'd write a brief tutorial on how to automatically divide a text into a series of relevant topics.

Before we dive into our coding, let's give a brief overview of the topic so we are all on the same page:

Topic Modelling is all about automatically finding the thematic structure of a document or a series of documents.

Topic modelling specifies a probabilistic process through which documents can be created. First, a distribution over topics is selected. For example, the topics Love and Mexico could be chosen, each assigned a certain weight or probability. If the article were meant to have a more political bent, Mexico would be assigned a greater weight than Love, whereas if the purpose were to write a romantic novel, Love would get the much higher weight or probability. Once the topics and their corresponding probabilities have been assigned, a topic is chosen at random according to this distribution, and a word from that topic is drawn. This process of randomly choosing a topic and then a word from it is repeated until the system has finished "writing" the article.
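
To make that generative story concrete, here is a minimal Python sketch of it; the topics, vocabulary, and probabilities are all made up for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Two made-up topics: each is a probability distribution over the vocabulary.
vocab = ["amor", "corazon", "tequila", "gobierno", "frontera", "mariachi"]
topics = {
    "Love":   np.array([0.45, 0.35, 0.05, 0.00, 0.00, 0.15]),
    "Mexico": np.array([0.05, 0.00, 0.25, 0.30, 0.25, 0.15]),
}

# Document-level mixture: a "political" article weights Mexico over Love.
topic_names = list(topics)
topic_weights = [0.2, 0.8]  # P(z) for ["Love", "Mexico"]

# Generate a ten-word "document": sample a topic, then a word from it.
words = []
for _ in range(10):
    z = rng.choice(topic_names, p=topic_weights)   # draw a topic
    w = rng.choice(vocab, p=topics[z])             # draw a word from that topic
    words.append(w)
print(" ".join(words))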

Besides creating documents automatically (hasta la vista, literature students :P), Topic Modelling can also infer the set of topics responsible for generating a collection of documents.
We care about Topic Modelling because it can enhance search in large text archives, and it also permits better similarity measures: given two documents, exactly how similar are they?

Different algorithms exist for finding the thematic structure of a document. Today we will focus on one particular algorithm called Latent Dirichlet Allocation (LDA), which is a "...generative probabilistic model for text corpora...".
The intuition behind LDA is that a document is composed of a series of different topics, and each topic is a probability distribution over words. Each document is a random mixture of corpus-wide topics, where each word of a document is drawn from one of these topics. LDA aims to infer how the documents are divided among these topics, and what the topics are. The only information LDA is given is the documents themselves.

In the following we use THE FORMAL notation of LDA (mathematical style!) to make things a bit clearer:
P(z) denotes the topic distribution z of a particular document, and P(w | z) is the probability distribution over words w given topic z. LDA assumes each word wi in a document (where the index refers to the ith word token) is generated by first sampling a topic from the topic distribution, then choosing a word from the topic-word distribution. We write P(zi = j) for the probability that the jth topic was sampled for the ith word token, and P(wi | zi = j) for the probability of word wi under topic j.
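
In this notation, the probability of the ith word token marginalises over all T topics (this identity is standard in the LDA literature, written here in LaTeX):

P(w_i) = \sum_{j=1}^{T} P(w_i \mid z_i = j) \, P(z_i = j)

It is exactly the "choose a topic, then choose a word from it" process expressed as a formula.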

LDA assumes that the topic mixtures follow a Dirichlet distribution, i.e. the mixture weights θ are generated by a Dirichlet prior on θ. Each topic is modelled as a multinomial distribution over words.
Hopefully this brief overview will allow us to have some Python coding fun in our next post!
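
As a small teaser for that post, here is a sketch of fitting LDA with the gensim library (an assumption on my part — any topic-modelling library would do; the corpus and the parameters are toy values):

from gensim import corpora, models

# Toy corpus: each document is a list of tokens.
texts = [
    ["tacos", "pastor", "barbacoa", "tacos"],
    ["threads", "workload", "scheduling", "threads"],
    ["tacos", "mariachi", "barbacoa"],
    ["scheduling", "processors", "workload"],
]

dictionary = corpora.Dictionary(texts)                 # map tokens to ids
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

# Fit LDA with two latent topics.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)

# Print the most probable words of each inferred topic.
for topic_id, words in lda.print_topics(num_topics=2, num_words=4):
    print(topic_id, words)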

Saturday, March 05, 2011

How to use Machine Learning to boost your Parallel Computing

In the past, programmers could find reassurance for the problem of running extremely large programs in the yearly speed-up of computer processors. Every year or so the speed of computers would double, and a faster machine capable of rapidly executing their large code would reach the market. In today's world this is no longer the case: comparing the GHz rating of a desktop processor from two years ago with the speed of new desktop computers shows it has barely increased, if at all, mainly because it is very difficult to build reliable low-cost processors that run significantly faster. This is why the road to running faster code has turned to doubling the number of processors on a single chip. Indeed, researchers believe that in the coming years we will see systems that double the number of cores with every new technology generation.

It is important to note that these multiprocessors have paved the way for parallel computing, in which a large program is divided into smaller programs, each of which is assigned to a processor with shared or independent resources. Parallelism is what is generally used today to improve performance.

Although this approach generally works well, in some cases it has been shown to degrade performance considerably! The problem is that scheduling parallel jobs is a very complicated task that depends on many different factors: the workload, the blocking algorithm in use, the local operating system, the parallel programming language, and the machine architecture. Human experts tend to write the design specifications for these highly complicated tasks, and as a result the specifications tend to be somewhat rigid and unsophisticated [1].

Because of this, machine learning techniques have in recent years provided a solution to this problem. Machine learning is a field that aims to build computer systems that automatically improve with experience. Researchers have applied machine learning algorithms to problems of resource allocation, scheduling, load balancing, and design-space exploration, among others.

One example is the work done in Cost-Aware Parallel Workload Allocation Approach based on Machine Learning Techniques. Here the authors tackle the problem of finding adequate workload allocations in a cost-aware manner by learning, from training examples, how to allocate parallel workload among Java threads.
Given a program, their system computes its feature vector and, using a nearest-neighbour approach, finds among the training examples the best parallel scheme for the new program.

One may initially wonder: what type of training examples were used for this problem, and how were they generated?
The training examples came from a series of Java programs containing different for loops. From each for loop a feature vector was computed, along with an associated label; together these formed one training example. Here, the feature vector corresponded simply to the workload the for loop presented, and the label to the optimal number of threads for that specific workload.
The programs used for the training examples were manually selected, each with the purpose of bringing a certain workload variety to the training pool. The labels were set by an automatic program which tested each workload (loop description) with different numbers of threads and then calculated the thread count required for that specific workload to achieve optimal performance. It is important to note that in this approach the cost of computing the feature vectors was reduced by calculating an implicit estimate of the workload; the features that made up the workload were: 1) loop depth; 2) loop size; 3) number of arrays used; 4) number of statements within the loop body; and 5) number of array references.

Since not all program features play an equal role in workload estimation, different weights were assigned to different features during classification, with higher weights given to features 1), 2) and 4). The paper does not clearly explain how the values of these weights were assigned or what they were. It might have been worthwhile to use a learning algorithm capable of finding the most adequate weights for a given training set, since under certain conditions one feature might be weaker for classification than another, and different weights would then be needed. A broader explanation of the weighting scheme would have provided more insight and restrained this speculation, but it is interesting to ponder nonetheless.
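
To illustrate the flavour of their classifier, here is a minimal weighted nearest-neighbour sketch in Python; the training data and the weight values are invented, since the paper does not report them:

import numpy as np

# Features (from the paper): loop depth, loop size, number of arrays,
# statements in the loop body, number of array references.
# Higher weights on features 1, 2 and 4, as described; values are made up.
WEIGHTS = np.array([2.0, 2.0, 1.0, 2.0, 1.0])

# Hypothetical training pool: (feature vector, optimal thread count).
train_X = np.array([
    [1,  100, 1,  5,  2],
    [2, 5000, 3, 40,  9],
    [3, 9000, 4, 60, 12],
])
train_y = np.array([2, 4, 8])

def predict_threads(loop_features):
    """Return the thread count of the nearest training loop,
    using a feature-weighted Euclidean distance."""
    diffs = train_X - np.asarray(loop_features)
    dists = np.sqrt(((WEIGHTS * diffs) ** 2).sum(axis=1))
    return train_y[np.argmin(dists)]

print(predict_threads([2, 4800, 3, 35, 8]))  # nearest loop suggests 4 threads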

On the other hand, in this example the authors opted for a supervised learning approach, where each training example handed to the system was manually selected and labelled. This is clearly tedious and at times may not be optimal, since manually determining which examples provide more information for the learning process than other candidates is difficult and non-trivial. One therefore wonders whether an unsupervised learning algorithm could have provided better results. In that approach the machine can be "thrown into the wild" and, through observation, discover previously unknown structures or relationships between instances or their components. This could eliminate the problems mentioned above, but has the shortcoming that if the learning phase is done online, the algorithm might take much longer than an instance-based learning approach.



Furthermore, their approach does not seem to account for long-term consequences. The decision for each for loop was made independently of the decisions made for the other for loops in the program, which means that using X threads for a given workload might be locally optimal but not globally optimal; in the long run this could degrade performance. This suggests that, instead of instance-based learning, a better approach for this problem would have been Reinforcement Learning. In Reinforcement Learning the machine is not told which actions to take, but must infer them by analysing what yields the best reward. The loop works as follows: the machine observes the environment's state, executes an action, and receives a reward together with the next state.
For this particular case, had the authors used this learning method, the machine could have analysed the program as a whole and then decided the best long-term thread allocation for each workload. Specifically, the machine would interact with the "environment" (in this case the supplied Java program) over a discrete set of time steps. At each step the machine would "sense" the environment's current state (the number of threads being used in each for loop) and execute an action (assigning additional threads to, or removing threads from, certain for loops). This action would modify the environment's state (which the machine senses at the next time step) and produce an immediate reward (the overall performance obtained for that particular thread assignment).
The machine's objective would be to maximise its long-term cumulative reward by learning an optimal policy that maps states to actions.
In its most basic form, Reinforcement Learning is a knowledge-free, trial-and-error methodology in which the machine tries various actions in numerous system states and learns from the consequences of each action.
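
As a rough illustration of that trial-and-error loop, here is a minimal Python sketch; it is closer to a multi-armed bandit than full reinforcement learning, since the toy problem has no state transitions, and the reward function merely simulates running the program:

import random

random.seed(0)

ACTIONS = [1, 2, 4, 8]   # candidate thread counts per loop
N_LOOPS = 3              # toy program with three for loops

def reward(assignment):
    """Simulated whole-program performance of a thread assignment
    (a stand-in for actually running and timing the program);
    here 4 threads per loop is pretended to be optimal."""
    return sum(-abs(t - 4) for t in assignment)

# Value estimates: (loop index, thread count) -> estimated reward.
Q = {(i, a): 0.0 for i in range(N_LOOPS) for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

for episode in range(500):
    assignment = []
    for i in range(N_LOOPS):
        if random.random() < epsilon:                   # explore
            a = random.choice(ACTIONS)
        else:                                           # exploit
            a = max(ACTIONS, key=lambda x: Q[(i, x)])
        assignment.append(a)
    r = reward(assignment)
    for i, a in enumerate(assignment):                  # update estimates
        Q[(i, a)] += alpha * (r - Q[(i, a)])

print([max(ACTIONS, key=lambda a: Q[(i, a)]) for i in range(N_LOOPS)])
# typically [4, 4, 4] under this toy reward
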
From this it is clear that the advantage of this learning method is that no explicit model is needed, either of the computing system being managed or of the external process that generates workload or traffic. Additionally, Reinforcement Learning is capable of treating dynamical phenomena in the environment; as mentioned before, it can analyse how current decisions may have delayed consequences in both future rewards and future observed states.

Now, while this sounds very promising, it is necessary to also consider the challenges Reinforcement Learning faces in the real world. First, it can suffer from poor scalability in large state spaces; moreover, the performance obtained during online training can at times be below average, due to a lack of domain knowledge or good heuristics. In addition, because reinforcement learning procedures need to include "exploration" of actions, the selection of actions can be exceedingly costly to implement in a live system. This is why many modern applications that use reinforcement learning take a hybrid approach to address these practical limitations. One example is the work done in A Hybrid Reinforcement Learning Approach to Autonomic Resource Allocation. Here the authors propose giving the machine an offline training phase. They suggest that, given enough training examples that follow a certain optimisation policy, the learner (machine) using reinforcement learning will converge to the correct value function; it will then be able to find a new policy that greedily maximises the value function and improves the original policy it was given. In this way, the poor performance obtained with live online training is avoided. Another benefit of their method is that multiple iterations can be performed: training yields a new policy, an improved version of the original. This improved policy can then be fed into the system again, acting as the original non-optimal policy; with this second policy a second data set is collected, which can then be used to train a further improved policy. This makes it possible to run the algorithm iteratively until a desired "reward" is obtained.
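
A minimal sketch of that offline loop, in the spirit of the hybrid approach (not the paper's actual algorithm — the simulated reward stands in for the live system, and the "policy" is just a greedy table lookup):

import random

random.seed(2)

def collect_dataset(policy, episodes=200):
    """Run the current policy against the (simulated) system,
    logging (state, action, reward) observations."""
    data = []
    for _ in range(episodes):
        state = random.randint(0, 4)
        action = policy(state)
        reward = -abs(action - state)    # toy reward: match action to state
        data.append((state, action, reward))
    return data

def fit_policy(data):
    """Fit average-reward estimates from the batch, then act greedily."""
    totals, counts = {}, {}
    for s, a, r in data:
        totals[(s, a)] = totals.get((s, a), 0.0) + r
        counts[(s, a)] = counts.get((s, a), 0) + 1
    q = {k: totals[k] / counts[k] for k in totals}
    def policy(state):
        seen = [a for (s, a) in q if s == state]
        if not seen:
            return random.randint(0, 4)  # fall back to exploration
        return max(seen, key=lambda a: q[(state, a)])
    return policy

policy = lambda s: random.randint(0, 4)  # initial, non-optimal policy
for _ in range(3):                       # each pass trains an improved policy
    policy = fit_policy(collect_dataset(policy))
print([policy(s) for s in range(5)])     # converges to [0, 1, 2, 3, 4]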

It was mentioned before that reinforcement learning suffers from expensive exploration of actions. The authors tackled this problem by replacing the lookup table generally used to represent the value function with a nonlinear function approximator, in particular a neural network. A function approximator addresses the issue because it is a mechanism for generalising training experience across states, so it is no longer necessary to visit every state in the state space. It also allows generalisation across actions, so the need for exploratory off-policy actions is also greatly reduced.
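
To make the contrast with a lookup table concrete, here is a tiny sketch of a value-function approximator; a linear model is used instead of the paper's neural network, purely to keep the example short:

import numpy as np

rng = np.random.default_rng(1)

# Instead of a table Q[state][action], learn weights w so that
# Q(s, a) is approximately w dot phi(s, a). Unvisited (state, action)
# pairs then inherit estimates from similar, visited ones.
def phi(state, action):
    return np.array([1.0, state, action, state * action])

w = np.zeros(4)
alpha = 0.01

def q_value(state, action):
    return w @ phi(state, action)

def update(state, action, target):
    """One gradient step moving Q(s, a) toward the observed target."""
    global w
    error = target - q_value(state, action)
    w += alpha * error * phi(state, action)

# Train on a few thousand observed (state, action, return) samples...
for _ in range(2000):
    s, a = rng.uniform(0, 1), int(rng.integers(1, 5))
    update(s, a, target=s - abs(a - 4))  # toy target value

# ...then generalise to a pair never seen exactly during training.
print(round(q_value(0.33, 4), 2))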

Their hybrid Reinforcement Learning approach was tested on a realistic prototype data center, which dynamically allocates servers among multiple web applications so as to maximise the expected sum of SLA (service level agreement) payments from each application.

Although their proposed solution resolves most of the problems encountered with reinforcement learning, one aspect of their work might call for improvement: in their algorithm, the model of the system is modified with each iteration. They always assume that the model "learned" from one set of policies can never be applied to a set made up of other policies. The authors never explored whether this is always the case: could a model learned under certain policies still be valid under other policies that bear some similarity to the originals, or is it always necessary to relearn the model from scratch whenever the active set of policies changes?
Additionally, the authors used a neural network for finding the states to explore, and although this solved the exploration problem mentioned before, the neural network's hidden states make it impossible to determine beforehand exactly how many states will be explored under the current policies; knowing this number in advance could reduce computational costs, since better planning could be done. In the work done in "An Adaptive Reinforcement Learning Approach to Policy-driven Autonomic Management", the authors addressed these problems and showed how a Reinforcement Learning model can be adapted to accommodate them.
There, the authors analysed how previously learned information about the use of policies can be effectively reused in a new scenario. For this, they consider policy modifications as well as the amount of time used to build the model before the changes. As in A Hybrid Reinforcement Learning Approach to Autonomic Resource Allocation, a state-transition model that uses a set of active expectation policies is defined; but unlike that work, instead of a neural network the authors capture the management system's behaviour through a state-transition graph. What their system lacks, and what could be beneficial in the future, is a direct mapping of how changes in policies affect the state-transition models.

Monday, January 24, 2011

Dreaming of tacos in the city of movie stars, surrounded by angels

(This post is dedicated to my favorite reader!)
My favorite reader (who seems to be the only one I have... hahaha :P) yesterday recommended an eighties German song to me. In general I hate eighties music, I'm all about the sixties, man! But given its peculiar style, and that it was recommended by my favorite reader, I decided to do a post of German Music within Mexican reach!
My translation of D.A.F.'s Kebabträume!

German version:


Kebabträume in der Mauerstadt,
Türk-Kültür hinter Stacheldraht
Neu-Izmir ist in der DDR,
Atatürk der neue Herr.
Miliyet für die Sowjetunion,
in jeder Imbißstube ein Spion.
Im ZK Agent aus Türkei,
Deutschland, Deutschland, alles ist vorbei.

Kebabträume..

Miliyet...

Kebabträume...

Miliyet...

Wir sind die Türken von morgen.
Wir sind die Türken von morgen..


English version!

Kebab dreams in the city of the Wall (kebabs are a typical Turkish dish, usually called Döner kebab by the Turks; "the city of the Wall" likely refers to Berlin)
Turkish culture behind that barbed wire.
The new Turkish capital is in East Germany,
Atatürk the new lord (Atatürk was the first president of Turkey!)
"Nationality" for the Soviet Union (the word "Miliyet" is not German but Turkish, and means nationality)
In every snack bar a spy.
The administration of the communist parties run by someone from Turkey (the song uses the abbreviation ZK, for Zentralkomitee, which was the governing body of the communist parties in Germany, according to Wikipedia)
Germany, Germany, it's all over.
Kebab dreams...
Nationality (in Turkish)
Kebab dreams...
Nationality (in Turkish)
We are the Turks of tomorrow,
We are the Turks of tomorrow....


...wow... I must admit that I LOVED doing this translation! Many thanks to whoever recommended it!
Not only did it help me brush up my German, it also taught me a bit of history.

I think it's harsh how the Turks living in Germany refer to Germany: "it's all over." I don't know if I could bring myself to say something similar about a country while living there.
Many Mexicans live in the USA, but I don't know whether they think or sing: USA, it's all over... USA, it's all over. It's a very nationalist Turkish song that belittles German culture. What does everyone else think?

I think it's hard to be a foreigner in Germany. I know that on trains the police have the right to question anyone they find suspicious and ask for their tickets, and that the selection is usually made along racial lines. So it must be uncomfortable not to have light eyes and blond hair, not to look like a typical German. Maybe that's where this feeling of anger towards Germany comes from: telling it that it's finished, and that they are the ones who hold the power.
Does anyone else feel that the song is extremely aggressive towards Germans?

Sometimes I think that I do like shouting at foreigners about the love I have for Mexico, for our tacos al pastor, the barbacoa, the corridos, the sones jarochos, for all the little things that are Mexico. But I don't know if I would go to the extreme of telling them their country is finished. I know, for example, that several US federal agents have violently killed Mexican youths. But I still don't feel enough hatred in my blood to sing to them that their country has fallen, that it's over.
What do the Mexicans among you think?
I dream of tacos... I dream of tacos in the city full of movie stars, surrounded by angels...