What is machine learning, and how do machines actually learn? These were the questions I wanted to be able to answer when I enrolled in a course on “Machine Learning with R – An Introduction”. The fact that it would be impossible for me to withdraw from this enrolment – a new rule in Ph.D. studies – only dawned on me later. This meant that I had to complete the course successfully at all costs.
The book our teacher selected for the course was written by a data scientist at the University of Michigan. I found it online and began to browse through it: “If science fiction stories are to be believed, the invention of artificial intelligence inevitably leads to apocalyptic wars between machines and their makers. In the early stages, computers are taught to play simple games of tic-tac-toe and chess. Later, machines are given control of traffic lights and communications, followed by military drones and missiles. The machines’ evolution takes an ominous turn once the computers become sentient and learn how to teach themselves. Having no more need for human programmers, humankind is then ‘deleted’. Thankfully, at the time of this writing, machines still require user input.” As I’m a big fan of the ‘Person of Interest’ series, which deals with a similar scenario, Brett Lantz inspired me with his introductory words, even though the book gradually became more complex and increasingly nerdy towards the end.
I downloaded R following the instructions in the book and copied some of the source code into it – it seemed to work. Fortunately, we were given the book’s entire code as an R script, so even I was able to follow the lessons reasonably well. The course lasted a week, and we came to understand how machines really learn, what data they require to do so and how that data must be prepared. Indeed, data constitutes the core of machine learning. If the data is inconsistent or incomplete, then you get – I think you can already guess – an error message.
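A minimal sketch of the kind of data check this boils down to in R, using only built-in functions; the `recipes` data frame and its columns are made up for illustration:

```r
# Made-up stand-in for our recipe data: one row per recipe.
recipes <- data.frame(
  sugar  = c(1, 0, 1, NA),                      # NA = a missing value
  butter = c(1, 1, 0, 1),
  rating = c("good", "poor", "good", "good")
)

# How many rows contain at least one missing value?
sum(!complete.cases(recipes))   # here: 1

# Most modelling functions in R choke on incomplete rows,
# so a common first step is simply to drop them.
clean <- na.omit(recipes)
nrow(clean)                     # here: 3
```

Whether dropping incomplete rows is acceptable depends on the data set, of course; the point is that the check has to happen before any model sees the data.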
However, programming the various models requires a fair amount of work. Even though the code works with the example in the book, this by no means guarantees that it can also be applied to my own data. What the course taught me: you need a keen eye and have to take pleasure in experimentation to be able to program such a model. Incidentally, different models require different data, and some models are better suited to certain data than others.
Thus our final product, which we were able to work on in a group, turned out to be quite a challenge. We decided to analyse a huge data set of recipe assessments with the help of machine learning. Specifically, we asked what ingredients are likely to result in a recipe being rated good or very good. Also, we wanted to find out which of the machine learning models we had learnt in the course was best suited to an analysis of this data. And this is how we proceeded.
To enable a machine to learn, you divide the data set into two parts, one of which is used only to train the machine – in our case, 30% of the data. On the basis of this training data, the model learns how to behave and applies what it has learnt to the remaining 70% of the data. In our project, one of the models recognised that whenever sugar is used as an ingredient, the recipe is rated at least good. The machine then used this pattern, derived from the training data, to make forecasts about the ratings of the remaining recipes. As an aside – I should have realised at this stage already that something couldn’t be right, for I haven’t found every cake recipe to be really delicious so far – despite the sugar!
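Such a split takes only a few lines in R. This sketch uses the built-in `iris` data as a stand-in for our recipe data and follows the 30/70 proportions described above:

```r
set.seed(42)                  # makes the random shuffle reproducible
n   <- nrow(iris)             # iris ships with base R: 150 rows
idx <- sample(n)              # random order, so the split isn't biased
cut <- round(0.3 * n)         # size of the training portion

train <- iris[idx[1:cut], ]           # 30% to learn from
test  <- iris[idx[(cut + 1):n], ]     # 70% to forecast on

nrow(train)   # 45
nrow(test)    # 105
```

Shuffling before splitting matters: taking literally the first rows of a file can bake any ordering of the data (by date, by category, …) into the model.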
So far, so good. You will have guessed by now: unfortunately, I have to report that although we applied everything correctly, we were ultimately unable to come up with any reliable forecasts – irrespective of which model we chose. From k-Nearest Neighbours to Decision Trees (which we even boosted) to a Random Forest model – nothing worked reliably. Nonetheless, we completed the course successfully, because we learnt that when you deal with data, it is just as important to be able to recognise that a result is poor. As they say in another field: operation successful, but patient still died!
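For a flavour of what trying those model families looks like, here is a sketch on the built-in `iris` data (our actual recipe data is not reproduced here). The `class` and `rpart` packages ship with a standard R installation; the Random Forest is only hinted at in a comment, as `randomForest` must be installed separately:

```r
library(class)   # k-Nearest Neighbours
library(rpart)   # decision trees

set.seed(1)
idx   <- sample(nrow(iris), 45)   # 30% for training, as above
train <- iris[idx, ]
test  <- iris[-idx, ]

# k-Nearest Neighbours: classify each test row by its 5 nearest
# training rows in the four numeric feature columns.
knn_pred <- knn(train[, 1:4], test[, 1:4], cl = train$Species, k = 5)

# Decision tree on the same split.
tree      <- rpart(Species ~ ., data = train)
tree_pred <- predict(tree, test, type = "class")

# Share of correct forecasts on the held-back data, per model.
mean(knn_pred == test$Species)
mean(tree_pred == test$Species)

# Random Forest (requires install.packages("randomForest")):
# rf <- randomForest::randomForest(Species ~ ., data = train)
```

Comparing such accuracy figures across models, on data the models have never seen, is exactly how we judged which approach suited our recipe data best – and how we discovered that none of them did.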
Despite the unsuccessful conclusion of our project, I’m still fascinated. I have come to the view that certain applications of machine learning are indeed fit for everyday life. If you are proficient in this area and have good data, future outcomes can actually be predicted with a certain degree of probability. One well-known example is the analysis of human cells: on the basis of relevant criteria, a computer can determine with a very high degree of probability whether or not a cell will develop into a malignant cancer.
As long as machines don’t gain the upper hand Hollywood-style, we’ll be able to benefit greatly from the numerous possibilities of machine learning for a long time to come.