How to Build a Simple Image Recognition System with TensorFlow (Part 1)

This is not a general introduction to artificial intelligence, machine learning, or deep learning. There are already plenty of great articles covering these topics (for example here or here).

And this is not a discussion about whether AI will enslave humankind or merely steal all our jobs. You can find plenty of speculation and some premature fearmongering elsewhere.

Instead, this post is a detailed description of how to get started with machine learning by building a system which is (somewhat) able to recognize what it sees in an image.

I am currently on a journey to learn about artificial intelligence and machine learning. And the way I learn best is not by just reading about things, but by actually building things and gaining hands-on experience. And that's what this post is about. I want to show you how you can build a system that performs a simple computer vision task: recognizing image content.

I don't claim to be an expert myself. I'm still learning, and there is a lot left to learn. I'm describing what I've been playing around with, and if it is somewhat interesting or helpful to you, that's great! If, on the other hand, you find mistakes or have suggestions for improvement, please let me know, so I can learn from you.

You don't need any prior experience with machine learning to be able to follow along. The example code is written in Python, so some basic knowledge of Python would be great, but knowledge of any other programming language is probably sufficient.

Why image recognition?

Image recognition is a great task for developing and testing machine learning approaches. Vision is arguably our most powerful sense and comes naturally to us humans. But how do we actually do it? How does the brain translate the image on our retina into a mental model of our surroundings? I don't think anyone knows exactly.

The point is: it is seemingly easy for us to do, so easy that we don't even need to put any conscious effort into it, but difficult for computers to do. (Actually, it might not be that easy for us either; maybe we're just not aware of how much work it is. More than half of our brain seems to be directly or indirectly involved in vision.)

How can we get computers to perform visual tasks when we don't even know how we are doing it ourselves? That's where machine learning comes into play. Instead of trying to come up with detailed step-by-step instructions on how to interpret images and translating that into a computer program, we let the computer figure it out by itself.

The goal of machine learning is to give computers the ability to do something without being explicitly told how to do it. We just provide some kind of general structure and give the computer the opportunity to learn from experience, similar to how we humans learn from experience too.

But before we start thinking about a full-blown solution to computer vision, let's simplify the task somewhat and look at a specific sub-problem which is easier for us to handle.

Image Classification and the CIFAR-10 Dataset

We will try to solve a problem which is as simple and small as possible, while still being difficult enough to teach us valuable lessons. All we want the computer to do is the following: when presented with an image (with specific image dimensions), our system should analyze it and assign a single label to it. It can choose from a fixed number of labels, each being a category describing the image's content. Our goal is for our model to pick the correct category as often as possible. This task is called image classification.

We will use a standardized dataset called CIFAR-10. It consists of 60,000 images, with 10 different categories and 6,000 images per category. Each image has a size of only 32 by 32 pixels. The small size sometimes makes it difficult for us humans to recognize the correct category, but it simplifies things for our computer model and reduces the computational load required to analyze the images.

The way we input these images into our model is by feeding the model a whole bunch of numbers. Each pixel is described by three floating point numbers representing the red, green, and blue values for this pixel. This results in 32 x 32 x 3 = 3,072 values for each image.
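
To make that concrete, here is a toy NumPy illustration (not part of the model code) of how a 32 by 32 RGB image turns into 3,072 numbers:

```python
import numpy as np

# A fake CIFAR-10 image: 32 x 32 pixels with 3 color channels each
image = np.random.rand(32, 32, 3).astype(np.float32)

# Flatten it into the single row of values the model receives
flat = image.reshape(-1)
print(flat.shape)  # (3072,)
```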

Apart from CIFAR-10, there are plenty of other image datasets which are commonly used in the computer vision community. Using standardized datasets serves two purposes. First, it is a lot of work to create such a dataset. You need to find the images, process them to fit your needs, and label all of them individually. The second reason is that using the same dataset allows us to objectively compare different approaches with each other.

In addition, standardized image datasets have led to the creation of computer vision high score lists and competitions. The most famous competition is probably the Image-Net competition, in which there are 1000 different categories to detect. 2012's winner was an algorithm developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton from the University of Toronto (technical paper) which dominated the competition and won by a huge margin. This was the first time the winning approach used a convolutional neural network, which had a great impact on the research community. Convolutional neural networks are artificial neural networks loosely modeled after the visual cortex found in animals. This technique had been around for a while, but at the time most people did not yet see its potential to be useful. This changed after the 2012 Image-Net competition. Suddenly there was a lot of interest in neural networks and deep learning (deep learning is just the term used for solving machine learning problems with multi-layer neural networks). That event plays a big role in starting the deep learning boom of the last couple of years.

Supervised Learning

How can we use the image dataset to make the computer learn on its own? Even though the computer performs the learning part by itself, we still have to tell it what to learn and how to do it. The way we do this is by specifying a general process of how the computer should evaluate images.

We define a general mathematical model of how to get from input image to output label. The model's concrete output for a specific image then depends not only on the image itself, but also on the model's internal parameters. These parameters are not provided by us; instead, they are learned by the computer.

The whole thing turns out to be an optimization problem. We start by defining a model and supplying starting values for its parameters. Then we feed the image dataset with its known and correct labels to the model. That's the training stage. During this phase the model repeatedly looks at the training data and keeps changing the values of its parameters. The goal is to find parameter values that result in the model's output being correct as often as possible. This kind of training, in which the correct solution is used together with the input data, is called supervised learning. There is also unsupervised learning, in which the goal is to learn from input data for which no labels are available, but that's beyond the scope of this post.

When the training is finished, the model's parameter values don't change anymore, and the model can be used for classifying images which were not part of its training dataset.

TensorFlow

TensorFlow is an open source software library for machine learning, which was released by Google in 2015 and has quickly become one of the most popular machine learning libraries used by researchers and practitioners all over the world. We use it to do the numerical heavy lifting for our image classification model.

Building the Model, a Softmax Classifier

The full code for this model is available on GitHub. To use it, you need to have the following installed:

  • Python (the code is tested with Python 2.7, but Python 3.3+ should work as well, link to installation instructions)
  • TensorFlow (link to installation instructions)
  • CIFAR-10 dataset: Download the Python version of the dataset from https://www.cs.toronto.edu/~kriz/cifar.html or use the direct link to the compressed archive. Place the extracted cifar-10-batches-py/ directory in the directory where you are putting the Python source code, so that the path to the images is /path-to-your-python-source-code-files/cifar-10-batches-py/.

Alright, now we're finally ready to go. Let's look at the main file of our experiment, softmax.py, and analyze it line by line:

The from __future__ statements should be present in all TensorFlow Python files to ensure compatibility with both Python 2 and 3, according to the TensorFlow style guide.
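
Concretely, that means lines like these at the top of the file (a minimal sketch; the exact statements in softmax.py may differ):

```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
```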

Then we are importing TensorFlow, numpy for numerical calculations, and the time module. data_helpers.py contains functions that help with loading and preparing the dataset.
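
A plausible version of these imports (assuming the helper module is simply imported by its file name):

```python
import time

import numpy as np
import tensorflow as tf

import data_helpers
```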

We start a timer to measure the runtime and define some parameters. I’ll talk about them later when we’re actually using them. Then we load the CIFAR-10 dataset. Since reading the data is not part of the core of what we’re doing, I put these functions into the separate data_helpers.py file, which basically just reads the files containing the dataset and puts the data in a data structure which is easy to handle for us.
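
A sketch of this setup code follows. The parameter names and values are my illustrative choices, except that the results further down suggest max_steps is 1000:

```python
beginTime = time.time()

# Parameter definitions (values are illustrative guesses)
batch_size = 100
learning_rate = 0.005
max_steps = 1000

# Load all CIFAR-10 data into a dictionary
data_sets = data_helpers.load_data()
```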

One important thing to mention, though: load_data() splits the 60,000 images into two parts. The bigger part contains 50,000 images. This training set is what we use for training our model. The other 10,000 images are called the test set. Our model never gets to see those until the training is finished. Only then, when the model's parameters can't be changed anymore, do we use the test set as input to our model and measure its performance on it.

This separation of training and testing data is very important. We wouldn't know how well our model is able to generalize if it was exposed to the same dataset for training and for testing. In the worst case, imagine a model which exactly memorizes all the training data it sees. If we were to use the same data for testing it, the model would perform perfectly by just looking up the correct solution in its memory. But it would have no idea what to do with inputs which it hasn't seen before.

This concept of a model learning the specific features of the training data, while possibly neglecting the general features which we would have preferred it to learn, is called overfitting. Overfitting and how to avoid it is a big issue in machine learning. More information about overfitting, and why it is generally advisable to split the data into not only 2 but 3 different datasets, can be found in this video (youtube mirror) (the video is part of Andrew Ng's great free machine learning course on Coursera).

To get back to our code, load_data() returns a dictionary containing

  • images_train: the training dataset as an array of 50,000 by 3,072 (= 32 pixels x 32 pixels x 3 color channels) values.
  • labels_train: 50,000 labels for the training set (each a number between 0 and 9 representing which of the 10 classes the training image belongs to)
  • images_test: test set (10,000 by 3,072)
  • labels_test: 10,000 labels for the test set
  • classes: 10 text labels for translating the numerical class value into a word (0 for ‘plane’, 1 for ‘car’, etc.)

Now we can start building our model. The actual numerical computations are being handled by TensorFlow, which uses a fast and efficient C++ backend to do this. TensorFlow wants to avoid repeatedly switching between Python and C++ because that would slow down our calculations.

The common workflow is therefore to first define all the calculations we want to perform by building a so-called TensorFlow graph. During this stage no calculations are actually performed; we are merely setting the stage. Only afterwards do we run the calculations by providing input data and recording the results.

So let's start defining our graph. We first describe what our input data for the TensorFlow graph looks like by creating placeholders. These placeholders do not contain any actual data; they just specify the input data's type and shape.

For our model, we’re first defining a placeholder for the image data, which consists of floating point values (tf.float32). The shape argument defines the input dimensions. We will provide multiple images at the same time (we will talk about those batches later), but we want to stay flexible about how many images we actually provide. The first dimension of shape is therefore None, which means the dimension can be of any length. The second dimension is 3,072, the number of floating point values per image.

The placeholder for the class label information contains integer values (tf.int64), one value in the range from 0 to 9 per image. Since we’re not specifying how many images we’ll input, the shape argument is [None].
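
The two placeholder definitions might look like this (a sketch based on the shapes described above):

```python
images_placeholder = tf.placeholder(tf.float32, shape=[None, 3072])
labels_placeholder = tf.placeholder(tf.int64, shape=[None])
```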

weights and biases are the variables we want to optimize. But let’s talk about our model first.

Our input consists of 3,072 floating point numbers and the desired output is one of 10 different integer values. How do we get from 3,072 values to a single one? Let’s start at the back. Instead of a single integer value between 0 and 9, we could also look at 10 score values — one for each class — and then pick the class with the highest score. So our original question now turns into: How do we get from 3,072 values to 10?

The simple approach which we are taking is to look at each pixel individually. For each pixel (or more accurately each color channel for each pixel) and each possible class, we’re asking whether the pixel’s color increases or decreases the probability of that class.

Let’s say the first pixel is red. If images of cars often have a red first pixel, we want the score for car to increase. We achieve this by multiplying the pixel’s red color channel value with a positive number and adding that to the car-score. Accordingly, if horse images never or rarely have a red pixel at position 1, we want the horse-score to stay low or decrease. This means multiplying with a small or negative number and adding the result to the horse-score.

For each of the 10 classes we repeat this step for each pixel and sum up all 3,072 values to get a single overall score, a sum of our 3,072 pixel values weighted by the 3,072 parameter weights for that class. In the end we have 10 scores, one for each class. Then we just look at which score is the highest, and that’s our class label.

The notation for multiplying the pixel values with weight values and summing up the results can be drastically simplified by using matrix notation. Our image is represented by a 3,072-dimensional vector. If we multiply this vector with a 3,072 x 10 matrix of weights, the result is a 10-dimensional vector containing exactly the weighted sums we are interested in.

The actual values in the 3,072 x 10 matrix are our model parameters. If they are random/garbage our output will be random/garbage. That’s where the training data comes into play. By looking at the training data we want the model to figure out the parameter values by itself.
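
Those two lines plausibly look like this (my reconstruction, assuming both parameters start out as zeros, as described below):

```python
weights = tf.Variable(tf.zeros([3072, 10]))
biases = tf.Variable(tf.zeros([10]))
```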

All we’re telling TensorFlow in the two lines of code shown above is that there is a 3,072 x 10 matrix of weight parameters, which are all set to 0 in the beginning. In addition, we’re defining a second parameter, a 10-dimensional vector containing the bias. The bias does not directly interact with the image data and is added to the weighted sums. The bias can be seen as a kind of starting point for our scores.

Think of an image which is totally black. All its pixel values would be 0, therefore all class scores would be 0 too, no matter what the weights matrix looks like. Having biases allows us to start with non-zero class scores.

This is where the prediction takes place. We’ve arranged the dimensions of our vectors and matrices in such a way that we can evaluate multiple images in a single step. The result of this operation is a 10-dimensional vector for each input image.
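
The prediction itself is then a single matrix multiplication plus the bias (a sketch; the variable name logits is referenced further down):

```python
logits = tf.matmul(images_placeholder, weights) + biases
```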

The process of arriving at good values for the weights and bias parameters is called training and works as follows: First, we input training data and let the model make a prediction using its current parameter values. This prediction is then compared to the correct class labels. The numerical result of this comparison is called loss. The smaller the loss value, the closer the predicted labels are to the correct labels and vice versa.

We want the model to minimize the loss, so that its predictions are close to the true labels. But before we look at the loss minimization, let's take a look at how the loss is calculated.

The scores calculated in the previous step, stored in the logits variable, contain arbitrary real numbers. We can transform these values into probabilities (real values between 0 and 1 which sum to 1) by applying the softmax function, which basically squeezes its input into an output with the desired attributes. The relative order of its inputs stays the same, so the class with the highest score stays the class with the highest probability. The softmax function's output probability distribution is then compared to the true probability distribution, which has a probability of 1 for the correct class and 0 for all other classes.
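
For illustration, here is a standalone NumPy version of the softmax function (not part of the model code; in our graph TensorFlow applies softmax inside the loss function used below):

```python
import numpy as np

def softmax(scores):
    # Shift by the maximum for numerical stability; the result is unchanged
    exps = np.exp(scores - np.max(scores))
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))
# -> [0.659 0.2424 0.0986]: positive values summing to 1,
#    with the order of the inputs preserved
```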

We use a measure called cross-entropy to compare the two distributions (a more technical explanation can be found here). The smaller the cross-entropy, the smaller the difference between the predicted probability distribution and the correct probability distribution. This value represents the loss in our model.

Luckily TensorFlow handles all the details for us by providing a function that does exactly what we want. We compare logits, the model’s predictions, with labels_placeholder, the correct class labels. The output of sparse_softmax_cross_entropy_with_logits() is the loss value for each input image. We then calculate the average loss value over the input images.
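
The corresponding line might look like this (a sketch using the TensorFlow 1.x API):

```python
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=logits, labels=labels_placeholder))
```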

But how can we change our parameter values to minimize the loss? This is where TensorFlow works its magic. Via a technique called auto-differentiation it can calculate the gradient of the loss with respect to the parameter values. This means that it knows each parameter’s influence on the overall loss and whether decreasing or increasing it by a small amount would reduce the loss. It then adjusts all parameter values accordingly, which should improve the model’s accuracy. After this parameter adjustment step the process restarts and the next group of images are fed to the model.

TensorFlow knows different optimization techniques to translate the gradient information into actual parameter updates. Here we use a simple option called gradient descent which only looks at the model’s current state when determining the parameter updates and does not take past parameter values into account.

Gradient descent only needs a single parameter, the learning rate, which is a scaling factor for the size of the parameter updates. The bigger the learning rate, the more the parameter values change after each step. If the learning rate is too big, the parameters might overshoot their correct values and the model might not converge. If it is too small, the model learns very slowly and takes too long to arrive at good parameter values.
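
Defining the training step then takes a single line (sketch; learning_rate is the parameter defined at the top of the file):

```python
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
```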

The process of categorizing input images, comparing the predicted results to the true results, calculating the loss and adjusting the parameter values is repeated many times. For bigger, more complex models the computational costs can quickly escalate, but for our simple model we need neither a lot of patience nor specialized hardware to see results.

These two lines measure the model’s accuracy. argmax of logits along dimension 1 returns the indices of the class with the highest score, which are the predicted class labels. The labels are then compared to the correct class labels by tf.equal(), which returns a vector of boolean values. The booleans are cast into float values (each being either 0 or 1), whose average is the fraction of correctly predicted images.
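
A plausible version of those two lines:

```python
correct_prediction = tf.equal(tf.argmax(logits, 1), labels_placeholder)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```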

We’re finally done defining the TensorFlow graph and are ready to start running it. The graph is launched in a session which we can access via the sess variable. The first thing we do after launching the session is initializing the variables we created earlier. In the variable definitions we specified initial values, which are now being assigned to the variables.
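
In code, this might look as follows (a sketch in TensorFlow 1.x style; the rest of the training code would live inside this with block):

```python
with tf.Session() as sess:
    # Assign the initial values defined earlier to all variables
    sess.run(tf.global_variables_initializer())
```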

Then we start the iterative training process which is to be repeated max_steps times.

These lines randomly pick a certain number of images from the training data. The resulting chunks of images and labels from the training data are called batches. The batch size (number of images in a single batch) tells us how frequently the parameter update step is performed. We first average the loss over all images in a batch, and then update the parameters via gradient descent.

If instead of stopping after a batch, we first classified all images in the training set, we would be able to calculate the true average loss and the true gradient instead of the estimations when working with batches. But it would take a lot more calculations for each parameter update step. At the other extreme, we could set the batch size to 1 and perform a parameter update after every single image. This would result in more frequent updates, but the updates would be a lot more erratic and would quite often not be headed in the right direction.

Usually an approach somewhere in the middle between those two extremes delivers the fastest improvement of results. For bigger models memory considerations are very relevant too. It’s often best to pick a batch size that is as big as possible, while still being able to fit all variables and intermediate results into memory.

Here the first line of code picks batch_size random indices between 0 and the size of the training set. Then the batches are built by picking the images and labels at these indices.
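
A sketch of the batch generation; these lines live inside the for i in range(max_steps): training loop:

```python
indices = np.random.choice(data_sets['images_train'].shape[0], batch_size)
images_batch = data_sets['images_train'][indices]
labels_batch = data_sets['labels_train'][indices]
```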

Every 100 iterations we check the model’s current accuracy on the training data batch. To do this, we just need to call the accuracy-operation we defined earlier.
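
For example (a sketch; the exact print format may differ from softmax.py):

```python
if i % 100 == 0:
    train_accuracy = sess.run(accuracy, feed_dict={
        images_placeholder: images_batch,
        labels_placeholder: labels_batch})
    print('Step {:5d}: training accuracy {:g}'.format(i, train_accuracy))
```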

This is the most important line in the training loop. We tell the model to perform a single training step. We don’t need to restate what the model needs to do in order to be able to make a parameter update. All the info has been provided in the definition of the TensorFlow graph already. TensorFlow knows that the gradient descent update depends on knowing the loss, which depends on the logits which depend on weights, biases and the actual input batch.

We therefore only need to feed the batch of training data to the model. This is done by providing a feed dictionary in which the batch of training data is assigned to the placeholders we defined earlier.
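
That single training step might look like this (sketch):

```python
sess.run(train_step, feed_dict={
    images_placeholder: images_batch,
    labels_placeholder: labels_batch})
```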

After the training is completed, we evaluate the model on the test set. This is the first time the model ever sees the test set, so the images in the test set are completely new to the model. We’re evaluating how well the trained model can handle unknown data.
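
A sketch of the evaluation, reusing the accuracy operation defined earlier:

```python
test_accuracy = sess.run(accuracy, feed_dict={
    images_placeholder: data_sets['images_test'],
    labels_placeholder: data_sets['labels_test']})
print('Test accuracy {:g}'.format(test_accuracy))
```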

The final lines print out how long it took to train and run the model.
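
Something along these lines (sketch, using the timer started at the beginning):

```python
endTime = time.time()
print('Total time: {:5.2f}s'.format(endTime - beginTime))
```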

Results

Let's run the model with the command "python softmax.py". Here is what my output looks like:

Step 0: training accuracy 0.14
Step 100: training accuracy 0.32
Step 200: training accuracy 0.3
Step 300: training accuracy 0.23
Step 400: training accuracy 0.26
Step 500: training accuracy 0.31
Step 600: training accuracy 0.44
Step 700: training accuracy 0.33
Step 800: training accuracy 0.23
Step 900: training accuracy 0.31
Test accuracy 0.3066
Total time: 12.42s

What does this mean? The accuracy of evaluating the trained model on the test set is about 31%. If you run the code yourself, your result will probably be around 25–30%. So our model is able to pick the correct label for an image it has never seen before around 25–30% of the time. That’s not bad!

There are 10 different labels, so random guessing would result in an accuracy of 10%. Our very simple method is already way better than guessing randomly. If you think that 25% still sounds pretty low, don’t forget that the model is still pretty dumb. It has no notion of actual image features like lines or even shapes. It looks strictly at the color of each pixel individually, completely independent from other pixels. An image shifted by a single pixel would represent a completely different input to this model. Considering this, 25% doesn’t look too shabby anymore.

What would happen if we trained for more iterations? That would probably not improve the model's accuracy. If you look at the results, you can see that the training accuracy is not steadily increasing, but instead fluctuating between 0.23 and 0.44. It seems that we have reached this model's limit, and seeing more training data would not help. This model is simply not able to deliver better results. In fact, instead of training for 1000 iterations, we would have gotten a similar accuracy after significantly fewer iterations.

One last thing you probably noticed: the test accuracy is quite a lot lower than the training accuracy. If this gap is quite big, this is often a sign of overfitting. The model is then more finely tuned to the training data it has seen, and it is not able to generalize as well to previously unseen data.

This post has already turned out to be quite long. I'd like to thank you for reading all of it (or for skipping right to the bottom)! I hope you found something of interest to you, whether it's how a machine learning classifier works or how to build and run a simple graph with TensorFlow. Of course, there is still a lot of material I would like to add. So far, we have only talked about the softmax classifier, which isn't even using any neural nets.

My next blog post changes that: find out how much using a small neural network model can improve the results! Read it here.

Thanks for reading. You can also check out other articles I've written on my blog.