Binary file added 10 trials.png
Binary file added 50 trials.png
21 changes: 14 additions & 7 deletions learning_curve.py
@@ -1,5 +1,3 @@
""" Exploring learning curves for classification of handwritten digits """

import matplotlib.pyplot as plt
import numpy
from sklearn.datasets import *
@@ -8,19 +6,28 @@

data = load_digits()
print data.DESCR
num_trials = 10
train_percentages = range(5,95,5)
num_trials = 50
train_percentages = range(1,99,1)
test_accuracies = numpy.zeros(len(train_percentages))

# train a model with training percentages between 5 and 90 (see train_percentages) and evaluate
# the resultant accuracy.
# You should repeat each training percentage num_trials times to smooth out variability
# for consistency with the previous example use model = LogisticRegression(C=10**-10) for your learner

# TODO: your code here
for i, percent in enumerate(train_percentages):
    for j in range(num_trials):
        X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, train_size=percent/100.0)
        model = LogisticRegression(C=10**-10)
        model.fit(X_train, y_train)
        print "Train accuracy %f" %model.score(X_train,y_train)
        print "Test accuracy %f"%model.score(X_test,y_test)
        test_accuracies[i] += model.score(X_test,y_test)
    test_accuracies[i] /= num_trials

fig = plt.figure()
plt.plot(train_percentages, test_accuracies)
plt.plot(train_percentages, test_accuracies*100)
plt.xlabel('Percentage of Data Used for Training')
plt.ylabel('Accuracy on Test Set')
plt.show()
plt.axis([0,100,0,100])
plt.show()
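
The added loop above averages the test-set score over num_trials random splits for each training percentage. As a point of reference for the noise discussion in questions.txt below, here is a minimal standalone sketch of the same repeat-and-average idea that also records the per-percentage standard deviation; it is not part of the PR, and it assumes a scikit-learn release where train_test_split is exposed from sklearn.model_selection (older releases used sklearn.cross_validation).

    # Sketch (not from the PR): average test accuracy over several random splits per
    # training percentage, keeping the standard deviation as a rough noise estimate.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older releases

    digits = load_digits()
    train_percentages = range(5, 95, 5)
    num_trials = 10

    means, stds = [], []
    for percent in train_percentages:
        scores = []
        for _ in range(num_trials):
            X_train, X_test, y_train, y_test = train_test_split(
                digits.data, digits.target, train_size=percent / 100.0)
            model = LogisticRegression(C=10 ** -10)
            model.fit(X_train, y_train)
            scores.append(model.score(X_test, y_test))
        means.append(np.mean(scores))
        stds.append(np.std(scores))

Plotting with plt.errorbar(list(train_percentages), means, yerr=stds) would then show both the learning curve and its trial-to-trial spread.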
7 changes: 7 additions & 0 deletions questions.txt
@@ -0,0 +1,7 @@
1. The general trend is upward, as one would expect; the more data you give it to learn from, the better it learns.

2. No, the curve seems equally noisy everywhere. I would expect the right side to have more noise, since less data is left over for testing there, but it actually comes out just as smooth as the low-percentage end of the curve.

3. The graph converges to a stable shape at around 20 trials, although the noise is still easy to see when a data point is taken at every percentage, even with 50 trials per percentage.

4. Lower values of C yield slower learning: a steeper learning curve in the colloquial sense. Higher values of C do the opposite. By turning C up much higher (I went as far as 10**-1), the classifier can read the handwritten digits accurately with less than 10% of the data to learn from. However, a higher C also slows the program down considerably.
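
To make the comparison in answer 4 concrete, a rough sketch of sweeping a few values of C is below; the helper name and the particular C values are illustrative, not taken from the PR. In scikit-learn's LogisticRegression, C is the inverse of the regularization strength, so a larger C means weaker regularization.

    # Sketch (not from the PR): compare mean test accuracy for a few values of C,
    # the inverse regularization strength, at a few training percentages.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()

    def mean_test_accuracy(C, percent, num_trials=10):
        scores = []
        for _ in range(num_trials):
            X_train, X_test, y_train, y_test = train_test_split(
                digits.data, digits.target, train_size=percent / 100.0)
            scores.append(LogisticRegression(C=C).fit(X_train, y_train).score(X_test, y_test))
        return np.mean(scores)

    for C in (10 ** -10, 10 ** -5, 10 ** -1):
        print(C, [round(mean_test_accuracy(C, p), 3) for p in (10, 50, 90)])

Heavier regularization (smaller C) should need a noticeably larger training fraction before its accuracy catches up, which is the "slower learning" described above.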
7 changes: 7 additions & 0 deletions questions.txt~
(Editor backup file; its contents are identical to questions.txt above.)