Binary file added C-1.png
Binary file added C-50.png
29 changes: 26 additions & 3 deletions learning_curve.py
@@ -6,9 +6,20 @@
 from sklearn.cross_validation import train_test_split
 from sklearn.linear_model import LogisticRegression
 
+""" recognize images of handwritten digits """
+# digits = load_digits()
+# print digits.DESCR
+# fig = plt.figure()
+# for i in range(10):
+#     subplot = fig.add_subplot(5,2,i+1)
+#     subplot.matshow(numpy.reshape(digits.data[i],(8,8)),cmap='gray')
+
+# plt.show()
+
 """ initial conditions """
 data = load_digits()
-print data.DESCR
-num_trials = 10
+# print data.DESCR
+num_trials = 100
 train_percentages = range(5,95,5)
 test_accuracies = numpy.zeros(len(train_percentages))
@@ -17,10 +28,22 @@
 # You should repeat each training percentage num_trials times to smooth out variability
 # for consistency with the previous example use model = LogisticRegression(C=10**-10) for your learner
 
-# TODO: your code here
+""" partition data into two sets--training set and testing set
+    vary training set size vs. testing set size and plot resulting curve """
+for i in range(len(train_percentages)):
+    trial_accuracies = []
+    for j in range(num_trials):
+        x_train, x_test, y_train, y_test = train_test_split(data.data, data.target, train_size=train_percentages[i]/100.0)  # percent -> fraction
+        model = LogisticRegression(C=10 ** -10)
+        model.fit(x_train, y_train)
+        accur_score = model.score(x_test, y_test)
+        trial_accuracies.append(accur_score)
+
+    test_accuracies[i] = sum(trial_accuracies) / num_trials
 
 fig = plt.figure()
 plt.plot(train_percentages, test_accuracies)
 plt.xlabel('Percentage of Data Used for Training')
 plt.ylabel('Accuracy on Test Set')
 plt.show()
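For readers on newer toolchains: sklearn.cross_validation was removed in scikit-learn 0.20 (train_test_split now lives in sklearn.model_selection), and print data.DESCR is Python 2 syntax. Below is a minimal sketch of the same experiment for Python 3 and a current scikit-learn; the version assumption and the ported imports are mine, not part of this PR.

# Sketch: the learning-curve experiment above, ported to Python 3 and
# scikit-learn >= 0.20 (assumed environment; the logic is unchanged).
import numpy
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split  # was sklearn.cross_validation
from sklearn.linear_model import LogisticRegression

data = load_digits()
num_trials = 100
train_percentages = range(5, 95, 5)
test_accuracies = numpy.zeros(len(train_percentages))

for i, pct in enumerate(train_percentages):
    trial_accuracies = []
    for _ in range(num_trials):
        # train_test_split takes a fraction, so convert the percentage
        x_train, x_test, y_train, y_test = train_test_split(
            data.data, data.target, train_size=pct / 100.0)
        model = LogisticRegression(C=10 ** -10)
        model.fit(x_train, y_train)
        trial_accuracies.append(model.score(x_test, y_test))
    test_accuracies[i] = sum(trial_accuracies) / num_trials

plt.plot(list(train_percentages), test_accuracies)
plt.xlabel('Percentage of Data Used for Training')
plt.ylabel('Accuracy on Test Set')
plt.show()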

9 changes: 9 additions & 0 deletions questions.txt
@@ -0,0 +1,9 @@
+1. What is the general trend in the curve?
+As the percentage of data used for training increases, so does the accuracy on the test set.
+2. Are there parts of the curve that appear to be noisier than others? Why?
+The stretch between roughly 35-40% and 60% is noisier than the rest. When the training percentage is very small the accuracy is consistently low, and when it is very large the accuracy is consistently high, but in the middle the accuracy varies much more from trial to trial; with the same number of trials at every point, that extra variance is not averaged away, so the curve looks noisier there.
+3. How many trials do you need to get a smooth curve?
+The curve is fairly smooth by 500 trials and very smooth by 800 trials.
+4. Try different values for C (by changing LogisticRegression(C=10**-10)). What happens?
+When C is larger (e.g. C = 10**-1), the curve more closely resembles a logarithmic curve: it is smoother and reaches a higher accuracy (0.96). When C is smaller (e.g. C = 10**-50), the graph does not resemble any particular curve: it is noisier throughout and peaks at a much lower accuracy (0.098). A sketch of this C sweep follows after these answers.
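A hedged sketch of the C sweep from question 4 (illustrative, not code from this PR): it overlays one learning curve per C value, where C is scikit-learn's inverse regularization strength, and it assumes the same Python 3 / sklearn.model_selection environment as the sketch after learning_curve.py. The three C values mirror the ones discussed in the answers.

# Sketch: overlay learning curves for three values of C (question 4).
# Same assumed Python 3 / scikit-learn >= 0.20 environment as above.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

data = load_digits()
num_trials = 100  # lower this for a quicker (but noisier) run
train_percentages = range(5, 95, 5)

for C in (10 ** -1, 10 ** -10, 10 ** -50):
    curve = []
    for pct in train_percentages:
        scores = []
        for _ in range(num_trials):
            x_train, x_test, y_train, y_test = train_test_split(
                data.data, data.target, train_size=pct / 100.0)
            model = LogisticRegression(C=C)  # C is the inverse regularization strength
            model.fit(x_train, y_train)
            scores.append(model.score(x_test, y_test))
        curve.append(sum(scores) / num_trials)
        # the standard deviation of scores here would quantify the per-point noise from question 2
    plt.plot(list(train_percentages), curve, label='C = %g' % C)

plt.xlabel('Percentage of Data Used for Training')
plt.ylabel('Accuracy on Test Set')
plt.legend()
plt.show()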