Binary file added 100trials.png
Binary file added 10Trials.png
Binary file added LargeInverseRegularization.png
22 changes: 13 additions & 9 deletions learning_curve.py
@@ -7,20 +7,24 @@
 from sklearn.linear_model import LogisticRegression
 
 data = load_digits()
-print data.DESCR
-num_trials = 10
+#print data.DESCR
+num_trials = 100
 
 train_percentages = range(5,95,5)
 test_accuracies = numpy.zeros(len(train_percentages))
 
 # train a model with training percentages between 5 and 90 (see train_percentages) and evaluate
 # the resultant accuracy.
 # You should repeat each training percentage num_trials times to smooth out variability
 # for consistency with the previous example use model = LogisticRegression(C=10**-10) for your learner
 
-# TODO: your code here
+def trainer(percent, num_trials):
+    results = []
+    model = LogisticRegression(C=10**-4)
+    for i in range(num_trials):
+        X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, train_size=(percent/float(100)))
+        model.fit(X_train, y_train)
+        results.append(model.score(X_test,y_test))
+    return sum(results)/float(num_trials)
 
+results = [trainer(percent, num_trials) for percent in train_percentages]
 fig = plt.figure()
-plt.plot(train_percentages, test_accuracies)
+plt.plot(train_percentages, results)
 plt.xlabel('Percentage of Data Used for Training')
 plt.ylabel('Accuracy on Test Set')
 plt.show()
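The change from the suggested LogisticRegression(C=10**-10) to C=10**-4 (and the LargeInverseRegularization.png plot) comes down to how the inverse regularization coefficient C trades penalty strength against fit. A minimal pure-Python sketch of that trade-off, using a hypothetical 1-D logistic regression with an L2 penalty rather than sklearn's actual implementation, shows the fitted weight being pinned near zero when C is tiny:

```python
import math
import random

def fit_logreg_1d(xs, ys, C, steps=2000, lr=0.01):
    """Fit a 1-D logistic regression weight by gradient descent,
    minimising mean log-loss plus an L2 penalty of w**2 / (2 * C * n).
    Smaller C means a stronger penalty, matching sklearn's C convention."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        grad = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-w * x))  # predicted P(y=1)
            grad += (p - y) * x
        grad = grad / n + w / (C * n)  # data gradient + penalty gradient
        w -= lr * grad
    return w

# Two well-separated classes on the real line (illustrative data).
rng = random.Random(1)
xs = [rng.gauss(1.0 if i % 2 else -1.0, 0.5) for i in range(100)]
ys = [i % 2 for i in range(100)]

w_strong = fit_logreg_1d(xs, ys, C=1e-4)  # heavy regularization: w pinned near 0
w_weak = fit_logreg_1d(xs, ys, C=1e2)     # light regularization: w free to grow
```

With the heavy penalty the weight barely moves from zero even though the classes are separable, which is the 1-D analogue of why C=10**-10 keeps the sklearn model's accuracy low.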
4 changes: 4 additions & 0 deletions questions.txt
@@ -0,0 +1,4 @@
+1) The bigger the training set, the more accurate the model is on the test set.
+2) The lower end of the curve is noisy because accuracy depends heavily on how representative the small training sample happens to be; larger training sets have lower sampling variance, so their evaluations are inherently more stable.
+3) 100 trials produces a much smoother curve than 10 trials.
+4) As the inverse regularization coefficient C increases, the regularization penalty weakens, so the model can fit the training data more closely and reaches higher accuracy. This produces a higher, more representative accuracy curve.
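Answers 2 and 3 can be checked with a toy simulation: replace the real train/test runs with a hypothetical noisy_accuracy stand-in whose sampling noise shrinks as the training share grows, then compare how jagged the 10-trial and 100-trial averaged curves are (all names and constants here are illustrative, not from the repository):

```python
import random
import statistics

def noisy_accuracy(train_pct, rng):
    """Stand-in for one train/test run: true accuracy rises with the
    training share, and sampling noise shrinks as the share grows."""
    true_acc = 0.5 + 0.4 * (train_pct / 100.0)
    noise_sd = 0.3 * (1.0 - train_pct / 100.0) + 0.02
    return true_acc + rng.gauss(0.0, noise_sd)

def averaged_curve(num_trials, rng):
    """Average num_trials noisy runs at each training percentage,
    mirroring the trainer() loop in learning_curve.py."""
    return [statistics.mean(noisy_accuracy(p, rng) for _ in range(num_trials))
            for p in range(5, 95, 5)]

def roughness(curve):
    """Mean absolute jump between neighbouring points; lower is smoother."""
    return statistics.mean(abs(a - b) for a, b in zip(curve, curve[1:]))

rng = random.Random(0)
curve_10 = averaged_curve(10, rng)
curve_100 = averaged_curve(100, rng)
```

Averaging 100 trials cuts the standard error of each point by a further factor of sqrt(10) relative to 10 trials, so the 100-trial curve comes out markedly smoother, as in 100trials.png versus 10Trials.png.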