@Saurabh7
Last active May 26, 2016 10:34
for (index_t i=0; i<m_num_runs; ++i)
{
	results[i]=evaluate_one_run();
}

float64_t CCrossValidation::evaluate_one_run()
{
	m_machine->set_store_model_features(true);

	for (index_t i=0; i<num_subsets; ++i)
	{
		//train on all data except the current fold
		m_features->add_subset(inverse_subset_indices);
		m_labels->add_subset(inverse_subset_indices);
		m_machine->train(m_features);
		m_features->remove_subset();
		m_labels->remove_subset();

		//apply the trained machine to the held-out fold and evaluate
		m_features->add_subset(subset_indices);
		m_labels->add_subset(subset_indices);
		CLabels* result_labels=m_machine->apply(m_features);
		results[i]=m_evaluation_criterion->evaluate(result_labels, m_labels);

		//clean up the fold's subsets
		SG_UNREF(result_labels);
		m_features->remove_subset();
		m_labels->remove_subset();
	}

	//aggregate the per-fold results into this run's score
	return CStatistics::mean(results);
}
@karlnapf

OK, so you want to create new shallow copies here (of the features, labels, and machine). Assign the subsets, assign the copied labels and features to the copied machine, and run training and evaluation on those copies.
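
A minimal sketch of what that per-fold body could look like, assuming Shogun's CSGObject::clone() provides the shallow copy described above; the fold_* names are placeholders for illustration, not the actual implementation:

// per-fold body working on shallow copies instead of the shared members
// (clone() usage and the fold_* names are assumptions, not the final code)
CFeatures* fold_features=(CFeatures*)m_features->clone();
CLabels* fold_labels=(CLabels*)m_labels->clone();
CMachine* fold_machine=(CMachine*)m_machine->clone();

// train on everything except the current fold
fold_features->add_subset(inverse_subset_indices);
fold_labels->add_subset(inverse_subset_indices);
fold_machine->set_labels(fold_labels);
fold_machine->train(fold_features);
fold_features->remove_subset();
fold_labels->remove_subset();

// apply to the held-out fold and evaluate, all on the copies
fold_features->add_subset(subset_indices);
fold_labels->add_subset(subset_indices);
CLabels* result_labels=fold_machine->apply(fold_features);
results[i]=m_evaluation_criterion->evaluate(result_labels, fold_labels);

// release the per-fold copies
SG_UNREF(result_labels);
SG_UNREF(fold_machine);
SG_UNREF(fold_labels);
SG_UNREF(fold_features);

Because every fold only touches its own copies, no shared member is mutated while several folds run concurrently.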

@karlnapf

Oh, and actually the evaluation instance should be copied as well.

@karlnapf

I suggest we parallelise over the folds, so the copying I mentioned should happen inside the folds loop.

@karlnapf

Also make sure that each fold runs in a separate thread. You should also parallelise over the runs, so OpenMP needs to merge all folds of all runs into a single parallel loop; see the sketch below.
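
A rough sketch of merging the runs and folds loops into one OpenMP parallel region via the collapse clause; evaluate_one_fold is a hypothetical helper that would contain the per-fold body sketched earlier, and the names and structure are assumptions rather than the final design:

// merge runs and folds into a single parallel iteration space (sketch only)
SGMatrix<float64_t> results(num_subsets, m_num_runs);

#pragma omp parallel for collapse(2)
for (index_t run=0; run<m_num_runs; ++run)
{
	for (index_t fold=0; fold<num_subsets; ++fold)
	{
		// evaluate_one_fold is a hypothetical helper holding the
		// clone/subset/train/apply/evaluate steps from the sketch above
		results(fold, run)=evaluate_one_fold(run, fold);
	}
}

With collapse(2), OpenMP schedules all m_num_runs*num_subsets iterations as one flat loop, so every fold of every run can be assigned to its own thread.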
