[OTDev] URGENT: Testing & Evaluation - Toxcreate - parameter optimisation, consensus
Christoph Helma helma at in-silico.de
Wed Oct 26 11:38:54 CEST 2011
- Previous message: [OTDev] URGENT: Testing & Evaluation - REACH models
- Next message: [OTDev] first Bioclipse-OpenTox paper is available
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Wed, 26 Oct 2011 10:52:38 +0200, Barry Hardy <barry.hardy at douglasconnect.com> wrote:

> One other point on this for future work is the approach followed by
> people like David Leahy or the AZ group, where they run a large number of
> model options to determine the best sets of parameters, generate consensus
> possibilities, etc. This could be set up for OpenTox with workflows and
> cloud approaches. The "non-advanced user" could be automatically
> presented with the best algorithm/parameter combination for a model and its
> predictions. Such a user might also be able to set some overall goals
> for the model, e.g., optimised for false positives or negatives.

I had envisaged such a "brute force" approach for ToxCreate in the beginning, but could not try it out because we did not have the superservices in place. My (unvalidated) gut feeling is that such a massive approach does not pay off the effort (and there is a huge risk of overfitting). I would prefer to derive heuristics for choosing parameters/algorithms from a large, well-designed experiment with very diverse datasets, and to do no or only limited parallel model building when creating actual prediction models.

Christoph
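The "brute force" selection Barry describes (sweep algorithm/parameter combinations, then combine top models into a consensus) can be sketched roughly as below. This is an illustrative sketch only, not OpenTox or ToxCreate code; the parameter grid, `score_fn`, and all names are hypothetical stand-ins (e.g. `score_fn` would in practice be a cross-validated accuracy, which is also where the overfitting risk Christoph mentions would need to be controlled, e.g. via an outer validation loop):

```python
# Hypothetical sketch of brute-force model selection plus consensus voting.
# None of these names come from OpenTox APIs; score_fn stands in for a
# cross-validated performance estimate.
from itertools import product

def grid_search(param_grid, score_fn):
    """Evaluate every combination in param_grid with score_fn; return
    (best_score, best_params). Higher score is better."""
    keys = sorted(param_grid)
    best = (float("-inf"), None)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(params)
        if score > best[0]:
            best = (score, params)
    return best

def consensus(predictions):
    """Majority vote over per-model binary predictions (the 'consensus
    possibilities' mentioned above); ties resolve to the positive class."""
    return 1 if 2 * sum(predictions) >= len(predictions) else 0

# Toy grid and score function, purely for demonstration.
grid = {"algorithm": ["knn", "svm"], "setting": [1, 3, 10]}
score, params = grid_search(
    grid,
    lambda p: p["setting"] / 10 if p["algorithm"] == "svm" else 0.0,
)
```

Optimising for false positives vs. false negatives, as suggested for the non-advanced user, would amount to swapping the scoring function (e.g. precision vs. recall) while the search loop stays the same.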