Like many other machine learning concepts, meta-learning is an approach akin to what human beings are already used to doing. Meta-learning simply means "learning to learn". Whenever we learn a new skill, there is some prior experience we can relate to, which makes the learning process easier. The same goes for AI, and meta-learning has been an increasingly popular topic over the last several years.

The goal isn't to take one model and focus on training it on one specific dataset. Rather, a model can gather previous experience from other algorithms' performance on multiple tasks, evaluate that experience, and then use that knowledge to improve its own performance. The biggest challenge of meta-learning is taking abstract "experience" and structuring it in a systematic, data-driven way. Depending on the type of meta-data employed, a meta-learning model can be broadly put into one of three categories: learning from previous model evaluations, learning from task properties, and learning from the prior models themselves.

Suppose we have data on how a set of learning algorithms has performed on certain tasks. The input required for the meta-learner will look like this:

- Configurations for the learning algorithms (hyperparameters, pipeline components, network architecture components).
- A set of evaluations of each configuration's performance on each task: accuracy, for example.

Using this terminology we can state what we want from our meta-learner: training on the data given above, we want it to recommend optimal configurations for new tasks that come our way.
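A minimal sketch of this idea, using entirely hypothetical data: we store a tasks-by-configurations accuracy matrix (the evaluations) alongside simple meta-features describing each known task, and recommend for a new task whichever configuration performed best on the most similar previous task. The arrays, feature choices, and the `recommend` helper below are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

# Rows = previously seen tasks, columns = candidate configurations
# (e.g. different hyperparameter settings). Entries are accuracies.
accuracy = np.array([
    [0.81, 0.74, 0.90],   # task 0
    [0.62, 0.88, 0.71],   # task 1
    [0.79, 0.75, 0.84],   # task 2
])

# Hypothetical meta-features for each known task
# (here: log of sample count, number of classes).
task_features = np.array([
    [3.0,  2.0],
    [4.5, 10.0],
    [3.2,  3.0],
])

def recommend(new_task_features):
    """Recommend the configuration that scored best on the most similar known task."""
    dists = np.linalg.norm(task_features - new_task_features, axis=1)
    nearest = int(np.argmin(dists))           # most similar previous task
    return int(np.argmax(accuracy[nearest]))  # its best-performing configuration

best_config = recommend(np.array([3.1, 2.0]))  # resembles task 0, so config 2 wins
```

Real meta-learners are far more sophisticated (warm-starting Bayesian optimization, learned rankings, and so on), but the data layout above captures the essential inputs the article describes: configurations plus per-task evaluations.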