Literature Review on Auxiliary Learning
Recent approaches to auxiliary task selection
Introduction
In auxiliary learning, a primary task is trained alongside one or more auxiliary tasks. The goal is to improve performance on the primary task by leveraging knowledge gained from the auxiliary tasks. In practice, this means adding extra network branches during training and computing loss terms for them, so that the shared layers learn better feature representations. These auxiliary branches typically do not share parameters with the primary prediction head; instead, they encourage the shared features to capture properties that benefit the primary task. At inference time, the auxiliary branches are usually discarded, since only the primary task's performance matters.
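To make this setup concrete, the sketch below shows a shared encoder with a primary head and an auxiliary head; the auxiliary branch contributes a loss only during training and is dropped at inference. This is a minimal illustration assuming PyTorch; the dimensions and the 0.3 auxiliary loss weight are assumptions for the example, not values from the reviewed papers.

```python
import torch
import torch.nn as nn

class AuxNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10, aux_classes=4):
        super().__init__()
        # Shared encoder whose features both heads consume.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, num_classes)  # primary task head
        self.aux_head = nn.Linear(hidden, aux_classes)   # auxiliary branch

    def forward(self, x, training=True):
        z = self.encoder(x)
        if training:
            return self.main_head(z), self.aux_head(z)
        return self.main_head(z)  # auxiliary branch ignored at inference

model = AuxNet()
x = torch.randn(8, 32)
y_main = torch.randint(0, 10, (8,))
y_aux = torch.randint(0, 4, (8,))

main_logits, aux_logits = model(x, training=True)
# Weighted sum of primary and auxiliary losses (0.3 is an assumed weight).
loss = nn.functional.cross_entropy(main_logits, y_main) \
     + 0.3 * nn.functional.cross_entropy(aux_logits, y_aux)
loss.backward()
```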
Gradient Similarity for Auxiliary Losses
Gradient similarity for auxiliary losses was proposed to ensure that the gradients of the auxiliary tasks are aligned with the gradient of the primary task. It is measured as the cosine similarity between the two gradients:

\[
\mathrm{sim} = \frac{\nabla_\theta L_{\text{main}} \cdot \nabla_\theta L_{\text{aux}}}{\lVert \nabla_\theta L_{\text{main}} \rVert \, \lVert \nabla_\theta L_{\text{aux}} \rVert}
\]

where L_main and L_aux denote the losses of the primary and auxiliary tasks, respectively, and the gradients are taken with respect to the shared parameters θ. The similarity ranges between -1 and 1; positive values indicate that the auxiliary gradient is aligned with the primary gradient, while negative values indicate conflict. By favoring auxiliary updates with high gradient similarity, the auxiliary tasks can better guide the learning of the primary task.
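As a sketch of how this similarity might be computed in practice (assuming PyTorch; the helper function below is illustrative, not code from the original paper), the two losses are differentiated with respect to the shared parameters and compared:

```python
import torch

def grad_cosine_similarity(loss_main, loss_aux, shared_params):
    # Gradients of each loss with respect to the shared parameters.
    g_main = torch.autograd.grad(loss_main, shared_params, retain_graph=True)
    g_aux = torch.autograd.grad(loss_aux, shared_params, retain_graph=True)
    # Flatten into single vectors and take the cosine similarity.
    g_main = torch.cat([g.flatten() for g in g_main])
    g_aux = torch.cat([g.flatten() for g in g_aux])
    return torch.dot(g_main, g_aux) / (g_main.norm() * g_aux.norm() + 1e-12)
```

A simple selection rule in this spirit is to apply an auxiliary loss in a given step only when its gradient similarity with the main loss is positive.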
Auxiliary Modules in Deep Multi-Task Learning (dMTL)
Auxiliary modules in dMTL are additional layers or components added to a neural network architecture to help improve the performance of multiple tasks simultaneously. They act as shared representations that capture general features across tasks. A common design uses shared lower layers followed by task-specific layers: the shared layers learn features useful for all tasks, while the task-specific layers let the network specialize for each task. Cross-Stitch Networks are a well-known example, where cross-stitch units learn linear combinations of the activations of parallel task-specific networks, softly sharing features between tasks.
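As an illustration of this soft sharing, here is a minimal sketch of a cross-stitch unit for two tasks (assuming PyTorch; the near-identity initialization is a common choice, and the exact values are assumptions):

```python
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    def __init__(self):
        super().__init__()
        # Near-identity start: each task mostly keeps its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, x_a, x_b):
        # Each output is a learned linear mix of both tasks' activations.
        out_a = self.alpha[0, 0] * x_a + self.alpha[0, 1] * x_b
        out_b = self.alpha[1, 0] * x_a + self.alpha[1, 1] * x_b
        return out_a, out_b
```

Placing such a unit between corresponding layers of two task-specific networks lets training decide how much each task borrows from the other.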
Gated Multi-task Network (GMTN)
A gated multi-task network uses a gating mechanism to control the flow of information between tasks. The gates learn which features are useful for each task and which are not. The output for each task can be expressed as:

\[
y_i = \sum_j g_{ij} \, f_j(x)
\]

where y_i is the output of task i, f_j(x) is the feature representation of the j-th layer, and g_ij is the gate value for task i and layer j. The gating mechanism allows the network to focus on task-relevant features for each task while down-weighting features that would interfere with it.
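The sketch below instantiates this equation (assuming PyTorch; the sigmoid parameterization of g_ij, the layer sizes, and the small task heads applied on top of the gated sum are illustrative assumptions beyond the equation itself):

```python
import torch
import torch.nn as nn

class GatedMultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64, num_layers=3, num_tasks=2):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(in_dim if j == 0 else feat_dim, feat_dim)
             for j in range(num_layers)]
        )
        # One learnable gate logit per (task i, layer j) pair.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, num_layers))
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x):
        feats, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(h))
            feats.append(h)                  # f_j(x)
        g = torch.sigmoid(self.gate_logits)  # g_ij in (0, 1)
        # y_i = head_i( sum_j g_ij * f_j(x) ); the head is an added assumption.
        return [head(sum(g[i, j] * f for j, f in enumerate(feats)))
                for i, head in enumerate(self.heads)]
```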