Announcements: Lecture
  1. Announcement
 
 
 
 
  1. 2018/12/5
  2.  
    1. Lecture
    2. Lecture by Professor Sun-Yuan Kung
    3. 林志鴻
  3.  
     
     
      1. 貢三元教授 Professor Sun-Yuan Kung

        Department of Electrical Engineering, Princeton University

         

        12/7 (Fri.) 14:00–16:00, 致平廳

        Deep Learning Networks I: Systematic Design and Analysis of Deep Learning Networks

         

        Abstract:

         

        Machine learning allows us to induce classification or prediction rules from empirical training data in order to facilitate large-scale data mining. Deep Learning Networks (DLNs) provide a versatile platform in which a large number of parameters can be learned to meet the demands of a broad spectrum of AI applications.

         

        Note that much of the success of DLNs depends on trial and error, so their designs have appeared to be rather ad hoc. It is therefore desirable (if not imperative) to explore a methodical and analytical design of multi-layer learning models, hopefully paving the way to a new learning paradigm. However, the design and implementation of deep learning networks is also severely hampered by the curse of depth as the networks become more and more complex.

         

        It is well known that there exist huge discrepancies between optimization and generalization, exemplified primarily by four key gaps/issues.

         

        Therefore, the first part of my talk will focus on four key design issues: (1) the data gap, (2) the capacity gap, (3) the metric gap, and (4) the algorithmic gap.

         

        We shall advocate systematic design of DLNs by bridging these four design gaps.

        --------------------------------------------------------------------------------------------

        12/14 (Fri.) 13:30–15:00, EE Department Building (電機系館), Lecture Hall R101

        Deep Learning Networks II: Enhancing Deep BP Learning with Omnipresent-Supervision Training Paradigm

         

         

        Abstract:

         

        The second part of my talk will place emphasis on the systematic design of cost-effective deep learning networks.

         

        In the BP learning paradigm, teacher values are needed (as a reference) only at the output layer, where they indicate how closely the output responses match the desired responses. Here, the entire neural network is treated as a black box, since the data needed for training are provided either from the (lowest-layer) input end or from the (top-layer) output end. Under BP learning there is no need for any reference values at the hidden layers. Therefore, BP represents an external training paradigm, which has been popular for both regression and classification problems.
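
        The following is a minimal NumPy sketch (not from the talk) of this external BP paradigm: the teacher labels enter only through the output-layer loss, and the hidden layer receives its learning signal solely via back-propagated gradients. All sizes, data, and the learning rate are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((256, 8))              # toy inputs
        y = (X[:, :1] + X[:, 1:2] > 0).astype(float)   # toy binary teacher labels

        W1 = rng.standard_normal((8, 16)) * 0.1        # hidden-layer weights (no labels here)
        W2 = rng.standard_normal((16, 1)) * 0.1        # output-layer weights (labels used here)
        lr = 0.1

        for epoch in range(200):
            h = np.tanh(X @ W1)                        # hidden activations
            p = 1.0 / (1.0 + np.exp(-(h @ W2)))        # output probabilities
            d_out = (p - y) / len(X)                   # error is defined only at the output layer
            d_hid = (d_out @ W2.T) * (1.0 - h ** 2)    # it reaches the hidden layer only by back-propagation
            W2 -= lr * (h.T @ d_out)
            W1 -= lr * (X.T @ d_hid)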

         

        The curse of depth in DLNs has been widely recognized as a cause of serious concern. In order to circumvent the depth problem altogether, a new notion of an Omnipresent-Supervision (OS) internal training strategy is proposed. OS learning works exclusively for classification problems, where teacher labels can be metaphorically hidden in “Trojan horses” and transported (along with the data) from the input layer to all hidden layers. By opening up the embedded Trojan horses, the teacher labels become directly accessible to each of the hidden layers.

         

        This leads to an internal OS learning strategy without invoking back-propagation.
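
        By way of contrast with the BP sketch above, the following is a minimal sketch of an internal, layer-wise training strategy in which the teacher labels are made directly available to every hidden layer through a small local head, so that no error signal has to traverse the full depth of the network. This is only one possible reading of the omnipresent-supervision idea, not Prof. Kung's actual OS algorithm; the layer widths and hyperparameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((256, 8))
        y = (X[:, :1] - X[:, 1:2] > 0).astype(float)       # teacher labels, visible to every layer

        layer_dims = [8, 16, 16, 8]                        # assumed layer widths
        inputs, lr = X, 0.1
        for d_in, d_out in zip(layer_dims[:-1], layer_dims[1:]):
            W = rng.standard_normal((d_in, d_out)) * 0.1   # this layer's weights
            V = rng.standard_normal((d_out, 1)) * 0.1      # local head exposing the labels to this layer
            for _ in range(200):
                h = np.tanh(inputs @ W)
                p = 1.0 / (1.0 + np.exp(-(h @ V)))
                err = (p - y) / len(inputs)                # labels used directly at THIS hidden layer
                d_h = (err @ V.T) * (1.0 - h ** 2)
                V -= lr * (h.T @ err)
                W -= lr * (inputs.T @ d_h)
            inputs = np.tanh(inputs @ W)                   # frozen features feed the next layer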

         

        Three application scenarios will be highlighted to showcase the merits of incorporating internal OS learning into the external BP learning:

         

        (a)  Direct weight updating on each hidden layer;

        (b)  MIND-Net: designed to monotonically increase the network's discriminant capability; and

        (c)  OStrim: trimming the network to become more cost-effective in power, storage, and FLOPs (see the sketch below).
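
        As a rough illustration of item (c), the sketch below applies generic magnitude-based pruning, zeroing the smallest-magnitude weights of a layer to reduce storage and FLOPs. It is an assumed stand-in for the trimming step, not the OStrim procedure itself.

        import numpy as np

        def trim_weights(W, keep_ratio=0.25):
            """Zero out all but the largest-magnitude entries of W."""
            k = max(1, int(W.size * keep_ratio))
            threshold = np.sort(np.abs(W), axis=None)[-k]  # k-th largest magnitude
            mask = np.abs(W) >= threshold
            return W * mask, mask

        rng = np.random.default_rng(0)
        W = rng.standard_normal((16, 8))                   # stand-in for a trained layer's weights
        W_trimmed, mask = trim_weights(W)
        print("kept", int(mask.sum()), "of", W.size, "weights")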
