DASS at DAC 2019

The Design Automation Summer School (DASS) is a one-day intensive course on research and development in design automation (DA). Each topic in the course is covered by a distinguished speaker who defines the topic, describes recent accomplishments, and indicates remaining challenges. Interactive discussions and follow-up activities among the participants reinforce and expand upon the lessons. The program is intended to introduce and outline emerging challenges and to foster creative thinking in the next generation of EDA engineers. It also helps students hone their problem-solving, programming, and teamwork skills, and fosters long-term collegial relationships. The 2019 SIGDA Design Automation Summer School is co-hosted with the A. Richard Newton Young Student Fellowship program at the ACM/IEEE Design Automation Conference (DAC). DASS will be held on Sunday, June 2, 2019, from 9 a.m. to 6 p.m. in Room N250 of the North Hall of the Las Vegas Convention Center, Las Vegas, Nevada. The Richard Newton Young Student Fellowship welcome breakfast will be held in the same room from 7:30 a.m. to 8:30 a.m. All students receiving the fellowship (excluding the mentors) are required to attend the DASS event.
The DASS event complements other educational and professional development activities in design automation, including outreach projects such as the SIGDA University Booth, the CADathlon, and the Design Automation Conference (DAC) Ph.D. Forum, which have met with tremendous success over the past decade. Note that there is no separate call for participation for DASS; attendance is mandatory for all students receiving the Richard Newton Young Student Fellowship. The final DASS program will be available in late May 2019.
Organizing Committee:

SIGDA advisory committee for DASS:

DASS Schedule

  • Date: Sunday June 2, 2019
  • Time: 7:30am - 6:00pm
  • Location: Room N250, North Hall, Las Vegas Convention Center, Las Vegas, NV, USA

The detailed schedule is listed below:

Time Speaker Title
7:30am - 9:00am Breakfast and RNYF Networking
9:00am - 11:00am Onur Mutlu (ETH Zürich) Enabling Computation with Minimal Data Movement: Changing the Computing Paradigm for High Efficiency
10:00am - 10:15am Coffee Break
11:15am - 12:00pm Rasit Onur Topaloglu (IBM) Design for Manufacturability (DFM) & Design-Technology Co-Optimization (DTCO)
12:00pm - 1:00pm Lunch Break
1:00pm - 1:30pm Grant Martin (Tensilica) Configurable, Extensible Processors: An Overview of Tensilica Technology
1:30pm - 2:00pm Cadence Academic Network Grow Your Career with the Cadence University Program
2:00pm - 2:45pm Michel Kinsy (Boston University) Towards Secure Acceleration of Neural Network Models at the Edge
2:45pm - 3:00pm Coffee Break
3:00pm - 3:45pm Shimeng Yu (Georgia Tech) Compute-in-Memory from CMOS to Beyond-CMOS
3:45pm - 4:30pm Siddharth Garg (NYU) Security Vulnerabilities in Emerging Machine Learning Systems
4:45pm - 6:00pm Laleh Behjat (Calgary) Give a Winning Presentation: From Idea to Delivery
6:00pm onward Welcome Reception and Networking

Invited Talks

  • Title: Enabling Computation with Minimal Data Movement: Changing the Computing Paradigm for High Efficiency
    Speaker: Onur Mutlu (ETH Zürich)
    Abstract: Today's systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends that cause performance, scalability, and energy bottlenecks: 1) data access from memory is already a key bottleneck as applications become more data-intensive and memory bandwidth and energy do not scale well; 2) energy consumption is a key constraint, especially in mobile and server systems; 3) data movement is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are felt especially severely in the data-intensive server and energy-constrained mobile systems of today.
    At the same time, conventional memory technology is facing many scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of slightly higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error correcting codes inside the latest DRAM chips, and intelligent memory controllers to solve the RowHammer problem are evidence of this trend.
    In this talk, I will discuss some recent research that aims to practically enable computation close to data. After motivating trends in applications as well as technology, we will discuss at least two promising directions: 1) performing massively-parallel bulk operations in memory by exploiting the analog operational properties of DRAM, with low-cost changes, 2) exploiting the logic layer in 3D-stacked memory technology in various ways to accelerate important data-intensive applications. In both approaches, we will discuss relevant cross-layer research, design, and adoption challenges in devices, architecture, systems, applications, and programming models. Our focus will be the development of in-memory processing designs that can be adopted in real computing platforms and real data-intensive applications, spanning machine learning, graph processing, data analytics, and genome analysis, at low cost. If time permits, we will also discuss and describe simulation and evaluation infrastructures that can enable exciting and forward-looking research in future memory systems, including Ramulator and SoftMC.
    Biography: Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received the inaugural IEEE Computer Society Young Computer Architect Award, the inaugural Intel Early Career Faculty Award, US National Science Foundation CAREER Award, Carnegie Mellon University Ladd Research Award, faculty partnership awards from various companies, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems, architecture, and hardware security venues. He is an ACM Fellow "for contributions to computer architecture research, especially in memory systems", IEEE Fellow for "contributions to computer architecture research and practice", and an elected member of the Academy of Europe (Academia Europaea). For more information, please see his webpage at http://people.inf.ethz.ch/omutlu/.
  • Title: Design for Manufacturability (DFM) & Design-Technology Co-Optimization (DTCO)
    Speaker: Rasit Onur Topaloglu (IBM)
    Biography: Rasit Onur Topaloglu obtained his B.S. in Electrical and Electronic Engineering from Bogazici University, and his M.S. in Computer Science and Ph.D. in Computer Engineering from the University of California San Diego. He has worked for companies including Qualcomm, AMD, and GLOBALFOUNDRIES, and is currently with IBM, where he is a Senior Hardware Developer focusing on Design for Manufacturability and Design-Technology Co-Optimization and a Program Manager responsible for 7nm technology and Power10 microprocessors. He has over sixty peer-reviewed publications and thirty US patents, and serves on the IBM Internet of Things patent review board. His latest book, on Beyond-CMOS Computing, has recently been published. He serves as Vice Chair and Professional Activities Chair of IEEE Mid-Hudson and Secretary of ACM Poughkeepsie. His recent talks include “Compact Qubits for Quantum Computing” and “ACM Code of Ethics and Implications on Artificial Intelligence.”
  • Title: Configurable, Extensible Processors: An Overview of Tensilica Technology
    Speaker: Grant Martin (Tensilica)
    Abstract: This talk will define the concept of ASIPs - Application-Specific Instruction-set Processors - and then use Tensilica's configurable, extensible processor generation technology to illustrate ASIP concepts, describing the latest Tensilica architecture in some detail and illustrating some of its possible applications to various product design categories.
    Biography: Grant Martin is a Distinguished Engineer in the Tensilica R&D group of Cadence Design Systems. Prior to the Cadence acquisition of Tensilica in April 2013, Grant was a Chief Scientist at Tensilica, Inc. in Santa Clara, California for 9 years. Before that, Grant worked for Burroughs in Scotland for 6 years; Nortel/BNR in Canada for 10 years; and Cadence Design Systems for 9 years, eventually becoming a Cadence Fellow in their Labs. He received his Bachelor's and Master's degrees in Mathematics (Combinatorics and Optimisation) from the University of Waterloo, Canada, in 1977 and 1978.
    Grant is a co-author or co-editor of ten books dealing with SoC design, SystemC, UML, modelling, EDA for integrated circuits and system-level design, including the first book on SoC design published in Russian. His most recent book, “ESL Models and their Application: Electronic System Level Design and Verification in Practice”, written with Brian Bailey, was published by Springer in December 2009.
    He was co-chair of the DAC Technical Program Committee for Methods for 2005 and 2006. His particular areas of interest include system-level design, IP-based design of system-on-chip, platform-based design, baseband processing, application-specific instruction set processors, and embedded software. He is a co-editor of the Springer series on embedded systems. Grant is a Senior Member of the IEEE.
  • Title: Towards Secure Acceleration of Neural Network Models at the Edge
    Speaker: Michel Kinsy (Boston University)
    Abstract: Companies, in their push to incorporate artificial intelligence - in particular, machine learning - into their Internet of Things (IoT), system-on-chip (SoC), and automotive applications, will have to address a number of design challenges related to the secure deployment of machine learning models and techniques. Machine learning (ML) models are often trained, at great computational cost, on private datasets that are very expensive to collect or highly sensitive. The models are commonly exposed either through online APIs or in hardware devices deployed in the field or given to end users. This gives adversaries an incentive to steal these ML models as a proxy for gathering the underlying datasets. While API-based model exfiltration has been studied before, the theft and protection of machine learning models on hardware devices have not yet been explored. In this work, we examine this important aspect of the design and deployment of ML models. We illustrate how an attacker may acquire either the model or the model architecture through memory probing, side channels, or crafted input attacks, and propose (1) power-efficient obfuscation as an alternative to encryption, and (2) timing side-channel countermeasures.
    Biography: Michel A. Kinsy is an Assistant Professor in the Department of Electrical and Computer Engineering at Boston University (BU), where he directs the Adaptive and Secure Computing Systems (ASCS) Laboratory. His research focuses on computer architecture, hardware-level security, and neural network accelerator design. Dr. Kinsy is an MIT Presidential Fellow, the 2018 MWSCAS Myril B. Reed Best Paper Award recipient, a DFT'17 Best Paper Award finalist, and an FPL'11 Tools and Open-Source Community Service Award recipient. He earned his PhD in Electrical Engineering and Computer Science in 2013 from the Massachusetts Institute of Technology. His doctoral work on algorithms to emulate and control large-scale power systems at microsecond resolution inspired further research by the MIT spin-off Typhoon HIL, Inc. Before joining the BU faculty, Dr. Kinsy was an assistant professor in the Department of Computer and Information Systems at the University of Oregon, where he directed the Computer Architecture and Embedded Systems (CAES) Laboratory. From 2013 to 2014, he was a Member of the Technical Staff at the MIT Lincoln Laboratory.
  • Title: Compute-in-Memory from CMOS to Beyond-CMOS
    Speaker: Shimeng Yu (Georgia Tech)
    Abstract: Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall problem in machine learning accelerators. Recent advances in deep neural networks (DNNs) have shown that low-precision networks can provide satisfactory accuracy on various image datasets with a significant reduction in computation and memory cost. In this talk, we will present recent work on CIM architectures with parallelized weighted-sum operation for accelerating DNN inference using CMOS and beyond-CMOS technologies: 1) parallel XNOR-SRAM, where a customized 8T-SRAM cell is used as a synapse; 2) parallel XNOR-RRAM, where a customized bit-cell consisting of 2T2R cells is used as a synapse. The impact of process variations has been quantified for a VGG-like network on the CIFAR-10 dataset, and silicon prototype chips have been designed and taped out to validate these CIM architectures. Strategies for extending from 1-bit to multi-bit weights and the scalability to large-scale systems are also discussed. Inference engines based on SRAM and RRAM are benchmarked, showing the benefits of using non-volatile RRAM for better energy efficiency.
    Biography: Shimeng Yu is an associate professor of electrical and computer engineering at the Georgia Institute of Technology in Atlanta, Georgia. He received the B.S. degree in microelectronics from Peking University, Beijing, China in 2009, and the M.S. degree and Ph.D. degree in electrical engineering from Stanford University, Stanford, California, in 2011 and in 2013, respectively. From 2013 to 2018, he was an assistant professor of electrical and computer engineering at Arizona State University, Tempe, Arizona.
    Prof. Yu’s research interests are nanoelectronic devices and circuits for energy-efficient computing systems. His expertise is on the emerging non-volatile memories (e.g., RRAM) for different applications, such as machine/deep learning accelerator, neuromorphic computing, monolithic 3D integration, and hardware security.
    Among Prof. Yu’s honors, he was a recipient of the DOD-DTRA Young Investigator Award in 2015, the NSF Faculty Early CAREER Award in 2016, the ASU Fulton Outstanding Assistant Professor in 2017, the IEEE Electron Devices Society (EDS) Early Career Award in 2017, and the ACM Special Interests Group on Design Automation (SIGDA) Outstanding New Faculty Award in 2018.
    Prof. Yu has served on the Technical Program Committees for many conferences, including the IEEE International Electron Devices Meeting (IEDM), IEEE International Symposium on Circuits and Systems (ISCAS), ACM/IEEE Design Automation Conference (DAC), and ACM/IEEE International Conference on Computer Aided Design (ICCAD). He is a senior member of the IEEE.
  • Title: Security Vulnerabilities in Emerging Machine Learning Systems
    Speaker: Siddharth Garg (NYU)
    Abstract: As ML techniques become more sophisticated, they themselves become vulnerable to attack. These attacks include stealthy training-data poisoning and so-called “adversarial input perturbations,” which have been shown to be particularly pernicious for deep neural networks. For these reasons, there is growing interest in techniques to develop and deploy verifiably safe and secure ML systems, adopting and adapting techniques from the software security domain. A final vulnerability stems from the fact that modern ML systems, and especially deep learning systems, are trained and executed in the cloud, raising concerns about the privacy of the user’s data. New solutions are being developed to address these privacy concerns. The goal of the talk is to introduce students to these emerging security issues.
    Biography: Siddharth Garg received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University in 2009, and a B.Tech. in Electrical Engineering from the Indian Institute of Technology Madras. He joined NYU in Fall 2014 as an Assistant Professor; prior to that, he was an Assistant Professor at the University of Waterloo from 2010 to 2014. His general research interests are in computer engineering, particularly in secure, reliable, and energy-efficient computing.
    In 2016, Siddharth was listed in Popular Science Magazine's annual "Brilliant 10" list of researchers. He has received the NSF CAREER Award (2015) and paper awards at the IEEE Symposium on Security and Privacy (S&P) in 2016, the USENIX Security Symposium in 2013, the Semiconductor Research Corporation TECHCON in 2010, and the International Symposium on Quality Electronic Design (ISQED) in 2009. Siddharth also received the Angel G. Jordan Award from the ECE department of Carnegie Mellon University for outstanding thesis contributions and service to the community. He serves on the technical program committees of several top conferences in computer engineering and computer hardware, and has served as a reviewer for several IEEE and ACM journals. His research interests are in cybersecurity and computer hardware design, with a specific focus on hardware security, low-power design, and computing architectures for machine learning.
  • Title: Give a Winning Presentation: From Idea to Delivery
    Speaker: Laleh Behjat (Calgary)
    Biography: Dr. Laleh Behjat is a Professor in the Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, which she joined in 2002. Dr. Behjat's research focuses on developing EDA techniques for physical design and on the application of large-scale optimization in EDA. Her research team has won several awards, including 1st and 2nd places in the ISPD 2014 and ISPD 2015 High-Performance Routability-Driven Placement Contests and 3rd place in the DAC Design Perspective Challenge in 2015. She is an Associate Editor of the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems and of Springer's Optimization and Engineering. Dr. Behjat has been developing new and innovative methods for teaching Computer Science and EDA; she acted as an academic advisor for the Google Technical Development Guide and has won several awards for her efforts in education, including the 2017 Killam Graduate Student Supervision and Mentorship Award. Her team, the Schulich Engineering Outreach Team, received the ASTech Leadership Excellence in Science and Technology Public Awareness Award in 2017. Her other interests include raising awareness of issues related to diversity and inclusion and promoting diversity in engineering; she received the Women in Engineering and Geoscience Award from APEGA in 2015 in recognition of her work in this area.