GTC 2018 in San Jose

 


 

When: March 26 - 29, 2018

Where: San Jose, California

Booth: 827

What: NVIDIA’s GPU Technology Conference (GTC) is the premier AI and deep learning event, providing you with training, insights, and direct access to experts in the industry. GTC 2018 will feature 500+ sessions, tutorials, and hands-on programming labs covering the latest breakthroughs in self-driving cars, smart cities, healthcare, big data, high performance computing, virtual reality, and more!

Explore the exhibit hall and connect with technology experts in a one-stop shop for the latest information on GPU-enabled applications, developer tools, and hardware systems.

Discover the latest advances in GPU technology, see how GPUs are creating breakthroughs in important fields, and learn how scientists, developers, engineers, and IT managers are using them to tackle their day-to-day computational and graphics challenges.

Improve your programming skills and hear about exciting innovations through a wide selection of tutorials and programming labs led by industry experts and NVIDIA engineers.

Learn more about this event here.

 


Booth Giveaway - LEGO Star Wars First Order Star Destroyer™!

 


This year at our booth, we are giving away one (1) LEGO Star Wars First Order Star Destroyer™!

 

Visit our booth during exhibit hours for your chance to win!
Just scan your badge, receive your number, and be back at our booth on Thursday, March 29 at 1:00pm for the live draw!
 
Remember - you MUST be at the booth during the draw to win! Good Luck!

 

   

Acceleware Tutorials at GTC

Session 1

Title: An Introduction to CUDA Programming - Presented by Acceleware (Session 1 of 4)

Session ID: TBA

When: TBA

Where: TBA

Presenter: Chris Mason

Audience Level: Beginner/All

Intended audience: This introductory tutorial is intended for those new to CUDA and is the foundation for our following three tutorials.  Those with no previous CUDA experience will leave with essential knowledge to start programming in CUDA.  For those with previous CUDA experience, this tutorial will refresh key concepts required for subsequent tutorials on CUDA optimization. 

Description: Join us for an informative introduction to CUDA programming. The tutorial will begin with a brief overview of CUDA and data-parallelism before focusing on the GPU programming model. We will explore the fundamentals of GPU kernels, host and device responsibilities, CUDA syntax and thread hierarchy. A programming demonstration of a simple CUDA kernel will be delivered. Printed copies of the material will be provided to all attendees for each session - collect all four!
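To give a sense of the fundamentals this session covers (kernels, host/device responsibilities, and the thread hierarchy), here is a minimal sketch of a CUDA vector-add kernel. The kernel and names below are illustrative, not the session's actual demo code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one output element; its global index is derived
// from the block/thread hierarchy.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard threads past the end of the array
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);    // unified memory: visible to host and device
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();         // kernel launches are asynchronous

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The host allocates and initializes the data, chooses a launch configuration, and synchronizes; the device runs one lightweight thread per element.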

 

Session 2

Title: An Introduction to the GPU Memory Model - Presented by Acceleware (Session 2 of 4)

Session ID: TBA

When: TBA

Where: TBA

Presenter: Chris Mason

Audience Level: Beginner/All

Intended audience: This tutorial is for those with a basic understanding of CUDA who want to learn about the GPU memory model and optimal storage locations. New to CUDA? Join us for our first tutorial, An Introduction to CUDA Programming (Session 1), to learn the basics of CUDA programming required for this tutorial.

Description: Explore the memory model of the GPU! This session will begin with an essential overview of the GPU architecture and thread cooperation before focusing on the different memory types available on the GPU. We will define shared, constant and global memory and discuss the best locations to store your application data for optimized performance. Features such as shared memory configurations and Read-Only Data Cache are introduced and optimization techniques discussed. A programming demonstration of shared and constant memory will be delivered.  Printed copies of the material will be provided to all attendees for each session - collect all four!
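A 1D stencil is a common way to illustrate the shared- and constant-memory ideas this session covers. The sketch below is illustrative (the stencil, RADIUS, and names are our own, not the session's demo): read-only coefficients live in constant memory, and each block stages its input tile plus halo in shared memory:

```cuda
#include <cuda_runtime.h>

#define RADIUS 3
#define BLOCK  256

// Stencil coefficients in constant memory: cached and broadcast efficiently
// when all threads in a warp read the same element.
__constant__ float coeff[2 * RADIUS + 1];

__global__ void stencil1d(const float *in, float *out, int n)
{
    // Shared-memory tile: the block's elements plus a halo of RADIUS per side,
    // so each global element is loaded once and reused by neighboring threads.
    __shared__ float tile[BLOCK + 2 * RADIUS];

    int g = blockIdx.x * blockDim.x + threadIdx.x;  // global index
    int l = threadIdx.x + RADIUS;                   // local index in the tile

    if (g < n) tile[l] = in[g];
    if (threadIdx.x < RADIUS) {                     // first RADIUS threads load halos
        tile[l - RADIUS] = (g >= RADIUS)    ? in[g - RADIUS] : 0.0f;
        tile[l + BLOCK]  = (g + BLOCK < n)  ? in[g + BLOCK]  : 0.0f;
    }
    __syncthreads();  // all loads must finish before any thread reads the tile

    if (g < n) {
        float acc = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; ++k)
            acc += coeff[k + RADIUS] * tile[l + k];
        out[g] = acc;
    }
}
```

The host would populate the constant array once with cudaMemcpyToSymbol before launching; the __syncthreads() barrier is what makes the cooperative shared-memory load safe.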

 

Session 3

Title: Asynchronous Operations and Dynamic Parallelism in CUDA - Presented by Acceleware (Session 3 of 4)

Session ID: TBA

When: TBA

Where: TBA

Presenter: Chris Mason

Audience Level: Beginner/All

Intended audience: This tutorial builds on the two previous sessions (An Introduction to CUDA Programming and An Introduction to the GPU Memory Model) and is intended for those with a basic understanding of CUDA programming.

Description: This tutorial dives deep into asynchronous operations and how to maximize throughput on both the CPU and GPU with streams. We will demonstrate how to build a CPU/GPU pipeline and how to design your algorithm to take advantage of asynchronous operations. The second part of the session will focus on dynamic parallelism.

A programming demo involving asynchronous operations will be delivered. Printed copies of the material will be provided to all attendees for each session - collect all four!
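The CPU/GPU pipeline idea can be sketched with streams: split the work into chunks and queue each chunk's copy-in, kernel, and copy-out in its own stream, so one chunk's transfers overlap another's computation. The process kernel and buffer names below are hypothetical placeholders, and the host buffers are assumed pinned:

```cuda
#include <cuda_runtime.h>

// Hypothetical per-chunk kernel; stands in for any real computation.
__global__ void process(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];
}

void pipeline(const float *h_in, float *h_out, float *d_in, float *d_out, int n)
{
    // h_in/h_out must be pinned (cudaMallocHost); pageable copies serialize.
    const int nStreams = 4;
    cudaStream_t streams[nStreams];
    for (int s = 0; s < nStreams; ++s) cudaStreamCreate(&streams[s]);

    int chunk = n / nStreams;  // assume n divides evenly, for brevity
    for (int s = 0; s < nStreams; ++s) {
        int off = s * chunk;
        // Operations within a stream run in order; operations in different
        // streams may overlap, hiding transfers behind computation.
        cudaMemcpyAsync(d_in + off, h_in + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, streams[s]);
        process<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(
            d_in + off, d_out + off, chunk);
        cudaMemcpyAsync(h_out + off, d_out + off, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[s]);
    }
    cudaDeviceSynchronize();   // wait for all streams to drain

    for (int s = 0; s < nStreams; ++s) cudaStreamDestroy(streams[s]);
}
```

With enough chunks, the steady state keeps the copy engines and the compute engine busy at the same time, which is the throughput win the session describes.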

 

Session 4

Title: Essential CUDA Optimization Techniques - Presented by Acceleware (Session 4 of 4)

Session ID: TBA

When: TBA

Where: TBA

Presenter: Chris Mason

Audience Level: Beginner/All

Intended audience: This tutorial is for those with some background in CUDA, including an understanding of the CUDA memory model and the streaming multiprocessor. Our earlier tutorials (An Introduction to CUDA Programming, An Introduction to the GPU Memory Model, and Asynchronous Operations and Dynamic Parallelism in CUDA) provide the background information necessary for this session.

Description: Learn how to optimize your algorithms for NVIDIA GPUs. This informative tutorial will provide an overview of the key optimization strategies for compute-, latency-, and memory-bound problems. The session will include techniques for ensuring peak utilization of CUDA cores by choosing the optimal block size. For compute-bound algorithms, we will discuss how to improve branching efficiency and how to apply intrinsic functions and loop unrolling. For memory-bound algorithms, optimal access patterns for global and shared memory will be presented. Cooperative groups will also be introduced as an additional optimization technique. The session will include code examples throughout and a programming demonstration highlighting the optimal global memory access pattern, which is applicable to all GPU architectures.  Printed copies of the material will be provided to all attendees for each session - collect all four!
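The global memory access pattern the session highlights can be illustrated with two simple copy kernels (a sketch of the general principle, not the session's demo). In the first, consecutive threads touch consecutive addresses, so a warp's 32 accesses coalesce into few memory transactions; in the second, a stride scatters them across cache lines and effective bandwidth drops:

```cuda
#include <cuda_runtime.h>

// Coalesced: thread i reads element i, so a warp's accesses fall in
// contiguous memory and combine into the minimum number of transactions.
__global__ void copyCoalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided: thread i reads element i * stride, spreading a warp's accesses
// across many cache lines; throughput degrades as the stride grows.
__global__ void copyStrided(const float *in, float *out, int n, int stride)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n) out[i] = in[i];
}
```

Timing the two kernels on the same data is a quick way to see why structure-of-arrays layouts, which keep per-thread accesses contiguous, usually outperform array-of-structures layouts on the GPU.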

 

 


Presenters


Chris Mason

Technical Product Manager, Acceleware Ltd.
Chris is the Product Manager for Acceleware’s GPU-accelerated electromagnetic product line. He is responsible for the successful development and launch of Acceleware products used by companies worldwide. Chris has 13 years of experience developing commercial applications for the GPU and has delivered numerous CUDA courses to students in a diverse range of industries. His previous experience also includes parallelization of algorithms on digital signal processors (DSPs) for cellular phones and base stations. Chris holds a Master’s degree in Electrical Engineering from Stanford University.