Plenary and Keynote Speakers

Plenary Speakers

Monday, September 26, 09:00–10:00

Jim Bell

Arizona State University, School of Earth and Space Exploration

"Roving on Mars: New Challenges and New Opportunities for Image Processing"

Chair: Lina Karam, Arizona State University

Since 2004, NASA has successfully landed three rovers on Mars: Spirit, Opportunity, and Curiosity, in some of the most ambitious missions of robotic exploration ever attempted. Professor Jim Bell of the School of Earth and Space Exploration at Arizona State University is the President of The Planetary Society and one of the lead scientists in charge of the color camera systems on these rovers. Since the beginning of their missions, he has had an amazing front-row seat for the photographic and geologic adventures of these sophisticated exploring robots. In this presentation, Prof. Bell will share his favorite images and stories from "inside" mission operations, and describe the major technical and scientific image processing and analysis challenges encountered by the team during the six-year adventure of the Spirit rover, the more than 12-year (and counting!) adventure of the Opportunity rover, and most recently the first three-plus years of Curiosity's exploration on Mars. He will share the latest stories, photos, and scientific results from Mars, and will discuss plans for the future exploration of the Red Planet, including details about the new ASU-led "Mastcam-Z" camera system on NASA's planned Mars-2020 rover and its role in returning samples to Earth to better prepare for future human exploration.

Dr. Jim Bell is a Professor in the School of Earth and Space Exploration at Arizona State University in Tempe, Arizona, an Adjunct Professor in the Department of Astronomy at Cornell University in Ithaca, New York, and a Distinguished Visiting Scientist at NASA's Jet Propulsion Laboratory in Pasadena, California. He received his B.S. in Planetary Science and Aeronautics from Caltech, his M.S. and Ph.D. in Geology & Geophysics from the University of Hawaii, and served as a National Research Council postdoctoral research fellow at NASA's Ames Research Center. Jim's research group primarily focuses on the geology, geochemistry, and mineralogy of planets, moons, asteroids, and comets using data obtained from telescopes and spacecraft missions.

Jim is an active planetary scientist and has been heavily involved in many NASA robotic space exploration missions, including the Near Earth Asteroid Rendezvous (NEAR), Mars Pathfinder, Comet Nucleus Tour, Mars Exploration Rovers Spirit and Opportunity, Mars Odyssey Orbiter, Mars Reconnaissance Orbiter, Lunar Reconnaissance Orbiter, and the Mars Science Laboratory Curiosity rover mission. Jim is the lead scientist in charge of the Panoramic camera (Pancam) color, stereoscopic imaging system on the Spirit and Opportunity rovers, is the Deputy Principal Investigator of the Mastcam camera system on the Curiosity rover, and is the Principal Investigator for the Mastcam-Z cameras on NASA's upcoming Mars-2020 rover. As a professional scientist, Jim has published 35 first-authored and more than 180 co-authored research papers in peer-reviewed scientific journals, has authored or co-authored nearly 600 short abstracts and scientific conference presentations, and has edited or co-edited two scientific books for Cambridge University Press (one on the NEAR mission, "Asteroid Rendezvous"; the other on Mars, "The Martian Surface: Composition, Mineralogy, and Physical Properties"). He has been an active user of the Hubble Space Telescope and of a number of ground-based telescopes, including several at Mauna Kea Observatory in Hawaii.

Jim is also an extremely active and prolific public communicator of science and space exploration, and is President of The Planetary Society, the world's largest public space education and advocacy organization. He is a frequent contributor to popular astronomy and science magazines like Sky & Telescope, Astronomy, and Scientific American, and to radio shows and internet blogs about astronomy and space. He has appeared on television on the NBC "Today" show, on CNN's "American Morning," on the PBS "NewsHour," and on the Discovery, National Geographic, Wall St. Journal, and History channels. He has also written many photography-oriented books that showcase some of the most spectacular images acquired during the space program: Postcards from Mars (Dutton/Penguin, 2006), Mars 3-D (Sterling, 2008), Moon 3-D (Sterling, 2009), and The Space Book (Sterling, 2013). Jim's latest book is "The Interstellar Age: Inside the Forty-Year Voyager Mission" (Dutton, 2015). Jim has a main-belt asteroid named after him (8146 Jimbell). He and his teammates have received more than a dozen NASA Group Achievement Awards for work on space missions, and he was the recipient of the 2011 Carl Sagan Medal from the American Astronomical Society for excellence in public communication in planetary sciences.

Tuesday, September 27, 09:00–10:00

Cordelia Schmid

INRIA Research Director

"Automatic Understanding of the Visual World"

Chair: Michael Marcellin, University of Arizona

One of the central problems of artificial intelligence is machine perception, i.e., the ability to understand the visual world based on input from sensors such as cameras. Computer vision is the research area that analyzes such visual input.

In this talk, I will present recent progress in visual understanding. This progress is largely due to the design of robust visual representations and learned models that capture the variability of the visual world, built on state-of-the-art machine learning techniques, including convolutional neural networks. It has resulted in technology for a variety of applications. In particular, I will present results for human action recognition.
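
For readers unfamiliar with the convolutional networks mentioned above, the core operation is a bank of learned filters slid across the image. The sketch below is a minimal, illustrative NumPy version of a single convolutional layer with a ReLU nonlinearity; the filters are random here, whereas a trained network would learn them from data.

```python
import numpy as np

def conv2d(image, kernels, stride=1):
    """Valid 2D cross-correlation of one grayscale image with a bank of
    filters -- the building block of a convolutional neural network."""
    kh, kw = kernels.shape[1:]
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((len(kernels), oh, ow))
    for k, kernel in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[k, i, j] = np.sum(patch * kernel)
    return np.maximum(out, 0.0)  # ReLU nonlinearity

image = np.random.rand(32, 32)             # toy input image
kernels = np.random.randn(8, 3, 3) * 0.1   # 8 random (untrained) 3x3 filters
features = conv2d(image, kernels)
print(features.shape)                      # (8, 30, 30): one map per filter
```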

Cordelia Schmid holds an M.S. degree in Computer Science from the University of Karlsruhe and a Doctorate, also in Computer Science, from the Institut National Polytechnique de Grenoble (INPG). Her doctoral thesis, "Local Greyvalue Invariants for Image Matching and Retrieval," received the best thesis award from INPG in 1996. She received the Habilitation degree in 2001 for her thesis entitled "From Image Matching to Learning Visual Models". Dr. Schmid was a postdoctoral research assistant in the Robotics Research Group at Oxford University in 1996--1997. Since 1997 she has held a permanent research position at INRIA Grenoble Rhône-Alpes, where she is a research director and directs an INRIA team. Dr. Schmid is the author of over a hundred technical publications. She has been an Associate Editor for IEEE PAMI (2001--2005) and for IJCV (2004--2012), Editor-in-Chief for IJCV (2013--present), a program chair of IEEE CVPR 2005 and ECCV 2012, as well as a general chair of IEEE CVPR 2015. In 2006 and 2014, she was awarded the Longuet-Higgins Prize for fundamental contributions in computer vision that have withstood the test of time. She is a Fellow of the IEEE. She was awarded an ERC Advanced Grant in 2013 and the Humboldt Research Award in 2015.

Wednesday, September 28, 09:00–10:00

Henrique 'Rico' Malvar

Chief Scientist, Microsoft Research

"Media Information Processing for Computing Systems"

Chair: Sethuraman Panchanathan, Arizona State University

We present an overview of new technologies for visual and audio information processing, with emphasis on applications to computing and information technology systems. We will discuss new scenarios for visual information processing, new kinds of user interfaces, and advances in speech and language processing. Many of those technologies are enabled by developments in new computing architectures, streaming data processing, and deep neural networks, which are related to the rapid growth in new technologies for the efficient communication, storage, and analytics on big data. By combining advances in computing and signal/image processing and related areas, computing devices and cloud systems can now deliver new and enhanced user experiences.

Henrique (Rico) Malvar is a Microsoft Distinguished Engineer and the Chief Scientist for Microsoft Research. He was born and raised in Brazil. Before moving to industry in 1993, he was a professor of electrical engineering at the University of Brasília, Brazil. Rico was Vice President of Research and Advanced Technologies at PictureTel from 1993 to 1997. He joined Microsoft in 1997, where he started a signal processing research group that developed new technologies such as new media compression formats used in Windows, Xbox, and Office; microphone array processing technologies used in Windows, Tablet PCs, and Xbox Kinect; and machine learning technologies for music identification in Windows Media, junk mail filtering in Exchange, and others. The group also developed the first prototype of the RoundTable videoconferencing device. Rico was a key architect for several media compression formats, such as WMA and HD Photo/JPEG XR, and made key contributions to the popular video format H.264, used by YouTube, Netflix, digital TV, and many other applications. Rico received a Ph.D. in electrical engineering and computer science from the Massachusetts Institute of Technology in 1986. His technical interests include multimedia signal compression and enhancement, fast algorithms, multi-rate filter banks, and multi-resolution and wavelet transforms. He has over 160 publications and over 115 issued patents in those areas. He received the Young Scientist Award from the Marconi International Fellowship in 1981, was elected a Fellow of the IEEE in 1997, received the Technical Achievement Award from the IEEE Signal Processing Society in 2002, and was elected a Member of the U.S. National Academy of Engineering (NAE) in 2012. He is also a Member of the Brazilian Academy of Sciences and the Brazilian National Academy of Engineering.

Innovation Program Keynote Speakers

Tuesday, September 27, 12:30–17:10


Michael Antonov

Co-Founder, Oculus

Keynote Title: Bringing People Closer Through Virtual Reality

Abstract: Virtual reality lets us experience anything, anywhere. This unique potential positions VR to become the next major computing platform. Now, with the increased prevalence of 360° cameras, immersive videos and 360° photo experiences are accessible to more consumers around the world. The latest advancements in image-capture hardware and high-end VR headsets, for both PC and mobile, make it possible for people everywhere to connect in powerful new ways.

Oculus is poised to expand its mission of true immersion and human connectivity over the next several years. Michael Antonov, Chief Software Architect at Oculus, will walk you through some of the technical challenges and solutions we've encountered on our VR journey so far, and share some details and thoughts about what's next for VR hardware, its capabilities, and the future of what "social" means in VR.

Speaker Bio: Michael's professional career started after meeting Brendan Iribe at the University of Maryland and co-founding Scaleform, a user interface technology company for games. As Scaleform CTO, Michael led software development, working on GPU-accelerated vector graphics and integrating them into 3D engines. By the time Scaleform was sold to Autodesk in 2011, it was the leading game UI solution, shipping in hundreds of titles.

Michael fell in love with virtual reality when he met Palmer Luckey in 2012 and became the Chief Software Architect of Oculus VR. There, he put together the Oculus software team, led development of the DK1/DK2 software stack, and focused on the challenge of stable, low-cost positional tracking, as well as the interaction between tracking, sensor fusion, and optimized rendering to achieve the lowest latency and the greatest feeling of presence.

Achin Bhowmik

VP, Intel

"Intel® RealSenseTM Technology: Adding Human-Like Sensing and Interactions to Devices"

Abstract: The world of intelligent and interactive systems is undergoing a revolutionary transformation. With rapid advances in natural sensing and perceptual computing technologies, devices are being endowed with abilities to "sense", "understand", and "interact" with us and the physical world. This keynote will describe and demonstrate Intel® RealSense™ Technology, which is enabling a new class of applications based on real-time 3D sensing, including interactive computing devices, autonomous machines such as robots and drones, as well as immersive mixed-reality devices, blurring the border between the real and the virtual worlds.

Speaker Bio: Dr. Achin Bhowmik is vice president and general manager of the perceptual computing group at Intel, where he leads the development and deployment of Intel® RealSense™ Technology. His responsibilities include creating and growing new businesses in the areas of interactive computing systems, immersive virtual reality devices, autonomous robots and unmanned aerial vehicles.

Previously, he served as chief of staff of the personal computing group, Intel's largest business unit with over $30B in revenue. Prior to that, he led the development of advanced video and display processing technologies for Intel's computing products. His earlier work includes liquid-crystal-on-silicon microdisplay technology and integrated electro-optical devices. As an adjunct and guest professor, Dr. Bhowmik has advised graduate research and taught courses at the Liquid Crystal Institute of Kent State University, Stanford University, the University of California, Berkeley, Kyung Hee University in Seoul, and the Indian Institute of Technology Gandhinagar. He has more than 100 publications, including two books, and more than 100 granted and pending patents. He is a Fellow of the Society for Information Display (SID) and serves on the board of directors of OpenCV, the organization behind the open-source computer vision library.

Bill Dally

SVP, NVIDIA

Keynote Title: GPU Computing from CUDA to Deep Learning

Abstract: The CUDA programming system enables programmers to apply the tremendous computational power of GPUs to a variety of tasks. Enabled by CUDA, GPUs now power the fastest supercomputers in the US and Europe and have enabled the recent revolution in deep learning. This talk will trace the history of CUDA from stream processing research at Stanford to the present.
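
CUDA's programming model, in brief, launches thousands of lightweight threads, each handling one data element. As a hedged illustration only (not material from the talk), here is a minimal thread-per-element sketch written in Python via Numba's CUDA bindings rather than CUDA C; it assumes the numba package and an NVIDIA GPU are available.

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)      # this thread's global index
    if i < out.size:      # guard: the grid may overshoot the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads = 256
blocks = (n + threads - 1) // threads   # enough blocks to cover all elements
saxpy[blocks, threads](2.0, x, y, out)  # Numba moves the arrays to the device
assert np.allclose(out, 2.0 * x + y)
```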

Speaker Bio: Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing, and synchronization technology found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered "wormhole" routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is an author of two textbooks. Dally received a bachelor's degree in Electrical Engineering from Virginia Tech, a master's in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He is a cofounder of Velio Communications and Stream Processors.

John Harding

VP, Google

Keynote Title: The Promise of YouTube

Abstract: In 2005, two friends stood in front of the elephant pen at the San Francisco Zoo and filmed the very first clip that would appear on their new video-hosting website. The video was utterly unremarkable: 19 seconds of unsteady footage shot on a camcorder in low definition. But after the video went live on their new site called YouTube, media and entertainment would never be the same. All of a sudden, anyone in the world could share a video with everyone in the world. You didn't have to audition at a casting call; you didn't have to pitch a screenplay to an executive; you didn't have to beam a signal into people's homes; and you didn't need a budget. With YouTube, you suddenly had access to free and instant global distribution. With over a billion viewers around the world visiting YouTube every single month, it has taken unbelievable feats of engineering to keep that promise alive. But every year more content is uploaded (400 hours every single minute), higher-quality and more complex formats are supported (4K, HDR, 360°, VR), and more video is served to more users around the world. And we're just getting started.

Speaker Bio: John Harding is the VP of Engineering for Emerging Experiences at YouTube where he leads the Engineering efforts for Emerging Markets, Gaming, Kids, Living Room, Music, and VR. He joined YouTube shortly after it was acquired by Google, and has worked on most aspects of the product over the years. Prior to Google, John worked at Microsoft on Internet Explorer and Xbox.

Tim Milliron

VP, Lytro

"How Light Field will Revolutionize Image Capture and Playback"

Abstract: For over 100 years, imaging technology has been flat. We've captured and displayed the color and brightness of light with a myriad of technologies: film, print, digital, still and moving, but always as two-dimensional images. Light Field technology promises to revolutionize this most basic aspect of imaging. Very soon, we will view and capture imagery in three dimensions, capturing the play of light and depth in any scene. In this talk, I will discuss how this transformation from flat to volumetric imaging will impact media, consumer experiences, and ultimately the way we think about imagery itself.

Speaker Bio: Tim Milliron is Vice President of Engineering at Lytro, where he leverages his broad experience in computer graphics and cloud computing to drive engineering across hardware and software.

Tim began his career at Pixar, where he first specialized in large procedural set pieces and character rigging for films like Toy Story 2, Monsters, Inc., and Finding Nemo. He led the characters and crowds group for Cars, which built hundreds of unique characters for the film as well as the software used to animate and simulate them. After Cars, Tim led software development for Pixar's next-generation character articulation, animation, and simulation systems, first used on Brave and now used studio-wide. Most recently, Tim spent four and a half years at Twilio, serving in senior leadership roles in engineering and product and tackling the scaling challenges of a hypergrowth cloud startup. During his tenure, Tim helped grow Twilio's team, revenue, and infrastructure more than 10x.

Anthony Park

VP, Netflix

"Netflix - Inventing Internet TV"

Abstract: Entertainment and technology are continuing to transform each other as they have been doing for over a hundred years. Netflix has been a pioneer in inventing Internet TV over the last decade. We can now put consumers across the world in the driver’s seat when it comes to how, when, and where they watch. In this talk, I will discuss some of the technology Netflix has brought to millions of consumers to make Internet TV a reality, including new user experiences, video streaming, and personalized recommendations.

Speaker Bio: Anthony Park is VP of Engineering at Netflix and is responsible for video streaming on consumer devices like smart TVs, set-top boxes, phones, laptops, and game consoles. With over 20 years of software engineering experience, he's spent the last eight years at Netflix implementing and improving video streaming on a variety of devices. Recently, Anthony has helped bring innovations like 4K and HDR to millions of Netflix customers around the world. Anthony has a Master of Engineering (MEng) and a Master of Business Administration (MBA) from Arizona State University.

Jamie Shotton

Co-Inventor of Kinect, Microsoft

"The Impact of Visual Innovations: Kinect"

Abstract: Microsoft launched Kinect for Xbox 360 in 2010. Kinect combined a depth-sensing camera with novel machine learning algorithms to bring full-body, controller-free motion gaming to the living room. Kinect broadened the world of Xbox gaming to a new consumer audience and garnered a Guinness World Record as the fastest-selling consumer electronics device in history. But the creation of a market for consumer-grade depth cameras has arguably had even greater impact: Kinect and other depth sensors have become indispensable tools in widespread use in companies and research labs across the world. This short talk will tell the story of the original Kinect product and highlight some of the exciting advances that Kinect is enabling in 3D scanning, mixed reality, healthcare, and more.
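
For a flavor of the machine learning behind Kinect's body tracking: the published approach (Shotton et al., CVPR 2011) classifies each pixel of a depth image into a body part using randomized decision forests built on simple depth-comparison features. Below is a minimal, illustrative Python sketch of one such feature; the offsets and toy depth map are hypothetical values for illustration, not numbers from the product.

```python
import numpy as np

def depth_feature(depth, x, y, u, v):
    """Depth-comparison feature in the style of Shotton et al. (CVPR 2011):
    f = d(p + u/d(p)) - d(p + v/d(p)).
    Offsets u, v are scaled by 1/depth at the pixel, making the feature
    roughly invariant to how far the person stands from the camera."""
    d = depth[y, x]
    def probe(offset):
        ox, oy = offset
        px, py = int(x + ox / d), int(y + oy / d)  # depth-normalized probe
        if 0 <= py < depth.shape[0] and 0 <= px < depth.shape[1]:
            return depth[py, px]
        return 1e6  # off-image probes read as "very far" background
    return probe(u) - probe(v)

# Toy scene: flat background at 4 m with a closer "person" at 2 m.
depth = np.full((240, 320), 4.0)
depth[60:200, 120:200] = 2.0
# A decision-tree node would threshold features like this one; here the
# right-hand probe falls off the body, giving a large positive response.
print(depth_feature(depth, x=160, y=120, u=(200.0, 0.0), v=(0.0, 0.0)))
```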

Speaker Bio: Jamie Shotton leads the Machine Intelligence & Perception group at Microsoft Research Cambridge. He studied Computer Science at the University of Cambridge, where he remained for his PhD in computer vision and machine learning for visual object recognition. He joined Microsoft Research in 2008, where he is now a Principal Researcher. His research focuses on the intersection of computer vision, AI, machine learning, and graphics, with particular emphasis on systems that allow people to interact naturally with computers. He has received multiple Best Paper and Best Demo awards at top academic conferences. His work on machine learning for Kinect was awarded the Royal Academy of Engineering's MacRobert Award (gold medal) in 2011, and he shares Microsoft's Outstanding Technical Achievement Award for 2012 with the Kinect product team. In 2014 he received the PAMI Young Researcher Award, and in 2015 the MIT Technology Review Innovator Under 35 Award ("TR35").

C.-C. Jay Kuo

Dean’s Professor, University of Southern California

"Data-Driven Perceptual Coding: A Collaborative Example between Academia and Industry"

Abstract: There has been significant progress in image/video coding over the last 50 years, and many visual coding standards, including JPEG, MPEG-1, MPEG-2, H.264/AVC, and H.265, have been established in the last three decades. The visual coding research field has reached a mature stage, and the question "is there anything left for image/video coding?" has arisen in recent years. To address this question, we need to examine the visual coding problem from a new angle: a data-driven approach based on human subjective test results. In particular, I will describe a new methodology that uses a just-noticeable-difference (JND) approach to measure subjective visual experience and takes a statistical approach to characterize the joint visual experiences of a test group. This new methodology builds a bridge between the traditional visual coding problem and modern big data analytics. I have collaborated with a couple of companies on this problem, and will use this example to discuss challenges and tips for academia-industry collaboration on visual innovation.
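
To make the statistical idea concrete, here is a hedged Python sketch of estimating a group JND point from subjective test votes. The vote data, the QP grid, and the 75% "satisfied" threshold are illustrative assumptions for the sketch, not details from the talk.

```python
import numpy as np

# Hypothetical subjective test: each row is one viewer, each column a
# progressively coarser quantization parameter (QP); entry 1 means the
# viewer could NOT see a difference from the reference at that QP.
qps = np.array([22, 27, 32, 37, 42])
votes = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 0, 0],
])

transparent = votes.mean(axis=0)  # fraction of viewers seeing no difference
threshold = 0.75                  # illustrative "group satisfied" level
ok = np.where(transparent >= threshold)[0]
group_jnd_qp = qps[ok.max()]      # coarsest QP still transparent to 75%
print(f"group JND at QP {group_jnd_qp} "
      f"({transparent[ok.max()]:.0%} of viewers see no difference)")
```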

Speaker Bio: Dr. C.-C. Jay Kuo received his Ph.D. degree from the Massachusetts Institute of Technology in 1987. He is now with the University of Southern California (USC) as Director of the Media Communications Laboratory and Dean’s Professor in Electrical Engineering-Systems. His research interests are in the areas of digital media processing, compression, communication and networking technologies. Dr. Kuo was the Editor-in-Chief for the IEEE Trans. on Information Forensics and Security in 2012-2014. He was the Editor-in-Chief for the Journal of Visual Communication and Image Representation in 1997-2011, and served as Editor for more than 10 other international journals. Dr. Kuo was the recipient of the Electronic Imaging Scientist of the Year Award in 2010 and the holder of the 2010-2011 Fulbright-Nokia Distinguished Chair in Information and Communications Technologies. He also received the USC Associates Award for Excellence in Teaching, the IEEE Computer Society Taylor L. Booth Education Award, the IEEE Circuits and Systems Society John Choma Education Award, and the IS&T Raymond C. Bowman Award in 2016. Dr. Kuo is a Fellow of AAAS, IEEE and SPIE. Dr. Kuo has guided 134 students to their Ph.D. degrees and supervised 25 postdoctoral research fellows. He is a co-author of about 250 journal papers, 900 conference papers and 14 books.

Hanno Basse

CTO, 20th Century Fox

Keynote Title: Market implementation of HDR technology – a case study

Abstract: 20th Century Fox worked with Samsung and other leading CE companies to introduce displays with High Dynamic Range capability into the consumer market. This presentation will discuss how studio and CE representatives collaborated to develop display as well as content mastering requirements. It will also describe the benefits of starting such collaboration at a very early stage, in order to ensure that CE product development and the creation of matching content are aligned, and that products and content are introduced to the market at the same time.

Speaker Bio: Hanno Basse, chief technology officer (CTO) at 20th Century Fox Film Corp., oversees technology strategy and engineering, including home entertainment, theatrical distribution, and postproduction. At Fox, Hanno and his team of engineers are developing new distribution methods, are working on next-generation entertainment technologies like High Dynamic Range and Ultra-HD as well as interactive platforms, and are involved with many other initiatives, including Content Protection, Immersive Audio, etc. He earlier spent more than 14 years at DIRECTV, ultimately as senior vice president of broadcast systems engineering, with accomplishments including the successful 2005 launch of the largest HD channel rollout to date and the 2009 implementation of DIRECTV's video-on-demand infrastructure, as well as significant contributions to DIRECTV's broadcast infrastructure and construction of its Los Angeles Broadcast Center. Hanno began his career in 1991 as a scientist-engineer at the Institut für Rundfunktechnik (IRT) in Munich, Germany, and worked as a systems engineer at ProSieben Media AG, also in Germany. He has been awarded 22 patents and was named a Fellow of the Society of Motion Picture and Television Engineers in 2014. Hanno currently serves as the president and chairman of the board of the UHD Alliance, an organization that brings together major content, consumer electronics, and distribution companies with the goal of defining a next-generation premium audio-visual experience.

Bo Begole

VP, Huawei

Keynote Title: Responsive Media in the Future of Thinking Machines

Abstract: Perception and cognition technologies have evolved to the point where systems need not simply react to user input; they can now proactively deliver personalized media that responds dynamically to the user's attention, engagement, and context: Responsive Media. Media experiences will be dramatically changed by the next generation of these technologies embedded into smartphones, VR goggles, robots, smart homes, and vehicles, so that they not only sense the audience's engagement in real time but can also predict disengagement and prevent it by dynamically shifting the content to appeal to an individual's preferences, emotional state, and situation. Media technologies no longer simply deliver entertainment: imagine robots that can sense a child's frustration and actively assist with homework, digital assistants that do not interrupt inappropriately, semi-autonomous vehicles that use media to maximize driver engagement, and other intelligent media experiences. Responsive media will be more like an engaging conversation among humans than passive consumption. This talk will paint a picture and challenge the audience to identify the remaining technology barriers, architectures, business ecosystems, threats, and, yes, killer applications.

Speaker Bio: Dr. Bo Begole is VP and Global Head of Huawei Technologies' Media Lab, whose mission is to create the future of networked media technologies and user experiences through innovations in ultra-high-efficiency compression, computer vision/hearing, augmented/virtual reality, full-field communications, and personalized, responsive media. Previously, he was a Sr. Director at Samsung Electronics' User Experience Center America, where he directed a team developing new contextually intelligent services for wearable, mobile, and display devices. Prior to that, he was a Principal Scientist and Area Manager at Xerox PARC, where he directed the Ubiquitous Computing research program, creating behavior-modeling technologies, responsive media, and intelligent mobile agents. An inventor with 30 issued patents, he is also the author of Ubiquitous Computing for Business (FT Press, 2011) and dozens of peer-reviewed research papers. Dr. Begole is an ACM Distinguished Scientist, is active in many research conferences, and was co-chair of the 2015 ACM Conference on Human Factors in Computing Systems (CHI 2015) in Seoul, Korea. Dr. Begole received a Ph.D. in computer science from Virginia Tech in 1998.

Raj Talluri

SVP, Qualcomm

Keynote Title: Future Innovations in Visual Processors for Embedded Vision Applications

Abstract: In the last couple of decades, we have seen tremendous advances in processors for visual computing. This has led to an explosion in the use of computer vision in many embedded applications, including self-driving cars, virtual reality headsets, smart cameras, and autonomous robots. This talk will highlight some of the key innovations in the area of visual processors, drill deeper into what future innovations to expect, and consider the impact of these processing innovations on future vision applications.

Speaker Bio: Raj Talluri serves as senior vice president of product management for Qualcomm, where he is currently responsible for managing the IoT, mobile computing, and Qualcomm Snapdragon Sense ID 3D fingerprint technology businesses. Prior to this role, he was responsible for product management of Qualcomm Snapdragon application processor technologies. Talluri has more than 20 years of experience spanning business management, strategic marketing, and engineering management. He has published more than 35 journal articles, papers, and book chapters in many leading electrical engineering publications. Raj Talluri was ranked No. 5 on Fast Company's 2014 list of the 100 Most Creative People in Business.

Susie Wee

VP and CTO of DevNet Innovations and Cloud Experiences, Cisco

Keynote Title: The Next Wave of Visual Innovation with Visual Microservices and the Internet of Things

Abstract: Visual technologies have made tremendous advances over the last few decades: from QCIF and CIF resolution video conferencing in the 1990s, to video streaming and HDTV in the 2000s, and to widespread use of mobile video, 4K video, high-dynamic-range video, and augmented and virtual reality now in the 2010s. Each of these innovations required technology advancements in the full stack, including video capture and display, compression, streaming, and the network itself. With a proper deployment of today's technologies, it is feasible that within the decade the 7 billion people in the world will be able to create and consume video. In parallel with advances in video technologies, there have also been tremendous advancements in the development and deployment of software, with open source, app stores, virtualization, DevOps, and more recently containers and microservices.

The next wave of visual innovation will be driven by the need to capture, deliver, and analyze video that exceeds what 7 billion people can consume. Video is no longer captured just for viewing by people; it will be captured for the purpose of extracting information and making intelligent decisions. With advances in the Internet of Things and machine-to-machine communication, video will be increasingly used for sensing, automation, surveillance, and event detection. Analytics will be used to extract intelligence from captured video streams to make decisions that reach far beyond the video application itself. These applications require the global network to carry and process not billions of video streams, but trillions. This next chapter of visual innovation once again requires full-stack technology advancements and a network architecture that allows video to be captured and processed at the edge of the network, along with an application framework that allows the flexible deployment of visual microservices.

Speaker Bio: Susie is the Vice President and Chief Technology Officer of Networked Experiences and DevNet at Cisco Systems. She is the founder and lead of DevNet, Cisco's developer program, which aims to make the evolving Internet an innovation platform for the developer ecosystem. Susie and her team are developing UX and technology innovations that improve the operational experience, end-user experience, and developer experience with the network. They are developing technologies and systems for the Internet of Things, software-defined networking, augmented collaboration and co-creation, and network visualization. Prior to this, Susie was the Vice President and Chief Technology and Experience Officer of Cisco's Collaboration Technology Group, where she was responsible for driving innovation and experience design in Cisco's collaboration products and software services, including unified communications, telepresence, web and video conferencing, and cloud collaboration.

Susie received Technology Review's Top 100 Young Innovators award, ComputerWorld's Top 40 Innovators Under 40 award, the INCITS Technical Excellence award, and the Women In Technology International Hall of Fame award, and was on the Forbes Most Powerful Women list. She is an IEEE Fellow for her contributions in multimedia technology and has over 50 international publications and over 45 granted patents. Susie received her B.S., M.S., and Ph.D. degrees from the Massachusetts Institute of Technology.
