Hash tables provide a fast way to maintain a set of keys or map keys to values, even if the keys are objects, like strings. They are such a ubiquitous tool in computer science that even incremental improvements can have a large impact. The potential for optimization led to a proliferation of hash table implementations [...]
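The set and map operations the post describes can be sketched with Python's built-in `dict` and `set`, which are themselves hash tables (F14 itself is a C++ library; this is only an illustration of the data structure, not of F14's API):

```python
# dict and set are hash tables: average O(1) insert and lookup,
# even with object keys such as strings.
counts = {}
for word in ["apple", "pear", "apple"]:
    counts[word] = counts.get(word, 0) + 1

print(counts["apple"])  # 2

seen = set(counts)      # a hash *set* over the same string keys
print("pear" in seen)   # True
```

Improvements like F14's aim to make exactly these operations faster and more memory-efficient without changing how the container is used.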
The post Open-sourcing F14 for faster, more memory-efficient hash tables appeared first on Facebook Code.
The time an interaction takes to go from the user input event that triggered it (such as clicking a button or typing in a box) to being completely rendered is an important web performance metric. At Facebook, we measure events at four stages: the moment the operating system receives the input, the moment we [...]
The post Faster input events with Facebook’s first browser API contribution appeared first on Facebook Code.
Thousands of engineers write the code to create our apps, which serve billions of people worldwide. This is no trivial task—our services have grown so diverse and complex that the codebase contains millions of lines of code intersecting with a wide variety of systems, from messaging to image rendering. To simplify and speed [...]
The post Aroma: Using machine learning for code recommendation appeared first on Facebook Code.
We recently hosted our first-ever London Facebook Engineering Fair. The invitation-only event brought together software engineers, product managers, UX researchers, data scientists, academics, and others working in the technology industry. Several London-based teams gave attendees an exclusive behind-the-scenes look at the products and technologies our U.K. engineers are building. Director of Product for [...]
WHAT’S NEW: The first experimental back end for our Glow compiler and runtime project, designed to target Habana’s existing hardware accelerator. This back end is the first of several customized for various vendors’ accelerators. Learn more about how hardware makers are working with our compiler and future plans for Glow. WHY IT MATTERS: Glow’s open source [...]
WHAT IT IS: A new tool from Facebook AI Research that enables training of multi-relation graph embeddings for very large graphs. PyTorch-BigGraph (PBG) handles graphs with billions of nodes and trillions of edges. Since PBG is written in PyTorch, researchers and engineers can easily swap in their own loss functions, models, and other components. Read [...]
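To illustrate the kind of component a user might swap in, here is a dependency-free sketch of a margin ranking loss of the sort commonly used for training graph embeddings. PBG's real components are PyTorch modules; the function names and the dot-product scoring here are illustrative assumptions, not PBG's actual API:

```python
def dot(u, v):
    """Score an edge as the dot product of two embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Penalize sampled negative edges scored within `margin`
    # of observed positive edges.
    return max(0.0, margin - pos_score + neg_score)

# Toy 2-d embeddings for one source node and two candidate destinations.
src = [0.5, 1.0]
true_dst = [0.5, 1.0]    # observed edge -> should score high
fake_dst = [0.1, -0.2]   # sampled negative -> should score low

pos = dot(src, true_dst)  # 1.25
neg = dot(src, fake_dst)  # -0.15
print(margin_ranking_loss(pos, neg))  # 0.0: already well separated
```

In PBG itself such a loss would operate on batched tensors so gradients flow back into the node and relation embeddings; being plain PyTorch is what makes replacing it straightforward.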
The post PyTorch-BigGraph: Faster embeddings of extremely large graphs appeared first on Facebook Code.