A Stable Numerical Computation


First, we compute the Log-Sum-Exp (LSE) naively in PyTorch.

>>> import torch
>>> x = torch.randn(3, 3) * 100
>>> x.exp().sum(-1, keepdim=True).log()
tensor([[inf],
        [inf],
        ...])

In this example, the intermediate exp(x) overflows float32, so the naive computation returns inf even though the true LSE values are finite.
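To see why the overflow happens, note that float32 tops out near 3.4e38, so exp() overflows once its argument exceeds roughly 88.7 (a quick sketch):

```python
import torch

# log(float32 max) ≈ 88.72, so exp() is finite at 88 but overflows at 89.
print(torch.tensor(88.0).exp())  # large but finite (~1.65e38)
print(torch.tensor(89.0).exp())  # inf
```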

Now we apply the log-sum-exp trick: subtract the row maximum m before exponentiating, which keeps every exponent at or below zero, then add m back using the identity LSE(x) = m + log(Σᵢ exp(xᵢ − m)).

>>> xmax = x.max(-1, keepdim=True)[0]
>>> xmax + (x - xmax).exp().sum(-1, keepdim=True).log()
tensor([...,
        [ 83.7103]])

Finally, we validate the result against PyTorch's built-in implementation, torch.logsumexp.

>>> x.logsumexp(-1, keepdim=True)
tensor([...,
        [ 83.7103]])

As you can see, the outputs match: the trick produces the same values as the built-in, with no overflow.
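The steps above can be wrapped into a small reusable helper. This is a sketch; stable_logsumexp is a name chosen here for illustration, not a PyTorch API:

```python
import torch

def stable_logsumexp(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """LSE via the max-subtraction trick: LSE(x) = m + log(sum(exp(x - m)))."""
    m = x.max(dim, keepdim=True)[0]   # row maxima, shape kept for broadcasting
    return m + (x - m).exp().sum(dim, keepdim=True).log()

x = torch.randn(3, 3) * 100
# The manual version agrees with PyTorch's built-in logsumexp.
print(torch.allclose(stable_logsumexp(x), x.logsumexp(-1, keepdim=True)))  # True
```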





Mansoor Aldosari
