Get Started With Tensors with PyTorch

In this post, we will walk through how to work with tensors using simple, copy‑paste examples you can use today.

Tensors are the workhorse behind modern AI and numerical computing. Think of them as powerful, N‑dimensional arrays that can live on CPUs or GPUs and support fast math, automatic differentiation, and clean syntax. In this article, we start with a high-level explanation, then move step by step through creating a tensor, indexing it, adding values, and performing common operations you’ll use in real projects.

We’ll use PyTorch because it’s concise, production‑ready, and maps naturally to how engineers think about data. The same ideas apply across frameworks (NumPy, TensorFlow, JAX), but PyTorch keeps the examples clean.

What’s a tensor and why it matters

A tensor generalizes familiar objects:

  • 0-D: scalar (e.g., 3.14)
  • 1-D: vector (e.g., [1, 2, 3])
  • 2-D: matrix (e.g., a spreadsheet)
  • 3-D and beyond: stacks of matrices, images, batches, sequences

Under the hood, a tensor is a block of memory plus metadata: shape (dimensions), dtype (precision), device (CPU or GPU), and sometimes a gradient buffer. Operations call into highly optimized libraries (BLAS, cuBLAS, cuDNN) so the same Python code can run fast on GPU. Autograd (reverse‑mode automatic differentiation) keeps track of operations so you can compute gradients for optimization and training.
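For a quick look at that metadata (assuming PyTorch is already installed; see Setup below), you can inspect a tensor directly:

    import torch

    t = torch.ones(2, 3)   # a 2x3 matrix of ones
    print(t.shape)         # torch.Size([2, 3])
    print(t.dtype)         # torch.float32 (the default floating-point dtype)
    print(t.device)        # cpu (until you move it to a GPU)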

Setup
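All you need is a recent Python and the torch package. A minimal setup (the CPU build is shown; see pytorch.org for CUDA-enabled installs):

    # Install first:
    #   pip install torch

    import torch

    print(torch.__version__)          # installed PyTorch version
    print(torch.cuda.is_available())  # True if a usable GPU is present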

Create tensors

You can create tensors from Python lists, NumPy arrays, or with factory functions.

Tip: use arange for integer steps and linspace for evenly spaced points in a range.
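For example (the values and shapes here are arbitrary):

    import torch
    import numpy as np

    a = torch.tensor([[1, 2], [3, 4]])          # from a nested Python list
    b = torch.from_numpy(np.array([1.0, 2.0]))  # from a NumPy array
    z = torch.zeros(2, 3)                       # 2x3 of zeros
    o = torch.ones(5)                           # vector of ones
    r = torch.randn(3, 3)                       # standard-normal random values

    i = torch.arange(0, 10, 2)       # tensor([0, 2, 4, 6, 8]) -- integer steps
    x = torch.linspace(0.0, 1.0, 5)  # 5 evenly spaced points from 0 to 1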

Indexing and slicing

Torch indexing feels like NumPy: square brackets, slices, masks, and advanced indexing.
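A minimal tour, using a 3x4 tensor as a stand-in for real data:

    import torch

    t = torch.arange(12).reshape(3, 4)  # rows of 0..11

    print(t[0])       # first row
    print(t[1, 2])    # single element (row 1, col 2) -> tensor(6)
    print(t[:, 1])    # second column
    print(t[1:, :2])  # rows 1..end, first two columns

    mask = t > 5
    print(t[mask])            # boolean mask -> tensor([6, 7, 8, 9, 10, 11])
    print(t[[0, 2], [1, 3]])  # advanced indexing -> elements (0,1) and (2,3)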

Adding values and common math

PyTorch supports element‑wise ops, broadcasting, and linear algebra. Out‑of‑place operations return new tensors; in‑place operations modify existing tensors, and their names end with an underscore (like add_).
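For instance:

    import torch

    x = torch.tensor([1.0, 2.0, 3.0])
    y = x + 10   # out-of-place: returns a new tensor
    print(x)     # unchanged: tensor([1., 2., 3.])

    x.add_(10)   # in-place: the trailing underscore means x is mutated
    print(x)     # tensor([11., 12., 13.])

    print(y.sum(), y.mean(), y.max())  # common reductions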

Broadcasting rules: trailing dimensions must match or be 1; PyTorch virtually “stretches” size‑1 dimensions without copying data. If the shapes don’t align, you’ll get a runtime error—check shapes with .shape.
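A small demonstration of broadcasting (shapes chosen for illustration):

    import torch

    m = torch.ones(3, 4)   # shape (3, 4)
    v = torch.arange(4.0)  # shape (4,) -- matches the trailing dimension
    print((m + v).shape)   # (3, 4): v is broadcast across the rows

    col = torch.ones(3, 1)  # the size-1 dimension is stretched virtually
    print((m * col).shape)  # (3, 4)

    # Mismatched shapes raise a RuntimeError:
    # torch.ones(3, 4) + torch.ones(5)  # error -- check .shape first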

Working with shapes

Reshaping is the glue for real workloads. You’ll stack batches, flatten features, reorder channels, and add singleton dimensions.

Use reshape when in doubt: it handles non‑contiguous memory by copying if needed. view avoids any copy but requires contiguous tensors.
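A sketch of the common shape manipulations:

    import torch

    t = torch.arange(24).reshape(2, 3, 4)  # e.g. (batch, rows, cols)

    flat = t.reshape(2, -1)  # flatten features: (2, 12); -1 is inferred
    v = t.view(6, 4)         # no copy, but requires contiguous memory
    p = t.permute(2, 0, 1)   # reorder dimensions: (4, 2, 3)
    u = t.unsqueeze(0)       # add a singleton dimension: (1, 2, 3, 4)
    s = u.squeeze(0)         # remove it again: (2, 3, 4)

    # permute returns a non-contiguous tensor; reshape copies if needed
    flat_p = p.reshape(4, -1)  # works; p.view(4, -1) would raise an error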

Device management and moving data

Switching between CPU and GPU is explicit and simple.
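For example:

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    t = torch.randn(3, 3)  # created on the CPU by default
    t = t.to(device)       # move to the GPU if one is available
    print(t.device)

    result = t @ t           # runs on whichever device t lives on
    back = result.to("cpu")  # bring results back for NumPy, plotting, etc.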

Moving large tensors between CPU and GPU is relatively expensive. Keep tensors on the device where the bulk of computation happens.

NumPy interoperability

PyTorch and NumPy can share memory (zero‑copy) on CPU.
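A quick demonstration that the two sides see the same buffer:

    import torch
    import numpy as np

    arr = np.array([1.0, 2.0, 3.0])
    t = torch.from_numpy(arr)  # zero-copy: shares memory with arr

    arr[0] = 99.0
    print(t)  # tensor([99., 2., 3.], dtype=torch.float64) -- same buffer

    back = t.numpy()  # CPU tensors only; also shares memory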

Autograd in one minute

Autograd builds a computation graph as you operate on tensors with requires_grad=True. Calling backward() computes gradients with respect to those tensors.
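A one-minute example:

    import torch

    x = torch.tensor([2.0, 3.0], requires_grad=True)
    y = (x ** 2).sum()  # y = x0^2 + x1^2; the graph is recorded as you go

    y.backward()   # populates x.grad with dy/dx
    print(x.grad)  # tensor([4., 6.]) -- i.e. 2*x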

Note: avoid in‑place ops on tensors that require grad if those ops are part of the computation graph; they can invalidate history. When in doubt, use out‑of‑place operations or .clone().

Mini workflow example

Let’s put it together: normalize a batch, apply a linear layer, and compute a simple loss.
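Here is a minimal sketch (the shapes, data, and loss are made up for illustration):

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A toy batch: 32 samples with 8 features each
    x = torch.randn(32, 8, device=device)

    # Normalize per feature; broadcasting aligns (32, 8) with (8,)
    x = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-8)

    # A linear layer by hand: weights and bias that track gradients
    w = torch.randn(8, 1, device=device, requires_grad=True)
    b = torch.zeros(1, device=device, requires_grad=True)

    pred = x @ w + b  # matrix multiply plus broadcast add
    target = torch.randn(32, 1, device=device)
    loss = ((pred - target) ** 2).mean()  # mean squared error

    loss.backward()  # gradients for w and b
    print(loss.item(), w.grad.shape, b.grad.shape)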

This snippet demonstrates the common patterns: reductions, broadcasting, matrix multiplies, and autograd, and it runs unchanged on CPU or GPU.

Cheat sheet

  • Create: torch.tensor, zeros, ones, arange, randn
  • Inspect: .shape, .dtype, .device
  • Index: t[i], t[i:j], masks, advanced indexing
  • Math: element‑wise ops, reductions (sum, mean, max), @ for matmul
  • Shapes: reshape, permute, unsqueeze/squeeze, cat, stack
  • Device: .to('cuda') / .to('cpu')
  • Autograd: set requires_grad=True, call backward()
  • I/O: from_numpy and .numpy() (CPU tensors)

Wrap up

Tensors turn math into scalable, production‑ready code. You learned how to create, index, add, reshape, and compute with tensors; how broadcasting and reductions work; and how to use devices and autograd. With these building blocks, you can handle everything from feature engineering to deep learning training loops efficiently.

Next step: paste the snippets into a notebook or script, swap in your own data shapes, and build from there. When you’re ready to scale up, keep your tensors on the right device and let PyTorch’s kernels do the heavy lifting.

