Implementation

The Ackermann Function: Taming the Wildest Recursion in Computer Science
·2775 words·14 mins
The Ackermann function is a deceptively simple algorithm that stands as a landmark in theoretical computer science. Defined by a concise set of recursive rules, it generates numerical values that grow at a rate faster than any primitive recursive function, quickly reaching magnitudes that are physically incomputable. While its naive implementation serves as a classic example of a recursion depth stress test, its true importance is historical and philosophical.
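For readers who want the concrete definition before opening the full post, here is a minimal Python sketch of the standard two-argument Ackermann–Péter recursion (the article itself may use a different formulation or implementation):

```python
def ackermann(m: int, n: int) -> int:
    """Two-argument Ackermann-Peter function, defined by three recursive rules."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small inputs are tame; ackermann(4, 1) already blows past Python's default
# recursion limit, and ackermann(4, 2) has tens of thousands of digits.
print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```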
ResNet Overview and Implementation
·2612 words·13 mins
An overview of the ResNet model and the seminal paper, Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, which won the Best Paper award at CVPR 2016. It is one of the most influential and fundamental papers in the history of deep learning for computer vision.
VGGNet Overview
·1820 words·9 mins
VGGNet is a famous deep learning model used in computer vision, the field concerned with teaching computers to understand images. It was created by researchers at the Visual Geometry Group (VGG) at the University of Oxford. Since its debut in 2014, VGGNet has become one of the key models that helped advance how machines see and recognize objects in photos. At its core, VGGNet is designed to look at an image and decide what is in it.