# Test Markdown
0001-01-01
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua, exploring new frontiers in machine learning and developing research assistants capable of traversing complex citation graphs to generate in-depth literature surveys.
## Text Formatting
You can make text **bold**, *italic*, or ***both***. You can also use ~~strikethrough~~.
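For reference, this is the Markdown source behind that line (standard emphasis syntax, with `~~` strikethrough from the GFM extension):

```markdown
You can make text **bold**, *italic*, or ***both***. You can also use ~~strikethrough~~.
```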
## Lists
### Unordered List
- One does not simply make a list
- Type shii
### Ordered List
1. Wake up
2. Realize you’re surrounded by flames
3. Say “This is fine”
## Links
## Images

## Blockquotes
> I don’t always use blockquotes, but when I do, I prefer Dos Equis.
## Code
Inline code: `U, S, V = np.linalg.svd(A)`
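That one-liner computes a full singular value decomposition. As a hedged aside (a minimal NumPy sketch of my own; the matrix `A` is an arbitrary example, and note that NumPy actually returns the third factor already transposed, as `Vh`):

```python
import numpy as np

# Factor A into U @ diag(S) @ Vh, then rebuild it to confirm.
A = np.array([[3.0, 1.0], [1.0, 3.0]])
U, S, Vh = np.linalg.svd(A)

A_rebuilt = U @ np.diag(S) @ Vh
assert np.allclose(A, A_rebuilt)
print(S)  # singular values, sorted in descending order
```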
Code block:

```python
import agi
agi.do_stuff()
```
## Tables
| You | The guy she tells you not to worry about |
| --- | --- |
| O_o | ( ͡° ͜ʖ ͡°) |
| ¯\_(ツ)_/¯ | ᕦ(ò_óˇ)ᕤ |
## Horizontal Rule
---

(ok)
## Footnotes
Here’s a sentence with a footnote.[^1]
## Emoji
❄️
## Math (Bonus)
Inline math: $E = mc^2$
Display math:
$$
\begin{aligned}
\nabla \times \vec{\mathbf{B}} - \frac{1}{c}\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}} + \frac{1}{c}\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{aligned}
$$

## Longer Code (Extra Bonus)
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


# Define a neural network that's way too complicated for the task
class OverengineeredMNIST(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.attention = nn.MultiheadAttention(embed_dim=64 * 7 * 7, num_heads=8)
        self.classifier = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512),
            nn.ReLU(),
            nn.Dropout(0.5),  # For that academic paper feel
            nn.Linear(512, 10),
        )
        print("Initialized a model that could recognize galaxies but will only see digits")

    def forward(self, x):
        features = self.encoder(x)
        # Unnecessary attention mechanism for MNIST
        attn_output, _ = self.attention(
            features.view(-1, 1, 64 * 7 * 7),
            features.view(-1, 1, 64 * 7 * 7),
            features.view(-1, 1, 64 * 7 * 7),
        )
        return self.classifier(attn_output.squeeze(1))


# Prepare to train on a dataset that could be loaded in 5 lines
def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if str(device) == "cpu":
        print("GPU not found. Preparing excuses for slow training...")

    # Transform that could be just ToTensor()
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
        transforms.RandomRotation(5),  # Because we're fancy
        transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),
    ])

    # Load MNIST with excessive batch size
    train_loader = DataLoader(
        datasets.MNIST('data', train=True, download=True, transform=transform),
        batch_size=512,
        shuffle=True,
        num_workers=4,
        pin_memory=True,
    )

    model = OverengineeredMNIST().to(device)
    optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-5)
    # Never actually stepped, but it makes the script look rigorous
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=3)
    criterion = nn.CrossEntropyLoss()

    print("Starting training (prepare your excuses for why accuracy is only 98%)")

    # Train for 1 epoch because who has time for more
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % 100 == 0:
            print(f"Loss: {loss.item():.4f}, Progress: {batch_idx}/{len(train_loader)}")
            # Only meaningful (and safe on older PyTorch builds) when CUDA is present
            if torch.cuda.is_available():
                print(f"GPU Memory: {torch.cuda.max_memory_allocated()/1e9:.2f} GB (if you're lucky)")

    print("Training complete! Time to overfit on the validation set to feel better.")


if __name__ == "__main__":
    main()
```
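The script stops right after training, so the accuracy jokes go unverified. Below is a minimal evaluation sketch of my own (not part of the original script); it assumes the `model`, `device`, and normalization constants defined above:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def evaluate(model, device):
    # No augmentation at eval time: just ToTensor + the same normalization.
    test_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),
    ])
    test_loader = DataLoader(
        datasets.MNIST('data', train=False, download=True, transform=test_transform),
        batch_size=512,
    )
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            correct += (model(data).argmax(dim=1) == target).sum().item()
    print(f"Test accuracy: {correct / len(test_loader.dataset):.4f}")
```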
[^1]: Go back mf