Commit a9a4e5d

Merge branch 'master' into two_unique_numbers
2 parents 0d5a797 + 709c18e commit a9a4e5d

File tree

14 files changed: +335 -19 lines changed

.github/workflows/build.yml

Lines changed: 9 additions & 2 deletions

@@ -9,14 +9,21 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
+      - run:
+          sudo apt-get update && sudo apt-get install -y libtiff5-dev libjpeg8-dev libopenjp2-7-dev
+          zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk
+          libharfbuzz-dev libfribidi-dev libxcb1-dev
+          libxml2-dev libxslt-dev
+          libhdf5-dev
+          libopenblas-dev
       - uses: actions/checkout@v5
-      - uses: astral-sh/setup-uv@v6
+      - uses: astral-sh/setup-uv@v7
         with:
           enable-cache: true
           cache-dependency-glob: uv.lock
       - uses: actions/setup-python@v6
         with:
-          python-version: 3.x
+          python-version: 3.14
           allow-prereleases: true
       - run: uv sync --group=test
       - name: Run tests

.github/workflows/directory_writer.yml

Lines changed: 2 additions & 1 deletion

@@ -11,7 +11,8 @@ jobs:
           fetch-depth: 0
       - uses: actions/setup-python@v6
         with:
-          python-version: 3.x
+          python-version: 3.14
+          allow-prereleases: true
       - name: Write DIRECTORY.md
         run: |
           scripts/build_directory_md.py 2>&1 | tee DIRECTORY.md

.github/workflows/project_euler.yml

Lines changed: 20 additions & 4 deletions

@@ -14,21 +14,37 @@ jobs:
   project-euler:
     runs-on: ubuntu-latest
     steps:
+      - run:
+          sudo apt-get update && sudo apt-get install -y libtiff5-dev libjpeg8-dev libopenjp2-7-dev
+          zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk
+          libharfbuzz-dev libfribidi-dev libxcb1-dev
+          libxml2-dev libxslt-dev
+          libhdf5-dev
+          libopenblas-dev
       - uses: actions/checkout@v5
-      - uses: astral-sh/setup-uv@v6
+      - uses: astral-sh/setup-uv@v7
       - uses: actions/setup-python@v6
         with:
-          python-version: 3.x
+          python-version: 3.14
+          allow-prereleases: true
       - run: uv sync --group=euler-validate --group=test
       - run: uv run pytest --doctest-modules --cov-report=term-missing:skip-covered --cov=project_euler/ project_euler/
   validate-solutions:
     runs-on: ubuntu-latest
     steps:
+      - run:
+          sudo apt-get update && sudo apt-get install -y libtiff5-dev libjpeg8-dev libopenjp2-7-dev
+          zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk
+          libharfbuzz-dev libfribidi-dev libxcb1-dev
+          libxml2-dev libxslt-dev
+          libhdf5-dev
+          libopenblas-dev
       - uses: actions/checkout@v5
-      - uses: astral-sh/setup-uv@v6
+      - uses: astral-sh/setup-uv@v7
       - uses: actions/setup-python@v6
         with:
-          python-version: 3.x
+          python-version: 3.14
+          allow-prereleases: true
       - run: uv sync --group=euler-validate --group=test
       - run: uv run pytest scripts/validate_solutions.py
         env:

.github/workflows/ruff.yml

Lines changed: 1 addition & 1 deletion

@@ -12,5 +12,5 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v5
-      - uses: astral-sh/setup-uv@v6
+      - uses: astral-sh/setup-uv@v7
       - run: uvx ruff check --output-format=github .

.github/workflows/sphinx.yml

Lines changed: 9 additions & 2 deletions

@@ -25,11 +25,18 @@ jobs:
   build_docs:
     runs-on: ubuntu-24.04-arm
     steps:
+      - run:
+          sudo apt-get update && sudo apt-get install -y libtiff5-dev libjpeg8-dev libopenjp2-7-dev
+          zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.6-dev tk8.6-dev python3-tk
+          libharfbuzz-dev libfribidi-dev libxcb1-dev
+          libxml2-dev libxslt-dev
+          libhdf5-dev
+          libopenblas-dev
       - uses: actions/checkout@v5
-      - uses: astral-sh/setup-uv@v6
+      - uses: astral-sh/setup-uv@v7
       - uses: actions/setup-python@v6
         with:
-          python-version: 3.13
+          python-version: 3.14
           allow-prereleases: true
       - run: uv sync --group=docs
       - uses: actions/configure-pages@v5

.pre-commit-config.yaml

Lines changed: 2 additions & 2 deletions

@@ -19,7 +19,7 @@ repos:
       - id: auto-walrus

   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.13.2
+    rev: v0.13.3
     hooks:
       - id: ruff-check
       - id: ruff-format
@@ -32,7 +32,7 @@ repos:
           - tomli

   - repo: https://github.com/tox-dev/pyproject-fmt
-    rev: v2.6.0
+    rev: v2.7.0
     hooks:
       - id: pyproject-fmt

DIRECTORY.md

Lines changed: 1 addition & 0 deletions

@@ -195,6 +195,7 @@
     * [Permutations](data_structures/arrays/permutations.py)
     * [Prefix Sum](data_structures/arrays/prefix_sum.py)
     * [Product Sum](data_structures/arrays/product_sum.py)
+    * [Rotate Array](data_structures/arrays/rotate_array.py)
     * [Sparse Table](data_structures/arrays/sparse_table.py)
     * [Sudoku Solver](data_structures/arrays/sudoku_solver.py)
   * Binary Tree

ciphers/gronsfeld_cipher.py

Lines changed: 1 addition & 1 deletion

@@ -20,7 +20,7 @@ def gronsfeld(text: str, key: str) -> str:
     >>> gronsfeld('yes, ¥€$ - _!@#%?', '')
     Traceback (most recent call last):
     ...
-    ZeroDivisionError: integer modulo by zero
+    ZeroDivisionError: division by zero
     """
     ascii_len = len(ascii_uppercase)
     key_len = len(key)
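
Note on the doctest change above: gronsfeld() derives each shift as a character position modulo len(key), so an empty key divides by zero; the expected message was updated, presumably to match the wording used by the newer interpreter targeted in the workflow changes. A minimal sketch (not part of the commit) of the failure mode:

    # Sketch: reproduce the error gronsfeld('...', '') runs into.
    try:
        _ = 0 % len("")  # len("") == 0, so this is modulo by zero
    except ZeroDivisionError as exc:
        # The exact message text depends on the interpreter version,
        # which is why the doctest's expected output had to change.
        print(f"ZeroDivisionError: {exc}")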
data_structures/arrays/rotate_array.py

Lines changed: 80 additions & 0 deletions

@@ -0,0 +1,80 @@
+def rotate_array(arr: list[int], steps: int) -> list[int]:
+    """
+    Rotates a list to the right by steps positions.
+
+    Parameters:
+        arr (List[int]): The list of integers to rotate.
+        steps (int): Number of positions to rotate. Can be negative for left rotation.
+
+    Returns:
+        List[int]: Rotated list.
+
+    Examples:
+    >>> rotate_array([1, 2, 3, 4, 5], 2)
+    [4, 5, 1, 2, 3]
+    >>> rotate_array([1, 2, 3, 4, 5], -2)
+    [3, 4, 5, 1, 2]
+    >>> rotate_array([1, 2, 3, 4, 5], 7)
+    [4, 5, 1, 2, 3]
+    >>> rotate_array([], 3)
+    []
+    """
+
+    n = len(arr)
+    if n == 0:
+        return arr
+
+    steps = steps % n
+
+    if steps < 0:
+        steps += n
+
+    def reverse(start: int, end: int) -> None:
+        """
+        Reverses a portion of the list in place from index start to end.
+
+        Parameters:
+            start (int): Starting index of the portion to reverse.
+            end (int): Ending index of the portion to reverse.
+
+        Returns:
+            None
+
+        Examples:
+        >>> example = [1, 2, 3, 4, 5]
+        >>> def reverse_test(arr, start, end):
+        ...     while start < end:
+        ...         arr[start], arr[end] = arr[end], arr[start]
+        ...         start += 1
+        ...         end -= 1
+        >>> reverse_test(example, 0, 2)
+        >>> example
+        [3, 2, 1, 4, 5]
+        >>> reverse_test(example, 2, 4)
+        >>> example
+        [3, 2, 5, 4, 1]
+        """
+
+        while start < end:
+            arr[start], arr[end] = arr[end], arr[start]
+            start += 1
+            end -= 1
+
+    reverse(0, n - 1)
+    reverse(0, steps - 1)
+    reverse(steps, n - 1)
+
+    return arr
+
+
+if __name__ == "__main__":
+    examples = [
+        ([1, 2, 3, 4, 5], 2),
+        ([1, 2, 3, 4, 5], -2),
+        ([1, 2, 3, 4, 5], 7),
+        ([], 3),
+    ]
+
+    for arr, steps in examples:
+        rotated = rotate_array(arr.copy(), steps)
+        print(f"Rotate {arr} by {steps}: {rotated}")
Lines changed: 178 additions & 0 deletions

@@ -0,0 +1,178 @@
+"""
+t-distributed stochastic neighbor embedding (t-SNE)
+
+For more details, see:
+https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
+"""
+
+import doctest
+
+import numpy as np
+from numpy import ndarray
+from sklearn.datasets import load_iris
+
+
+def collect_dataset() -> tuple[ndarray, ndarray]:
+    """
+    Load the Iris dataset and return features and labels.
+
+    Returns:
+        tuple[ndarray, ndarray]: Feature matrix and target labels.
+
+    >>> features, targets = collect_dataset()
+    >>> features.shape
+    (150, 4)
+    >>> targets.shape
+    (150,)
+    """
+    iris_dataset = load_iris()
+    return np.array(iris_dataset.data), np.array(iris_dataset.target)
+
+
+def compute_pairwise_affinities(data_matrix: ndarray, sigma: float = 1.0) -> ndarray:
+    """
+    Compute high-dimensional affinities (P matrix) using a Gaussian kernel.
+
+    Args:
+        data_matrix: Input data of shape (n_samples, n_features).
+        sigma: Gaussian kernel bandwidth.
+
+    Returns:
+        ndarray: Symmetrized probability matrix.
+
+    >>> x = np.array([[0.0, 0.0], [1.0, 0.0]])
+    >>> probabilities = compute_pairwise_affinities(x)
+    >>> float(round(probabilities[0, 1], 3))
+    0.25
+    """
+    n_samples = data_matrix.shape[0]
+    squared_sum = np.sum(np.square(data_matrix), axis=1)
+    squared_distance = np.add(
+        np.add(-2 * np.dot(data_matrix, data_matrix.T), squared_sum).T, squared_sum
+    )
+
+    affinity_matrix = np.exp(-squared_distance / (2 * sigma**2))
+    np.fill_diagonal(affinity_matrix, 0)
+
+    affinity_matrix /= np.sum(affinity_matrix)
+    return (affinity_matrix + affinity_matrix.T) / (2 * n_samples)
+
+
+def compute_low_dim_affinities(embedding_matrix: ndarray) -> tuple[ndarray, ndarray]:
+    """
+    Compute low-dimensional affinities (Q matrix) using a Student-t distribution.
+
+    Args:
+        embedding_matrix: Low-dimensional embedding of shape (n_samples, n_components).
+
+    Returns:
+        tuple[ndarray, ndarray]: (Q probability matrix, numerator matrix).
+
+    >>> y = np.array([[0.0, 0.0], [1.0, 0.0]])
+    >>> q_matrix, numerators = compute_low_dim_affinities(y)
+    >>> q_matrix.shape
+    (2, 2)
+    """
+    squared_sum = np.sum(np.square(embedding_matrix), axis=1)
+    numerator_matrix = 1 / (
+        1
+        + np.add(
+            np.add(-2 * np.dot(embedding_matrix, embedding_matrix.T), squared_sum).T,
+            squared_sum,
+        )
+    )
+    np.fill_diagonal(numerator_matrix, 0)
+
+    q_matrix = numerator_matrix / np.sum(numerator_matrix)
+    return q_matrix, numerator_matrix
+
+
+def apply_tsne(
+    data_matrix: ndarray,
+    n_components: int = 2,
+    learning_rate: float = 200.0,
+    n_iter: int = 500,
+) -> ndarray:
+    """
+    Apply t-SNE for dimensionality reduction.
+
+    Args:
+        data_matrix: Original dataset (features).
+        n_components: Target dimension (2D or 3D).
+        learning_rate: Step size for gradient descent.
+        n_iter: Number of iterations.
+
+    Returns:
+        ndarray: Low-dimensional embedding of the data.
+
+    >>> features, _ = collect_dataset()
+    >>> embedding = apply_tsne(features, n_components=2, n_iter=50)
+    >>> embedding.shape
+    (150, 2)
+    """
+    if n_components < 1 or n_iter < 1:
+        raise ValueError("n_components and n_iter must be >= 1")
+
+    n_samples = data_matrix.shape[0]
+    rng = np.random.default_rng()
+    embedding = rng.standard_normal((n_samples, n_components)) * 1e-4
+
+    high_dim_affinities = compute_pairwise_affinities(data_matrix)
+    high_dim_affinities = np.maximum(high_dim_affinities, 1e-12)
+
+    embedding_increment = np.zeros_like(embedding)
+    momentum = 0.5
+
+    for iteration in range(n_iter):
+        low_dim_affinities, numerator_matrix = compute_low_dim_affinities(embedding)
+        low_dim_affinities = np.maximum(low_dim_affinities, 1e-12)
+
+        affinity_diff = high_dim_affinities - low_dim_affinities
+
+        gradient = 4 * (
+            np.dot((affinity_diff * numerator_matrix), embedding)
+            - np.multiply(
+                np.sum(affinity_diff * numerator_matrix, axis=1)[:, np.newaxis],
+                embedding,
+            )
+        )
+
+        embedding_increment = momentum * embedding_increment - learning_rate * gradient
+        embedding += embedding_increment
+
+        if iteration == int(n_iter / 4):
+            momentum = 0.8
+
+    return embedding
+
+
+def main() -> None:
+    """
+    Run t-SNE on the Iris dataset and display the first 5 embeddings.
+
+    >>> main() # doctest: +ELLIPSIS
+    t-SNE embedding (first 5 points):
+    [[...
+    """
+    features, _labels = collect_dataset()
+    embedding = apply_tsne(features, n_components=2, n_iter=300)
+
+    if not isinstance(embedding, np.ndarray):
+        raise TypeError("t-SNE embedding must be an ndarray")
+
+    print("t-SNE embedding (first 5 points):")
+    print(embedding[:5])
+
+    # Optional visualization (Ruff/mypy compliant)
+
+    # import matplotlib.pyplot as plt
+    # plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, cmap="viridis")
+    # plt.title("t-SNE Visualization of the Iris Dataset")
+    # plt.xlabel("Dimension 1")
+    # plt.ylabel("Dimension 2")
+    # plt.show()
+
+
+if __name__ == "__main__":
+    doctest.testmod()
+    main()
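
The 0.25 value in the compute_pairwise_affinities doctest above can be checked by hand: the two sample points are one unit apart, so each off-diagonal Gaussian weight is exp(-1/2); normalising makes both off-diagonal entries 0.5, and symmetrising with division by 2 * n_samples gives 0.25. A standalone sketch (assumes only NumPy, mirroring the function above):

    import numpy as np

    weight = np.exp(-1.0 / 2.0)  # Gaussian kernel for squared distance 1, sigma = 1
    affinity = np.array([[0.0, weight], [weight, 0.0]])  # diagonal already zero
    affinity /= affinity.sum()  # each off-diagonal entry becomes 0.5
    p_matrix = (affinity + affinity.T) / (2 * 2)  # symmetrise, divide by 2 * n_samples
    print(round(float(p_matrix[0, 1]), 3))  # 0.25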
