docs: add all untracked content

This commit is contained in:
cheeks 2025-06-05 13:04:03 +00:00
parent 430ef868af
commit 0a7bbc10d5
7 changed files with 946 additions and 1 deletions

View File

@@ -0,0 +1,200 @@
---
title: Guide for Docker Organization
description: AI Written (local llama3.2:3b model)
published: true
date: 2025-06-03T11:52:58.719Z
tags: llama3.2:3b, llamavista
editor: markdown
dateCreated: 2025-06-03T11:50:29.981Z
---
Guide: Best Practices for Docker Organization
====================================================================
Introduction
------------
Docker provides a powerful way to manage and deploy applications using containers.
However, as your containerized application grows in complexity, managing and
maintaining it can become overwhelming. This guide outlines best practices for
organizing and maintaining Docker containers, including data structure, naming
conventions, updating containers, and improving ease of use.
**Data Structure**
-----------------
1. **Create a clear directory structure**: Organize your project into logical
directories, such as `docker-compose`, `config`, `data`, `logs`, and `images`.
2. **Use a consistent naming convention**: Use a consistent naming scheme for
containers, images, and volumes to make it easier to identify and manage them.
3. **Store sensitive data securely**: Store sensitive data, such as database
credentials or API keys, in environment variables or secure storage solutions like
Hashicorp's Vault.
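For the sensitive-data point, a minimal sketch that keeps credentials in an `.env` file (as in the example directory structure below) instead of in the compose file itself:
```yml
# Sketch: read secrets from an .env file that is excluded from version control
services:
  db:
    image: postgres
    env_file:
      - .env   # contains e.g. POSTGRES_PASSWORD=..., never committed
```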
**Container Naming Conventions**
------------------------------
1. **Use a clear naming scheme**: Use a consistent naming scheme for containers, such
as `app-name-service-name` or `app-name-version`.
2. **Avoid using special characters**: Avoid using special characters in container
names to prevent issues with shell commands and file system permissions.
3. **Keep it concise**: Keep container names concise and descriptive to make them
easier to identify.
**Updating Containers Regularly**
-------------------------------
1. **Regularly update dependencies**: Use tools like `pip` or `npm` to regularly
update dependencies in your containers.
2. **Use Docker Compose's built-in updates**: Use Docker Compose's built-in features,
such as `docker-compose pull`, to update images and containers.
3. **Automate testing**: Automate testing of updated containers to ensure they
function correctly.
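A rough sketch of that update workflow:
```bash
# Sketch: refresh images and recreate any container whose image changed
docker-compose pull      # fetch newer versions of the images referenced in docker-compose.yml
docker-compose up -d     # recreate only the services whose image was updated
docker image prune -f    # optionally remove the now-dangling old image layers
```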
**Improving Ease of Use**
-----------------------
1. **Use Docker Compose commands**: Use Docker Compose commands, such as `docker-compose up -d` and
`docker-compose down`, to automate starting and stopping containers.
2. **Create a `docker-compose.yml` file**: Create a `docker-compose.yml` file that
defines your containerized application and automates its deployment and management.
**Example Directory Structure**
------------------------------
```bash
my-app/
|---- docker-compose.yml
|---- config/
| |---- database.properties
|---- data/
| |---- logs/
|---- images/
| |---- app-image:latest
|---- logs/
|---- .env
```
This directory structure includes a clear separation of concerns, with separate
directories for configuration files, data storage, and container images.
**Example `docker-compose.yml` File**
-----------------------------------
```yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/mydb
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
```
This `docker-compose.yml` file defines two services, `app` and `db`, with clear
dependencies and environment variables.
**Guide 2: Best Practices for Docker Security, Networking, Updating, and Monitoring**
=====================================================================================
Introduction
------------
Docker provides a powerful way to manage and deploy applications using containers.
However, as your containerized application grows in complexity, managing and
maintaining it can become overwhelming. This guide outlines best practices for
securing, networking, updating, and monitoring Docker containers.
**Security Best Practices**
-------------------------
1. **Use secure protocols**: Use secure protocols, such as HTTPS, to protect data
transmitted between containers and the outside world.
2. **Implement access controls**: Run containers as a non-root user where possible, for example with
`docker-compose run -u <uid>` or the `user:` key in `docker-compose.yml`, so a compromised container has limited access to sensitive data.
3. **Regularly update dependencies**: Regularly update dependencies in your
containers to ensure you have the latest security patches.
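For the access-control point, a minimal compose sketch of running a service as a non-root user (the UID/GID `1000:1000` is an assumption; use an account that exists on your host):
```yml
services:
  app:
    build: .
    user: "1000:1000"   # run the service as an unprivileged UID:GID instead of root
    read_only: true     # optional: mount the container filesystem read-only
```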
**Networking Best Practices**
---------------------------
1. **Use a network for communication**: Use a Docker network for communication
between containers to isolate them and prevent unauthorized access.
2. **Configure firewall rules**: Configure firewall rules to restrict incoming and
outgoing traffic to specific ports and protocols.
3. **Use a reverse proxy**: Use a reverse proxy, such as NGINX or Apache, to protect
your application from external attacks.
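For the firewall point, a hedged `ufw` sketch that only exposes SSH and the reverse-proxy ports to the outside world (adjust the ports to your setup):
```bash
sudo ufw default deny incoming   # drop everything not explicitly allowed
sudo ufw allow 22/tcp            # SSH for administration
sudo ufw allow 80/tcp            # HTTP, terminated by the reverse proxy
sudo ufw allow 443/tcp           # HTTPS, terminated by the reverse proxy
sudo ufw enable
```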
**Updating Containers Regularly**
-------------------------------
1. **Regularly update dependencies**: Use tools like `pip` or `npm` to regularly
update dependencies in your containers.
2. **Use Docker Compose's built-in updates**: Use Docker Compose's built-in features,
such as `docker-compose pull`, to update images and containers.
3. **Automate testing**: Automate testing of updated containers to ensure they
function correctly.
**Monitoring Containers**
-----------------------
1. **Use Docker's built-in logging**: Use Docker's built-in logging feature to
monitor container logs.
2. **Install monitoring tools**: Install monitoring tools, such as Prometheus and
Grafana, to track key metrics and performance indicators.
3. **Set up alerts and notifications**: Set up alerts and notifications to notify you
of issues or anomalies in your application.
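For the built-in tooling in point 1, a couple of commands already cover the basics (the container name is illustrative):
```bash
docker logs -f --tail 100 my-app   # follow the last 100 log lines of one container
docker stats --no-stream           # one-shot CPU / memory / network snapshot of all running containers
```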
**Example Docker Network**
-------------------------
```yml
version: '3'
networks:
  app-network:
    driver: bridge
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - app-network
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - app-network
```
This Docker network configuration defines a bridge network for communication between
containers.
**Example Prometheus Configuration**
---------------------------------
```yml
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'app'
    scrape_interval: 10s
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:8080']
```
This Prometheus configuration sets a scrape interval of 10 seconds and scrapes metrics from
`localhost:8080`.

View File

@@ -0,0 +1,106 @@
---
title: Docker Security / Monitoring / Maintenance
description: AI Written (llama3.2:3b)
published: true
date: 2025-06-03T11:54:26.507Z
tags: ollama, llama3.2:3b, llamavista
editor: markdown
dateCreated: 2025-06-03T11:54:24.932Z
---
**Guide: Best Practices for Docker Security, Networking, Updating, and Monitoring**
=====================================================================================
Introduction
------------
Docker provides a powerful way to manage and deploy applications using containers.
However, as your containerized application grows in complexity, managing and
maintaining it can become overwhelming. This guide outlines best practices for
securing, networking, updating, and monitoring Docker containers.
**Security Best Practices**
-------------------------
1. **Use secure protocols**: Use secure protocols, such as HTTPS, to protect data
transmitted between containers and the outside world.
2. **Implement access controls**: Run containers as a non-root user where possible, for example with
`docker-compose run -u <uid>` or the `user:` key in `docker-compose.yml`, so a compromised container has limited access to sensitive data.
3. **Regularly update dependencies**: Regularly update dependencies in your
containers to ensure you have the latest security patches.
**Networking Best Practices**
---------------------------
1. **Use a network for communication**: Use a Docker network for communication
between containers to isolate them and prevent unauthorized access.
2. **Configure firewall rules**: Configure firewall rules to restrict incoming and
outgoing traffic to specific ports and protocols.
3. **Use a reverse proxy**: Use a reverse proxy, such as NGINX or Apache, to protect
your application from external attacks.
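For the reverse-proxy point, a rough NGINX sketch (the hostname and upstream port are assumptions):
```
server {
    listen 80;
    server_name app.example.com;            # hypothetical hostname
    location / {
        proxy_pass http://127.0.0.1:8080;   # forward to the app container's published port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```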
**Updating Containers Regularly**
-------------------------------
1. **Regularly update dependencies**: Use tools like `pip` or `npm` to regularly
update dependencies in your containers.
2. **Use Docker Compose's built-in updates**: Use Docker Compose's built-in features,
such as `docker-compose pull`, to update images and containers.
3. **Automate testing**: Automate testing of updated containers to ensure they
function correctly.
**Monitoring Containers**
-----------------------
1. **Use Docker's built-in logging**: Use Docker's built-in logging feature to
monitor container logs.
2. **Install monitoring tools**: Install monitoring tools, such as Prometheus and
Grafana, to track key metrics and performance indicators.
3. **Set up alerts and notifications**: Set up alerts and notifications to notify you
of issues or anomalies in your application.
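Docker's default `json-file` logging driver can also rotate logs so containers don't fill the disk; a compose sketch:
```yml
services:
  app:
    build: .
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a log file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```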
**Example Docker Network**
-------------------------
```yml
version: '3'
networks:
  app-network:
    driver: bridge
services:
  app:
    build: .
    ports:
      - "8080:8080"
    networks:
      - app-network
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    networks:
      - app-network
```
This Docker network configuration defines a bridge network for communication between
containers.
**Example Prometheus Configuration**
---------------------------------
```yml
global:
  scrape_interval: 10s
scrape_configs:
  - job_name: 'app'
    scrape_interval: 10s
    metrics_path: '/metrics'
    static_configs:
      - targets: ['localhost:8080']
```
This Prometheus configuration sets a scrape interval of 10 seconds and scrapes metrics from
`localhost:8080`.

home/homelab/WoLAddrs.md Normal file
View File

@@ -0,0 +1,15 @@
---
title: Wake on LAN Addresses
description:
published: true
date: 2025-06-02T20:04:56.549Z
tags:
editor: markdown
dateCreated: 2025-06-02T02:28:12.160Z
---
# Wake on LAN Addresses
<br></br>
| Computer Name | MAC Address |
|:----: | :----: |
| BinarySage | 2c:f0:5d:99:d2:ef |

View File

@@ -0,0 +1,486 @@
---
title: Full Guide - Install Ubuntu Server and Configure Ollama with CodeGemma and Phi-3 Mini
description: AI Project
published: true
date: 2025-06-01T20:09:17.154Z
tags: ai, guide, walk-through, ubuntu server, server, ollama
editor: markdown
dateCreated: 2025-06-01T20:09:15.206Z
---
# Install Ubuntu Server and Configure Ollama with CodeGemma and Phi-3 Mini
This guide provides step-by-step instructions to set up a headless **Ubuntu Server 24.04 LTS** on a PC with the following specs, install **Ollama** with **CodeGemma 7B** for user `arti` (Python coding assistance) and **Phi-3 Mini (3.8B)** for user `phixr` (system administration tasks), and restrict each user's SSH access to their respective interactive AI session:
- **GPU**: Radeon RX 6600 (8 GB VRAM)
- **CPU**: AMD Ryzen 7 2700 (8 cores, ~3.2 GHz)
- **RAM**: 64 GB (2133 MT/s)
- **Storage**: 465.8 GB NVMe SSD (`nvme0n1`), 2x 931.5 GB SSDs (`sda`, `sdb`)
The setup is command-line only, with no desktop environment or window manager, and assumes you're replacing any existing OS (e.g., Proxmox). Both models use Q4_K_M quantization to fit within 8 GB VRAM and <20 GB disk space, leveraging ROCm for GPU acceleration.
---
## Step 1: Prepare for Ubuntu Server Installation
Let's prepare to install Ubuntu Server on your NVMe SSD, replacing any existing OS.
1. **Download Ubuntu Server 24.04 LTS**:
- On another computer, download the ISO:
```bash
wget https://releases.ubuntu.com/24.04/ubuntu-24.04-live-server-amd64.iso
```
- Or download manually from [ubuntu.com](https://ubuntu.com/download/server).
- Verify the ISO:
```bash
sha256sum ubuntu-24.04-live-server-amd64.iso
```
- Check the hash against [Ubuntu's checksums](https://releases.ubuntu.com/24.04/).
2. **Create a Bootable USB Drive**:
- Use a USB drive (≥4 GB). Identify it with:
```bash
lsblk
```
- Write the ISO (replace `/dev/sdX` with your USB device):
```bash
sudo dd if=ubuntu-24.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress && sync
```
- **Warning**: Double-check `/dev/sdX` to avoid overwriting other drives.
- Alternatively, use Rufus (Windows) or Etcher (cross-platform).
3. **Backup Existing Data**:
- If replacing Proxmox or another OS, back up data to an external drive or another system:
```bash
scp -r /path/to/data user@other-machine:/destination
```
4. **Boot from USB**:
- Insert the USB, reboot, and enter the BIOS (usually `Del` or `F2`).
- Set the USB as the first boot device.
- Save and reboot to start the Ubuntu installer.
---
## Step 2: Install Ubuntu Server 24.04 LTS
Let's install Ubuntu Server on the NVMe SSD (`nvme0n1`, 465.8 GB).
1. **Start the Installer**:
- Select “Install Ubuntu Server”.
- Set language (English), keyboard layout, and network (DHCP or static IP).
2. **Configure Storage**:
- Choose “Custom storage layout”.
- Partition `nvme0n1`:
- **EFI Partition**: 1 GB, `fat32`, mount at `/boot/efi`.
- **Root Partition**: 464.8 GB, `ext4`, mount at `/`.
- Example (in installer):
- Select `nvme0n1`, create partitions as above.
- Write changes and confirm.
- Optional: Use `sda` or `sdb` (931.5 GB SSDs) for additional storage (e.g., mount as `/data`); a sketch for formatting and mounting one of them follows after step 5 below.
3. **Set Up Users and SSH**:
- Set hostname (e.g., `ai-server`).
- Create an admin user (e.g., `admin`):
- Username: `admin`
- Password: Set a secure password.
- Enable “Install OpenSSH server”.
- Skip importing SSH keys unless needed.
4. **Complete Installation**:
- Select no additional packages (Ollama and ROCm will be installed later).
- Finish and reboot.
5. **Verify Boot**:
- Remove the USB, boot into Ubuntu, and log in as `admin` via a local terminal or SSH:
```bash
ssh admin@<server-ip>
```
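As noted in the storage step above, the extra SSDs can also be formatted and mounted after installation. A rough sketch, assuming `sda` is the disk you want to dedicate to `/data` (verify with `lsblk` first; these commands erase it):
```bash
lsblk                     # confirm which disk is which before touching anything
sudo mkfs.ext4 /dev/sda   # WARNING: destroys all existing data on sda
sudo mkdir -p /data
echo '/dev/sda /data ext4 defaults 0 2' | sudo tee -a /etc/fstab   # a UUID from `blkid` is more robust
sudo mount -a
df -h /data               # confirm the new mount
```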
---
## Step 3: Install AMD ROCm for Radeon RX 6600
Let's set up ROCm to enable GPU acceleration for Ollama.
1. **Update System**:
```bash
sudo apt update && sudo apt upgrade -y
```
2. **Add ROCm Repository**:
- Install dependencies and add ROCm 5.7:
```bash
sudo apt install -y wget gnupg
wget -qO - https://repo.radeon.com/rocm/rocm.gpg.key | sudo apt-key add -
echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.7 ubuntu main' | sudo tee /etc/apt/sources.list.d/rocm.list
```
3. **Install ROCm**:
```bash
sudo apt update
sudo apt install -y rocm-libs rocminfo
```
4. **Verify ROCm**:
- Reboot:
```bash
sudo reboot
```
- Check GPU:
```bash
rocminfo
```
- Look for “Navi 23 [Radeon RX 6600]”.
- Check VRAM:
```bash
rocm-smi --showmeminfo vram
```
- Expect ~8192 MB.
5. **Troubleshooting**:
- If no GPU is detected, verify:
```bash
lspci | grep -i vga
```
- Try ROCm 5.6:
```bash
echo 'deb [arch=amd64] https://repo.radeon.com/rocm/apt/5.6 ubuntu main' | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update && sudo apt install -y rocm-libs
```
---
## Step 4: Install Ollama and Models
Let's install Ollama and download both **CodeGemma 7B** and **Phi-3 Mini**.
1. **Install Ollama**:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
- Verify:
```bash
ollama --version
```
2. **Pull CodeGemma 7B**:
- Download Q4_K_M (~4.2 GB):
```bash
ollama pull codegemma:7b
```
- Verify:
```bash
ollama list
```
- Expect `codegemma:7b` (q4_k_m).
3. **Test CodeGemma**:
- Run:
```bash
ollama run codegemma:7b
```
- Prompt: “Debug: `x = [1, 2]; print(x[2])`.”
- Expected: “Check the list's length with `len(x)`.”
- Exit: `Ctrl+D`.
4. **Pull Phi-3 Mini**:
- Download Q4_K_M (~2.3 GB):
```bash
ollama pull phi3:mini
```
- Verify:
```bash
ollama list
```
- Expect `phi3:mini` (q4_k_m).
5. **Test Phi-3 Mini**:
- Run:
```bash
ollama run phi3:mini
```
- Prompt: “Walk me through configuring a firewall.”
- Expected: “Install `ufw` with `sudo apt install ufw`. Enable with `sudo ufw enable`.”
- Exit: `Ctrl+D`.
6. **Verify GPU Usage**:
- During a session, check:
```bash
rocm-smi
```
- CodeGemma: ~5-6 GB VRAM.
- Phi-3 Mini: ~3.5-4.5 GB VRAM.
7. **Enable Ollama Service**:
```bash
sudo systemctl enable ollama
sudo systemctl start ollama
```
- Verify:
```bash
systemctl status ollama
```
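As an extra sanity check, the Ollama server listens on `localhost:11434` by default and exposes a small HTTP API:
```bash
# List the locally installed models via the API (expects codegemma:7b and phi3:mini)
curl -s http://localhost:11434/api/tags
```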
---
## Step 5: Configure User `arti` for CodeGemma 7B
Let's restrict `arti`'s SSH access to an interactive CodeGemma 7B session for Python coding.
1. **Create User `arti`**:
```bash
sudo adduser arti
```
- Set a secure password, optional details (e.g., full name: “Artificial Intelligence”).
2. **Restrict Home Directory**:
```bash
sudo chown arti:arti /home/arti
sudo chmod 700 /home/arti
```
- Verify:
```bash
ls -ld /home/arti
```
- Expect: `drwx------ arti arti`
3. **Create Shell Script**:
```bash
sudo nano /usr/local/bin/ollama-shell
```
- Add:
```bash
#!/bin/bash
echo "Starting CodeGemma 7B interactive session..."
/usr/bin/ollama run codegemma:7b
```
- Save and exit.
- Make executable:
```bash
sudo chmod +x /usr/local/bin/ollama-shell
sudo chown root:root /usr/local/bin/ollama-shell
sudo chmod 755 /usr/local/bin/ollama-shell
```
4. **Set Shell**:
```bash
sudo usermod -s /usr/local/bin/ollama-shell arti
```
- Verify:
```bash
getent passwd arti
```
- Expect: `arti:x:1000:1000:,,,:/home/arti:/usr/local/bin/ollama-shell`
5. **Add GPU Access**:
```bash
sudo usermod -a -G render arti
```
6. **Restrict SSH**:
```bash
sudo nano /etc/ssh/sshd_config
```
- Add:
```bash
Match User arti
ForceCommand /usr/local/bin/ollama-shell
```
- Restart SSH:
```bash
sudo systemctl restart sshd
```
7. **Limit Permissions**:
```bash
sudo usermod -G nogroup,render arti   # keep the render group from step 5 so GPU access still works
```
8. **Test SSH**:
```bash
ssh arti@<server-ip>
```
- Expect: `Starting CodeGemma 7B interactive session...`
- Prompt: “Debug: `x = '5'; y = 3; print(x + y)`.”
- Expected: “Check types with `type(x)`.”
- Exit: `Ctrl+D` (terminates SSH).
- Try: `ssh arti@<server-ip> bash` (should fail).
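Optionally, the `Match User arti` block can be extended to also disable forwarding for the restricted account; a sketch of the full block in `/etc/ssh/sshd_config`:
```bash
Match User arti
    ForceCommand /usr/local/bin/ollama-shell
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY yes
```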
---
## Step 6: Configure User `phixr` for Phi-3 Mini
Let's restrict `phixr`'s SSH access to a Phi-3 Mini session for system administration.
1. **Create User `phixr`**:
```bash
sudo adduser phixr
```
- Set password, optional details (e.g., full name: “Phi-3 System Admin”).
2. **Restrict Home Directory**:
```bash
sudo chown phixr:phixr /home/phixr
sudo chmod 700 /home/phixr
```
- Verify:
```bash
ls -ld /home/phixr
```
- Expect: `drwx------ phixr phixr`
3. **Create Shell Script**:
```bash
sudo nano /usr/local/bin/ollama-phi3-shell
```
- Add:
```bash
#!/bin/bash
echo "Starting Phi-3 Mini interactive session..."
/usr/bin/ollama run phi3:mini
```
- Save and exit.
- Make executable:
```bash
sudo chmod +x /usr/local/bin/ollama-phi3-shell
sudo chown root:root /usr/local/bin/ollama-phi3-shell
sudo chmod 755 /usr/local/bin/ollama-phi3-shell
```
4. **Set Shell**:
```bash
sudo usermod -s /usr/local/bin/ollama-phi3-shell phixr
```
- Verify:
```bash
getent passwd phixr
```
- Expect: `phixr:x:1001:1001:,,,:/home/phixr:/usr/local/bin/ollama-phi3-shell`
5. **Add GPU Access**:
```bash
sudo usermod -a -G render phixr
```
6. **Restrict SSH**:
```bash
sudo nano /etc/ssh/sshd_config
```
- Add (below `Match User arti`):
```bash
Match User phixr
ForceCommand /usr/local/bin/ollama-phi3-shell
```
- Restart SSH:
```bash
sudo systemctl restart sshd
```
7. **Limit Permissions**:
```bash
sudo usermod -G nogroup,render phixr   # keep the render group from step 5 so GPU access still works
```
8. **Test SSH**:
```bash
ssh phixr@<server-ip>
```
- Expect: `Starting Phi-3 Mini interactive session...`
- Prompt: “Walk me through installing pfSense.”
- Expected: “Download the ISO from pfsense.org. Create a USB with `dd if=pfSense.iso of=/dev/sdX bs=4M`.”
- Exit: `Ctrl+D` (terminates SSH).
- Try: `ssh phixr@<server-ip> bash` (should fail).
---
## Step 7: Optimize and Troubleshoot
Let's ensure optimal performance and address potential issues.
1. **Performance Optimization**:
- **CodeGemma 7B**: ~5-6 GB VRAM, ~8-12 tokens/second. Good for Python debugging.
- **Phi-3 Mini**: ~3.5-4.5 GB VRAM, ~10-15 tokens/second. Ideal for system administration guidance.
- **Prompting**:
- `arti`: “Debug this Python code: [snippet].”
- `phixr`: “Walk me through [task] step-by-step.”
- **Temperature**: For precise responses, set temperature to 0.2:
- For CodeGemma:
```bash
nano ~/.ollama/models/codegemma-modelfile
```
Add:
```
FROM codegemma:7b
PARAMETER temperature 0.2
```
Create:
```bash
ollama create codegemma-lowtemp -f ~/.ollama/models/codegemma-modelfile
```
Update `/usr/local/bin/ollama-shell` to use `ollama run codegemma-lowtemp`.
- For Phi-3 Mini:
```bash
nano ~/.ollama/models/phi3-modelfile
```
Add:
```
FROM phi3:mini
PARAMETER temperature 0.2
```
Create:
```bash
ollama create phi3-lowtemp -f ~/.ollama/models/phi3-modelfile
```
Update `/usr/local/bin/ollama-phi3-shell` to use `ollama run phi3-lowtemp`.
2. **Troubleshooting**:
- **No Session**:
- Check scripts:
```bash
ls -l /usr/local/bin/ollama-shell /usr/local/bin/ollama-phi3-shell
cat /usr/local/bin/ollama-shell
cat /usr/local/bin/ollama-phi3-shell
```
- **GPU Issues**: If slow (~1-5 tokens/second), verify ROCm:
```bash
rocminfo
rocm-smi --showmeminfo vram
```
- Reinstall ROCm 5.6/5.7 if needed.
- **Shell Access**: If `arti` or `phixr` access Bash:
```bash
getent passwd arti
getent passwd phixr
```
- Confirm shells. Re-run `usermod -s`.
- **SSH Errors**:
```bash
sudo systemctl status sshd
```
- Restart: `sudo systemctl restart sshd`.
---
## Expected Performance
- **Hardware Fit**: CodeGemma (~5-6 GB VRAM, ~4.2 GB disk) and Phi-3 Mini (~3.5-4.5 GB VRAM, ~2.3 GB disk) fit your Radeon RX 6600, Ryzen 7 2700, 64 GB RAM, and 465.8 GB NVMe SSD.
- **Use Case**:
- `arti`: Guides Python coding/debugging (e.g., “Check your list index with `len()`”).
- `phixr`: Provides detailed system administration instructions (e.g., “Download pfSense ISO, then use `dd`”).
- **Speed**: CodeGemma (~8-12 tokens/second), Phi-3 Mini (~10-15 tokens/second). Responses in ~1-2 seconds.
- **Restriction**: `arti` locked to CodeGemma; `phixr` to Phi-3 Mini. No Bash access.
## Example Usage
- **For `arti`**:
```bash
ssh arti@<server-ip>
>>> Debug: x = [1, 2]; print(x[2]).
The error suggests an invalid index. Check the list's length with `len(x)`.
```
- **For `phixr`**:
```bash
ssh phixr@<server-ip>
>>> Walk me through installing pfSense.
Download the ISO from pfsense.org. Create a USB with `dd if=pfSense.iso of=/dev/sdX bs=4M`. Check with `lsblk`.
```

View File

@@ -2,7 +2,7 @@
title: Inventory
description: Basic run down of equipment in use
published: true
date: 2025-06-04T23:35:23.834Z
date: 2025-06-04T23:35:25.358Z
tags:
editor: markdown
dateCreated: 2024-07-09T19:44:24.462Z

View File

@@ -0,0 +1,24 @@
---
title: Fix Linux Mint Autocomplete
description: Autocomplete doesn't work by default
published: true
date: 2025-02-16T20:40:50.805Z
tags:
editor: markdown
dateCreated: 2025-02-16T20:40:49.190Z
---
# Fix Linux Mint Autocomplete
1. Rename `apt` to `apt-mint` in `/usr/local/bin/`
2. Rename `apt-linux-mint` to `apt-mint` in `/etc/bash_completion.d/`
3. Edit `/etc/bash_completion.d/apt-mint`, replacing "apt" with "apt-mint" so it looks like this:
At the top (lines 5 and 6, most likely):
```
have apt-mint &&
_apt-mint()
```
At the end:
`complete -F _apt-mint apt-mint`
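A sketch of steps 1 and 2 as shell commands (paths as given above):
```bash
sudo mv /usr/local/bin/apt /usr/local/bin/apt-mint
sudo mv /etc/bash_completion.d/apt-linux-mint /etc/bash_completion.d/apt-mint
# then edit /etc/bash_completion.d/apt-mint as described in step 3 and open a new shell
```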
Credit: https://forums.linuxmint.com/viewtopic.php?p=2276518&sid=3b36b9fa5693b543ed571cf304275c30#p2276518

View File

@@ -0,0 +1,114 @@
---
title: NGINX Configuration
description: How to simply configure an NGINX HTTP server
published: true
date: 2025-02-27T20:27:40.472Z
tags: nginx, server, http, linux
editor: markdown
dateCreated: 2025-02-27T15:48:54.929Z
---
# Configuring NGINX
#### Note:
Following "The NGINX Crash Course" by Laiture on YouTube.
*Big BIG thanks to Laiture for this incredible how-to video.*
##### - */etc/nginx/nginx.conf ==> main configuration file*
## Terminology:
Directives - key/value pairs within blocks of code.
Contexts - blocks of code that contain directives (`http`, `events`, etc.). The `events` context must be present.
To Start NGINX:
`nginx`
Simplest Static Site:
- Use http and events context:
```
#nginx.conf example
http {
    server {
        listen 8080;
        root /var/www/html;
    }
}
events {}
```
Reload NGINX after changes:
`nginx -s reload`
## Mime Types
- MIME types tell the browser how to interpret each file (stylesheets, scripts, images, and so on) via the `Content-Type` header.
- The `mime.types` file shipped with NGINX maps file extensions to their MIME types.
- Including it in the conf file lets stylesheets and other assets be served correctly instead of as plain text.
To include MIME types, simply add this inside the `http` context:
`include mime.types;`
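Putting it together, the static-site example above would look like this with MIME types enabled (a sketch, not a full production config):
```
http {
    include mime.types;   # map file extensions to Content-Type headers
    server {
        listen 8080;
        root /var/www/html;
    }
}
events {}
```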
## Location Block
- Adding a location block can be done as the following:
```
server {
    listen 8080;
    root /var/www/html;
    location /somepath {        # http://localhost:8080/somepath
        root /var/www/html;     # Serves index.html within /somepath folder in /var/www/html
    }
}
```
- Location can take directives such as:
1. *alias* - serves the request from a different directory (no redirect is sent to the client), for example:
```
server {
    listen 8080;
    root /var/www/html;
    location /somepath {        # http://localhost:8080/somepath
        root /var/www/html;     # Serves index.html within /somepath folder in /var/www/html
    }
    location /otherpath {
        alias /var/www/html/somepath;   # Points back to "somepath".
    }
}
```
2. *try_files* - Specifies a list of files to try in order, followed by what to serve or return if none are found.
- Example: ` try_files /somepath/somesite.html /index.html =404; `
- Explanation:
- Try to find and serve *somesite.html* in the *somepath* folder; if not found, fall back to *root/index.html*.
- If neither is found, return a 404 error.
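In context, the same `try_files` example inside a location block looks like this (a sketch):
```
server {
    listen 8080;
    root /var/www/html;
    location / {
        try_files /somepath/somesite.html /index.html =404;
    }
}
```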
## Redirects & Rewrites
### Redirects:
```
location /somesite {
    return 307 /someOtherSite;
}
```
### Rewrite:
```
rewrite /some/path/I/want /to/some/path/that/exists;
location { ... }