- tags: Operating system
Linux
Links to this note
Ubuntu FFmpeg4 PPA
tags: Linux, FFmpeg source: https://askubuntu.com/a/1360862 sudo add-apt-repository ppa:savoury1/ffmpeg4 sudo apt-get update sudo apt-get install ffmpeg
systemd by example
tags: Linux, Systemd source: “Systemd by Example - Part 1: Minimization - Sebastian Jambor’s Blog.” Accessed February 18, 2024. https://seb.jambor.dev/posts/systemd-by-example-part-1-minimization/. systemd unit types There are 11 different unit types; the 3 most common: targets: activated by a change in system state, useful as a dependency, e.g. sleep.target: do something when the system is going to sleep. Of course, we can use systemctl to list the units that depend on a specific unit....
Use Podman to Run Systemd in a Container
tags: Linux,Systemd source: Walsh, Daniel. “How to Run Systemd in a Container.” Red Hat Developer, April 24, 2019. https://developers.redhat.com/blog/2019/04/24/how-to-run-systemd-in-a-container. Podman is a container engine published by Red Hat; it has the same command-line interface (CLI) as Docker.
Logrotate
tags: Linux,Tools
Logrotate Doesn't Work With Supervisor on Ubuntu 20.04 With Systemd
tags: Linux,Systemd,Supervisor,Logrotate Background I usually use supervisor1 to deploy my services, capture stdout/stderr to log files, and then use logrotate to rotate the logs, with a configuration like: /data/log/app/*/*.log { daily missingok rotate 180 dateext compress delaycompress notifempty create 640 nobody adm sharedscripts postrotate /usr/local/bin/supervisorctl -c /etc/supervisord.conf pid && kill -USR2 `/usr/local/bin/supervisorctl -c /etc/supervisord.conf pid` > /tmp/kill.log 2>&1 endscript } As you can see, I have logrotate send a signal to supervisord after the logs have been rotated, so that supervisord reopens the log files....
Systemd
tags: Linux
BPF
tags: High Performance, Linux
Porting OpenBSD pledge() to Linux
tags: Linux, Tools, BPF source: “Porting OpenBSD Pledge() to Linux.” Accessed October 31, 2023. https://justine.lol/pledge/. $ ./pledge.com 'stdio rpath' ls # read only list files
Bash
tags: Linux
Searchable Linux Syscall Table for x86 and x86_64
tags: Cheatsheet, Linux source: “Searchable Linux Syscall Table for X86 and X86_64 | PyTux.” Accessed April 17, 2023. https://filippo.io/linux-syscall-table/.
Nginx
tags: Linux
netstat
tags: Linux,Network,Tools netstat -s: show network status (errors) # show udp status $ netstat -s --udp Udp: 437.0k/s packets received 0.0/s packets to unknown port received. 386.9k/s packet receive errors 0.0/s packets sent RcvbufErrors: 123.8k/s SndbufErrors: 0 InCsumErrors: 0
ethtool
tags: Linux,Network,TCP,UDP,Tools ethtool -S: Reveal where the packets actually went receiver$ watch 'sudo ethtool -S eth2 |grep rx' rx_nodesc_drop_cnt: 451.3k/s rx-0.rx_packets: 8.0/s rx-1.rx_packets: 0.0/s rx-2.rx_packets: 0.0/s rx-3.rx_packets: 0.5/s rx-4.rx_packets: 355.2k/s rx-5.rx_packets: 0.0/s rx-6.rx_packets: 0.0/s rx-7.rx_packets: 0.5/s rx-8.rx_packets: 0.0/s rx-9.rx_packets: 0.0/s rx-10.rx_packets: 0.0/s
Multi-queue NICs
tags: Linux,High Performance,Network,ethtool source: The Cloudflare Blog. “How to Receive a Million Packets per Second,” June 16, 2015. http://blog.cloudflare.com/how-to-receive-a-million-packets/. What are Multi-queue NICs The RX queue is used to pass packets between hardware and kernel. Nowadays NICs support multiple RX queues: each RX queue is pinned to a separate CPU. Multi-queue hashing algorithms Use a hash of the packet to decide the RX queue number. The hash is usually computed from a tuple (src IP, dst IP, src port, dst port)....
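The hashing scheme above can be sketched as follows. This is a conceptual illustration, not the NIC's actual (typically Toeplitz-style) hash; the function name and the `n_queues` default are assumptions:

```python
def rx_queue_for(src_ip, dst_ip, src_port, dst_port, n_queues=8):
    """Pick an RX queue index from the flow 4-tuple, as a multi-queue NIC
    does conceptually: same flow -> same hash -> same queue -> same CPU."""
    flow = (src_ip, dst_ip, src_port, dst_port)
    return hash(flow) % n_queues
```

Because the hash is a pure function of the 4-tuple, all packets of one flow land on the same queue (and thus the same CPU), which is why a single flow cannot exceed one core's worth of throughput.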
iptables
tags: Network,Linux,Tools
fork() is evil; vfork() is goodness; afork() would be better; clone() is stupid
tags: Computer Systems,Linux source: “Fork() Is Evil; Vfork() Is Goodness; Afork() Would Be Better; Clone() Is Stupid.” Gist. Accessed March 2, 2022. https://gist.github.com/nicowilliams/a8a07b0fc75df05f684c23c18d7db234.
NASM Assembly Language Tutorials
tags: Computer Systems,Assembly,Linux,Online Tutorial source: “NASM Assembly Language Tutorials - Asmtutor.Com.” Accessed January 5, 2022. https://asmtutor.com/.
GitHub: Reading the Linux 0.11 kernel source code like a novel
tags: Linux source: https://github.com/sunym1993/flash-linux0.11-talk
Audio: The lost talks from Linus Torvalds at DECUS'94
tags: Linux source: https://archive.org/details/199405-decusnew-orleans/1994050DECUSNewOrleansLinuxImplementationIssuesInLinux.ogg
Linux kernel
tags: Linux Linux I/O The evolution of Linux I/O:
- Blocking: read()/write()
- Non-blocking: select()/poll()/epoll() — does not support file I/O
- Thread pools
- Direct I/O (database software): bypasses the page cache
- Asynchronous I/O (Linux AIO): initially supported only file I/O; later gained epoll support for non-file I/O
Linux io_uring (“[译] Linux 异步 I/O 框架 io_uring:基本原理、程序示例与性能压测”, i.e. the Linux async I/O framework io_uring: principles, example programs, and benchmarks) Compared with Linux AIO:
- Redesigned to be truly asynchronous.
- Supports any type of I/O: cached files, direct-access files, even blocking sockets.
- Flexible and extensible: every Linux system call could be rewritten on top of io_uring.
Principles and core data structures: SQ/CQ/SQE/CQE Each io_uring instance has two ring queues, shared between the kernel and the application:
- Submission queue (SQ)
- Completion queue (CQ)
Both queues:
- are single-producer, single-consumer, with a power-of-two size;
- provide a lock-less access interface, coordinated internally with memory barriers.
Usage:
- Submission: the application creates SQ entries (SQEs) and updates the SQ tail; the kernel consumes SQEs and updates the SQ head.
- Completion: the kernel creates CQ entries (CQEs) for one or more completed requests and updates the CQ tail; the application consumes CQEs and updates the CQ head.
Completion events may arrive in any order, but each one is always associated with a specific SQE. Consuming CQEs requires no switch to kernel mode. Benefits:
- Supports batching.
- Supports file I/O system calls: read, write, send, recv, accept, openat, stat, and some specialized ones such as fallocate.
- No longer limited to database applications.
- Fits modern hardware architecture, treating the hardware itself as a network (multi-core/multi-CPU is one network, CPU-to-CPU is another, CPU-to-disk-I/O is yet another).
Three working modes:
- Interrupt driven: the default mode. Submit I/O requests via io_uring_enter(), then check the CQ directly to see whether they have completed....
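The SQ/CQ mechanics above can be sketched as a single-producer/single-consumer ring with a power-of-two size and monotonically increasing head/tail counters. This is a simplified model (no shared memory, no memory barriers); the class and method names are illustrative:

```python
class Ring:
    """Sketch of an io_uring-style SPSC ring: power-of-two size, head/tail indices
    masked into the entry array. The producer owns the tail, the consumer the head."""

    def __init__(self, size):
        assert size > 0 and (size & (size - 1)) == 0, "size must be a power of two"
        self.entries = [None] * size
        self.mask = size - 1  # power-of-two size lets us mask instead of mod
        self.head = 0  # consumer position (kernel side of the SQ)
        self.tail = 0  # producer position (application side of the SQ)

    def push(self, entry):
        """Producer: write an entry, then publish it by advancing the tail."""
        if self.tail - self.head == len(self.entries):
            return False  # ring is full
        self.entries[self.tail & self.mask] = entry
        self.tail += 1
        return True

    def pop(self):
        """Consumer: read the entry at head, then release the slot by advancing head."""
        if self.head == self.tail:
            return None  # ring is empty
        entry = self.entries[self.head & self.mask]
        self.head += 1
        return entry
```

For the SQ, the application is the producer and the kernel the consumer; for the CQ the roles are reversed, which is why CQEs can be consumed without entering the kernel.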
Linux Virtual Memory Management
tags: Linux Original article: Linux Virtual Memory Management Chapter 2 Describing Physical Memory
- Describes memory in an architecture-independent way, for better multi-platform support.
- This chapter covers the structures describing memory and memory pages, and some flags that affect VM behavior.
The most prevalent and principal concept in the VM is NUMA: in large machines, memory access speed depends on the distance between the CPU and the memory. For example, a bank of memory may be assigned to each processor, or a bank may sit close to a DMA device card.
- Each bank of memory is called a node, represented in Linux by struct pglist_data (typedef pg_data_t), even on UMA architectures. The nodes form a NULL-terminated linked list, each pointing to the next via pg_data_t->next_node.
- Each node is divided into blocks called zones, representing ranges of memory, described by struct zone_struct (typedef zone_t). Each zone is one of three types:
  - ZONE_DMA: the first 16 MB of memory, usable by ISA devices
  - ZONE_NORMAL: 16 MB - 896 MB, directly mapped by the kernel into the upper region of the linear address space (discussed in Chapter 4); most kernel operations can only use this zone, so it is the most performance-critical one
  - ZONE_HIGHMEM: 896 MB - end, the remaining system memory not directly mapped by the kernel
- Each physical page frame is represented by a struct page, and all of these structs are kept in the global array mem_map, which is usually stored at the start of ZONE_NORMAL;...
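The classic x86 zone boundaries above can be sketched as a lookup from physical address to zone name; the function name is illustrative, and the 16 MB / 896 MB cutoffs are taken directly from the chapter summary:

```python
MB = 1024 * 1024

def zone_for(phys_addr):
    """Classify a physical address into the three classic x86 zones
    described in Chapter 2 of Linux Virtual Memory Management."""
    if phys_addr < 16 * MB:
        return "ZONE_DMA"      # first 16 MB, usable by ISA devices
    if phys_addr < 896 * MB:
        return "ZONE_NORMAL"   # directly mapped into the kernel's linear address space
    return "ZONE_HIGHMEM"      # remaining memory, not directly mapped
```

This is why machines with more than 896 MB of RAM (on 32-bit x86 with a 3/1 split) need highmem handling: anything past that boundary is outside the kernel's direct mapping.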