Learning core_pattern in the Linux kernel

The Core Pattern (core_pattern), or how to specify filename and path for core dumps | SIGQUIT

https://sigquit.wordpress.com/2009/03/13/the-core-pattern/

http://forum.odin.com/threads/proc-sys-kernel-core_pattern-permission-denied.338549/

http://www.cathaycenturies.com/blog/?p=1892

http://askubuntu.com/questions/420410/how-to-permanently-edit-the-core-pattern-file

http://stackoverflow.com/questions/12760220/unable-to-create-a-core-file-for-my-crashed-program
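The links above boil down to a few commands. A minimal sketch (the %e and %p format specifiers are documented in core(5); the template path /tmp/core.%e.%p is just an example, and writing the template requires root, so the root-only steps are guarded):

```shell
# Show the current core-dump filename template:
cat /proc/sys/kernel/core_pattern

# Core dumps are also capped by RLIMIT_CORE; allow unlimited size for this shell:
ulimit -c unlimited

# Writing the template needs root:
if [ "$(id -u)" -eq 0 ]; then
    # Dump cores into /tmp, named core.<executable>.<pid>:
    echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
    # To survive a reboot, persist the setting via sysctl.conf:
    echo 'kernel.core_pattern=/tmp/core.%e.%p' >> /etc/sysctl.conf
fi
```

Running a crashing program afterwards (with ulimit -c unlimited in effect) should then leave a core file matching the template.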

Operating Systems: File-System Implementation

A very good writeup on Filesystem Implementation internals:

https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/12_FileSystemImplementation.html


Figure 12.11 – I/O without a unified buffer cache.


Figure 12.12 – I/O using a unified buffer cache.

  • Page replacement strategies can be complicated with a unified cache, as one needs to decide whether to replace process or file pages, and how many pages to guarantee to each category of pages. Solaris, for example, has gone through many variations, resulting in priority paging giving process pages priority over file I/O pages, and setting limits so that neither can knock the other completely out of memory.
  • Another issue affecting performance is the question of whether to implement synchronous writes or asynchronous writes. Synchronous writes occur in the order in which the disk subsystem receives them, without caching; asynchronous writes are cached, allowing the disk subsystem to schedule writes in a more efficient order ( See Chapter 12. ) Metadata writes are often done synchronously. Some systems support flags to the open call requiring that writes be synchronous, for example for the benefit of database systems that require their writes be performed in a required order.
  • The type of file access can also have an impact on optimal page replacement policies. For example, LRU is not necessarily a good policy for sequential access files. For these types of files progression normally goes in a forward direction only, and the most recently used page will not be needed again until after the file has been rewound and re-read from the beginning, ( if it is ever needed at all. ) On the other hand, we can expect to need the next page in the file fairly soon. For this reason sequential access files often take advantage of two special policies:
    • Free-behind frees up a page as soon as the next page in the file is requested, with the assumption that we are now done with the old page and won’t need it again for a long time.
    • Read-ahead reads the requested page and several subsequent pages at the same time, with the assumption that those pages will be needed in the near future. This is similar to the track caching that is already performed by the disk controller, except it saves the future latency of transferring data from the disk controller memory into motherboard main memory.
  • The caching system and asynchronous writes speed up disk writes considerably, because the disk subsystem can schedule physical writes to the disk to minimize head movement and disk seek times. ( See Chapter 12. ) Reads, on the other hand, must be done more synchronously in spite of the caching system, with the result that disk writes can counter-intuitively be much faster on average than disk reads.
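The effect of the buffer cache described above is easy to observe from userspace: reading the same file twice, the second read is served from the page cache rather than the disk. A rough illustration (file size and path are arbitrary; on a machine with plenty of free RAM the second read is usually far faster):

```shell
# Create a 64 MB test file, then read it twice; the second read hits
# the page cache instead of the disk.
dd if=/dev/zero of=/tmp/bigfile bs=1M count=64 2>/dev/null
sync
time dd if=/tmp/bigfile of=/dev/null bs=1M 2>/dev/null   # first (colder) read
time dd if=/tmp/bigfile of=/dev/null bs=1M 2>/dev/null   # cached read
rm /tmp/bigfile
```

For a more rigorous comparison you would first drop the caches (echo 3 > /proc/sys/vm/drop_caches, as root) before the cold read.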

I am disabled – Valerie Aurora’s blog

https://blog.valerieaurora.org/2012/02/20/i-am-disabled/

One of the prominent developers in the Linux kernel community.

She is one of my favorite techies and an expert on Linux kernel filesystem internals, especially debugging using User Mode Linux:

http://valerieaurora.org/uml_tips.html


Linux kernel memory exploitation via PTE

http://slideplayer.com/slide/9160576/

An introduction to KProbes LWN.net

https://lwn.net/Articles/132196/

This article answers the questions:
How does a kprobe work?
How does a jprobe work?
Where in the kernel source are kprobes and jprobes detected and handled?
What hardware mechanisms are used for probing?
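Beyond the in-kernel API the article describes, the same probe mechanism is exposed to userspace through tracefs (kprobe_events). A small sketch, assuming debugfs is mounted and you are root; the probe name myopen is arbitrary and do_sys_open is just an example symbol:

```shell
TRACEFS=/sys/kernel/debug/tracing
if [ -w "$TRACEFS/kprobe_events" ]; then
    # Register a kprobe named "myopen" on entry to do_sys_open:
    echo 'p:myopen do_sys_open' > "$TRACEFS/kprobe_events"
    echo 1 > "$TRACEFS/events/kprobes/myopen/enable"
    sleep 1
    # Hits are recorded in the trace ring buffer:
    tail "$TRACEFS/trace"
    # Clean up: disable, then remove the probe.
    echo 0 > "$TRACEFS/events/kprobes/myopen/enable"
    echo '-:myopen' > "$TRACEFS/kprobe_events"
else
    echo "tracefs not writable: run as root with debugfs mounted"
fi
```

Under the hood this uses the same breakpoint/single-step machinery the LWN article explains.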

How to use QEMU for setting up a VM client?

How to use QEMU to run a VM client, assuming that the kernel has KVM enabled and running?

a. create a rootfs image as your OS file image, with all the general GNU/Linux utilities:

This is how I create the rootfs for Xenial (copied and modified from the syzkaller project), mainly using the debootstrap command; for a CentOS rootfs, perhaps you can try:

https://linuxconfig.org/how-to-debootstrap-on-centos-linux

or:

https://github.com/dozzie/yumbootstrap

And here is the script for creating Xenial-based rootfs using debootstrap:

#!/bin/bash
# Copyright 2016 syzkaller project authors. All rights reserved.
# Use of this source code is governed by Apache 2 LICENSE that can be found in the LICENSE file.

# create-image.sh creates a minimal Debian-xenial Linux image suitable for syzkaller.

set -eux

# Create a minimal Debian-xenial distributive as a directory.
sudo rm -rf xenial
mkdir -p xenial
sudo debootstrap --include=openssh-server xenial xenial

# Set some defaults and enable promptless ssh to the machine for root.
sudo sed -i '/^root/ { s/:x:/::/ }' xenial/etc/passwd
echo 'V0:23:respawn:/sbin/getty 115200 hvc0' | sudo tee -a xenial/etc/inittab
printf '\nauto eth0\niface eth0 inet dhcp\n' | sudo tee -a xenial/etc/network/interfaces
echo 'debugfs /sys/kernel/debug debugfs defaults 0 0' | sudo tee -a xenial/etc/fstab
echo 'debug.exception-trace = 0' | sudo tee -a xenial/etc/sysctl.conf
sudo mkdir xenial/root/.ssh/
mkdir -p ssh
ssh-keygen -f ssh/id_rsa -t rsa -N ''
cat ssh/id_rsa.pub | sudo tee xenial/root/.ssh/authorized_keys

# Install some misc packages.
sudo chroot xenial /bin/bash -c "export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; \
apt-get update; apt-get install --yes curl tar time strace"

# Build a disk image
dd if=/dev/zero of=xenial.img bs=5M seek=2047 count=1
mkfs.ext4 -F xenial.img
sudo mkdir -p /mnt/xenial
sudo mount -o loop xenial.img /mnt/xenial
sudo cp -a xenial/. /mnt/xenial/.
sudo mkdir -p /mnt/xenial/lib/modules/xxx/
sudo cp -a /lib/modules/xxx/. /mnt/xenial/lib/modules/xxx/.
sudo umount /mnt/xenial

b. compile the Linux kernel; this will generate a few files: vmlinux, initrd, and bzImage.

When compiling the kernel:

make will generate the vmlinux and bzImage files.

make install will generate the initramfs.img file.

make modules_install will install the kernel modules into the /lib/modules/xxx directory, which is used above.
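Putting the three steps together, a typical sequence looks like the sketch below (run from the top of the kernel source tree; the exact module directory name comes from the kernel release string, shown as xxx above, and a guard is included so the commands only fire inside a kernel tree):

```shell
# Kbuild only exists at the top of a kernel source tree:
if [ -f Kbuild ]; then
    make olddefconfig            # refresh .config for this source version
    make -j"$(nproc)"            # produces vmlinux and arch/x86/boot/bzImage
    sudo make modules_install    # installs modules under /lib/modules/<release>
    sudo make install            # installs bzImage and generates the initramfs
else
    echo "run this from the top of a kernel source tree"
fi
```

The release string used for the /lib/modules directory can be printed with make -s kernelrelease.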

c. boot it up with the correct options:

qemu-system-x86_64 -hda xenial.img -snapshot -m 2048 -net nic -net user,host=10.0.2.10,hostfwd=tcp::53167-:22 -nographic -enable-kvm -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 -smp sockets=2,cores=2,threads=1 -usb -usbdevice mouse -usbdevice tablet -soundhw all -kernel /linux/arch/x86/boot/bzImage -append "console=ttyS0 root=/dev/sda debug earlyprintk=serial slub_debug=UZ" -initrd /boot/initramfs.img
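Once the guest has booted, the hostfwd=tcp::53167-:22 option above forwards host port 53167 to the guest's sshd, so you can log in as root with the key generated by the rootfs script. A sketch (port number as in the example; the nc reachability check is just a guard for when the guest is not up yet):

```shell
# Only attempt the login if something is listening on the forwarded port:
if command -v nc >/dev/null && nc -z localhost 53167 2>/dev/null; then
    ssh -i ssh/id_rsa -p 53167 -o StrictHostKeyChecking=no root@localhost
else
    echo "guest not reachable on localhost:53167"
fi
```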

From the above we can see that there are a great many option choices, which is why virt-manager is highly recommended: it provides an interface that generates the different options automatically:

https://tthtlc.wordpress.com/2016/03/13/setting-up-virtual-machine-via-virshvirt-managervirt-viewer/

Notice that vmlinux is not used above, but it is needed when kgdb debugging is required:

https://tthtlc.wordpress.com/2014/01/14/how-to-do-kernel-debugging-via-gdb-over-serial-port-via-qemu/#comments

https://tthtlc.wordpress.com/2012/06/16/virtualbox-kgdb-analysis-of-linux-kernel-v3-4-0-rc3/

https://tthtlc.wordpress.com/2014/05/21/how-to-kgdb-qemu-freebsd-10-kernel-debugging/
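For kgdb-style debugging, QEMU's built-in gdb stub is the simplest route: boot the guest halted with -s -S, then attach gdb to the matching vmlinux. A sketch using the paths from the example above (-s is shorthand for -gdb tcp::1234, -S halts the CPU at startup, and nokaslr keeps symbol addresses matching vmlinux; guarded so it only runs when the tools are installed):

```shell
if command -v qemu-system-x86_64 >/dev/null && command -v gdb >/dev/null; then
    # Terminal 1: boot halted, gdb stub listening on tcp::1234.
    qemu-system-x86_64 -hda xenial.img -m 2048 -nographic \
        -kernel /linux/arch/x86/boot/bzImage \
        -append "console=ttyS0 root=/dev/sda nokaslr" -s -S &

    # Terminal 2: attach with the unstripped vmlinux for symbols.
    gdb /linux/vmlinux \
        -ex 'target remote :1234' \
        -ex 'hbreak start_kernel' \
        -ex 'continue'
else
    echo "qemu-system-x86_64 and gdb are required"
fi
```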

A Primer on Memory Consistency and Cache Coherence (and other processor related ebooks)

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/A_Primer_on_Memory_Consistency_and_Coherence.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/p261-chung.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/chrysos.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/Processor_Microarchitecture.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/smith.precise_exceptions.pdf
