Linux kernel memory exploitation via PTE

http://slideplayer.com/slide/9160576/

An Introduction to KProbes (LWN.net)

https://lwn.net/Articles/132196/

This article answers these questions:
How does a kprobe work?
How does a jprobe work?
Where in the kernel source are kprobes and jprobes detected and handled?
What hardware mechanisms are used for probing?
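Besides the articles above, a quick hands-on way to see a kprobe in action, without writing a kernel module, is the kprobe_events tracefs interface. This is only a sketch of that configuration interface, assuming root, an x86_64 kernel with CONFIG_KPROBE_EVENTS, and a kernel that still exports do_sys_open (symbol names vary across versions; on newer kernels tracefs may be at /sys/kernel/tracing):

```shell
# Sketch: define a dynamic kprobe through the tracefs config files.
cd /sys/kernel/debug/tracing

# Place a probe named "myopen" at do_sys_open and record its first
# argument (the dfd register, %di on x86_64).
echo 'p:myopen do_sys_open dfd=%di' > kprobe_events
echo 1 > events/kprobes/myopen/enable

# Stream probe hits as processes open files.
head trace_pipe

# Disable and remove the probe.
echo 0 > events/kprobes/myopen/enable
echo '-:myopen' >> kprobe_events
```

Under the hood this uses the same kprobe machinery the LWN article describes (an int3 breakpoint patched over the probed instruction, with single-stepping out of line).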

How to use QEMU for setting up a VM guest?

How do you use QEMU to run a VM guest, assuming that the host kernel has KVM enabled and running?

a. Create a rootfs image as your OS file image, with all the general GNU/Linux utilities:

This is how I create the rootfs for Xenial (copied and modified from the syzkaller project), mainly using the debootstrap command. For a CentOS rootfs, perhaps you can try:

https://linuxconfig.org/how-to-debootstrap-on-centos-linux

or:

https://github.com/dozzie/yumbootstrap

And here is the script for creating Xenial-based rootfs using debootstrap:

#!/bin/bash
# Copyright 2016 syzkaller project authors. All rights reserved.
# Use of this source code is governed by Apache 2 LICENSE that can be found in the LICENSE file.

# create-image.sh creates a minimal Debian-xenial Linux image suitable for syzkaller.

set -eux

# Create a minimal Debian-xenial distributive as a directory.
sudo rm -rf xenial
mkdir -p xenial
sudo debootstrap --include=openssh-server xenial xenial

# Set some defaults and enable promptless ssh to the machine for root.
sudo sed -i '/^root/ { s/:x:/::/ }' xenial/etc/passwd
echo 'V0:23:respawn:/sbin/getty 115200 hvc0' | sudo tee -a xenial/etc/inittab
printf '\nauto eth0\niface eth0 inet dhcp\n' | sudo tee -a xenial/etc/network/interfaces
echo 'debugfs /sys/kernel/debug debugfs defaults 0 0' | sudo tee -a xenial/etc/fstab
echo 'debug.exception-trace = 0' | sudo tee -a xenial/etc/sysctl.conf
sudo mkdir xenial/root/.ssh/
mkdir -p ssh
ssh-keygen -f ssh/id_rsa -t rsa -N ''
cat ssh/id_rsa.pub | sudo tee xenial/root/.ssh/authorized_keys

# Install some misc packages.
sudo chroot xenial /bin/bash -c "export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; \
apt-get update; apt-get install --yes curl tar time strace"

# Build a disk image
dd if=/dev/zero of=xenial.img bs=5M seek=2047 count=1
mkfs.ext4 -F xenial.img
sudo mkdir -p /mnt/xenial
sudo mount -o loop xenial.img /mnt/xenial
sudo cp -a xenial/. /mnt/xenial/.
sudo mkdir -p /mnt/xenial/lib/modules/xxx/
sudo cp -a /lib/modules/xxx/. /mnt/xenial/lib/modules/xxx/.
sudo umount /mnt/xenial
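A quick sanity check on the dd invocation above: with bs=5M, seek=2047 and count=1, dd writes a single 5 MiB block at block offset 2047, so the resulting sparse file ends at 2048 * 5 MiB = 10 GiB. The arithmetic as a shell sketch:

```shell
# Why the image is 10 GiB: dd seeks 2047 blocks of bs bytes into the
# output file, then writes one more block of bs bytes.
bs=$((5 * 1024 * 1024))          # bs=5M
end=$(( (2047 + 1) * bs ))       # seek=2047, count=1
echo "image size: $(( end / 1024 / 1024 / 1024 )) GiB"
```

Adjust seek if you want a larger or smaller rootfs image.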

b. Compile the Linux kernel; this will generate a few files: vmlinux, bzImage, and (on install) initramfs.img.

When compiling the kernel:

make will generate the vmlinux and bzImage files.

make install will generate the initramfs.img file.

make modules_install will install the kernel modules into the /lib/modules/xxx directory, which is used above.
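The module-install step can also target the rootfs image directly instead of the host's /lib/modules, using the kernel's standard INSTALL_MOD_PATH kbuild variable. A sketch, run from the kernel source tree, reusing the /mnt/xenial mount point from the script above:

```shell
# Build the kernel, then install the modules straight into the
# mounted guest image rather than the host's /lib/modules.
make -j"$(nproc)"
sudo mount -o loop xenial.img /mnt/xenial
sudo make modules_install INSTALL_MOD_PATH=/mnt/xenial
sudo umount /mnt/xenial
```

This avoids the manual `cp -a /lib/modules/xxx` step, and keeps the host's module directory untouched.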

c. boot it up with the correct option:

qemu-system-x86_64 \
    -hda xenial.img -snapshot -m 2048 \
    -net nic -net user,host=10.0.2.10,hostfwd=tcp::53167-:22 \
    -nographic -enable-kvm \
    -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3 \
    -smp sockets=2,cores=2,threads=1 \
    -usb -usbdevice mouse -usbdevice tablet -soundhw all \
    -kernel /linux/arch/x86/boot/bzImage \
    -append "console=ttyS0 root=/dev/sda debug earlyprintk=serial slub_debug=UZ" \
    -initrd /boot/initramfs.img
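With the hostfwd=tcp::53167-:22 option forwarding host port 53167 to the guest's sshd, and the key pair generated by the create-image.sh script earlier, you can then log in to the guest as root. A sketch (the ssh/id_rsa path assumes you run this from the same directory where the script was run):

```shell
# Log in to the QEMU guest over the forwarded port using the key
# whose public half the rootfs script installed in authorized_keys.
ssh -i ssh/id_rsa -p 53167 -o StrictHostKeyChecking=no root@localhost
```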

From the above we can see that there are a great many option choices, which is why virt-manager is highly recommended: it provides an interface that generates the different options for you automatically:

https://tthtlc.wordpress.com/2016/03/13/setting-up-virtual-machine-via-virshvirt-managervirt-viewer/

Notice that vmlinux is not used above; it is needed only when you want kgdb debugging:

https://tthtlc.wordpress.com/2014/01/14/how-to-do-kernel-debugging-via-gdb-over-serial-port-via-qemu/#comments

https://tthtlc.wordpress.com/2012/06/16/virtualbox-kgdb-analysis-of-linux-kernel-v3-4-0-rc3/

https://tthtlc.wordpress.com/2014/05/21/how-to-kgdb-qemu-freebsd-10-kernel-debugging/

A Primer on Memory Consistency and Cache Coherence (and other processor related ebooks)

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/A_Primer_on_Memory_Consistency_and_Coherence.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/p261-chung.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/chrysos.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/Processor_Microarchitecture.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/smith.precise_exceptions.pdf

https://lagunita.stanford.edu/c4x/Engineering/CS316/asset/A_Primer_on_Memory_Consistency_and_Coherence.pdf

How to add new entries to "sysctl" with the same root?

For example, entering "sudo sysctl -a | grep '^dev'" gives me the following list:

dev.cdrom.autoclose = 1
dev.cdrom.autoeject = 0
dev.cdrom.check_media = 0
dev.cdrom.debug = 0

dev.cdrom.info = CD-ROM information, Id: cdrom.c 3.20 2003/12/17
dev.cdrom.info =
dev.cdrom.info = drive name:
dev.cdrom.info = drive speed:
dev.cdrom.info = drive # of slots:
dev.cdrom.info = Can close tray:
dev.cdrom.lock = 0
dev.hpet.max-user-freq = 64
dev.mac_hid.mouse_button2_keycode = 97
dev.mac_hid.mouse_button3_keycode = 100
dev.mac_hid.mouse_button_emulation = 0
dev.parport.default.spintime = 500
dev.parport.default.timeslice = 200
dev.raid.speed_limit_max = 200000
dev.raid.speed_limit_min = 1000
dev.scsi.logging_level = 0

As you can see, there are many different children under the "dev" branch. How is this done?

For example, "scsi" under "dev" is registered via register_sysctl_table(), in this Linux kernel file:

drivers/scsi/scsi_sysctl.c:
scsi_table_header = register_sysctl_table(scsi_root_table);

static struct ctl_table_header *scsi_table_header;

int __init scsi_init_sysctl(void)
{
	scsi_table_header = register_sysctl_table(scsi_root_table);
	if (!scsi_table_header)
		return -ENOMEM;
	return 0;
}

And under scsi_root_table:

static struct ctl_table scsi_root_table[] = {
	{ .procname = "dev",
	  .mode = 0555,
	  .child = scsi_dir_table },
	{ }
};

And under scsi_dir_table:

static struct ctl_table scsi_dir_table[] = {
	{ .procname = "scsi",
	  .mode = 0555,
	  .child = scsi_table },
	{ }
};

And under scsi_table:

static struct ctl_table scsi_table[] = {
	{ .procname = "logging_level",
	  .data = &scsi_logging_level,
	  .maxlen = sizeof(scsi_logging_level),
	  .mode = 0644,
	  .proc_handler = proc_dointvec },
	{ }
};

So the multilevel tables are what implement:

dev.scsi.logging_level = 0
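Note that this nesting of ctl_table levels maps directly onto the /proc/sys directory tree: each table with a .child pointer becomes one directory, and the dotted sysctl name is just the path with dots replaced by slashes. A small illustration of the mapping (the name here is the real sysctl from above):

```shell
# dev.scsi.logging_level  <->  /proc/sys/dev/scsi/logging_level
name="dev.scsi.logging_level"
path="/proc/sys/$(echo "$name" | tr . /)"
echo "$path"    # /proc/sys/dev/scsi/logging_level
```

So register_sysctl_table(scsi_root_table) effectively creates the directories dev/ and dev/scsi/, plus the leaf file logging_level handled by proc_dointvec.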

And similarly:

drivers/cdrom/cdrom.c:
cdrom_sysctl_header = register_sysctl_table(cdrom_root_table);

So, traversing from cdrom_root_table (through the intermediate cdrom_cdrom_table) all the way to cdrom_table:

static struct ctl_table cdrom_root_table[] = {
	{
		.procname	= "dev",
		.maxlen		= 0,
		.mode		= 0555,
		.child		= cdrom_cdrom_table,
	},
	{ }
};

static struct ctl_table cdrom_table[] = {
	{
		.procname	= "info",
		.data		= &cdrom_sysctl_settings.info,
		.maxlen		= CDROM_STR_SIZE,
		.mode		= 0444,
		.proc_handler	= cdrom_sysctl_info,
	},
	{
		.procname	= "autoclose",
		.data		= &cdrom_sysctl_settings.autoclose,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= cdrom_sysctl_handler,
	},
<…>

Notice the function cdrom_sysctl_info() above: it is where all the information below is printed:

dev.cdrom.info = CD-ROM information, Id: cdrom.c 3.20 2003/12/17
dev.cdrom.info =
dev.cdrom.info = drive name:
dev.cdrom.info = drive speed:
dev.cdrom.info = drive # of slots:
dev.cdrom.info = Can close tray:
<…>

And here is a snippet of that function:

pos = sprintf(info, "CD-ROM information, " VERSION "\n");

if (cdrom_print_info("\ndrive name:\t", 0, info, &pos, CTL_NAME))
	goto done;
if (cdrom_print_info("\ndrive speed:\t", 0, info, &pos, CTL_SPEED))
	goto done;
if (cdrom_print_info("\ndrive # of slots:", 0, info, &pos, CTL_SLOTS))
	goto done;
if (cdrom_print_info("\nCan close tray:\t",
		     CDC_CLOSE_TRAY, info, &pos, CTL_CAPABILITY))
	goto done;

And so that's how multiple entries under the same "dev" root are achieved.

This is also answering the question posted here:

http://stackoverflow.com/questions/20164041/dynamically-adding-entries-to-sysctl

virt-manager error

While trying to create a VM in virt-manager, I got a "Failed to bind socket" permission-denied error. This happens whether CentOS or Ubuntu is used as the VM guest.

The error is as follows:

Unable to complete install: 'internal error: process exited while connecting to monitor: 2016-03-19T04:58:53.268413Z qemu-system-x86_64: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-centos7.0/org.qemu.guest_agent.0,server,nowait: Failed to bind socket to /var/lib/libvirt/qemu/channel/target/domain-centos7.0/org.qemu.guest_agent.0: Permission denied'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 90, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/create.py", line 2277, in _do_async_install
guest.start_install(meter=meter)
File "/usr/share/virt-manager/virtinst/guest.py", line 501, in start_install
noboot)
File "/usr/share/virt-manager/virtinst/guest.py", line 416, in _create_guest
dom = self.conn.createLinux(start_xml or final_xml, 0)
File "/usr/lib/python2.7/dist-packages/libvirt.py", line 3606, in createLinux
if ret is None:raise libvirtError('virDomainCreateLinux() failed', conn=self)
libvirtError: internal error: process exited while connecting to monitor: 2016-03-19T04:58:53.268413Z qemu-system-x86_64: -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-centos7.0/org.qemu.guest_agent.0,server,nowait: Failed to bind socket to /var/lib/libvirt/qemu/channel/target/domain-centos7.0/org.qemu.guest_agent.0: Permission denied

Cause of the error:

The error arises from the "channel qemu-ga" virtual hardware not being emulated correctly.

Workaround Steps:

a. Create new VM -> select ISO images.

b. Use ISO images -> select ISO file.

c. Set memory / CPU.

d. Set disk image size.

e. Set the filename of the image, and then select "Customize configuration before install".

f. Inside the custom configuration screen, you will see "Channel qemu-ga" listed as hardware. Remove this hardware.

g. After removal, everything works.

XFS: how to extend the filesystem size when full?

Scenario: My CentOS7 is running inside QEMU.

Looking at my CentOS 7 filesystem using "df":

You can see that /home is near 100% full. How do we extend it?

Luckily, the default filesystem in CentOS 7 is XFS:

Just do a "sudo blkid /dev/mapper/centos-home" and you can see that it is "XFS".

To extend it I need to do a few things:

a. Add a new disk. Since the OS is running inside QEMU, just do:

qemu-img create -f qcow2 centos7_hdd2.img 80G

to create a new "hard disk" named centos7_hdd2.img. If you are not using QEMU, this is equivalent to shutting down the system and installing a new hard disk.

b. Reboot CentOS 7. If you are using QEMU, remember to include the new hard disk image when you start your CentOS 7 guest, for example (part of the command shown below):

qemu-system-x86_64 -hda centos7_hdd.img -hdb centos7_hdd2.img …

c. Now the new hard disk is recognized as /dev/sdb. Create a new partition table using "fdisk /dev/sdb" and add a new partition, /dev/sdb1.

Note that here XFS sits on top of LVM. There are PVs (physical volumes), which house the hard disks: we will now have two PVs, /dev/sda and /dev/sdb. From the PVs you create a VG (volume group): nothing to add here, as we are reusing the existing VG. From the VG you create LVs (logical volumes): nothing new to create, but the LV's size needs to be extended. So here it goes:

d. Add the new partition to PV:

sudo pvcreate /dev/sdb1

And check:

sudo pvdisplay

e. Extend the existing VG with the new PV:

sudo vgextend centos /dev/sdb1

And check:

sudo vgdisplay

f. Now extend the size of the LV:

sudo lvextend -L80G /dev/centos/home

And check:

sudo lvdisplay

g. Finally extend the filesystem (XFS) on the LV:

sudo xfs_growfs /home

And check:

sudo df

And now the disk space utilization is 35%. Cool.
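For reference, steps d through g above can be collected into one script. The device and volume names (/dev/sdb1, the "centos" VG, the "home" LV) are the ones from this walkthrough; substitute your own:

```shell
# Grow /home by one new partition: PV -> VG -> LV -> filesystem.
sudo pvcreate /dev/sdb1                 # d. turn the partition into a PV
sudo vgextend centos /dev/sdb1          # e. add the PV to the existing VG
sudo lvextend -L80G /dev/centos/home    # f. grow the LV to 80G
sudo xfs_growfs /home                   # g. grow XFS to fill the LV
```

xfs_growfs operates on the mounted filesystem, so /home stays online throughout. Note also that XFS can only be grown, never shrunk.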

https://ma.ttias.be/increase-expand-xfs-filesystem-in-red-hat-rhel-7-cento7/

http://serverfault.com/questions/610973/how-to-increase-the-size-of-an-xfs-file-system

http://linoxide.com/file-system/create-mount-extend-xfs-filesystem/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/xfsgrow.html

https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/VG_grow.html

http://www.microhowto.info/howto/increase_the_size_of_an_lvm_logical_volume.html

https://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lv_extend.html
