Monday, December 25, 2017

__attribute__((alias)) variable attribute


Source: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124980906.htm



9.59 __attribute__((alias)) variable attribute

This variable attribute enables you to specify multiple aliases for a variable.

Syntax

type newname __attribute__((alias("oldname")));
Where:
oldname
is the name of the variable to be aliased
newname
is the new name of the aliased variable.

Usage

Aliases must be defined in the same translation unit as the original variable.

Note

You cannot specify aliases in block scope. The compiler ignores aliasing attributes attached to local variable definitions and treats the variable definition as a normal local definition.
In the output object file, the compiler replaces alias references with a reference to the original variable name, and emits the alias alongside the original name. For example:
int oldname = 1;
extern int newname __attribute__((alias("oldname")));
This code compiles to:
     LDR      r1,[r0,#0]  ; oldname
     ...
oldname
newname
     DCD      0x00000001
If the original variable is defined as static but the alias is defined as extern, then the compiler changes the original variable to be external.

Note

Function names might also be aliased using the corresponding function attribute __attribute__((alias)).
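For comparison, a function alias looks much the same (a minimal sketch of my own, not taken from the ARM documentation; as with variables, the aliased function must be defined in the same translation unit):

int oldfunc(int x)
{
    return x + 1;
}
int newfunc(int x) __attribute__((alias("oldfunc"))); // newfunc becomes another name for oldfunc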

Example

#include <stdio.h>
int oldname = 1;
extern int newname __attribute__((alias("oldname"))); // declaration
void foo(void)
{
    printf("newname = %d\n", newname); // prints 1
}

Tuesday, December 19, 2017

USB port control

CIM_USBController class
https://msdn.microsoft.com/en-us/library/aa388644%28v=vs.85%29.aspx


Win32_USBController class
https://msdn.microsoft.com/en-us/library/aa394504%28v=vs.85%29.aspx


===========================================================================
Is it possible to power up ports on a USB hub from Ubuntu?

hub-ctrl will do what you need.
sudo apt-get install libusb-dev
cc -o hub-ctrl hub-ctrl.c -l usb
sudo ./hub-ctrl -v
sudo ./hub-ctrl -P 2 -p 1 # turn on port 2
sudo ./hub-ctrl -P 2 -p 0 # turn off port 2

Disclaimer: although I have tested it on Ubuntu 12.04 (precise), I did not write this utility. It does require a hub with built in power control, but given that your hub is powering down certain ports it is a good bet your hub has it.
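For example, a port can be power-cycled by combining the same commands (a hypothetical sequence using only the flags shown above; the port number depends on what "sudo ./hub-ctrl -v" reports for your hub):

sudo ./hub-ctrl -P 2 -p 0   # power off port 2
sleep 2                     # give the device time to drop off the bus
sudo ./hub-ctrl -P 2 -p 1   # power it back on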

Friday, December 8, 2017

[C++] Using the Cppcheck static analysis tool to help find potential problems in C++ code

Source: https://dotblogs.com.tw/larrynung/2011/10/29/47866

Cppcheck is an open-source static analysis tool for C/C++ code. Unlike the static analysis built into an ordinary compiler, Cppcheck is positioned to detect problems that compilers generally do not, so it can find memory leaks, uninitialized variables, unused functions, out-of-bounds accesses and so on, while problems a compiler can already detect, such as syntax errors, are not covered. The main things it detects are:
  • Out of bounds
  • Exception safety
  • Memory leaks
  • Obsolete functions are used
  • Invalid usage of STL
  • Uninitialized variables and unused functions

To use Cppcheck you can download the standalone client, or integrate it with your existing development environment through plug-ins; the page "Cppcheck - A tool for static C/C++ code analysis" lists both the open-source and the commercially licensed plug-ins in detail:
Clients and plugins (open source):
Clients and plugins (commercial)

The client program can be downloaded directly from the Cppcheck website. After installation, running it shows a screen like the following:
[screenshot]

Click the leftmost button on the toolbar to specify the directory containing the code you want to check.
[screenshot]

Once the directory is selected, the analysis starts. The results are listed in the window; expanding a file shows which problems were found in it, whether each one is a warning or an error, why it was flagged, and on which line. If there is too much output to read through, you can filter it using the toolbar.
[screenshot]

Double-click an entry to view the code; Cppcheck opens the corresponding source in Notepad.
[screenshot]

To share the results with team members, Cppcheck can export them to txt, xml or csv files for easy distribution.
[screenshot]

Using Cppcheck through the GUI is that simple. If the analysis is too slow, you can configure how many threads to use under Preferences.
[screenshot]

If opening code in Notepad is awkward, you can configure your preferred editor, so a more powerful editor such as UltraEdit can be used to open the code instead.
[screenshot]

Cppcheck can also be driven from the command line, so it can be integrated with Visual Studio as an external tool and have its results redirected to the Output window. That is not covered here; if you are interested, run Cppcheck from the command prompt to see the detailed usage below.
Cppcheck - A tool for static C/C++ code analysis

Syntax:
    cppcheck [OPTIONS] [files or paths]

If a directory is given instead of a filename, *.cpp, *.cxx, *.cc, *.c++, *.c,
*.tpp, and *.txx files are checked recursively from the given directory.

Options:
    --append=<file>      This allows you to provide information about
                         functions by providing an implementation for them.
    --check-config       Check cppcheck configuration. The normal code
                         analysis is disabled by this flag.
    -D<ID>               By default Cppcheck checks all configurations.
                         Use -D to limit the checking. When -D is used the
                         checking is limited to the given configuration.
                         Example: -DDEBUG=1 -D__cplusplus
    --enable=<id>        Enable additional checks. The available ids are:
                          * all
                                  Enable all checks
                          * style
                                  Enable all coding style checks. All messages
                                  with the severities 'style', 'performance'
                                  and 'portability' are enabled.
                          * performance
                                  Enable performance messages
                          * portability
                                  Enable portability messages
                          * information
                                  Enable information messages
                          * unusedFunction
                                  Check for unused functions
                          * missingInclude
                                  Warn if there are missing includes.
                                  For detailed information use --check-config
                         Several ids can be given if you separate them with
                         commas.
    --error-exitcode=<n> If errors are found, integer [n] is returned instead
                         of the default 0. 1 is returned
                         if arguments are not valid or if no input files are
                         provided. Note that your operating system can
                         modify this value, e.g. 256 can become 0.
    --errorlist          Print a list of all the error messages in XML format.
    --exitcode-suppressions=<file>
                         Used when certain messages should be displayed but
                         should not cause a non-zero exitcode.
    --file-list=<file>   Specify the files to check in a text file. Add one
                         filename per line. When file is -, the file list will
                         be read from standard input.
    -f, --force          Force checking of all configurations in files that have
                         "too many" configurations.
    -h, --help           Print this help.
    -I <dir>             Give include path. Give several -I parameters to give
                         several paths. First given path is checked first. If
                         paths are relative to source files, this is not needed.
    -i <dir or file>     Give a source file or source file directory to exclude
                         from the check. This applies only to source files so
                         header files included by source files are not matched.
                         Directory name is matched to all parts of the path.
    --inline-suppr       Enable inline suppressions. Use them by placing one or
                         more comments, like: // cppcheck-suppress warningId
                         on the lines before the warning to suppress.
    -j <jobs>            Start [jobs] threads to do the checking simultaneously.
    --platform=<type>    Specifies platform specific types and sizes. The
                         available platforms are:
                          * unix32
                                 32 bit unix variant
                          * unix64
                                 64 bit unix variant
                          * win32A
                                 32 bit Windows ASCII character encoding
                          * win32W
                                 32 bit Windows UNICODE character encoding
                          * win64
                                 64 bit Windows
    -q, --quiet          Only print error messages.
    --report-progress    Report progress messages while checking a file.
    --rule=<rule>        Match regular expression.
    --rule-file=<file>   Use given rule file. For more information, see: 
                         https://sourceforge.net/projects/cppcheck/files/Articles/
    -s, --style          Deprecated, use --enable=style
    --std=posix          Code is posix
    --std=c99            Code is C99 standard
    --suppress=<spec>    Suppress warnings that match <spec>. The format of
                         <spec> is:
                         [error id]:[filename]:[line]
                         The [filename] and [line] are optional. If [error id]
                         is a wildcard '*', all error ids match.
    --suppressions-list=<file>
                         Suppress warnings listed in the file. Each suppression
                         is in the same format as <spec> above.
    --template '<text>'  Format the error messages. E.g.
                         '{file}:{line},{severity},{id},{message}' or
                         '{file}({line}):({severity}) {message}'
                         Pre-defined templates: gcc, vs
    -v, --verbose        Output more detailed error information.
    --version            Print out version number.
    --xml                Write results in xml format to error stream (stderr).
    --xml-version=<version>
                         Select the XML file version. Currently versions 1 and 2
                         are available. The default version is 1.
Example usage:
  # Recursively check the current folder. Print the progress on the screen and
    write errors to a file:
    cppcheck . 2> err.txt
  # Recursively check ../myproject/ and don't print progress:
    cppcheck --quiet ../myproject/
  # Check only files one.cpp and two.cpp and give all information there is:
    cppcheck -v -s one.cpp two.cpp
  # Check f.cpp and search include files from inc1/ and inc2/:
    cppcheck -I inc1/ -I inc2/ f.cpp

For more information:
    http://cppcheck.sf.net/manual.pdf
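As a hypothetical example of combining the options above for unattended use (e.g. on a build server), one might run something like this; all of the flags are documented in the help text above, only the src/ path and report name are made up:

  # Enable all checks, use 4 threads, print Visual Studio style messages,
  # return a non-zero exit code when problems are found, and keep the report:
    cppcheck --enable=all -j 4 --template vs --error-exitcode=1 src/ 2> cppcheck-report.txt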

Debian on an emulated ARM machine

Source: https://www.aurel32.net/info/debian_arm_qemu.php

Introduction

QEMU is a generic and open source processor emulator which can emulate i386, x86_64, ARM, MIPS, PowerPC and SPARC systems. In case of ARM, it can emulate an Integrator or a Versatile platform. The Versatile one is the most interesting as it includes a hard disk SCSI controller, an Ethernet card and a graphical display.
Using a kernel compiled with the right options, it is possible to install a Debian distribution on such an emulated platform. That makes a cheap development platform. The emulated system running on an Athlon 64 X2 3800+ is around 20% faster than the popular NSLU2 and possibly with much more RAM (my emulated system has 256MiB of RAM).
This howto has been written for a Debian host system, but could be easily adapted for other distributions.
Alternatively prebuilt images are also available.

Installing QEMU

QEMU is currently available as a package in the Debian distribution; you will need at least version 0.9.1. If you want to compile it yourself, here is the procedure:
wget http://fabrice.bellard.free.fr/qemu/qemu-0.9.1.tar.gz
To build QEMU, a few packages such as SDL need to be installed on your system. gcc version 3.4 is also needed, as some parts of QEMU do not build with newer gcc versions. As QEMU is present in the archive, just run:
$ su -c "apt-get build-dep qemu"
Then run the configure script:
$ cd qemu-0.9.1
$ ./configure
Then compile it:
$ make
And install it on your system:
$ su -c "make install"

Preparing the installation

First you need to create an image of the hard disk. In my case I have chosen to emulate a 10GB hard-disk, but this size could be changed to correspond to your needs. Note that the file is created in qcow format, so that only the non-empty sectors will be written in the file.
A small tip: create a directory to hold all the files related to the emulated ARM machine.
$ qemu-img create -f qcow hda.img 10G
Debian currently does not support the Versatile platform natively, which means there is no kernel available for this platform. However, there is full support for this platform in the upstream kernel. You can either compile your own kernel (using a cross-compiler or another ARM machine), or use the kernel I have built:
$ wget http://people.debian.org/~aurel32/arm-versatile/vmlinuz-2.6.18-6-versatile
$ wget http://people.debian.org/~aurel32/arm-versatile/initrd.img-2.6.18-6-versatile
You also need to get the initrd of the Etch Debian-Installer:
$ wget http://ftp.de.debian.org/debian/dists/etch/main/installer-arm/current/images/rpc/netboot/initrd.gz

Installing Debian Etch

To start the installation process, use the following line:
$ qemu-system-arm -M versatilepb -kernel vmlinuz-2.6.18-6-versatile -initrd initrd.gz -hda hda.img -append "root=/dev/ram"
After a few seconds you should see the kernel booting, followed by the first screen of the Debian-Installer.
Proceed as for a normal installation until you get to the screen described below. If you need documentation, please refer to the Debian installation guide.
Debian-Installer will complain that it can't find kernel modules. This is normal, because the initrd of another platform is being used. It is not really a problem, as the kernel provided here has the network driver, the disk driver and ext3 support built in. It does mean, however, that you won't be able to install Debian on an XFS partition. This is a known limitation that will disappear if/when the Versatile kernel is integrated into the official Debian kernel.
During the installation, the Debian installer will also complain that it cannot find a suitable kernel for this platform, as shown in the screenshot of the original article. This is because Debian does not currently support the ARM Versatile platform; support will be added post-Etch. Since an unofficial kernel is provided directly to QEMU, you can safely ignore this message and continue the installation.
Near the end of the installation you will get an error screen about a missing bootloader. Again, consider this message harmless: there is no need for a bootloader, as QEMU is able to load a kernel and an initrd directly.
Then you will get to the end of the installation. Congratulations! When the system reboots, just exit QEMU; different parameters have to be used to boot the installed system.

Using the system

First boot

To start the system use the following command:
$ qemu-system-arm -M versatilepb -kernel vmlinuz-2.6.18-6-versatile -initrd initrd.img-2.6.18-6-versatile -hda hda.img -append "root=/dev/sda1"
After a few seconds the system should give you a console login prompt.
The first thing to do is to install the kernel image corresponding to the running kernel. This will install all the modules that you may need.
$ apt-get install initramfs-tools
$ wget http://people.debian.org/~aurel32/arm-versatile/linux-image-2.6.18-6-versatile_2.6.18.dfsg.1-18etch1+versatile_arm.deb
$ su -c "dpkg -i linux-image-2.6.18-6-versatile_2.6.18.dfsg.1-18etch1+versatile_arm.deb"

Using Xorg

You now have a full Debian arm system that you can use for development or whatever. You can even run Xorg using the fb device. Note that you have to select a 256-color mode, with a resolution up to 1024x768.
You can even chat on IRC with XChat :)

QEMU without X

If you don't want to start QEMU in graphic mode, you can use the -nographic option and ask the kernel to use ttyAMA0 as the console. The command to start the emulated machine then becomes:
$ qemu-system-arm -M versatilepb -kernel vmlinuz-2.6.18-6-versatile -initrd initrd.img-2.6.18-6-versatile -hda hda.img -append "root=/dev/sda1 console=ttyAMA0" -nographic
To set up a getty on the serial port, and be able to login, you must edit /etc/inittab and add the following line:
T0:23:respawn:/sbin/getty -L ttyAMA0 9600 vt100
All users except root should be able to log in on this console. To also allow root to log in, you must add the following line to /etc/securetty:
ttyAMA0

More RAM

By default QEMU emulates a machine with 128MiB of RAM. You can use the -m option to increase or decrease the amount of RAM. It is however limited to 256MiB; larger sizes will crash the kernel.
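For example, the boot command from the "First boot" section might be extended like this (only the -m option is new; the kernel, initrd and disk names are the same as above):

$ qemu-system-arm -M versatilepb -m 256 -kernel vmlinuz-2.6.18-6-versatile -initrd initrd.img-2.6.18-6-versatile -hda hda.img -append "root=/dev/sda1"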

Connect your emulated machine to a real network

When no option is specified, QEMU uses a non-privileged user-mode network stack that gives the emulated machine access to the outside world. But you probably want to make your emulated machine accessible from the outside. This is possible by using tap mode and bridging the tap interface with the network interface of the host machine.
The first thing to do is to activate a bridge on your host machine. For that you have to modify the /etc/network/interfaces file as follows:
Before:
auto eth0
iface eth0 inet dhcp

After:
auto br0
iface br0 inet dhcp
  bridge_ports eth0
  bridge_maxwait 0
Then you need to install the bridge-utils package and restart your network interface:
# apt-get install bridge-utils
# ifdown eth0
# ifup br0
Create a script called /etc/qemu-ifup that will be executed when QEMU starts:
#!/bin/sh
echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /sbin/ifconfig $1 0.0.0.0 promisc up
echo "Adding $1 to br0..."
sudo /usr/sbin/brctl addif br0 $1
sleep 2
As you probably don't want to execute QEMU as root, you need to create a qemu user group and authorize the brctl and ifconfig commands for users of the qemu group via sudo. You need to add the following lines to /etc/sudoers (edit the file using visudo):
...
Cmnd_Alias QEMU = /usr/sbin/brctl, /sbin/ifconfig
%qemu ALL=NOPASSWD: QEMU
Finally you can start your emulated machine using the following command
$ qemu-system-arm -M versatilepb -kernel vmlinuz-2.6.18-6-versatile -initrd initrd.img-2.6.18-6-versatile -hda hda.img -append "root=/dev/sda1" -net nic,macaddr=00:16:3e:00:00:01 -net tap
You don't need to give a MAC address if you are emulating only one machine, as QEMU will use a default one. However if you have more than one emulated machine (don't forget QEMU can also emulate other architectures than ARM), you will have to specify a unique MAC address for each machine. I advise you to select an address from the range 00:16:3e:xx:xx:xx, which has been assigned to Xen.

Other options

QEMU has a lot of other useful options. For a full list, please refer to the QEMU documentation.

Links

Tuesday, December 5, 2017

Many approaches to sandboxing in Linux

You can isolate malicious programs or risky tasks by sandboxing them in different ways to stop them from affecting your main system. This article gives the reader a working knowledge of sandboxing in Linux.
Securing your system is a big priority for every production environment, whether you are a systems admin or a software developer. The best way to secure your operating system from doubtful programs or processes is by sandboxing (also termed as jailing). Sandboxing involves providing a safe environment for a program or software so that you can play around with it without hurting your system. It actually keeps your program isolated from the rest of the system, by using any one of the different methods available in the Linux kernel. Sandboxing can be useful to systems administrators if they want to test their tasks without any damage and also to developers for testing their pieces of code. A sandbox can help you to create a different environment from your base operating system. It has become trendy due to its extensive use by PaaS and SaaS providers.
The idea of jailing is not new since it has been available in UNIX based BSD OSs. For years, BSD has used the concept of ‘jails’, while Solaris has used ‘zones’. But in Linux, this concept was started with chroot and has been possible because namespaces are present in the Linux kernel.
Namespaces
Namespaces are features available in Linux to isolate processes in different system resource aspects. There are six types of namespaces available up to kernel 4.0. And more will be added in the future. These are:
  • mnt (mount points, file systems)
  • pid (processes)
  • net (network stack)
  • ipc (system V IPC)
  • uts (host name)
  • user (UIDs)
Linux namespaces are not new. The first one was added to Linux in 2008 (kernel 2.6), but they became widely used only with Linux kernel 3.6, when work on the most complex of them all, the user namespace, was completed. The Linux kernel uses the clone(), unshare() and setns() system calls to create and control namespaces.
New namespaces are created with the clone() system call, which is also used to start a process. The setns() system call adds a running process to an existing namespace. The unshare() call works on the calling process and moves it into a new namespace; its main purpose is to isolate a namespace without having to create a new process or thread (as clone() does). You can also use some utilities directly to get the features of these namespaces. The CLONE_NEW* identifiers are used with these system calls to select the type of namespace: CLONE_NEWIPC, CLONE_NEWNS, CLONE_NEWNET, CLONE_NEWPID, CLONE_NEWUSER and CLONE_NEWUTS. Each namespace gets a unique inode number when it is created, which is how the namespaces of a process can be told apart (a short C sketch using unshare() follows the namespace descriptions below).
#ls -al /proc/<pid>/ns
lrwxrwxrwx 1 root root 0 Feb 7 13:52 ipc -> ipc:[4026532253]
lrwxrwxrwx 1 root root 0 Feb 7 15:39 mnt -> mnt:[4026532251]
lrwxrwxrwx 1 root root 0 Feb 7 13:52 net -> net:[4026531957]
lrwxrwxrwx 1 root root 0 Feb 7 13:52 pid -> pid:[4026532254]
lrwxrwxrwx 1 root root 0 Feb 7 13:52 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Feb 7 15:39 uts -> uts:[4026532252]
Mount namespace: A process views different mount points other than the original system mount point. It creates a separate file system tree associated with different processes, which restricts them from making changes to the root file system.
PID namespace: PID namespace isolates a process ID from the main PID hierarchy. A process inside a PID namespace can have the same PID as a process outside it, and even inside the namespace, you can have different init with PID 1.
UTS namespace: In the UTS (UNIX Timesharing System) namespace, a process can have a different set of domain names and host names than the main system. It uses sethostname() and setdomainname() to do that.
IPC namespace: This is used for inter-process communication resources isolation and POSIX message queues.
User namespace: This isolates user and group IDs inside a namespace, which is allowed to have the same UID or GID in the namespace as in the host machine. In your system, unprivileged processes can create user namespaces in which they have full privileges.
Network namespace: Inside this namespace, processes can have different network stacks, i.e., different network devices, IP addresses, routing tables, etc.
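To make the namespace system calls mentioned earlier more concrete, here is a minimal C sketch (my own illustration, not from the article) that moves the calling process into a new UTS namespace and changes the hostname there. Run it as root; the host's own hostname is unaffected:

#define _GNU_SOURCE
#include <sched.h>      /* unshare(), CLONE_NEWUTS */
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* sethostname(), gethostname() */

int main(void)
{
    /* Detach from the parent's UTS namespace (needs CAP_SYS_ADMIN). */
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare");
        return 1;
    }
    /* This hostname change is visible only inside the new namespace. */
    const char *name = "sandboxed-host";
    if (sethostname(name, strlen(name)) == -1) {
        perror("sethostname");
        return 1;
    }
    char buf[64];
    gethostname(buf, sizeof(buf));
    printf("hostname inside namespace: %s\n", buf);
    return 0;
}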
Sandboxing tools available in Linux use this namespace feature to isolate a process or create a new virtual environment; generally, the more namespaces a tool uses for isolation, the more secure it is. Now, let's talk about different methods of sandboxing, from soft to hard isolation.
chroot
chroot is the oldest sandboxing tool available in Linux. Its work is the same as mount namespace, but it is implemented much earlier. chroot changes the root directory for a process to any chroot directory (like /chroot). As the root directory is the top of the file system hierarchy, applications are unable to access directories higher up than the root directory, and so are isolated from the rest of the system. This prevents applications inside the chroot from interfering with files elsewhere on your computer. To create an isolated environment in old SystemV based operating systems, you first need to copy all required packages and libraries to that directory. For demonstration purposes, I am running ‘ls’ on the chroot directory.
First, create a directory to set as root a file system for a process:
#mkdir /chroot
Next, make the required directory inside it.
#mkdir /chroot/{lib,lib64,bin,etc}
Now, the most important step is to copy the executable and libraries. To get the shell inside the chroot, you also need /bin/bash.
#cp -v /bin/{bash,ls} /chroot/bin
To see the libraries required for this script, run the following command:
#ldd /bin/bash
linux-vdso.so.1 (0x00007fff70deb000)
libncurses.so.5 => /lib/x86_64-linux-gnu/libncurses.so.5 (0x00007f25e33a9000)
libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f25e317f000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f25e2f7a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f25e2bd6000)
/lib64/ld-linux-x86-64.so.2 (0x00007f25e360d000)
 
#ldd /bin/ls
linux-vdso.so.1 (0x00007fff4f8e6000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f9f00aec000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f9f00748000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f9f004d7000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9f002d3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9f00d4f000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9f000b6000)
Now, copy these files to the lib or lib64 of /chroot
as required.
Once you have copied all the necessary files, it’s time to enter the chroot.
#sudo chroot /chroot/ /bin/bash
You will be prompted with a shell running inside your virtual environment. Here, you don’t have much to run besides ls, but it has changed the root file system for this process to /chroot.
To get a more full-featured environment you can use the debootstrap utility to bootstrap a basic Debian system:
#debootstrap --arch=amd64 unstable my_deb/
It will download a minimal system to run under chroot. You can even use this to test 32-bit applications on 64-bit systems, or to test your program before installation. To get process management, bind-mount proc into the chroot, and to make the contents of home 'lost on exit', mount a tmpfs at /home/<user>/:
#sudo mount -o bind /proc my_deb/proc
#mount -t tmpfs -o size=100m tmpfs /home/user
To get Internet connection inside, use the following command:
#sudo cp /etc/resolv.conf /var/chroot/etc/resolv.conf
After that, you are ready to enter your environment.
#chroot my_deb/ /bin/bash
Here, you get a whole basic operating system inside your chroot. But it differs from your main system by mount point, because it only uses the mount property as the isolator. It has the same hostname, IP address and process running as in the main system. That’s why it is much less secure (this is even mentioned in the man page of chroot), and any running process can still harm your computer by killing your tasks or affecting network based services.
Note: To run graphical applications inside chroot, open x server by running the following command on the main system:
#xhost +
and on chroot system
#export DISPLAY=:0.0
On systemd-based systems, chrooting is pretty straightforward: you only need to define the root directory in the process's unit file.
[Unit]
Description=my_chroot_Service
[Service]
RootDirectory=/chroot/foobar
ExecStartPre=/usr/local/bin/pre.sh
ExecStart=/bin/my_program
RootDirectoryStartOnly=yes
Here RootDirectory shows where the root directory is for the foobar process.
Note: The program's path has to be inside the chroot, so with the RootDirectory above the program actually lives at /chroot/foobar/bin/my_program on the host.
Before the daemon is started, a shell script pre.sh is invoked, the purpose of which is to set up the chroot environment as necessary, i.e., mount /proc and similar file systems into it, depending on what the service might need. You can start your service by using the following command:
#systemctl start my_chroot_Service.service
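The pre.sh helper referred to above is not shown in the article; a minimal hypothetical version, assuming the RootDirectory used in the unit file, could look like this:

#!/bin/sh
# Prepare the chroot before the daemon starts (paths assumed from the unit above).
mount -t proc proc /chroot/foobar/proc
mount -t sysfs sysfs /chroot/foobar/sys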
Ip-netns
The Ip-netns utility is one of the few that directly use network namespaces to create virtual interfaces. To create a new network namespace, use the following command:
#ip netns add netns1
To check the interfaces inside it, use the command shown below:
#ip netns exec netns1 ip addr
You can even get a shell inside it, as follows:
#ip netns exec netns1 /bin/bash
This will take you inside the network namespace, which has only a single loopback interface with no IP address, so you are not connected to the external network and cannot ping anything.
#ip netns exec netns1 ip link set dev lo up
This will bring the loopback interface up. But to connect to the external network you need to create a virtual Ethernet pair and move one end into netns1, as follows:
# ip link add veth0 type veth peer name veth1
# ip link set veth1 netns netns1
Now, it’s time to set the IP to these devices, as follows:
# ip netns exec netns1 ifconfig veth1 10.1.1.1/24 up
# ifconfig veth0 10.1.1.2/24 up
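At this point the namespace should be able to reach the host end of the veth pair; a quick sanity check (using the addresses assigned above) would be:

# ip netns exec netns1 ping -c 1 10.1.1.2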
Unshare
The unshare utility is used to create any namespace isolated environment and run a program or shell inside it.
To get a network namespace and run the shell inside it, use the command shown below:
#unshare --net /bin/bash
The shell you get back will come with a different network stack. You can check this by using #ip addr, as follows:
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
To create a user namespace environment, use the following command:
#unshare --user /bin/bash
You can check your user inside the shell by using the command below:
#whoami
nobody
To get the PID namespace, use the following command:
#unshare --pid --fork /bin/bash
Inside this namespace, you can see all the processes but cannot kill any.
#ps -aux |grep firefox
root 1110 42.6 11.0 1209424 436756 tty1 Sl 23:36 0:15 .firefox1/./firefox
root 1208 0.0 0.0 12660 1648 pts/2 S+ 23:37 0:00 grep firefox
#kill 1110
bash: kill: (1110) - No such process
To get a whole different degree of process tree isolation you need to mount another proc for the namespace, as follows:
unshare --pid --fork --mount-proc /bin/bash
In this way, you can use unshare to create a single namespace. More about it can be found out on the man page of unshare.
Note: A namespace created by using unshare can also be combined to create a single shell which uses different namespaces. For example:
#unshare --pid --fork --user /bin/bash
This will create an isolated environment using the PID and user namespaces.
Firejail
Firejail is an SUID sandbox program that is used to isolate programs for testing or security purposes. It is written in C and can be configured to use most of the namespaces. To start a service in firejail, use the following command:
#firejail firefox
It will start Firefox in a sandbox with the root file system mounted as read only. To start Firefox with only ~/Downloads and ~/.mozilla mounted to write, use the following command:
#firejail --whitelist=~/.mozilla --whitelist=~/Downloads firefox
Firejail, by default, uses the user namespace and mounts empty temporary file systems (tmpfs) on top of the user home directory in private mode. To start a program in private mode, use the command given below:
#firejail --private firefox
To start firejail in a new network stack, use the following command:
#firejail --net=eth0 --whitelist=~/.mozilla --whitelist=~/Downloads firefox
To assign an IP address to the sandbox, use the following command:
#firejail --net=eth0 --ip=192.168.1.155 firefox
Note: To sandbox all programs running by a user, you can change the default shell of that user to /usr/bin/firejail.
#chsh --shell /usr/bin/firejail
Containers
When learning about virtualisation technologies, what attracted me most were containers, because of their easy deployment. Containers (also known as lightweight virtualisation) are isolation tools that use namespaces for the purpose. They are a better sandboxing utility, because they generally use more than one namespace and are more focused on creating a whole virtual system instance rather than isolating a single process.
Containers are not a new technology. They have been in UNIX and Linux for decades but due to their increasing use in SaaS and PaaS, they have become a hot topic since they provide the most secure environment to deliver and use these services. They are called lightweight virtualisation because they provide process level isolation, which means they depend on the Linux kernel. Hence, only those instances can be created which use the same base kernel. There are lots of containers available for Linux that have gained popularity over the past few years.
Systemd-nspawn
This is a utility available by default with systemd, which creates separate containers for isolation. It uses mount and PID namespaces by default but another namespace can also be configured. To create a container or isolated shell, you need to download a basic distribution which we have done already, using debootstrap. To get inside this container, use the code below:
#systemd-nspawn -D my_deb
This container is stronger than chroot because it not only has a different mount point but also a separate process tree (check it with ps -aux). Still, the hostname and IP interfaces are the same as on the host system. To add your own network stack, you need to connect it to an existing network bridge.
#systemd-nspawn -D my_deb --network-bridge=br0
This will start the container with the network namespace with a pair of veth devices. You can even boot the instance by the -b option, as follows:
#systemd-nspawn -bD my_deb
Note: While booting the container, you will be required to enter the password of the root user; so first run #passwd inside to set the root password.
The whole nspawn project is relatively young; hence there is still a lot that needs to be developed.
Docker
Docker is the smartest and most prominent container in Linux to run an applications environment. Over the past few years, it has grabbed the most attention. Docker containers use most of the namespaces and cgroups present in systemd for providing a strong isolated environment. Docker runs on the Docker daemon, which starts an isolated instance like systemd-nspawn, in which any service can be deployed with just a few tweaks. It can be used as a sandboxing tool to run applications securely or to deploy some software service inside it.
To get your first Docker container running, you need to first start the Docker daemon, and then download the base image from the Docker online repository, as follows:
#service docker start
#docker pull kalilinux/kali-linux-docker
Note: You can also download other Docker images from the Docker Hub (https://hub.docker.com/).
It will download the base Kali Linux image. You can see all the available images on your system by using the following code:
#docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
kalilinux/kali-linux-docker
latest 63ae5ac8df0f 1 minute ago 325 MB
centos centos6 b9aeeaeb5e17 9 months ago 202.6 MB
hello-world latest 91c95931e552 9 months ago 910 B
To run a program inside your container, use the command given below:
#docker run -i -t kalilinux/kali-linux-docker ls
bin dev home lib64 mnt proc run selinux sys usr
boot etc lib media opt root sbin srv tmp var
This will start (run) your container, execute the command and then close the container. To get an interactive shell inside the container, use the command given below:
#docker run -t -i kalilinux/kali-linux-docker /bin/bash
root@24a70cb3095a:/#
This will get you inside the container where you can do your work, isolated from your host machine. 24a70cb3095a is your container’s ID. You can check all the running containers by using the following command:
#docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
24a70cb3095a kalilinux/kali-linux-docker "/bin/bash" About a minute ago Up About a minute angry_cori
While installing the Docker image, Docker automatically creates a veth for Docker, which makes the Docker image connect to the main system. You can check this by using #ifconfig and pinging your main system. At any instance, you can save your Docker state as a new container by using the code given below:
#docker commit 24a70cb3095a new_image
#docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
new_image latest a87c73abca9d 6 seconds ago 325 MB
kalilinux/kali-linux-docker
latest 63ae5ac8df0f 1 hours ago 325 MB
centos centos6 b9aeeaeb5e17 9 months ago 202.6 MB
hello-world latest 91c95931e552 9 months ago 910 B
You can remove that image by using #docker rmi new_image. To stop a container, use docker stop and after that remove the files created on the host node by that container.
#docker stop 24a70cb3095a
#docker rm 24a70cb3095a
For running applications on a Docker instance, you may need to attach it to the host system in some way. To mount external storage into the Docker container, you can use the -v flag, as follows:
#docker run -it -v /temp/:/home/ kalilinux/kali-linux-docker /bin/bash
This will mount /temp/ from the main system onto /home/ of the container. To map a container port to an external host port, use -p:
#docker run -it -v /temp/:/home/ -p 4567:80 kalilinux/kali-linux-docker /bin/bash
This will attach the external port 4567 to the container’s port 80. This can be very useful for SaaS and PaaS, provided that the deployed application needs to connect to the external network. Running GUI applications on Docker can often be another requirement. Docker doesn’t have x server defined so, to do that, you need to mount the x server file to the Docker instance.
#docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix kalilinux/kali-linux-docker /bin/bash
This will forward the X11 socket to the container inside Docker. To ship the Docker image to another system, you need to push it on the Docker online repository, as follows:
#docker push new_image
You can even save the container image in the tar archive:
#docker export new_image
There is a lot more to learn on Docker, but going deeper into the subject is not the purpose of this article. The positive point about Docker is its many tutorials and hacks available online, from which you can easily get a better understanding of how to use it to get your work done. Since its first release in 2013, Docker has improved so much that it can be deployed in a production or testing environment because it is easy to use.
There are other solutions made for Docker, which are designed for all scenarios. These include Kubernetes (a Google project for the orchestration of Docker), Swarm and many more services for Docker migrations, which provide graphical dashboards, etc. Automation tools for systems admins like Puppet and Chef are also starting to provide support to Docker containers. Even systemd has started to provide a management utility for nspawn and other containers with a number of tools like machinectl and journalctl.
machinectl
This comes pre-installed with the systemd init manager. It is used to manage and control the state of systemd-based virtual machines and containers that run under the systemd service. To see all containers running on your system, use the command given below:
#machinectl -a
To get a status of any running container, use the command given below:
#machinectl status my_deb
Note: machinectl doesn’t show Docker containers, since the latter run behind the Docker daemon.
To log in to a container, use the command given below:
#machinectl login my_deb
Switch off a container, as follows:
#machinectl poweroff my_deb
To kill a container forcefully, use the following command:
#machinectl -s kill my_deb
To view the logs of a container, you can use journalctl, as follows:
#journalctl -M my_deb
What to get from this article
Sandboxes are important for every IT professional, but different professionals may require different solutions. If you are a developer or application tester, chroot may not be a good solution as it allows attackers to escape from the chroot jail. Weak containers like systemd-nspawn or firejail can be a good solution because they are easy to deploy. Using Docker-like containers for application testing can be a minor headache, as making your container ready for your process to run smoothly can be a little painful.
If you are a SaaS or PaaS provider, containers will always be the best solution for you because of their strong isolation, easy shipping, live migration and clustering-like features. You may go with traditional virtualisation solutions (virtual machines), but resource management and quick booting can only be got with containers.

10 Q&As to quickly get to know Docker

 Source: https://www.ithome.com.tw/news/91847

How is Docker different from virtualization? How do containers differ from virtual machines? These 10 Q&As will quickly get you acquainted with Docker.

Google, Amazon, Microsoft and VMware have all embraced the new wave of cloud virtualization driven by Docker and containers, and the two technologies have become the new hot topic in IT. So what exactly are Docker and containers? The following 10 Q&As explain.
Q1: Are container technology and server virtualization the same thing?
A: No. Both are virtualization technologies, and both aim to package the execution environment an application needs into an isolated environment that can easily be moved between different hardware, but they work in completely different ways. In short, traditional virtualization such as vSphere or Hyper-V is centred on the operating system, whereas container technology is virtualization centred on the application.
Traditional virtualization starts at the operating-system layer: the goal is to build an isolated sandbox that can run a complete operating system, which is what we usually call a virtual machine. Container technology instead packages the code, libraries and configuration files a single application needs into a sandboxed execution environment; to distinguish it from the virtual machines produced by traditional virtualization, the result is called a container.
Q2: How does a container differ from an ordinary virtual machine?
A: The most obvious difference is that a virtual machine must have an operating system installed (a guest OS) before it can run applications, while a container can run applications without installing an operating system. Container technology does not build the virtual environment outside the OS; it builds it in the kernel layer inside the OS, sharing the host OS instead of running a guest OS per instance. That is why containers are also called OS-level virtualization.
Q3: Why are containers called lightweight virtualization?
A: Because containers share the host OS and do not run a guest OS inside each container, creating a container does not involve waiting for an operating system to boot. A container can be up in seconds, well under a minute, far faster than a traditional virtual machine that may need minutes or even tens of minutes to start.
Q4: Is container technology brand new?
A: No. As early as 1982, the chroot mechanism built into Unix was already a form of container technology. Other examples include FreeBSD jails (1998), Solaris Zones and OpenVZ (2005), and the Sandboxie mechanism available on Windows since 2004; all of these create isolated virtual execution environments inside the operating system and can be regarded as container technology.
It was not until 2013 that dotCloud, a PaaS provider, open-sourced Docker, a platform that standardizes containers. It proved so popular that dotCloud decided to rename itself Docker and promote the technology under that name.
Q5: How does Docker standardize containers?
A: Docker uses the aufs file system to build container images that can be stacked in layers, packing everything inside the container (the application, its libraries, its configuration files) into a Docker image, and provides a Dockerfile to record every step and parameter of building the container. In any environment that supports Docker, the same image can be used to create an identical container running the same application. An application can therefore carry its execution environment with it, via the Docker image or even just the Dockerfile, to any Docker-capable environment. Docker also publishes an API for controlling all container-related commands, so anyone using the same Docker shares the same way of managing and creating containers, which effectively standardizes how containers are used.
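As an illustration of such a recorded build (a minimal sketch of my own, not from the article; the base image, package and ./site directory are placeholders), a Dockerfile might look like this:

# Hypothetical Dockerfile: base image, packages, files and start command
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2
COPY ./site /var/www/html
CMD ["apachectl", "-D", "FOREGROUND"]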
Q6: How many applications can be installed in one container image?
A: A container image can contain several programs, for example Ubuntu, Apache, MySQL, Node.js and Ruby all installed together. However, Docker's official recommendation is to install one program per container and then stack these containers to provide a complete service.
Docker calls this a microservices architecture: each layer of the stack that makes up an application is split out into a small service, for example an Apache service, a MySQL service, a Node.js service or a Ruby service, each of which is a program packaged in a container; the MySQL service, for instance, is MySQL deployed inside a container.
The benefit is a loosely coupled, flexible application architecture in which any single container can easily be swapped out; for example, upgrading MySQL only requires reloading a container image with the new MySQL version, without taking the whole application offline.
Q7: If containers do not need an OS, why do they need an OS base image?
A: The OS base image gives the container that OS's file system. Using the ubuntu base image, for example, gives the container an Ubuntu root directory layout; it is not used to run an instance of the OS.
Q8: How does Docker help DevOps?
A: Because Docker records every step of building a container image in the Dockerfile, the entire process and configuration of building the application's execution environment is captured. Developers and operations staff can use the Dockerfile to discuss the execution environment. Combined with a version-control service such as GitHub, the Dockerfile itself becomes version-controlled, enabling infrastructure-as-code management.
Q9: Can Docker run in a Windows Server environment?
A: Not yet. Docker currently runs only on Linux, but in mid-October Microsoft announced that the Docker engine will be built into the next Windows Server release. Whether a single Docker image will be able to run across both Linux and Windows remains to be seen until Microsoft reveals more details.
Q10: How do you find people who know Docker in Taiwan?
A: Docker has no office in Taiwan yet, but there is a Docker Taipei community with 383 members as of October.
Docker Taipei also plans to hold a Taipei HackDay on November 1st, in step with the global HackDay organized by Docker. Entries will be judged together with those from the US and the rest of the world, and the prize is a place at next year's Docker conference in the US.

TAR handbook

tar zcf foo.tgz foo/ --exclude .git

Example of the warning tar prints when it skips a socket file:
tar: ais9628_20180412_tmp-glibc/work/mdm9607-oe-linux-gnueabi/tftp-server/git-r1/pseudo/pseudo.socket: socket ignored

% mkdir /work/bkup/jane
% cd /home/jane
% tar cf - . | (cd /work/bkup/jane && tar xBf -)

Monday, December 4, 2017

Android security and the sandbox mechanism explained



Source : https://dotblogs.com.tw/cheng/2014/01/28/142415

Android implements its per-app sandbox through Linux user and group permissions,

unlike iOS, where every app runs with the permissions of the single "mobile" user.

The following article describes the Android mechanism in detail:

Chapter 9: Android security and access control
Android is a multi-process system in which each application (and parts of the system) runs in its own process. Security between the system and applications is enforced at the process level through standard Linux facilities, such as the user ID and group ID assigned to each application. Finer-grained security features restrict specific operations of specific processes through the "permission" mechanism, while "per-URI permissions" restrict access to specific data. In general, applications cannot access each other, but Android provides the permission mechanism so that data and functionality can be shared between applications safely.

9.1 Security architecture
A central idea of the Android security architecture is that, by default, no application can perform any operation that would adversely affect other applications, the system, or the user. This includes reading or writing the user's private data (such as contacts or email), reading or writing another application's files, performing network access, and keeping the device awake.

An application's process is a secure sandbox. It cannot interfere with other applications unless it explicitly declares the "permissions" it needs for capabilities the basic sandbox does not provide. The permissions it requests can be handled in various ways: granted automatically, or denied based on a user prompt or on certificates. The permissions an application needs are declared statically in the program, so they are known at install time and do not change afterwards.

All Android applications (.apk files) must be signed with a certificate whose private key is held by the developer. The certificate identifies the author of the application. It does not need to be signed by a CA (a third-party certificate authority such as VeriSign); Android applications are allowed to use, and usually do use, self-signed certificates. Certificates are used to establish trust relationships between applications, not to control whether a program can be installed. The most important way a signature affects security is by determining who can use signature-based permissions and who can share user IDs.

9.2 User IDs and file access

Each Android application (.apk) is assigned its own unique Linux user ID at install time, which creates a sandbox for it and keeps it from touching other applications (and keeps them from touching it). This user ID is assigned when the application is installed and stays the same on that device.

Because security enforcement happens at the process level, code from two packages cannot run in the same process; they appear as different Linux users. Different packages can share the same user ID by using the sharedUserId attribute of the manifest tag in AndroidManifest.xml. The two packages are then treated as the same application, with the same user ID and the same file access permissions. Note: for security, two applications are only given the same user ID when they are signed with the same signature (and request the same sharedUserId).

All data stored by an application is owned by that application's user ID, which keeps other packages from accessing it. When creating a new file with getSharedPreferences(String, int), openFileOutput(String, int), or openOrCreateDatabase(String, int, SQLiteDatabase.CursorFactory), you can use the MODE_WORLD_READABLE and/or MODE_WORLD_WRITEABLE flags to let other packages read or write the file. When these flags are set, the file still belongs to the application, but its global read and/or write permission is set so that any other application can see it.
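A small hypothetical sketch of the API just described (openFileOutput() with a world-readable mode); note that MODE_WORLD_READABLE is deprecated on later Android versions, so treat this purely as an illustration of the mechanism:

import java.io.FileOutputStream;
import java.io.IOException;
import android.content.Context;

// Hypothetical helper: creates a file owned by this app's UID
// but readable by any other package on the device.
public final class SharedFileWriter {
    public static void write(Context context) throws IOException {
        FileOutputStream out =
                context.openFileOutput("shared.txt", Context.MODE_WORLD_READABLE);
        try {
            out.write("hello".getBytes());
        } finally {
            out.close();
        }
    }
}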


For example, suppose APK A and APK B are both products of company C. If the user has already logged in from APK A, then opening APK B should not require logging in again. The way to implement this is to give A and B the same user ID:
APK A's AndroidManifest.xml:
< manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.android.demo.a1" android:sharedUserId="com.c">

APK B's AndroidManifest.xml:
< manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.android.demo.b1" android:sharedUserId="com.c">
"com.c" is the shared user ID. APK B can then open APK A's database as if it were local. APK A stores the login information in its own data directory, and every time APK B starts it reads that database to decide whether the user is already logged in:
APK B obtains A's package context from A's package name:
friendContext = this.createPackageContext("com.android.demo.a1", Context.CONTEXT_IGNORE_SECURITY);
The database can then be opened directly through this context.


9.3 Permissions

A permission describes the right to do something. Android permissions come in four protection levels: normal, dangerous, signature, and signature-or-system. Every permission predefined by the system belongs to one of these levels, according to what it protects.

Normal and dangerous permissions are the low-level permissions: an application only has to request them to be granted them. The other two levels are high-level (system) permissions: an application must be certified at the platform level to request them. If an application attempts a restricted operation without the corresponding permission, the system kills it as a warning.

System applications may use any permission. The application that declares a permission may always use it.


The Android system currently defines many permissions; the SDK documentation lists which operations require which permissions, so you can request them as needed.

To enforce your own permissions, you must first declare them in your AndroidManifest.xml using one or more


< permission> tags. For example, an application that wants to control who can start its activities could declare a permission for that operation, as follows:
< manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.me.app.myapp" >
< permission android:name="com.me.app.myapp.permission.DEADLY_ACTIVITY" android:label="@string/permlab_deadlyActivity" android:description="@string/permdesc_deadlyActivity" android:permissionGroup="android.permission-group.COST_MONEY" android:protectionLevel="dangerous" />
< /manifest>


9.4 Using permissions (uses-permission)
The permissions an application needs should be requested with uses-permission entries, and every requested permission must be defined by the system or by some application, otherwise the request is invalid. Requests must also follow the granting rules for each level: applications without platform certification cannot obtain high-level permissions.
Access between programs therefore falls roughly into two classes:
The first, lower class (permissions whose protection level is normal or dangerous): the calling apk only needs to declare the permission to be granted it.
The second, higher class (permissions whose protection level is signature or signatureOrSystem): the calling apk must be signed with the same signature as the apk being called.
To hold a permission, the AndroidManifest.xml file must contain one or more <uses-permission> tags declaring it.


For example, an application that needs the low-level permission to listen for incoming SMS messages would specify content like the following:
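(The actual tag was stripped from the original web page; the following is a hedged reconstruction of the usual declaration.)

<uses-permission android:name="android.permission.RECEIVE_SMS" />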

Permissions requested by an application are granted at install time by the package installer, which checks the application's signature to decide whether to grant them. No permission checks are made while the user is running the application: either the permission was granted at install time and the feature works as designed, or it was not granted and the feature simply cannot be used, without any prompt telling the user that the attempt failed.

As an example of high-level permissions, calling an API that is protected at the system level requires the apk to hold system permission. For instance, the Android API provides SystemClock.setCurrentTimeMillis() to change the system time. There are two ways to get the required permission:


The first method is simpler, but it requires building with make inside the Android source tree:
1. Add the attribute android:sharedUserId="android.uid.system" to the manifest node of the application's AndroidManifest.xml.
2. Modify Android.mk and add the line LOCAL_CERTIFICATE := platform.
3. Build with the mm command; the resulting apk is then allowed to change the system time.
The second method is a little more work, but it does not require building inside the source tree:
1. As above, add android:sharedUserId="android.uid.system".
2. Build the apk with Eclipse; this apk cannot be used yet.
3. Re-sign the apk with the platform keys of the target system:
signapk platform.x509.pem platform.pk8 input.apk output.apk


9.5 Custom permissions


The permissions defined by the Android system can be found in Manifest.permission. Any program may define and enforce its own permissions, so the permissions in Manifest.permission are not a complete list (custom permissions are possible).
A particular permission may be enforced at many points during a program's operation:
When a call is made into the system, to prevent the application from executing certain functions.
When starting an activity, to keep applications from launching activities of other applications.
When sending and receiving broadcasts, to control who may receive your broadcast or send one to you.
When accessing and operating on a content provider.
When binding to or starting a service.


9.6 Component permissions

High-level permissions restricting access to whole components or applications can be set through AndroidManifest.xml. All that is needed is an android:permission attribute on the component, naming the permission that controls access to it. Activity permissions (applied to the <activity> tag) restrict which components or applications can launch the associated activity; they are checked during Context.startActivity() and Activity.startActivityForResult().

Service permissions (applied to the <service> tag) restrict which components or applications can start, bind to, or start and bind to the associated service; they are checked during Context.startService(), Context.stopService() and Context.bindService().

BroadcastReceiver permissions (applied to the <receiver> tag) restrict which components or applications can send broadcasts to the associated receiver. The permission is checked after Context.sendBroadcast() returns, as the system tries to deliver the broadcast to the registered receivers; a permission failure therefore does not throw an exception back to the caller, it simply does not deliver the intent. In the same way, a permission can be supplied to Context.registerReceiver() to control which components or applications can deliver broadcasts to a receiver registered in code. Going the other way, a permission can be supplied when calling Context.sendBroadcast() to restrict which broadcast receivers are allowed to receive the broadcast (see below).

ContentProvider permissions (applied to the <provider> tag) restrict which components or applications can access the data in the ContentProvider.

If a caller does not hold the required permission, a SecurityException is thrown for the call (except for the broadcast case described above, where the intent is simply not delivered).

9.7 Enforcing permissions when sending broadcasts

In addition to the permission enforced on who may send intents to a registered BroadcastReceiver, you can also specify a required permission when sending a broadcast. By calling Context.sendBroadcast() with a permission string, you require that a receiving application hold that permission in order to receive your broadcast. Note that both the receiver and the broadcaster can require a permission; when that happens, both permission checks must pass for the intent to be delivered.

9.8 Other permission enforcement

Arbitrarily fine-grained permissions can be enforced during calls into a service. This is done with the Context.checkCallingPermission() method: call it with the desired permission string and it returns an integer telling you whether that permission has been granted to the calling process (an integer is returned either way). Note that this only applies to calls coming in from another process, usually through an IDL interface published by a service or some other cross-process mechanism.

Android provides several other ways to check permissions. If you have the pid of another process, you can check a permission against that pid with Context.checkPermission(String, int, int). If you have the package name of another application, you can use PackageManager.checkPermission(String, String) to find out whether that package has been granted the permission.
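A hypothetical fragment (inside a Service subclass) showing how checkCallingPermission() might be used to guard a cross-process call, reusing the permission declared in section 9.3:

// Inside a method that is reached via Binder/IDL from another process:
int result = checkCallingPermission("com.me.app.myapp.permission.DEADLY_ACTIVITY");
if (result != android.content.pm.PackageManager.PERMISSION_GRANTED) {
    // The caller does not hold the permission; refuse the operation.
    throw new SecurityException("Caller lacks the required permission");
}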

9.9 URI permissions

The standard permission system described so far is not sufficient for content providers. A content provider may want to protect itself with read and write permissions, while its direct clients also need to hand specific URIs to other applications so that they can operate on them. The classic example is attachments in a mail application. Access to the mail must be protected by permissions, since it is sensitive user data; however, if a URI pointing to an image attachment is handed to an image viewer, that viewer has no right to open the attachment, because there is no reason for it to hold a permission over all of the user's mail.

The solution to this problem is per-URI permissions: when starting an activity or returning a result to an activity, the caller can set Intent.FLAG_GRANT_READ_URI_PERMISSION and/or Intent.FLAG_GRANT_WRITE_URI_PERMISSION. This grants the receiving activity permission to access the specific URI in the intent, regardless of whether it has any permission to access the content provider behind that intent.

This mechanism enables a common capability-style model in which user interaction (opening an attachment, choosing a contact from a list) drives the ad-hoc granting of fine-grained permissions. It is an important way of reducing the permissions an application needs to only those directly related to its behaviour.

Granting these URI permissions requires the cooperation of the content provider that owns the URIs. It is strongly recommended that content providers implement this capability and declare their support through the android:grantUriPermissions attribute or the <grant-uri-permission> tag.

More information can be found in the Context.grantUriPermission(), Context.revokeUriPermission() and Context.checkUriPermission() methods.

QA

1. Can an app that holds a signature-level permission access data or functionality protected by normal or dangerous permissions without declaring them?

As long as the signatures are the same, data or functionality protected by normal or dangerous permissions can be accessed even without an explicit declaration.

2. If an app needs system-level permission to use a system API (i.e. it uses the system-level signature), how can it also use functionality of other apks that is protected by other signature-level permissions?

An app holding system-level permission can access functionality protected by ordinary signature-level permission declarations, so making the app hold system-level permission is sufficient.
Sources:

http://fecbob.pixnet.net/blog/post/36156917-%5B%E8%BD%89%E8%BC%89%5Dandroid%E5%AE%89%E5%85%A8%E5%AD%98%E5%8F%96%E6%A9%9F%E5%88%B6
http://rritw.com/a/caozuoxitong/android/2011/1113/142239.html

Thursday, November 23, 2017

An analysis of ARM64 Linux startup


Original: https://blog.csdn.net/leoufung/article/details/50582640
1. Finding the Linux startup flow
When Linux boots, it runs vmlinux, the file produced by the kernel build.
vmlinux is an ELF file linked according to the rules in ./arch/arm64/kernel/vmlinux.lds, which in turn is generated from ./arch/arm64/kernel/vmlinux.lds.S.
readelf shows that the entry address of vmlinux is 0xfffffe0000080000.

Where does this address come from, and which kernel code does it correspond to? The simplest way to find out is to disassemble.
Disassemble vmlinux with: objdump -dxh vmlinux > vmlinux.s
Then search for the address: grep fffffe0000080000 vmlinux.s
The first Linux instruction turns out to be add x13, x18, #0x16, and the corresponding symbols are .head.text, efi_head and _text.

Looking at the ARM64 head.S, we find the following code:
/*
* Kernel startup entry point.
*
*
* The requirements are:
* MMU = off, D-cache = off, I-cache = on or off,
* x0 = physical address to the FDT blob.
*
* This code is mostly position independent so you call this at
* __pa(PAGE_OFFSET + TEXT_OFFSET).
*
* Note that the callee-saved registers are used for storing variables
* that are useful before the MMU is enabled. The allocations are described
* in the entry routines.
*/
__HEAD
/*
* DO NOT MODIFY. Image header expected by Linux boot-loaders.
*/
#ifdef CONFIG_EFI
efi_head:
/*
* This add instruction has no meaningful effect except that
* its opcode forms the magic "MZ" signature required by UEFI.
*/
add x13, x18, #0x16
b stext
#else
b stext // branch to kernel start, magic
.long 0 // reserved
#endif
__HEAD is a macro: #define __HEAD .section ".head.text","ax"
So everything matches up:
The first address is the instruction right after __HEAD, i.e. the contents of the efi_head label.
The first instruction is add x13, x18, #0x16.
It then branches to the traditional start point: b stext.
stext eventually calls start_kernel(), the function that starts the kernel.
In short, in head.S the first instruction executed is whatever immediately follows __HEAD.
Looking at the ARM32 version, head.S similarly has __HEAD followed by ENTRY(stext) ...
So once vmlinux is loaded, execution starts at stext.
2. How the vmlinux link address is determined
As mentioned above, vmlinux is linked according to vmlinux.lds, which is generated from vmlinux.lds.S.
Looking at vmlinux.lds.S makes this clear.


TEXT_OFFSET is defined in the architecture Makefile:
vim ./arch/arm64/Makefile

It can be randomized, but is normally defined as the fixed value 0x80000; a sketch of both fragments follows below.
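The original post shows these two fragments as screenshots. Roughly, and from memory of the arm64 sources of that era, the Makefile definition and the vmlinux.lds.S fragment look like this; treat the exact lines as an approximation rather than a verbatim quote:

# arch/arm64/Makefile
TEXT_OFFSET := 0x00080000

/* arch/arm64/kernel/vmlinux.lds.S */
SECTIONS
{
        ...
        . = PAGE_OFFSET + TEXT_OFFSET;
        .head.text : {
                _text = .;
                HEAD_TEXT
        }
        ...
}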

3. The address jump
On power-up, the CPU jumps to a fixed address and executes the UEFI code.
UEFI reads the entry address 0xfffffe0000080000 from the vmlinux header.
Because the MMU has already been enabled, the virtual address 0xfffffe0000080000 can be resolved, and execution jumps to the kernel entry point to start loading the kernel.

Determining the Linux kernel start address

Determining the Linux kernel start address: http://blog.csdn.net/zht_sir/archive/2007/04/07/1555621.aspx

The kernel compile/link step relies on the vmlinux.lds file. Taking ARM as an example, the file is kernel/arch/arm/vmlinux.lds,
and vmlinux-armv.lds is generated by kernel/arch/arm/Makefile:
ifeq ($(CONFIG_CPU_32),y)
PROCESSOR     = armv
TEXTADDR     = 0xC0008000
LDSCRIPT     = arch/arm/vmlinux-armv.lds.in
endif
arch/arm/vmlinux.lds: $(LDSCRIPT) dummy
    @sed 's/TEXTADDR/$(TEXTADDR)/;s/DATAADDR/$(DATAADDR)/' $(LDSCRIPT) >$@
Looking at arch/arm/vmlinux.lds:
OUTPUT_ARCH(arm)
ENTRY(stext)
SECTIONS
{
    . = 0xC0008000;
    .init : {            /* Init code and data        */
        _stext = .;
        __init_begin = .;
            *(.text.init)
        __proc_info_begin = .;
            *(.proc.info)
        __proc_info_end = .;
        __arch_info_begin = .;
            *(.arch.info)
        __arch_info_end = .;
        __tagtable_begin = .;
            *(.taglist)
        __tagtable_end = .;
            *(.data.init)
        . = ALIGN(16);
        __setup_start = .;
            *(.setup.init)
        __setup_end = .;
        __initcall_start = .;
            *(.initcall.init)
        __initcall_end = .;
        . = ALIGN(4096);
        __init_end = .;
    }
    /DISCARD/ : {            /* Exit code and data        */
        *(.text.exit)
        *(.data.exit)
        *(.exitcall.exit)
    }
    .text : {            /* Real text segment        */
        _text = .;        /* Text and read-only data    */
            *(.text)
            *(.fixup)
            *(.gnu.warning)
            *(.rodata)
            *(.rodata.*)
            *(.glue_7)
            *(.glue_7t)
        *(.got)            /* Global offset table        */
        _etext = .;        /* End of text section        */
    }
    .kstrtab : { *(.kstrtab) }
    . = ALIGN(16);
    __ex_table : {            /* Exception table        */
        __start___ex_table = .;
            *(__ex_table)
        __stop___ex_table = .;
    }
    __ksymtab : {            /* Kernel symbol table        */
        __start___ksymtab = .;
            *(__ksymtab)
        __stop___ksymtab = .;
    }
    . = ALIGN(8192);
    .data : {
        /*
         * first, the init task union, aligned
         * to an 8192 byte boundary.
         */
        *(.init.task)
        /*
         * then the cacheline aligned data
         */
        . = ALIGN(32);
        *(.data.cacheline_aligned)
        /*
         * and the usual data section
         */
        *(.data)
        CONSTRUCTORS
        _edata = .;
    }
    .bss : {
        __bss_start = .;    /* BSS                */
        *(.bss)
        *(COMMON)
        _end = . ;
    }
                    /* Stabs debugging sections.    */
    .stab 0 : { *(.stab) }
    .stabstr 0 : { *(.stabstr) }
    .stab.excl 0 : { *(.stab.excl) }
    .stab.exclstr 0 : { *(.stab.exclstr) }
    .stab.index 0 : { *(.stab.index) }
    .stab.indexstr 0 : { *(.stab.indexstr) }
    .comment 0 : { *(.comment) }
}
In arch/arm/Makefile:
vmlinux: arch/arm/vmlinux.lds
arch/arm/vmlinux.lds: $(LDSCRIPT) dummy
    @sed 's/TEXTADDR/$(TEXTADDR)/;s/DATAADDR/$(DATAADDR)/' $(LDSCRIPT) >$@
MAKEBOOT     = $(MAKE) -C arch/$(ARCH)/boot
bzImage zImage zinstall Image bootpImage install: vmlinux
    @$(MAKEBOOT) $@
But in kernel/arch/arm/boot/Makefile:
ifeq ($(CONFIG_ARCH_S3C2410),y)
ZTEXTADDR     = 0x30008000
ZRELADDR     = 0x30008000
endif
zImage:    $(CONFIGURE) compressed/vmlinux
    $(OBJCOPY) -O binary -R .note -R .comment -S compressed/vmlinux $@
compressed/vmlinux: $(TOPDIR)/vmlinux dep
    @$(MAKE) -C compressed vmlinux
In the Makefile under the compressed directory:
ZLDFLAGS     = -p -X -T vmlinux.lds
SEDFLAGS    = s/TEXT_START/$(ZTEXTADDR)/;s/LOAD_ADDR/$(ZRELADDR)/;s/BSS_START/$(ZBSSADDR)/
all:        vmlinux
vmlinux:    $(HEAD) $(OBJS) piggy.o vmlinux.lds
        $(LD) $(ZLDFLAGS) $(HEAD) $(OBJS) piggy.o $(LIBGCC) -o vmlinux
vmlinux.lds:    vmlinux.lds.in Makefile $(TOPDIR)/arch/$(ARCH)/boot/Makefile $(TOPDIR)/.config
        @sed "$(SEDFLAGS)" < vmlinux.lds.in > $@
                                              
The contents of the vmlinux-armv.lds.in file:
OUTPUT_ARCH(arm)
ENTRY(_start)
SECTIONS
{
  . = LOAD_ADDR;
  _load_addr = .;
  . = TEXT_START;
  _text = .;
  .text : {
    _start = .;
    *(.start)
    *(.text)
    *(.fixup)
    *(.gnu.warning)
    *(.rodata)
    *(.rodata.*)
    *(.glue_7)
    *(.glue_7t)
    input_data = .;
    piggy.o
    input_data_end = .;
    . = ALIGN(4);
  }
  _etext = .;
  _got_start = .;
  .got            : { *(.got) }
  _got_end = .;
  .got.plt        : { *(.got.plt) }
  .data            : { *(.data) }
  _edata = .;
  . = BSS_START;
  __bss_start = .;
  .bss            : { *(.bss) }
  _end = .;
  .stack (NOLOAD)    : { *(.stack) }
  .stab 0        : { *(.stab) }
  .stabstr 0        : { *(.stabstr) }
  .stab.excl 0        : { *(.stab.excl) }
  .stab.exclstr 0    : { *(.stab.exclstr) }
  .stab.index 0        : { *(.stab.index) }
  .stab.indexstr 0    : { *(.stab.indexstr) }
  .comment 0        : { *(.comment) }
}
The resulting vmlinux.lds contains:
OUTPUT_ARCH(arm)
ENTRY(_start)
SECTIONS
{
  . = 0x30008000;
  _load_addr = .;
  . = 0;
  _text = .;
  .text : {
    _start = .;
    *(.start)
    *(.text)
    *(.fixup)
    *(.gnu.warning)
    *(.rodata)
    *(.rodata.*)
    *(.glue_7)
    *(.glue_7t)
    input_data = .;
    piggy.o
    input_data_end = .;
    . = ALIGN(4);
  }
  _etext = .;
  _got_start = .;
  .got            : { *(.got) }
  _got_end = .;
  .got.plt        : { *(.got.plt) }
  .data            : { *(.data) }
  _edata = .;
  . = ALIGN(4);
  __bss_start = .;
  .bss            : { *(.bss) }
  _end = .;
  .stack (NOLOAD)    : { *(.stack) }
  .stab 0        : { *(.stab) }
  .stabstr 0        : { *(.stabstr) }
  .stab.excl 0        : { *(.stab.excl) }
  .stab.exclstr 0    : { *(.stab.exclstr) }
  .stab.index 0        : { *(.stab.index) }
  .stab.indexstr 0    : { *(.stab.indexstr) }
  .comment 0        : { *(.comment) }
}
Normally, after vmlinux has been generated, the kernel is compressed into a zImage; the compression is done under kernel/arch/arm/boot.
What gets downloaded to flash is the compressed zImage file. zImage consists of the compressed vmlinux plus the decompression code, as shown below:
|-----------------|\    |-----------------|
|                 | \   |                 |
|                 |  \  | decompress code |
|     vmlinux     |   \ |-----------------|    zImage
|                 |    \|                 |
|                 |     |                 |
|                 |     |                 |
|                 |     |                 |
|                 |    /|-----------------|
|                 |   /
|                 |  /
|                 | /
|-----------------|/
The zImage link script is also called vmlinux.lds and lives in kernel/arch/arm/boot/compressed.
It is generated from the vmlinux.lds.in file in the same directory.
kernel/arch/arm/boot/Makefile defines:
ifeq ($(CONFIG_ARCH_S3C2410),y)
ZTEXTADDR     = 0x30008000
ZRELADDR     = 0x30008000
endif
ZTEXTADDR is the RAM offset address of the decompression code, and ZRELADDR is the RAM offset address at which the kernel starts; here ZTEXTADDR is specified as 0x30008000.

That is my analysis of the kernel boot address. To summarize, the kernel boot address is configured by setting, in kernel/arch/arm/boot/Makefile:
ifeq ($(CONFIG_ARCH_S3C2410),y)
ZTEXTADDR     = 0x30008000         
ZRELADDR     = 0x30008000
endif
# We now have a PIC decompressor implementation.  Decompressors running
# from RAM should not define ZTEXTADDR.  Decompressors running directly
# from ROM or Flash must define ZTEXTADDR (preferably via the config)
#
Looking at the S3C2410 datasheet, the base address of the memory map is 0x30000000, so where does 0x30008000 come from?
The kernel document kernel/Documentation/arm/Booting says:
                                              
Calling the kernel image
                                              
                                              
Existing boot loaders: MANDATORY
New boot loaders: MANDATORY
                                              
There are two options for calling the kernel zImage. If the zImage is stored
in flash, and is linked correctly to be run from flash, then it is legal for
the boot loader to call the zImage in flash directly.
                                              
The zImage may also be placed in system RAM (at any location) and called
there. Note that the kernel uses 16K of RAM below the image to store page
tables. The recommended placement is 32KiB into RAM.
                                              
So 32K (0x8000) of space below the image is used to hold the kernel page tables;
0x30008000 = 0x30000000 + 0x8000 is therefore the S3C2410 kernel's start address in RAM, and that is where this address comes from.
Analysis of the kernel decompression process
                                              
The kernel compression and decompression code lives in kernel/arch/arm/boot/compressed.
After the build it produces the files vmlinux, head.o, misc.o, head-s3c2410.o, and piggy.o:
head.o is the kernel's head code, responsible for initial setup;
misc.o is mainly responsible for decompressing the kernel, and runs after head.o;
head-s3c2410.o handles machine-specific initialization and is merged with head.o at link time;
piggy.o is an intermediate file: it is really the compressed kernel (kernel/vmlinux), just not yet linked with the initialization and decompression code;
vmlinux here is not itself compressed (zImage is the compressed kernel) and is made up of piggy.o, head.o, misc.o, and head-s3c2410.o.
After the bootloader finishes bringing the system up and has loaded the Linux kernel into memory, it calls bootLinux(),
which jumps to the start of the kernel.
If the kernel is not compressed, it can boot straight away.
If the kernel is compressed, it has to be decompressed first; the decompression code sits at the head of the compressed kernel.
The first source file at the compressed kernel's entry point is arch/arm/boot/compressed/head.S.
It calls decompress_kernel(), which lives in arch/arm/boot/compressed/misc.c;
decompress_kernel() in turn calls proc_decomp_setup() and arch_decomp_setup() to do the setup,
then, after printing the message "Uncompressing Linux...", calls gunzip() and places the kernel at the specified location.
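A heavily simplified sketch of that flow is shown below. The helper names (putstr, gunzip, makecrc, ...) are the ones mentioned above or used by old misc.c files; the exact signature of decompress_kernel() and the set of globals differ between kernel versions, so read this only as an illustration.

typedef unsigned long ulg;
typedef unsigned char uch;

extern uch *output_data;            /* where the inflater writes the kernel  */
extern ulg  output_ptr;             /* bytes written so far                  */
extern ulg  free_mem_ptr, free_mem_ptr_end;
extern int  __machine_arch_type;

extern void proc_decomp_setup(void);
extern void arch_decomp_setup(void);
extern void makecrc(void);
extern int  gunzip(void);
extern void putstr(const char *s);

ulg decompress_kernel(ulg output_start, ulg free_mem_ptr_p,
                      ulg free_mem_ptr_end_p, int arch_id)
{
    output_data      = (uch *)output_start;   /* destination of the kernel   */
    free_mem_ptr     = free_mem_ptr_p;        /* heap used by the inflater   */
    free_mem_ptr_end = free_mem_ptr_end_p;
    __machine_arch_type = arch_id;

    proc_decomp_setup();            /* CPU-specific setup                    */
    arch_decomp_setup();            /* board setup, e.g. the debug UART      */
    makecrc();                      /* init the CRC table used by inflate.c  */

    putstr("Uncompressing Linux...");
    gunzip();                       /* inflate piggy.o's payload             */
    putstr(" done, booting the kernel.\n");

    return output_ptr;              /* size of the decompressed kernel       */
}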
                                              
The following analyzes the head.S file:
(1) DEBUG output settings for the various ARM CPUs, handled uniformly through macro definitions.
(2) Set the kernel start and end addresses and save the architecture ID.
(3) On ARM2 and later CPUs, if running in ordinary user mode, move up to supervisor mode, then disable interrupts.
(4) Work out the delta offset from the LC0 structure and decide whether the kernel address needs to be relocated (the offset is put in r0; check whether r0 is zero).
   To decide whether the kernel address needs relocating, I think you mainly have to look at three files: arch/arm/boot/Makefile, arch/arm/boot/compressed/Makefile,
   and arch/arm/boot/compressed/vmlinux.lds.in, paying attention to where the main sections are placed by the vmlinux.lds.in link script:
   LOAD_ADDR (_load_addr) = 0x30008000, while TEXT_START (_text, _start) is set to just 0 and BSS_START (__bss_start) = ALIGN(4).
   This outcome depends on how the kernel decompressor runs, that is, whether it sits in memory (RAM) or in flash before decompression.
   Because here our bootloader has moved the compressed kernel (zImage) to 0x30008000 in RAM, and the compressed kernel is laid out sequentially in RAM from address 0x30008000,
   the offset we get in r0 is the load address (0x30008000).
The next job is to convert the kernel image's relative addresses into physical memory addresses, i.e. to relocate the kernel address.
(5) If the kernel address needs relocating, add the offset in r0 to the BSS region and the GOT table.
(6) Clear the BSS area between r2 and r3.
(7) Set up the cache needed for C code to run, and allocate a 64K stack.
(8) At this point r2 is the end address of the buffer, r4 is the address where the kernel will finally execute, and r5 is the start address of the kernel image; check for address conflicts.
   Set r5 equal to r2 so that the decompressed kernel lands right after the 64K stack.
(9) Call decompress_kernel() in misc.c to decompress the kernel past the end of the buffer (after the address in r2). The registers then hold:
   r0: the size of the decompressed kernel
   r4: the address at which the kernel will execute
   r5: the start address of the decompressed kernel
   r6: the CPU type (processor ID)
   r7: the system type (architecture ID)
(10) Copy the reloc_start code to just after the kernel (past r5+r0), flush the cache first, then execute reloc_start.
(11) reloc_start relocates the kernel starting at r5 to the address in r4.
(12) Flush the cache contents, turn the cache off, put the architecture ID from r7 into r1, and execute the kernel code starting at r4.
The following briefly describes the decompression itself, i.e. what the decompress_kernel() function implements:
The decompression code is in kernel/lib/inflate.c. inflate.c was split out of the gzip sources and contains direct references to some global data,
so it has to be embedded directly into the code that uses it. When gzip compresses a file it always looks for repeated strings within the preceding 32K bytes,
so decompression needs a buffer of at least 32K bytes, defined as window[WSIZE]. inflate.c reads its input with get_byte(),
which is defined as a macro for efficiency. The input buffer pointer must be named inptr, and inflate.c performs decrement operations on it. inflate.c calls flush_window()
to output the decompressed byte runs in the window buffer; the length of each run is given by the outcnt variable. flush_window() must also
compute the CRC of the output bytes and update the crc variable. Before calling gunzip() to start decompressing, makecrc() is called to initialize the CRC table.
Finally, gunzip() returns 0 to indicate that decompression succeeded.
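As a rough illustration of that contract, the glue that misc.c must provide for inflate.c looks roughly like the sketch below. The variable names follow the description above (window, inptr, get_byte(), flush_window(), outcnt, crc); crc_32_tab is assumed to be the table filled in by makecrc(), and details differ between kernel versions.

#define WSIZE 0x8000                       /* at least a 32K sliding window   */

static unsigned char  window[WSIZE];       /* decompression window            */
static unsigned char *output_data;         /* final resting place of the kernel */
static unsigned long  output_ptr;          /* bytes emitted so far            */
static unsigned char *input_data;          /* start of the piggy.o payload    */
static unsigned int   inptr;               /* input index used by inflate.c   */
static unsigned long  outcnt;              /* valid bytes currently in window */
static unsigned long  crc = 0xffffffffUL;  /* running CRC of the output       */
static unsigned long  crc_32_tab[256];     /* filled in by makecrc()          */

#define get_byte() (input_data[inptr++])   /* a macro, for speed              */

static void flush_window(void)
{
    unsigned long c = crc;
    unsigned int  n;

    for (n = 0; n < outcnt; n++) {
        unsigned char ch = window[n];
        c = crc_32_tab[(c ^ ch) & 0xff] ^ (c >> 8);  /* update the CRC        */
        output_data[output_ptr++] = ch;              /* copy to the output    */
    }
    crc = c;
    outcnt = 0;                            /* window has been drained         */
}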
At the very start of kernel boot we always see output like this:
Uncompressing Linux...done, booting the kernel.
This too is printed from inside decompress_kernel(), which calls puts() to output the string;
puts is implemented in kernel/include/asm-arm/arch-s3c2410/uncompress.h.
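For flavour, a polled-UART puts of the kind uncompress.h provides looks roughly like the sketch below. The UART0 base address is the S3C2410's per the datasheet, but the register offsets and status bit are recalled from memory and should be treated as assumptions, not as the real header's contents.

#include <stdint.h>

#define UART0_BASE 0x50000000UL                                             /* S3C2410 UART0 base */
#define TX_READY() (*(volatile uint32_t *)(UART0_BASE + 0x10) & (1u << 2))  /* assumed offset/bit */
#define TX_REG     (*(volatile uint8_t  *)(UART0_BASE + 0x20))              /* assumed offset     */

static void putc_ll(char c)
{
    while (!TX_READY())
        ;                    /* busy-wait until the transmitter can take a byte */
    TX_REG = (uint8_t)c;
}

static void puts(const char *s)
{
    while (*s) {
        if (*s == '\n')
            putc_ll('\r');   /* serial consoles want CR before LF */
        putc_ll(*s++);
    }
}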
After the decompression finishes, control returns to head.S and the kernel is started:
call_kernel:    bl  cache_clean_flush
         bl  cache_off
         mov r0, #0
         mov r1, r7          @ restore architecture number
         mov pc, r4          @ call kernel
       
From here the real kernel begins.
Analysis of command-line passing at Linux 2.6 boot
A command-line string can be passed to the kernel at boot to control the boot process, for example:
"console=ttyS2,115200
[email=mem=64M@0xA0000000]mem=64M@0xA0000000[/email]
"
This specifies that the console is serial port 2 at 115200 baud, and that memory is 64M with a physical base address of 0xA0000000.
We can also define global variables in the kernel and use them to control its configuration; for example, the USB driver defines
static int nousb; /* Disable USB when built into kernel image */
If this variable is 1, the whole USB driver is left uninitialized; to set it to 1, add nousb=1 to the command-line string.
Before this variable can be manipulated, the system has to be told about it, which is done with:
__module_param_call("",nousb,param_set_bool,param_get_bool,&nousb,0444);
The __module_param_call macro is defined in kernel\include\linux\moduleparam.h;
its definition is:
#define __module_param_call(prefix, name, set, get, arg, perm)  \
static char __param_str_##name[] = prefix #name;  \
static struct kernel_param const __param_##name   \
__attribute_used__      \
    __attribute__ ((unused,__section__ ("__param"),aligned(sizeof(void *)))) \
= { __param_str_##name, perm, set, get, arg }
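Expanding that macro by hand for the nousb call shown earlier gives roughly the following; this is derived directly from the macro text above.

/* Hand expansion of __module_param_call("", nousb, param_set_bool,
 * param_get_bool, &nousb, 0444). */
static char __param_str_nousb[] = "" "nousb";     /* prefix #name            */
static struct kernel_param const __param_nousb    /* __param_##name          */
    __attribute_used__
    __attribute__ ((unused,__section__ ("__param"),aligned(sizeof(void *))))
    = { __param_str_nousb, 0444, param_set_bool, param_get_bool, &nousb };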

It defines a variable of type kernel_param and places it in the __param section;
the kernel_param structure is defined as:
struct kernel_param {
    const char *name;
    unsigned int perm;
    param_set_fn set;
    param_get_fn get;
    void *arg;
};
On some platforms the __param section is declared in arch/../../vmlinux.lds.S, but on most platforms it is placed in
kernel\include\asm-generic\vmlinux.lds.h, defined as follows:
__param : AT(ADDR(__param) - LOAD_OFFSET) {   \
  VMLINUX_SYMBOL(__start___param) = .;   \
  *(__param)      \
  VMLINUX_SYMBOL(__stop___param) = .;   \
}
When the kernel boots it parses the command-line string. In kernel\init\main.c, the kernel boot function start_kernel
declares the external arrays:
extern struct kernel_param __start___param[], __stop___param[];
and then calls parse_args() to parse against the array:
parse_args("Booting kernel", command_line, __start___param,
     __stop___param - __start___param,
     &unknown_bootoption);
Here command_line is the command-line string to be parsed, and unknown_bootoption is a function pointer used to handle parameters the table does not recognize (it receives the value to the right of the '=').
parse_args() then finds the kernel_param entry in the array whose name matches nousb and calls its set function to assign the value.
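A minimal sketch of that lookup is shown below. It is only an illustration, not the real parse_args() (which also tokenizes command_line into name=value pairs, handles quoting, and falls back to unknown_bootoption()); the two-argument set-handler signature is assumed from the 2.6-era kernel_param definition above.

#include <linux/string.h>     /* strcmp  */
#include <linux/errno.h>      /* ENOENT  */

/* Match one parameter name against the __param section and invoke its
 * set handler with the value to the right of the '='. */
static int lookup_and_set(const char *name, char *val,
                          struct kernel_param *start,
                          struct kernel_param *stop)
{
    struct kernel_param *kp;

    for (kp = start; kp < stop; kp++) {
        if (strcmp(kp->name, name) == 0)
            return kp->set(val, kp);   /* e.g. param_set_bool() writes 1 into nousb */
    }
    return -ENOENT;                    /* not found: handled by unknown_bootoption() */
}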