Or: Host, Target, Cross-Compilers, and All That
Host vs Target
A compiler is a program that turns source code into executable code. Like all programs, a compiler runs on a specific type of computer, and the new programs it outputs also run on a specific type of computer.[1]
The computer the compiler runs on is called the host, and the computer the new programs run on is called the target. When the host and target are the same type of machine, the compiler is a native compiler. When the host and target are different, the compiler is a cross compiler.[2]
Why cross-compile?
In theory, a PC user who wanted to build programs for some device could get the appropriate target hardware (or emulator), boot a Linux distro on that, and compile natively within that environment. While this is a valid approach (and possibly even a good idea when dealing with something like a Mac Mini), it has a few prominent downsides for things like a Linksys router or iPod:
Speed - Target platforms are usually much slower than hosts, by an order of magnitude or more. Most special-purpose embedded hardware is designed for low cost and low power consumption, not high performance. Modern emulators (like qemu) are actually faster than a lot of the real-world hardware they emulate, by virtue of running on high-powered desktop hardware.[3]
Capability - Compiling is very resource-intensive. The target platform usually doesn't have the gigabytes of memory and hundreds of gigabytes of disk space that a desktop does; it may not even have the resources to build 'hello world', let alone large and complicated packages.
Availability - Bringing Linux up on a hardware platform it's never run on before requires a cross-compiler. Even on long-established platforms like ARM or MIPS, finding an up-to-date, full-featured prebuilt native environment for a given target can be hard. If the platform in question isn't normally used as a development workstation, there may not be a recent prebuilt distro readily available for it, and if there is, it's probably out of date. If you have to build your own distro for the target before you can build on the target, you're back to cross-compiling anyway.
Flexibility - A fully capable Linux distribution consists of hundreds of packages, but a cross-compile environment can depend on the host's existing distro for most things. Cross-compiling focuses on building the target packages to be deployed, not on getting build-only prerequisites working on the target system.
Convenience - The user interface of headless boxes tends to be a bit cramped. Diagnosing build breaks is frustrating enough as it is. Installing from CD onto a machine that hasn't got a CD-ROM drive is a pain. Rebooting back and forth between your test environment and your development environment gets old fast, and it's nice to be able to recover from accidentally lobotomizing your test system.
Why is cross-compiling hard?
Portable native compiling is hard.
Most programs are developed on x86 hardware, where they are compiled natively. This means cross-compiling runs into two types of problems: problems with the programs themselves and problems with the build system.
The first type of problem affects all non-x86 targets, both for native and for cross-builds. Most programs make assumptions about the type of machine they run on, which must match the platform in question or the program won't work. Common assumptions include:
Word size - Copying a pointer into an int may lose data on a 64-bit platform, and determining the size of a malloc by multiplying by 4 instead of sizeof(long) isn't good either. Subtle security flaws due to integer overflows are also possible, a la 'if (x+y < size) memset(src+x,0,y);', which results in a 4-gigabyte memset on 32-bit hardware when x=1000 and y=0xFFFFFFF0...
Endianness - Different systems store binary data internally in different ways, which means that block-reading int or float data from disk or the network may need translation. Type 'man byteorder' for details.
Alignment - Some platforms (such as ARM) can only read or write ints from addresses that are an even multiple of 4 bytes; otherwise they segfault. Even the ones that can handle arbitrary alignments are slower dealing with unaligned data (they have to fetch twice to get both halves), so the compiler will often pad structures to align variables. Treating structures as a lump of data that can be sent to disk or across the network thus requires extra work to ensure a consistent representation.
Default signedness - Whether the 'char' data type defaults to signed or unsigned varies from platform to platform (and in some cases from compiler to compiler), which can cause some really surprising bugs. The easy workaround for this is to provide a compiler argument like '-funsigned-char' to force the default to a known value.
NOMMU - If your target platform doesn't have a memory management unit, several things need to change. You need vfork() instead of fork(), only certain types of mmap() work (shared or read-only, but not copy-on-write), and the stack doesn't grow dynamically.
Most packages aim to be portable when compiled natively, and will at least accept patches to fix any of the above problems (with the possible exception of NOMMU issues) submitted to the appropriate development mailing list.
And then there's cross-compiling.
In addition to the problems of native compiling, cross-compiling has its own set of issues:
Configuration issues - Packages with a separate configuration step (the './configure' part of the standard configure/make/make install) often test for things like endianness or page size, to be portable when natively compiled. When cross-compiling, these values differ between the host system and the target system, so running tests on the host system gives the wrong answers. Configuration can also detect the presence of a package on the host and include support for it, when the target doesn't have that package or has an incompatible version.
HOSTCC vs TARGETCC - Many build processes require compiling things to run on the host system, such as the above configuration tests, or programs that generate code (such as a C program that creates a .h file which is then #included during the main build). Simply replacing the host compiler with a target compiler breaks packages that need to build things that run during the build itself. Such packages need access to both a host and a target compiler, and need to be taught when to use each one.[4]
Toolchain leaks - An improperly configured cross-compile toolchain may leak bits of the host system into the compiled programs, resulting in failures that are usually easy to detect but can be difficult to diagnose and correct. The toolchain may #include the wrong header files, or search the wrong library paths at link time. Shared libraries often depend on other shared libraries, which can also sneak in unexpected link-time references to the host system.
Libraries - Dynamically linked programs must access the appropriate shared libraries at compile time. Shared libraries for the target system need to be added to the cross-compile toolchain so programs can link against them.
Testing - On native builds, the development system provides a convenient testing environment. When cross-compiling, confirming that 'hello world' built successfully can require configuring (at least) a bootloader, kernel, root file system, and shared libraries.
Footnote 1: The most prominent difference between types of computers is which processor is executing the programs, but other differences include library ABIs (such as glibc vs uClibc), machines with configurable endianness (arm vs armeb), or different modes of machines that can run both 32-bit and 64-bit code (such as x86 on x86-64).
Footnote 2: When building compilers, there's a third type called a 'canadian cross', which is a cross compiler that doesn't run on your host system. A canadian cross builds a compiler that runs on one target platform and produces code for another target machine. Such a foreign compiler can be built by first creating a temporary cross compiler from the host to the first target, and then using that to build another cross-compiler for the second target. The first cross-compiler's target becomes the host the new compiler runs on, and the second target is the platform the new compiler generates output for. This technique is often used to cross-compile a new native compiler for a target platform.
Footnote 3: Modern desktop systems are sufficiently fast that emulating a target and natively compiling under the emulator is actually a viable strategy. It's significantly slower than cross-compiling, requires finding or generating a native build environment for the target (often meaning you have to set up a cross-compiler anyway), and can be tripped up by differences between the emulator and the real hardware to deploy on. But it's an option.
Footnote 4: This is why cross-compile toolchains tend to prefix the names of their utilities, a la 'armv5l-linux-gcc'. If that was simply called 'gcc', then the host and target compilers couldn't be in the $PATH at the same time.