General

Is Conan CMake based, or is CMake a requirement?

No, it isn't. Conan is build-system agnostic. Package creators may well use CMake to create their packages, but you will only need it to build packages from source, or when no precompiled packages are available for your system/settings. We use CMake extensively in our examples and documentation, but only because it is very convenient and most C/C++ developers are familiar with it.

Is build-system XXXXX supported?

Yes, it is. Conan makes no assumptions about the build system: it simply wraps whatever build commands the package creators specify. There are already some helper methods in the codebase to ease the use of CMake, and similar helpers can easily be added for your favourite build system. Please check out the alternatives explained in generator packages.

Is my compiler, version, architecture, or setting supported?

Yes. Conan is very general and does not restrict any configuration. However, Conan ships with some compilers, versions, architectures, etc. pre-configured in the ~/.conan/settings.yml file, and you will get an error if you use settings not present in that file. Go to invalid settings to learn more about it.
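For reference, the pre-configured settings live in a plain YAML file that you can edit to add your own values. This is an abridged, illustrative fragment; the exact compilers and version lists vary between Conan releases:

```yaml
# ~/.conan/settings.yml (abridged, illustrative)
os: [Windows, Linux, Macos, Android, iOS, FreeBSD]
arch: [x86, x86_64, armv7, armv8]
build_type: [None, Debug, Release]
compiler:
    gcc:
        version: ["4.9", "5", "6", "7"]
        libcxx: [libstdc++, libstdc++11]
    Visual Studio:
        version: ["12", "14", "15"]
        runtime: [MD, MT, MDd, MTd]
```

Adding a new compiler version or architecture to this file is enough to make it a valid setting for your packages.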

Does it run offline?

Yes. It runs offline very well. Package recipes and binary packages are stored on your machine, per user, so you can start new projects that depend on the same libraries without any Internet connection at all. Packages can be fully created, tested and consumed locally, without needing to upload them anywhere.

How does Conan compare to the biicode dependency manager?

A (probably incomplete) list of differences:

  • Conan has a fully decentralized, git-like architecture, and the provided on-premises server is very easy to run.
  • It is build-system agnostic and highly decoupled from builds. It uses CMake extensively and integrates well with it, but there are packages built with perl+nmake, autotools, etc. Consumers can use any available generator (Visual Studio, Xcode, gcc, txt, CMake, qmake or qbs); they are not forced to use CMake.
  • Consumers are not locked in to the technology. They are simply provided with a file containing include paths, lib paths, libs, etc., so they can use that information as needed. System package managers can perfectly coexist with it.
  • It hosts and manages pre-built binaries. You can decide to build from source or use existing binaries. Binaries are cached on your computer at the user level, so there is no need to build the same large binary twice for different projects. Many versions of binaries can coexist, different projects can use different versions, and it is easy to switch between versions in the same project (without rebuilding).
  • Python package recipes allow for very advanced configuration: static vs. dynamic, 32- vs. 64-bit, and conditional dependencies (depending on OS, version, compiler, settings, options...), in a more intuitive source-build-package cycle.
  • It is not required to host the source code; it can be retrieved from any origin, such as a GitHub repository or a SourceForge download+unzip.
  • Creating packages for existing libraries is much, much faster than with biicode, and practically zero-intrusive in the library project. The package itself can be an external repository with a URL to the existing library origin. You can fully create and test packages on your machine before uploading them to any remote (including your own).

Is it possible to install 2 different versions of the same library?

Yes. You can install as many different versions of the same library as you need, easily switch among them in the same project, or have different projects use different versions simultaneously, without having to install/uninstall or rebuild any of them.
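For example, two projects can pin different versions of the same library simply by declaring them in their respective conanfile.txt files (the package references below are illustrative):

```text
# project_a/conanfile.txt
[requires]
Boost/1.59.0@user/stable

# project_b/conanfile.txt
[requires]
Boost/1.60.0@user/stable
```

Both binaries remain in the local cache side by side; switching a project from one version to the other is just an edit to this file followed by a conan install.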

Package binaries are stored per user in (e.g.) ~/.conan/data/Boost/1.59/user/stable/package/{sha_0, sha_1, sha_2...} with a different SHA signature for every different configuration (debug, release, 32-bit, 64-bit, compiler...). Packages are managed per user, but additionally differentiated by version and channel, and also by their configuration. So large packages, like Boost, don’t have to be compiled or downloaded for every project.
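The mapping from configuration to package folder can be pictured as hashing the settings. The sketch below is plain illustrative Python, not Conan's actual package-ID algorithm, which is considerably more elaborate:

```python
import hashlib

def package_sha(settings: dict) -> str:
    """Illustrative only: derive a stable ID from a configuration.
    Conan's real package-ID computation is more elaborate."""
    # Sort the items so the same configuration always hashes the same way
    normalized = ";".join(f"{k}={v}" for k, v in sorted(settings.items()))
    return hashlib.sha1(normalized.encode()).hexdigest()

release_64 = package_sha({"build_type": "Release", "arch": "x86_64", "compiler": "gcc"})
debug_32 = package_sha({"build_type": "Debug", "arch": "x86", "compiler": "gcc"})

# Different configurations land in different package folders
print(release_64 != debug_32)  # True
```

This is why one recipe folder can hold many sibling binary folders: each distinct combination of settings produces a distinct ID.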

Can I run multiple isolated Conan instances (virtual environments) on the same machine?

Yes. Conan supports the concept of virtual environments, so it can manage all its information (packages, remotes, user credentials, etc.) in different, isolated environments. Check virtual environments for more details.
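One common mechanism for this in Conan 1.x is the CONAN_USER_HOME environment variable, which relocates the .conan folder. A minimal sketch, assuming that variable is honoured by your Conan version:

```shell
# Point Conan at an isolated home for this shell session.
# CONAN_USER_HOME relocates the .conan cache folder (Conan 1.x behaviour).
export CONAN_USER_HOME=/tmp/project-a

# Any conan command run in this shell would now use this cache instead
# of ~/.conan, e.g.:  conan install .  (not run here)
echo "$CONAN_USER_HOME/.conan"
```

Each terminal (or CI job) exporting a different CONAN_USER_HOME gets its own fully isolated cache, remotes and credentials.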

Can I run the conan_server behind a firewall (on-premises)?

Yes. Conan does not require a connection to the conan.io repository at all for its operation. You can install packages from the conan.io repository if you want, test them, and only after approval, upload them to your on-premises server and forget about the original repository. Or you can just get the package recipes, re-build from source on your premises, and then upload the packages to your server.

Can I connect to conan.io or other remotes through a corporate proxy?

Yes, it can be configured in your ~/.conan/conan.conf configuration file or with some environment variables. Check proxy configuration for more details.
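In Conan 1.x the relevant section of ~/.conan/conan.conf looks roughly like the fragment below; the host names and credentials are placeholders, and an empty [proxies] section falls back to the system proxy environment variables (http_proxy, https_proxy):

```ini
[proxies]
# Empty section: use system proxy settings (http_proxy / https_proxy)
http = http://user:pass@proxy.example.com:3128
https = http://proxy.example.com:3128
```

Setting the standard http_proxy/https_proxy environment variables before running conan is an alternative that needs no file edits.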

Can I create packages for third-party libraries?

Of course, as long as their license allows it.

Can I upload closed source libraries?

Yes. As long as the resulting binary artifact can be distributed freely and free of charge, at least for educational and research purposes, and as long as you comply with all licenses and IP rights of the original authors, as well as the Terms of Service. If you want to distribute your libraries only for your paying customers, please contact us.

Do I always need to specify how to build the package from source?

No, but it is highly recommended. If you want, you can start directly with the binaries: build them elsewhere and upload them as they are. Alternatively, your build() step can download pre-compiled binaries from another source and unzip them, instead of actually compiling from sources.
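Such a build() step typically just composes a download URL from the current settings, then fetches and extracts the archive. A plain-Python sketch of the URL logic; the server, URL scheme and function name are hypothetical, not a real service or Conan API:

```python
def prebuilt_url(name: str, version: str, os_: str, arch: str) -> str:
    """Hypothetical URL scheme for a server hosting prebuilt binaries."""
    return f"https://binaries.example.com/{name}/{version}/{os_}-{arch}.zip"

# In a real recipe, build() would download and extract this archive
# (e.g. with Conan's download/unzip tools) instead of compiling.
print(prebuilt_url("zlib", "1.2.11", "Linux", "x86_64"))
# → https://binaries.example.com/zlib/1.2.11/Linux-x86_64.zip
```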

Does conan use semantic versioning (semver) for dependencies?

Yes. By convention, package dependencies follow semver by default: Conan intelligently avoids recompilation/repackaging when you update upstream minor versions, but will correctly trigger it when you update major versions upstream. This behavior can be easily configured and changed in the package_id() method of your conanfile, and any versioning scheme you desire is supported.
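The default mode can be pictured as reducing each requirement's version to its major component when computing the package ID. The sketch below is illustrative plain Python, not Conan's actual implementation (which lives behind package_id()):

```python
def semver_mode(reference: str) -> str:
    """Keep only the major version of a 'name/version' reference, so
    minor/patch upstream bumps do not change the package ID.
    Illustrative only; Conan implements this inside package_id()."""
    name, version = reference.split("/")
    major = version.split(".")[0]
    return f"{name}/{major}.Y.Z"

# A minor upstream bump maps to the same ID component...
print(semver_mode("Boost/1.59.0") == semver_mode("Boost/1.60.0"))  # True
# ...but a major bump changes it, forcing a rebuild/repackage
print(semver_mode("Boost/1.59.0") == semver_mode("Boost/2.0.0"))   # False
```

Overriding package_id() lets you replace this truncation with any policy, from full-version pinning to ignoring versions entirely.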