
inxi :: core mission

Page Version: 1.3
Page Updated: 2024-03-07

There are many ways to help with inxi, and coding is only one of them. Testing, debugging, finding issues, bugs, glitches, suggesting improvements or enhancements (ideally including doing the research for those, which is difficult and can be quite time consuming and/or tedious) are absolutely invaluable and critical for all ongoing (and past) inxi development. As noted in summary, finding a feature you really care about and focusing on that can be a really good way to enter into the world of inxi. Most key features, and many minor ones, are the results of a person or group offering their energy/time/passion in developing and testing new features.

If you would like to submit patches for inxi, read and understand these core requirements for inxi. Patches not following these rules will be ignored, or at least I'll ask you why you didn't learn what is going on before spending your time on the code.

If you don't like reading, then working on inxi is almost certainly not for you. Almost all new features require a huge amount of research and reading, and so consider this document itself as sort of a test you can take to see if you'd like working on inxi. inxi has a lot of code, a lot of complicated logic, and a lot of documentation, and requires reading and scanning a lot of 3rd party docs and research and data to develop or advance features.

top

Core Philosophy and Concepts

Just so it's clear, as these may not be obvious to all: inxi has some core requirements that will never be changed. Failure to meet any of the following requirements is always a critical show-stopper bug. These requirements have determined many of the critical development decisions in inxi. For example, the Perl version to use (Perl 5.008), the selection of Perl 5 in the first place, how features are developed, tested, and optimized, how fallback cases are handled, how legacy methods are maintained in cascade testing, and so on. It's really not possible to understand why things are the way they are in inxi without understanding the following:

These requirements exist mainly for the users of inxi, and to make developing, debugging, and testing possible. They are absolutely non-negotiable, and are the essence of what makes inxi inxi.

  1. Run Everywhere: inxi must run on really old systems. For practical reasons, this backward compatibility is restricted to GNU/Linux. See the LOC/User issue for explanation. See Perl item for Perl version >= 5.008 policy.

    All programs or tools inxi might use internally, and all files, system data, etc., will be used if present; if not, inxi will gracefully fail with clear error messages explaining why the failure happened. In other words, dependencies on a specific program version are never an issue in inxi, except for the minimum requirement of Perl >= 5.008.
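
    The use-if-present, fail-gracefully pattern can be sketched roughly like this; the helper name, tool, and message text are hypothetical, not inxi's actual code:

```perl
use strict;
use warnings;

# Locate a program in PATH; return its full path, or undef if absent.
sub check_program {
    my ($program) = @_;
    for my $dir (split /:/, ($ENV{PATH} || '/usr/sbin:/usr/bin:/sbin:/bin')) {
        return "$dir/$program" if $dir && -x "$dir/$program";
    }
    return undef;
}

if (my $lsblk = check_program('lsblk')) {
    my @output = qx($lsblk -P 2>/dev/null); # use the tool if present
}
else {
    # graceful failure: clear message, no crash
    print "Missing program: lsblk. Required for partition data.\n";
}
```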

  2. Install/Upgrade Easily: inxi is always installable, and upgradeable, as a single file, plus the man page once -U is used as root. This will never change; it's impossible to develop inxi without that core feature (see below for the development branch pinxi info). While certain libraries might be added in the future for features that do not exist today, they would never be required to actually run inxi; for example, language support for non-English output. This feature appears increasingly unlikely to ever happen, however.
  3. New and Old Hardware: inxi should work on the oldest hardware, and the newest. Where data is unavailable, it should fail gracefully. Special thanks to the people over at Slackware forums who have been absolutely brilliant in helping test inxi features on the oldest and most arcane hardware available. inxi over the years has been greatly fortunate to attract this type of highly skilled help from different groups, and this help is always greatly appreciated.

    If inxi is not running well on some new cutting-edge hardware, it's probably because that hardware offers a significantly different perspective on the system in terms of data, data sources, etc. Those cases can often be handled if sufficient debugger data is made available, and ideally, test systems for direct SSH access, which is several orders of magnitude better in terms of reducing dev time on new hardware.

    If you have new hardware that isn't supported yet, like ARM servers, M1/M2 SOC systems, you can help make inxi better by giving us data and debugger information, and testing fixes and updates.

  4. No Dependencies Except Perl: inxi will never have any dependencies except Perl 5.008 or newer (including Perl 7.0), plus a small subset of Perl Core Modules.

    Perl 5.008 was picked because it's the first 'modern' Perl that is basically feature complete, and very little is required to make the code run on all Perls since 5.008 beyond awareness of certain newer features that can't be used. The determination of which Perl version to use was basically which Perl Redhat shipped with in 2008, give or take. 5.008 has proven to be a very good break point; it's quite easy to write all code to work on it and on newer Perls. Any failure of code to run on Perl 5.008 is a critical bug and will be corrected immediately.

    A corollary of this is that all new Perl 5 variants, including Perl 7, must work as well. Initial tests on Perl 5.032 with 7 test modules show inxi works fine on Perl 7.
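
    A small illustration of what coding to the 5.008 baseline means in practice: later conveniences like the defined-or operator (added in Perl 5.010) are avoided in favor of 5.008-safe equivalents. The variable names here are made up:

```perl
use strict;
use warnings;

my %row = (size => 0);

# Not 5.008-safe (// arrived in Perl 5.010):
#   my $size = $row{size} // 'N/A';

# 5.008-safe equivalent; works on every Perl since 5.008:
my $size = defined $row{size} ? $row{size} : 'N/A';
print "size: $size\n";
```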

    Any Perl failure caused by syntax internally + specific Perl version is a show-stopper bug, and will be handled as quickly as possible.

    This means no distro should ever feel any hesitation about upgrading to the latest inxi, nor should you ever have to worry about being stuck with an obsolete inxi on, say, a 2008 RHEL or Debian server; you can always update to current and it will run. Any failures to do so, bugs, errors, etc., should be reported with line numbers, and they will be corrected.

    There is a somewhat annoying trend for certain distros to remove Perl Core Modules, or split them out of Core Modules. Redhat and its derivatives in particular have been doing this, which forces some awkward internal inxi hacks to test for missing core modules when they are required, and to exit with missing-module error messages. If too many are missing, that is a bug in the distribution, as the guys over at perlmonks.org have clearly stated, and I agree with this. If that practice gets too extreme, I won't support the distro anymore, which has already happened in at least one case. There's a point where the bug is not inxi's problem, it's the distro's problem. Core Modules are designed to always be there as a set, not to be split up.
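
    A minimal sketch of that kind of runtime core-module test; the module, helper name, and message wording are illustrative, not inxi's actual code:

```perl
use strict;
use warnings;

# Return true if a module can be loaded, without dying if it can't.
sub module_present {
    my ($module) = @_;
    (my $file = $module) =~ s|::|/|g;
    return eval { require "$file.pm"; 1 } ? 1 : 0;
}

# Time::HiRes is nominally core, but some distros package it separately.
if (!module_present('Time::HiRes')) {
    print "Error: Perl module Time::HiRes was not found. It is part of ",
        "Perl Core Modules; your distribution may package it separately.\n";
}
```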

  5. LOC / Users / Dev Man Hours: I hesitate to add this one, but the reality is, if an OS has essentially no inxi users, it's not really a great idea to add a lot of code to support that OS, since those extra lines of code will negatively impact the 99.999+% of users who don't run that OS, simply by making inxi bigger. While not absolute, this is a real consideration. The rough formula is LOC per user helped. The cost of ignoring this is massive code bloat that in the end doesn't really help users at all.

    There is a point I think when such an OS, if they are actually interested, should consider forking inxi and doing that work themselves. If none of their users or devs care enough to do the work or spend the time, it's unclear to me why I should do that for them.

    This isn't absolute, for example if I like a certain project, and run it, I will tend to support it no matter how many inxi users it may have. But that's my choice as a developer of how and where to spend my time/energy. TinyCore is an example, where I not only support it fully, but am the packager, just because I find the entire concept of TinyCore to be really cool.

  6. Non-Free? Not Interested: I am not going to spend my free time supporting billion-dollar corporations that, as organizations, stand in the way of Free Software, like Apple's OSX, Microsoft, Oracle Solaris, etc. I will generally correct actual Perl errors and the like, but I won't use my own time on that stuff (see the LOC/User issue).

    Also, if users want to run non-free operating systems (OSX, Windows), that's their choice, but it does not obligate me to waste my life supporting that choice. If the issue is interesting, for example a recent WSL-on-Windows situation, I will spend a bit of time on it; but in that case it was actually Ubuntu running on Windows, with Wayland, etc., not Microsoft Windows itself, so that's a slightly different situation.

    I would also do it if I were paid my normal professional rates, which is only appropriate for highly profitable companies with huge market caps. But not if it violated the LOC/User rule; at that point, a fork would be called for, which is never going to be practical because inxi simply evolves and changes too much.

    There are some gray areas. For example, I try to keep Android supported, but Google has increasingly locked down the /sys and /proc file systems, so unless the phone is rooted, the data inxi can get is quite restricted; that support is ongoing, though, and I do test it. Note that inxi is also packaged by Termux, the Android CLI environment project.

top

Coding Requirements

These are the core technical requirements for working on the code. Note that these may evolve and change over time as my interests and abilities change or develop. I got better at Perl as I worked on Perl inxi, so there are certain ways of doing things I used at first, during the rewrite to Perl, that I don't tend to use now. Those get updated as time goes along, but it's not a priority since it's time consuming. Other than that, just look at how things are done, and that should be enough.

I consider inxi to be quite complicated, and some features are extremely interconnected, so really, make sure you understand why something is the way it is before assuming it was done wrong. Not everything is perfectly commented, but I do try to make unclear logic clear via comments where I notice the issue; of course, sometimes something is only clear at the time it's created...

These requirements are slowly evolving to suit my current style and preferences. Note that I do almost all of the coding of inxi, so this is basically how I want to develop and run inxi as a very large and very complicated system data program. If you don't like this way of doing things, then do yourself a favor and find something else to contribute to. I don't get paid to do this, so I'm not going to adopt methods or techniques I don't like.

I try to make the internal inxi code better and better over time, more and more clear, consistent, and organized. As inxi gets bigger and more powerful, this type of ongoing refactoring and improvement becomes increasingly critical to be able to manage the increasingly complex logic and data handling and parsing inxi engages in. I view this effort as increasingly successful, which is evidenced by how rare it is now for the code to actually be in the way of the solution or feature being considered.

  1. Work on pinxi, not inxi!: First and foremost: don't EVER work on the inxi master branch!!! The development version of inxi is pinxi, which has its own repository. This came about during the rewrite to Perl, so that I could compare pinxi (the Perl version) against inxi (the bash version) to make sure the Perl inxi was feature complete or better on launch. This was so useful that I decided to just keep pinxi as the development version. Further, pinxi does NOT get merged into inxi; it gets copied over to inxi, then committed. The two are standalone programs; that is, pinxi can best be thought of as the next inxi, or inxi unstable, or inxi rolling.

    inxi components are largely split into their own repos, inxi, pinxi, binxi, inxi-tarballs, and so on.

    If you submit patches to the inxi master branch, it means you did NOT read the source repo README, and that's not a great way to start any collaboration.

  2. pinxi is leading edge: pinxi is always ahead of inxi, except in the days right around the next inxi release. But pinxi is also very dynamic, which means, really, you need to talk to me before doing anything substantial. I don't use git merge features, and I don't use branches in the way you might be used to; the real master version of inxi is actually my local dev pinxi, which gets committed now and then by copying pinxi to inxi, updating a few values, and committing that to the master branch as the new, next, inxi version. This is the version that is tagged.

    Debugger data should always come using current updated pinxi, since it may have newer debuggers added to deal with newer issues.

  3. Automated Release Processing: tools/release.pl: You can follow the release process by looking at inxi-perl/tools/release.pl, which was created in 2022 to help make the release process smoother, and to automate the many little manual steps I'd do for a new release, plus automating man and inxi testing, to avoid creating a broken commit to master branch. This tool is working very well, and shows pretty much exactly what is involved in creating a new inxi release.

    The release -d option only creates the new doc html pages, inxi-man.htm, inxi-options.htm, and inxi-changelog.htm, from the 3 -temp.htm files. This automation has made proofreading the man, options, and changelog a lot easier.

    Once that has passed all tests, and I'm happy with the release, it's committed to the master branch (using usl -i), then, a few hours later, it's tagged (using usl -it) if no edits or last minute fixes show up. usl completely automates the commit and tagging process as well. usl is an smxi tool, and lives in the smxi repo.

  4. Tabs, Not Spaces: In terms of the actual code: tabs, not spaces. Really. You can always adjust your code editor's tab width.
  5. Code Folding: inxi is a very long program, in one file. If your code editor can't do code folding, then find one that does; otherwise you can't really work on inxi/pinxi, it's not practical. If you can't deal with very long files or code bases, then inxi is not for you. The single file is due to the core requirement to install/update inxi easily, everywhere, always. I use Kate mostly to work on pinxi, and Geany is also reasonably ok. Note that code folding can have bugs, so inxi code is carefully maintained to avoid triggering some such bugs (like Geany folding wrong if there is no space after a comment # in many cases). I have not tested inxi code folding in any other editors.
  6. Clear Perl: Readable code is preferred over terse code, unless the terse Perl is faster than the expanded form, which happens. If you don't know how to use Perl optimizers, then you probably want to skip trying to optimize stuff unless it's a very obvious, clear, and reproducible gain. In some cases, after running tests on various methods with:
    use Benchmark qw(:all);
    cmpthese(5, {
        'method_a' => sub {...}, # one way to get the result
        'method_b' => sub {...}, # another way to get the same result
    });
    difficult-to-read Perl may be used in utility functions which users don't have to interact with beyond calling them.
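
    A complete, runnable version of that kind of comparison; the sample line and the two parsing methods are illustrative, not taken from inxi:

```perl
use strict;
use warnings;
use Benchmark qw(:all);

# Sample data and methods are illustrative only.
my $line = 'MemTotal:        16384256 kB';

cmpthese(200000, {
    'split' => sub { my @parts = split /\s+/, $line; $parts[1]; },
    'regex' => sub { my ($val) = $line =~ /(\d+)/; $val; },
});
```

    cmpthese prints a rate table comparing the approaches, which makes real differences (as opposed to noise) easy to spot.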
  7. Internal Tools Docs: Many and ideally most of the inxi utility functions (aka subs) are documented in inxi-perl/docs/inxi-tools.txt. This is not always fully up to date, but I try to keep it reasonably current. Not all tools are documented, but probably will be one of these days. There's almost always an internal inxi tool to do standard repeated operations, and if there isn't, there's either a reason for that, or it's simply because I didn't do it.
  8. Internal Values Docs: Many internal inxi hash/array values are documented in inxi-perl/docs/inxi-values.txt. These are very useful and help you avoid trying to reinvent the wheel. I try to keep this one quite up to date since I refer to it whenever I add or change internal test / boolean values.
  9. Cascading Tests: Since the codebase and logic for inxi are quite old, and handle legacy situations as well as modern ones, in general, old methods are preserved, and a cascade of tests is used to determine which method to use. In other words, something that was required to get data in 2008 remains in place, and newer methods are simply integrated into the flow, via testing. This way the old stuff keeps working as expected, and new methods and solutions are used when available. This basically means you can't just jam in something that works only on your system and hasn't been tested anywhere else; you have to understand the cascades of testing and conditions before interacting with them. This can be quite difficult, and requires/forces some discipline, otherwise things devolve into chaos quite rapidly.
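
    A sketch of the cascading-test pattern: try the newest data source first, fall back to the legacy one, and degrade gracefully if neither is available. The paths are real Linux files, but the logic and helper names are illustrative, not inxi's actual code:

```perl
use strict;
use warnings;

# Read the first line of a file, or return undef if it can't be read.
sub read_first_line {
    my ($path) = @_;
    open my $fh, '<', $path or return undef;
    my $line = <$fh>;
    close $fh;
    chomp $line if defined $line;
    return $line;
}

sub cpu_mhz {
    # modern sysfs source, value in kHz
    my $khz = read_first_line('/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq');
    return $khz / 1000 if defined $khz && $khz =~ /^\d+$/;
    # legacy fallback: parse /proc/cpuinfo
    if (open my $fh, '<', '/proc/cpuinfo') {
        while (my $row = <$fh>) {
            return $1 if $row =~ /^cpu MHz\s*:\s*([\d.]+)/;
        }
    }
    return 'N/A'; # graceful failure, never a crash
}

print 'CPU MHz: ', cpu_mhz(), "\n";
```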
  10. Global Data Storage: data types, like dmidecode output, that can be used by more than one primary output type are stored after parsing and processing in global variables, usually arrays or hashes. Great effort is made to avoid ever calling or creating a subshell more than once for the same program, because subshells are far and away the most expensive operations inxi carries out while running, parsing, and collecting data.

    A small set of other variables are also stored globally, like test booleans, arrays, and hashes, and configuration hashes. Most of these are outlined in inxi-perl/docs/inxi-values.txt. Note that doing docs is boring, but I try to keep that file in particular up to date since I use it when adding new values to existing items.

    Globals are, however, kept to an absolute minimum, and are often required only because of using Perl 5.008 rather than Perl 5.010, which introduced 'state' variables that in many cases remove the need for globals. But the set that is used now is the most barebones I've found to be practical.

    To keep things manageable, in many cases, Package (aka Class) scope is used to store values internally within a Package, which helps avoid global clutter.
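
    The parse-once, package-scoped storage idea can be sketched like this; the package and data names are illustrative, not inxi's actual code:

```perl
use strict;
use warnings;

# Hypothetical sketch of package-scoped caching: an expensive subshell
# call is run at most once, parsed, and reused by later callers.
package DmiData;

my ($loaded, @dmi); # package scope, not true globals

sub get {
    if (!$loaded) {
        # the expensive part: one subshell call, ever
        @dmi = split /\n/, qx(dmidecode 2>/dev/null);
        $loaded = 1;
    }
    return \@dmi;
}

package main;

my $first  = DmiData::get(); # runs dmidecode once
my $second = DmiData::get(); # served from the cache
print scalar @$second, " lines cached\n";
```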

  11. Optimization: there is a further requirement that code that can be optimized in terms of execution speed should be optimized. If any technique or method can be demonstrated clearly to be faster than something else that produces the same result, that faster technique should be used. Now and then I revisit certain features that have expanded and expanded over time and test them, and have more than once sped up a feature by hundreds of times, literally, simply by refactoring it. Differences of only a few percent are unlikely to be worth the time, because they won't result in much improvement and could be CPU / system dependent, but loop tests of methods and syntaxes generally reveal real optimizations readily. The Devel::NYTProf Perl profiler is used now and then to catch glaring bottlenecks.

    Since a lot of these operations run in very large loops, an optimization of a few ms in a big loop can really add up; but if something runs only once, it's not always worth doing unless it makes the code better.

    A last part of this is that once you run Devel::NYTProf you quickly realize that loading modules and running subshells account for at least 90% of the execution time of inxi, so in particular, anything that can get rid of a subshell command and give the same or better results is good. That's particularly relevant with RAM-backed files like those in /proc or /sys, which are super fast to read.

    Optimizations that save more than 0.01 seconds on a total run are generally very desirable, and savings of 0.1 seconds or greater will always be added, as long as they don't damage code readability and usability, which some highly optimized Perl can do.

    Once all this is clear, go back and run inxi on a 100 MHz machine with a few hundred MB of RAM, and you'll quickly see why these things matter. inxi is actually tested on an ancient 200 MHz MMX laptop now and then, just to keep it honest, and runs quite well on it, relatively speaking.
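
    For reference, a typical profiling run with Devel::NYTProf looks something like this (assuming the module is installed; it is not required by inxi itself):

```shell
perl -d:NYTProf ./pinxi -F   # run pinxi under the profiler
nytprofhtml                  # convert nytprof.out into an HTML report
```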

  12. Perl Modules: No modules that are not in the standard core modules are used for standard features; that is, inxi will never require any Perl module that is not in all the core modules since 5.008. Note that Redhat split some modules out of core modules; that is handled where relevant by internal tests for the module, but in general it should not impact regular users much.

    Modules are only used if that is absolutely the only way to achieve the result in a meaningful way. Modules are almost never required; as you can see from the top load / use section, very few are required. A few modules for special features regular users won't use, like export to JSON or XML, are loaded after the fact, and tested for, etc., if that feature is requested. But even there, the most basic module that will achieve the functionality is used.
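
    The load-on-demand pattern for optional features can be sketched like this; JSON::PP stands in as the example module, and the error text is illustrative:

```perl
use strict;
use warnings;

# Load the module only when the optional feature is requested, and test
# that the load worked rather than assuming it.
my $have_json = eval { require JSON::PP; 1 };

if ($have_json) {
    print JSON::PP->new->encode({status => 'ok'}), "\n";
}
else {
    print "Error: JSON::PP is required for JSON export but was not found.\n";
}
```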

  13. Subshells: Use of programs in subshells, like lm-sensors, lsblk, smartctl, hddtemp, etc., is always handled by testing for the tool, then showing appropriate alert messages if it's missing, or if it requires root to run.

    As a subset of this rule, that means it's actually fine to have new features that use tools that old operating systems didn't have, or that different platforms, like the BSDs, don't have, as long as the tools are tested for and handled, and error output, where appropriate, is put in place if the feature can't work without the tool.

    Subshells are EXPENSIVE!! Most of inxi's execution time is spent running subshells or importing modules. Try to get a sense of what a command actually costs, time-wise, with:

    time [command]

    But also remember that just because you've got a 16-core MT Ryzen 7000 with 64 GiB RAM doesn't mean that an old system will be anywhere close to that fast. Ask yourself this: is there any other way, such as reading files, that I can get this data? If so, that way is almost always faster. Particularly if the data is in /sys or /proc, or already in a pre-parsed inxi internal data structure.
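
    The cost difference is easy to see with the core Time::HiRes module; this sketch times one subshell call against one read of a RAM-backed file (Linux-only, and the exact numbers will vary widely by system):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $t0 = [gettimeofday];
my $out = qx(uname -r); # subshell: fork + exec + read
my $subshell = tv_interval($t0);

$t0 = [gettimeofday];
my $line = '';
if (open my $fh, '<', '/proc/version') { # direct read of a RAM file
    $line = <$fh>;
    close $fh;
}
my $file_read = tv_interval($t0);

printf "subshell: %.6fs  file read: %.6fs\n", $subshell, $file_read;
```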

  14. Test On Wide Range Of Hardware: it's very useful to test on a wide range of hardware, from the early 2000s to the present. Bonus for pre-2000s hardware. This can expose a lot of bad assumptions, and also trigger null-data results and errors, which we want to get handled. For many features there is no substitute for testing on real hardware.
  15. Test On Wide Range of OS: For software-based features, which are not hardware dependent, it's quite useful to also have a sizable set of distros of varying ages in VMs. I have a lot of ISOs, from a wide range of time, and can supply you with them so you can make your own VM-based test systems. This is very useful for things like desktop data, distro IDs, and anything else that is primarily software, not hardware, based. Or things like software RAID or block devices, for instance, which are much easier to test in VMs, where you can add disks endlessly, etc.

    Note that the more enduring distros often archive their legacy repos, so you can actually set up, update, and install packages on very old systems after installing them into a VM and adding their archived repo URLs. Debian does this, so does Ubuntu, and this is a very useful feature, particularly when testing legacy stuff that is not in current distros, or to verify that stuff working in new systems also works in legacy ones.

top

Summary

I think that's it for the actual true core requirements of inxi/pinxi. Some of these might not be obvious as core requirements when all you've seen is distro-packaged inxi, but once you start running inxi on very old hardware and operating systems, the reason for each requirement becomes very obvious very quickly.

Note that I welcome people who are able to contribute in a way that is positive, does not cost me time and energy, and benefits real-world inxi users and use cases. I do not welcome time sucks, whining, complaining, or an inability to adapt to the way inxi does things.

Often the best way to learn and help with inxi is to test, find features you are really passionate about, and start focusing in on those, and learning inxi/pinxi well enough to where you can start finding bugs, issues, and things that could be better or improved. In other words, contributions of time, skill, and energy are valued just as much as code contributions, if not more.

top