About NetInvent

What we do:

Based in Melbourne, Australia, NetInvent provides systems analysis, IT consultancy, software development and IT support services to a number of partners.

Code Re-use and Consequences of the Hidden Costs of Development

There is an often ignored but readily apparent contradiction in the games development process: attempts at code reuse often seem to end up costing more time (and thus money) than writing new code. Is this really the case? How widespread is it? Most importantly, if it is true, why?

Anecdotal Example - Game Development

First, an example of the pattern that drew my attention to these issues: problems that arose repeatedly during game engine development.
It seemed that developing a game engine rarely ended well, and while there are reasons for that which have nothing to do with code re-use, game engines tend to strive to be well architected and to avoid code duplication, so they ought to be a poster child for code re-use.

It is not infrequent to find that developers who have invested a great deal of time and money into engine development do one of these things:

  • Abandon their internal engine and turn to middleware
  • Go bankrupt
  • In search of financial security, sell themselves to a bigger player (who promptly kills their engine development)
  • Sell their engine to other companies as a middleware product

Only the last outcome seems to indicate that developing 'game engine' software is a wise decision. Currently, it is much more in vogue for companies to employ middleware than to develop their own engine internally. Many arguments have been made in favour of middleware, so there's no point in reiterating them here. The case of 'game engine development' is just one of a number of similar cases or variations. The rest of the discussion is not limited to game engines at all, and should be generally applicable to any situation where the authoring of code intended for re-use is planned.

In conclusion, game engine development is one anecdotal example of problems with code reuse that are mirrored, to varying degrees, at both the small scale and the large scale.

Code Reused Is Still Not Free Code

The benefits of code reuse have been lauded since the early days of programming, and there's no doubt that progress in the software industry would be all but impossible without reuse and refinement of code. However, it has been noted that in practice it's often difficult to realize the benefits of code reuse.

It is often implied that the only cost of code reuse is the time for the user to become familiar with the API. The implication is that reusable code is as cheap (or almost as cheap) to produce as single-purpose code.

I hope to outline below some of the reasons why code reuse has other difficulties that must be overcome before its benefits can be exploited.

It's well understood that there is a cost in taking a piece of software from a working program to a 'product'. This issue was documented and explained in detail in the famous (and now ancient) The Mythical Man-Month. What most people remember from this book is that adding more people to a project can often slow it down rather than speed it up. However, one of its key assertions is that turning a software routine into something fit for use by others is an act of creating a product, and that the act of productization is many times more expensive than the simple act of creation.

A genuine product must be developed more carefully: it must attempt to deal with usage cases that are distinctly non-obvious, it must be tested and it must be documented. It is typically the case that productized software must cope with a wider range of inputs and applications than software written for one special and specific purpose. This applies from the level of the smallest and most basic routine upwards. This (necessary) generalization affects not only the cost but also the performance of the final product.

When we look at well-known software products in the marketplace, we can see that they have been expensive to develop, and that they do not perform as well as more specialized products, either in terms of usability or speed: consider Microsoft Windows, Photoshop, Microsoft Word and the QuickBooks accounting package. Windows in particular is burdened with difficulties both in its immense and ever-expanding feature set and in its immense and ever-expanding API set, which must be made available to external developers if Windows is to be of any use. It is the most extreme example of a general-purpose software product: it cost an immense amount to develop and in return it has managed to generate considerable rewards for its developer and users.

When we look at the micro-scale, we can see, even in the case of a simple (geometric) vector class, that making it product-worthy is a non-trivial exercise, loaded with negative consequences, which we must endure to reap the eventual benefit of code reuse.

Considering just one simple operation, vector normalization, we can see at once that there are several issues:

  • the designer must decide by what means the vector is passed to the routine; some means are faster, others safer, and in some cases the best approach is situational
  • the designer must decide how to return the normalized result, or whether it will be written over the input, again questions of safety and performance arise
  • the designer must decide how to inform the caller of a failure to normalize the vector (if at all)
  • the designer must decide what to do with vectors that cannot be normalized
  • the designer must decide at what point the accuracy of normalization is inadequate
  • the designer must decide what costs from handling the above issues are acceptable, given their impact on the performance of the operation
  • all this must be implemented with diligent care and attention
  • rigorous tests are required to ensure that the routine functions under a wide range of circumstances, most of which will occur rarely, if ever, in normal use

In this sort of scenario, it's fairly likely that the designer is going to choose an approach that is safer rather than fastest. There are likely to be several compromises along the way, and compromises are not named that for nothing: ultimately, a compromised design is employed for each component, with consequences and complexities for the final assembly that require additional documentation and understanding on the part of the user.
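To make the trade-offs concrete, here is a minimal sketch in C++ (the names and the epsilon policy are my own, purely illustrative) of two ways the normalization question might be answered: a fast, single-purpose routine that trusts its input, and a safer, more 'productized' routine that reports failure and leaves the policy decision to the caller.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Single-purpose version: assumes the caller guarantees a non-zero, finite
    // input. No branches, but it silently produces inf/NaN garbage otherwise.
    inline Vec3 NormalizeFast(const Vec3& v)
    {
        const float invLen = 1.0f / std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return Vec3{ v.x * invLen, v.y * invLen, v.z * invLen };
    }

    // 'Product' version: checks for degenerate input, reports failure to the
    // caller, and leaves the vector untouched on failure. Safer and easier to
    // document, but it costs a branch, an epsilon policy, and an API decision
    // about how errors are returned.
    inline bool NormalizeSafe(Vec3& v, float epsilon = 1e-6f)
    {
        const float lenSq = v.x * v.x + v.y * v.y + v.z * v.z;
        if (lenSq < epsilon * epsilon)
            return false;   // cannot be normalized; the caller decides what happens next
        const float invLen = 1.0f / std::sqrt(lenSq);
        v.x *= invLen; v.y *= invLen; v.z *= invLen;
        return true;
    }

Neither version is wrong; the point is that every one of the decisions above has to be made explicitly, implemented, documented and tested before the routine is fit for anyone else to reuse.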

There is no decision that can be made without a cost; every element of safety or flexibility that looks good in a product has a cost in performance and development time. Of course, safety may return rewards in time saved elsewhere, but it is not achieved for free. The point is that the developer of the product must put in the effort so that users may reap the reward: a one-to-many relationship that ultimately delivers a cost benefit. The performance, however, can rarely be recovered at all: the safe, general-purpose routine is almost inevitably slower than the specialized one, and as product services are employed by developers these performance penalties are multiplied again and again as they ripple up through the entire system.

In the end, either faster hardware is required (not an option for consoles or embedded systems), or more time must be invested in various optimizations at both the high and low level, to recover some of the lost performance.

Critically, for a product to have an extended lifetime, it must meet a higher standard: the longer you want to use a product, the more effort must go into it. This effort is typically not counted, because it is regarded as a routine part of maintenance. Ironically (or perhaps obviously), designing and building for low maintenance is a time-consuming task that increases the cost of the product. Once you understand this you can appreciate that there's no such thing as free low-maintenance code: maintenance is not only, in many cases, an ongoing consequence of incomplete productization - it is ongoing productization.

Put another way, low-maintenance code is effectively closer to being a product than high maintenance code.

It would be something like a breach of the laws of thermodynamics to magically produce productized code at no additional cost. In programming, as in physics, nothing is produced without effort.

If we accept that making product-quality code is never free, then we must also accept that making low-maintenance code can never be free, though this is rarely appreciated. Far too often the dogma is repeated that low-maintenance code is cheaper to write, and yet this is clearly false. The undeniable speed and productivity of hacky cowboy coding should confirm this: high-maintenance, single-purpose, discardable code is cheaper to produce ... though it may not be cheaper to own.

It should be obvious that even a little wisely placed productization can realize some major benefits, but this doesn't mean that all code needs to be product quality.

While for the Windows developer this is a minor issue (new, faster hardware will always appear), for the game or embedded systems developer it is a serious matter. Competition on a console platform is all on a level playing field, and specialized, single-platform (perhaps even single-game) engines typically deliver better user experiences. Similarly, embedded systems software targets fixed hardware, and engineering revisions are unlikely to be made to resolve performance issues - new hardware would incur many (and possibly ongoing) costs - so invariably a software solution is sought. Hardware specifications are typically driven by high-level management objectives for the target price of a system, and that hardware will likely be locked in long before a firmware programmer gets anywhere near a prototype.

In games, we see this reflected in multi-platform products, usually poor performers, that chase sales across a range of platforms, competing with high-performance products that are usually intended to run only on a single platform and usually have a custom-built, or highly customized, supporting engine. In embedded systems, we see sluggish menus and equipment with long start-up times that does not run as quickly as it could (or should). Whether the product is a camera, a mass spectrometer or a machine tool, the end user may get less value from their purchase.

A comparison of UT with Final Fantasy serves here: UT is for multiple platforms, massive time and effort has been invested in its engine development, and yet it performs underwhelmingly on some platforms (or performed, as those platforms are effectively defunct), and doesn't support some other platforms at all. The underlying FFX.2 engine runs only on PS2, but is a highly evolved single-platform engine customized further for a single game. Both products have been used as the basis for games that made money but they employ very different strategies.

The Cost of Reuse is Much Higher than Expected

Setting aside issues of performance for the moment, the main cost of developing reusable code is financial, and yet code reuse is normally seen as a saving - this is the main issue here. Both developers and managers are inclined to underestimate the true cost of making product-quality code.

In The Mythical Man-Month, Fred Brooks gave some approximate numbers (alas, I don't have the book to hand, so I can't check) and, IIRC, suggested that the cost of a product was (and presumably still is) around thirty times that of the naive prototype code.

It is to be expected that to some extent this 'cost of reuse' is one of the factors in schedule slippage and programmer underestimation of time for tasks: the programmer is inclined to estimate the time to write something more like a working prototype, not a fully designed, documented, tested and refined product.

While it's possible that the cost of bringing a multi-platform version of a trivial component such as vector normalize up to product quality is not thirty times the cost of the naive normalize, once we add in all the meetings, arguments, usage issues, bugs, platform variations, workarounds, refactors (to conform with other design changes and optimizations) and other shenanigans, we might find it isn't hard to reach a thirty-times multiplier in cost at all (regardless of what Fred Brooks had to say).

And we still haven't factored in performance costs...

If We Believe The Theory

If we are prepared to believe that creating genuinely reusable code is extremely expensive, then we should be able to draw some conclusions and see how they match up to reality, or at least our experiences.

The first easy conclusion is that game engines should suffer badly from the cost of productization: producing a product-quality, multi-platform game engine is far more expensive than estimated, and its performance can be expected to fall below that of more specialized engines.

That a programme to develop a grand, all-encompassing, multi-purpose, multi-platform engine is typically the death knell of any developer that undertakes it seems to confirm this theory, at least anecdotally. The exceptions are all companies that actually managed to sell their engine to others to recover development costs.

We should also expect to see that it's possible to produce a single-purpose, single-game engine, largely from scratch, and get a product to market that performs adequately on a single platform. I observe that this has been true in the past, and will probably be true in the future. While the features required of an engine have increased, the number and quality of low-level products available to support an engine, particularly on a single platform, continue to increase.

We should expect to see that 'code from the internet' rarely survives intact unless it is of product standard. That is to say, much code that is downloaded and incorporated will be extensively modified: it's a free prototype, but it's not a free product. While some libraries are highly focussed, and are effectively good products, many others are not - despite the massive programming effort that goes into many open-source developments.

The open-source process helps to make product quality code when the process is genuinely diversified amongst many developers, but when the open source development is largely the work of very few, the cost of absorbing the code is likely to be much higher. In the case of pseudo-code in academic papers, or nVidia slides, etc, you can expect the cost to be as high as writing from scratch.

Conclusion

It's largely in the hands of the reader to determine for themselves whether product-quality code really is at least an order of magnitude (perhaps two) more expensive than use-once-and-discard code.

However, if this cost is really as large as suggested, how does it relate to various commonly accepted 'truths' of game development (and, by corollary, of many other kinds of software development)? Make no mistake, these are things that most developers accept as self-evident or simply obvious. The more subtle implications are rarely considered.

Alleged 'Truths'

  • A multi-platform engine is cheaper than three or four unique engines with fairly common application interfaces.
  • It's better to produce one good, ongoing engine, that is continually refined and improved for all platforms than to produce code per product that is largely discarded on completion.
  • Middleware is the answer and you should develop as little engine technology as possible.
  • The above 'truths' form a theoretical argument in support of prototyping as a general methodology.

Yes, I know that last one is an obvious logical fallacy when you see it like this, but that doesn't stop people making the argument that there is some kind of linkage between middleware and prototyping, simply because you can use middleware for prototyping. Which is not to say I'm against prototyping ... see below.

Let's Examine the 'Truths'

In the first case, the multi-platform engine has to cost less than three or four times as much as a single cowboy engine (one per platform), and yet deliver the same level of performance across all platforms. Actually, if you intend to reuse the engine, then you might write off some of this cost, but that takes us to point two.

If you can spread the cost of engine development over several products, there appears to be an immediate win, but this has to take into account the significant increase in cost as the quality of the product is required to increase. Remember, for a product to have an extended lifetime it must reach higher standards. Some engines have achieved that sort of lifetime, but they are few in number. Even the more successful and long lived engines have undergone considerable and expensive revisions. We really can't pretend that the original Quake engine is the same code as found in the Doom III engine, though they may have a few low-level product quality components in common. By the time your first product hits the market there are bound to be parts of your engine that are already obsolete in some respect.

It should be clear that the cost benefits of a multi-platform, high-quality engine product are not as obvious as might first appear, and perhaps the benefits are quite closely balanced against the alternatives unless you can sell your product to recover development costs.

As far as middleware goes, we can see that it's largely a good idea, as long as the product is of sufficient quality and we can accept the performance limitations.

Unfortunately, the amount of high-quality middleware is low, and there's no obvious sign of forthcoming new products that might change this evaluation. Rather, for 'political' reasons, Renderware has diminished substantially as an option since the previous console generation, and arguably nothing has appeared to replace it (unless you count Unity). Some developers find themselves forced into using Unreal when it isn't really a good fit for their product.

It can be seen that prototype code typifies the non-product, no-reuse approach. Historically in games prototypes have often evolved into finished products. These products have often succeeded in their first incarnation. The problems with them have only emerged when somebody (incorrectly) assumed that 'finished prototype' code could be reused to make a sequel game, etc.

If you write disposable code you need to be sure you do dispose of it in a timely manner. If you cling onto it after its best-before date expires you can expect stale and nasty tasting products.

In my personal experience, it is tempting for management to assume that all existing code is reusable code, despite protests from programmers, or warnings that certain code is not particularly suitable or helpful. When this is used as an excuse to deny necessary development resources, or to massage schedules, disaster inevitably ensues.

Proper analysis and recognition of when existing code is appropriate for reuse is essential; even when time is allowed for further development and productization of existing code, it can push development in inappropriate and time-consuming directions and do untold damage to morale.

The Tomb Raider series springs to mind as an example; it cannot have been easy to add new features to the existing PSone-era code base, so a decision was apparently taken to attempt a product-quality rewrite (Angel of Darkness) with lots of new features. It foundered in time and cost overruns, presumably because the engine was engineered for reuse (longevity) rather than rapid development, leaving too little time to develop the gameplay aspects properly.

We could also describe this as a case of sequelitis, but such speculations are probably worthy of their own article, and when considered too deeply they lose their power as simple examples.

Finally:

  • I don't mean to say that creating productized code is never cost effective, but rather that it is less frequently cost effective than widely imagined.
  • Clearly, there are cases where reusing code has obvious benefits; however, those cases may not be obvious in themselves.
  • If prototyping typifies non-product, non-reusable code, then prototyping is as relevant to engine and tool development as it is to other parts of the process. Put another way, prototyping is not just for gameplay.
  • You should resist the temptation to reuse code that is not really fit for reuse.
  • Proper analysis and recognition of when existing code is appropriate for reuse is essential.
  • A small amount of well-placed productization can deliver substantial benefits, but once the low-hanging fruit are plucked, productization becomes increasingly likely to cost time and money - rather than creating savings - unless you can sell the immediate products themselves to recoup costs.
  • The cheapest productized code is code you didn't have to write internally; as long as it is of sufficiently high quality and fits your needs well, you are very likely to 'win' by using it.

Configure bind9 as master/slave pair on CentOS / RHEL 5

There are plenty of HOWTO documents on this and I've discovered that nearly all of them are full of unnecessary fluff that can be both confusing and misleading. It's probably because they are reworks of older HOWTOs from the bind8 era. Configuring bind9 in master/slave is far easier and requires far fewer configuration changes than the existing HOWTOs suggest.

You need to edit the named.conf files on both systems. I'm assuming that you begin with a functioning but otherwise default configuration on both machines.

  • On the master machine, create a dns key and put it in a secure file that you can include.

    sudo dns-keygen | sudo tee /etc/named-tsig-key.conf

    Then edit the output file: /etc/named-tsig-key.conf so it looks something like this:

    key ddns_key
    {
      algorithm hmac-md5;
      secret "lw8MPJFqapeAG3ehTuvDBPUhRWzX1hyz5Ov3UvIXhmGh4XkPKIcPNCsz5f8v";
    };
    

    Where the key string is the one you just generated (not the one I've used as an example); you should find it in the file, because we just put it there.

    Fix up the permissions on the key file:

    sudo chmod og-rw /etc/named-tsig-key.conf
    sudo chown named:named /etc/named-tsig-key.conf
    
  • Include the dns key file in your named.conf on both machines:

    In your named.conf, after the logging section in named.conf add:

    include "/etc/named-tsig-key.conf";

    You can call this key file whatever you like, and put it wherever you choose (wherever you think is most secure), but I've put mine in /etc/ and named it in a way that makes sense to me.

  • Copy the key file you generated on the master to the slave machine. You will need to put it in the same place and ensure that the permissions and ownership are the same as used on the master. There are lots of ways you could copy the file, from sftp/scp to simple cut and paste from one putty session to another.
  • Add server descriptions to both machines following the include directive:
    server 123.456.789.123 {
        keys { ddns_key; };
    };
    

    The address here is the address of the other machine. If you are on the master, list the address of the slave. If you are on the slave, list the address of the master.

    You need to use an IP here, don't use the server's name.

    This directive tells bind that it should use the key we generated for communication with that server.

  • Add the zones to the slave configuration:

    I'm going to assume you already know how to add zones to the master, and that they are already configured - there is no additional change required on the master. The master does not need any special configuration to serve zones to the slave. Setting up an ordinary master zone is nothing specific to the master-slave set-up, so there's no point going into it here; you can find info on it almost anywhere.

    There is already an example internal slave zone in the default config file in the internal view:

    //zone "my.slave.internal.zone" {
    //      type slave;
    //      file "slaves/my.slave.internal.zone.db";
    //      masters { // put master nameserver IPs here
    //      127.0.0.1;
    //    } ;
    // put slave zones in the slaves/ directory so named can update them
    

    However, if you are using your slave as the second nameserver of an authoritative pair (and this is probably the reason you want to set up master-slave), you need to put your slave zones in the external view, not the internal one, or else your slave nameserver will not serve them to anyone else. You don't really need the zone in the internal view at all on the slave: internal look-ups on the name through localhost_resolver will do an external DNS lookup (which might eventually end up at our external view anyway) and then be cached - one external lookup is hardly a big overhead, and it saves the complexity of adding the zone to different views.

    So for a typical zone, add something like this to the external view:

    zone "mydomain.tld" IN {
      type slave;
      file "slaves/db.mydomain.zone";
      masters { 123.456.789.122; }; // The IP of the master server
    };
    
  • So, that's all you really need to do to get master-slave working. Most of the HOWTO docs I found suggested making all kinds of pervasive changes to named.conf, particularly in the options section. When I looked at what those changes were intended to do, they either replicated default behaviour and were thus utterly pointless, or they were actually wrong and would only cause trouble.

    You can do other stuff, like explicitly configuring zone transfers to the slave on the master, but there's really no need to do that (a sketch of what that might look like is shown after this list).

    All you need to get a working setup is the key declaration, a server declaration, and addition of the slave zones on the slave server (assuming the zones already exist on the master).

    All the other changes that people suggest are simply not necessary to get a working master-slave pair under CentOS / RHEL 5.X using bind9, and are as likely to result in harm as in benefit. The defaults of bind9 are pretty much what you want, and I don't recommend changing any of them unless you really know why you are doing it.
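For completeness, if you did want to explicitly configure zone transfers on the master, it might look something like this - a sketch only, reusing the placeholder domain, file and key names from the earlier examples, and restricting transfers to requests signed with the TSIG key; as noted above, none of this is required for a working setup:

    zone "mydomain.tld" IN {
      type master;
      file "db.mydomain.zone";
      allow-transfer { key ddns_key; };  // only allow transfers signed with our key
    };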

Basic configuration of bind9 (named) on CentOS / RHEL 5

The basics:

sudo yum install bind
sudo chkconfig named on

OK, now you've installed bind but want to know where the examples are, or how to set up the configuration files. The bind configuration samples are in:

/usr/share/doc/bind-9.x.x/sample/etc/ and /usr/share/doc/bind-9.x.x/sample/var/

Copy the default configuration files above to /etc/ and /var/named/ respectively. If you are serving several local domains, I suggest you create a named.conf.local file (or something similar) that contains all the zones for your system, and include it into your main configuration file.
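For example (the file name is just my own convention; call yours whatever you like), the include line in /etc/named.conf would look like this:

include "/etc/named.conf.local";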

Comment out the example zone definitions in the internal view.

If you intend to be the authoritative nameserver for a zone, add it to your external view using the format shown in the example zones in the internal view. You will need to create a zone description file in /var/named, which is beyond the scope of this simple explanation.
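For reference, a minimal master zone declaration looks something like this (the domain and file names are placeholders; the zone file itself lives in /var/named and still has to be written):

zone "mydomain.tld" IN {
  type master;
  file "db.mydomain.tld.zone";
};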

The bind executable is /usr/sbin/named. To test your configuration before running it properly as a service, run it with -g to keep it in the foreground and log to the terminal; -d controls the debug level.

You should probably also add -u named to run as the 'named' user, because otherwise you will run as the current user, which is almost certainly going to cause problems, even if you are root :)

e.g. sudo named -g -u named

You can use dns-keygen to generate DNS TSIG keys (no, it doesn't need any complex parameters; it just spits out a key to stdout, or wherever you pipe it).

You will need to edit /etc/named.conf to replace the note about this with a real key if you have a master/slave config. Otherwise comment out that whole key section because you don't need it.

This key (if you use it) is a 'secret', you need to get it to the other machines securely (you copy that key manually into a file on the other machine). It is not a certificate. Normally the key should be kept in a file with no read permission for group or others and included into the main config file, which (allegedly) needs to be globally readable (though I suspect this is not entirely true).

To check a bind config file for errors:

named-checkconf /etc/named.conf

named-checkzone domainname.tld /var/named/db.domainname.tld.zone

Assuming that you have called your zone file db.*.zone

When you've finished configuring, sudo service named start (or restart if you already started it).

Finally, you need to open up the ports for bind9. Traditionally, bind has used port 53, and doing anything else is more likely to cause trouble, despite the 'security benefits'. Ultimately, if you are running bind as anything other than a caching nameserver, then you want people to be able to find you.

You should probably open up both UDP and TCP on port 53. While some will say that UDP is enough, there are some servers that only use TCP, so again, you're asking for trouble if you try to avoid opening it for TCP as well. I can't see any realistic benefit in not opening both anyway: if bind has a vulnerability, it's probably not going to be restricted to TCP connections. And to return to the point: a bind that other people can't see doesn't need any open ports, while one that other people do need to see should be as conformant and interoperable as possible with other nameservers - if people can't find your servers, your service is useless.

Your iptables is going to need something like the following in it...

-A INPUT -p udp -m udp --dport 53 -m state --state NEW -j ACCEPT
-A INPUT -p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT

How you add these is up to you. I maintain my iptables file manually, but many people prefer to use the iptables administration tool.

What FTP daemon to use with CentOS or RHEL 5.X

What ftp daemon to use on CentOS or RHEL? I recommend vsftpd - you can install it through yum and it's very secure.

The set-up info on the vsftpd site could be better though. In particular, there is no explanation of how vsftpd uses pam to authenticate.

Look in /etc/pam.d/vsftpd, or in /etc/pam.d/<name> if you've changed the pam_service_name option (e.g. pam_service_name=ftp means PAM reads /etc/pam.d/ftp), to see how pam is set up for this service.
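As a rough illustration only, assuming you are authenticating virtual users from a Berkeley DB file via pam_userdb and have set pam_service_name=ftp, the PAM file (/etc/pam.d/ftp in that case) might contain something like:

auth    required pam_userdb.so db=/etc/vsftpd/accounts
account required pam_userdb.so db=/etc/vsftpd/accounts

Note that pam_userdb takes the db= path without the .db suffix; it appends that itself.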

After editing passwords for vsftpd use something like db_load -T -t hash -f accounts.tmp accounts.db

Where you have your users and passwords in accounts.tmp, with usernames and passwords on alternate lines. You might find some contradictory instructions about on the net, but they won't work for CentOS 5.X
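A minimal sketch of the whole process (the file names are just examples, and the database path should match whatever your PAM configuration points at): put usernames and passwords in accounts.tmp on alternating lines, e.g.

alice
password-for-alice
bob
password-for-bob

then build and protect the database, and get rid of the plain-text file:

db_load -T -t hash -f accounts.tmp /etc/vsftpd/accounts.db
chmod 600 /etc/vsftpd/accounts.db
rm accounts.tmp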

CentOS and RHEL Access and Permission Problems

Trying to configure mail systems and you're hitting mysterious 'access denied' or 'permission denied' issues, even though permissions allow access? This often happens when trying to get postfix and dovecot to talk to each other.

The problem is most likely SELinux. Most people cop out by disabling SELinux, and if you disable it, it may well solve your problem. In some cases, SELinux appears to interfere with access even when disabled - do a full relabel of the filesystem (see below) if you seem to have this problem. The various guides to CentOS security and SELinux explain how to enable and disable SELinux, and how to fix your security without disabling it.

A wiki page on SELinux booleans may also be helpful for the serious SELinux user (i.e. people who don't simply want to turn it off).

Use sestatus to check your SELinux status.

The setenforce command allows you to change between Enforcing and Permissive modes on the fly but such changes do not persist through reboot.
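For example (the change takes effect immediately and lasts only until the next reboot):

sudo setenforce 0    # switch to Permissive
sudo setenforce 1    # switch back to Enforcing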

To make changes persistent through a system reboot, edit the SELINUX= line in /etc/selinux/config to either 'enforcing', 'permissive', or 'disabled'.
e.g. SELINUX=permissive.

To relabel the entire filesystem (this seems to fix some problems where you have changed SELinux status):

# touch /.autorelabel
# reboot 

Users of Ubuntu may experience similar problems due to AppArmor. Problems are usually resolved by setting the AppArmor properties correctly.

Network Services don't work

You've installed a package, configured it, started it up and it seems to be running fine, but nothing is happening.

Did you forget to add iptables rules for the ports used by your new service? Don't forget that bind9 uses UDP as well as TCP.

Edit /etc/sysconfig/iptables and add rules to open up the necessary ports. You can also add rules from the command line with the iptables command (and then 'service iptables save'), but this stops you putting useful comments in your iptables file, because the file gets auto-generated. On the other hand, don't hand-edit iptables unless you are confident you aren't going to save a broken rule set and lock yourself out of the system. Never run system-config-securitylevel unless you want to trash your hand-edited iptables file.

Don't forget to 'service iptables restart' after making updates to the file.

Use 'netstat' to debug ports and service issues. For example, netstat -l -t -u -p -e will give a nice display of listening tcp and udp ports. Add --numeric-ports to see port numbers instead of names. The man page for netstat is fairly comprehensible, unlike some.

RHEL/CentOS/Linux/OSX Handy Hints

  • You've moved the location of an executable but the shell keeps trying to run the old one, giving you a 'file not found' error?
    Shell path caching is the problem. Use hash -r (or simply PATH=$PATH) to reset the shell's command cache.

  • Use nmap to scan your own open ports for vulnerabilities, or just to check that you really have opened the ports you think you have.

  • Everything you need to know about site certificate generation can be found in this OpenSSL howto section.

  • Where did my install files go?
    rpm -ql <package>
