[NTLUG:Discuss] Upgrade breaks things
brian@pongonova.net
Fri Apr 12 18:36:05 CDT 2002
On Fri, Apr 12, 2002 at 03:07:08PM -0500, Steve Baker wrote:
> 3) Some libraries require a *structure* to be passed as a parameter
> to some library call. If at some future time, a new variable
> has to be added into that structure, it's almost impossible to
> do it without breaking reverse compatibility. This problem
> is especially acute for C++ applications that actually make
> use of a class-based interface.
If the developer knows what he/she is doing, this should *never* be a reason for
backward-compatibility issues. That's the whole point of specifying a struct or
class pointer: You should be able to modify the *internal* structure of the
struct/class, along with related implementation details, without ever breaking
older applications which depend upon the previous interface.
What often happens is that a developer without a clue (or with a clue that is just
flat-out wrong) will pass a struct/class by value, so that adding a field changes the
struct's binary "footprint" and breaks every previously compiled app, or the
developer will modify something in the implementation which breaks
backwards-compatibility (usually evident as a segfault or other run-time error).
What I'm seeing more of lately, especially in stalwart low-level libs like glibc
and zlib, is the *addition* of interfaces, along with the deprecation of others,
which simply renders older code uncompilable against the new headers. This is a
matter of poor planning: a new contingent of developers, with ideas different from
the original lib developers', sets out to change the interfaces to better match
their own notion of what the interfaces should look like. I've come across this
mindset in a number of GNU-based libs.
> The way I deal with it in my libraries is to have applications
> statically link to them...but that's only appropriate because
> there are very few applications using them. You couldn't have
> glibc or OpenGL work like that (for example).
One of the problems here is that statically-linked executables will be larger than
their dynamically-linked counterparts. But with large drives now cheap and
commonplace, Linux users may want to start revisiting the issue of static vs.
dynamic linking. (Of course, "updating" a buggy library then means rebuilding all
your carefully compiled executables, another problem to think about.)
> The other way out is not to attempt to maintain 100% compatibility,
> but instead to make sure that multiple versions of the same library
> can co-exist on the same computer and that applications always pick
> the right version. Linux makes an attempt at that - but it's far
> from 100% successful.
I have 7 different lib paths currently in use on my Linux system, and my head hurts
from keeping it all sorted :) Library versioning works, but I've seen way too many
broken configure scripts that detect the wrong library versions (or miss them
entirely). After a while, keeping track of several versions of several libraries
becomes a sizable task.
--Brian