Loongson Open Source Community

Thread starter: cb001

Casual chat: Free Pascal, the FPC camp, the Pascal camp

OP | Posted 2015-10-26 10:53:44
Internationally there is Decimal BASIC, written in Free Pascal, open source, and continuously updated and maintained. It also comes with a companion tool that automatically converts Decimal BASIC source code into Free Pascal source code, and that tool is written in Free Pascal as well.
OP | Posted 2015-10-26 11:02:01
hdst posted on 2015-10-26 10:49:
Typing two characters for := assignment is a hassle, and then there is the identifier casing; it does read clearly, but you have to keep switching case, shift, x, shift...

Because C's assignment operator is just a single = sign, to avoid mistakenly turning the equality comparison into an assignment, experts recommend writing the test c==4 as
4==c

For that reason alone, I like :=.

The F-35 has been delayed this much; writing its system software in a C-family language is probably one of the causes of that disaster.
The F-22's software was written in Ada, a Pascal-family language, and it went much more smoothly than the F-35.
I just like this :=.
Posted 2015-10-26 11:30:52
cb001 posted on 2015-10-26 11:02:
Because C's assignment operator is just a single = sign, to avoid mistakenly turning the equality comparison into an assignment, experts recommend writing the test c==4 as
4==c

To each their own, I suppose.
That said, according to Professor Li Li, both the F-22 and the F-35 were written in C++, which is why they keep malfunctioning; see a certain CCTV military program from 2013.
OP | Posted 2015-12-11 07:00:13
Why C is not My Favorite Language


1. Popularity - Accidents of History

C is the most popular programming language in use today. Its popularity derives from the widespread belief that C is a simple, fast and portable programming language. In reality, C's popularity is the result of historical accident. At a time when people needed a system implementation language efficient enough to replace assembly, yet more readable and writable, C happened to satisfy these requirements and became the language of the nascent Unix operating system. It is the success of Unix that led to C's popularity today. Is C really as good a language as its enthusiasts believe? I don't think so.
2. What makes a C programmer crazy?

Every C programmer has experienced irritating debugging sessions with C, the kind that drive people mad: you spend hours hunting for a bug, only to find it comes down to one of C's misleading features. Here are some examples.

Bug 1: Unsafe returned values

#include <stdio.h>

char *f(void) {
   char result[80];                        /* result is allocated on the stack */
   sprintf(result, "I am a char array.");
   return result;                          /* bug: returns a pointer to a local that is about to die */
}

int test(void)
{
   char *p;
   p = f();
   printf("f() returns: %s\n", p);         /* p points at reclaimed stack space */
   return 0;
}

The char array result is a local variable. The "wonderful" thing about this bug is that the compiler can't detect the error, and sometimes the program even appears to be correct, as long as nothing has reused the particular piece of stack occupied by result. This is not a rare bug, but it can take hours to track down when it is buried in a long and complex program.

Bug 2: Bugs with pointers, parameters and ++

C programmers are usually very happy with the conciseness of C programs. But that conciseness comes either at the expense of readability or at the cost of detailed comments, and the lack of readability in turn causes errors. Consider the following procedure:

#include <stdio.h>
#include <string.h>

void substring(char **pstr, char ch)
{
    while ((*(*pstr)++ != ch) && (**pstr != '\0'));
}

/* move the buffer pointer just past '4', as in some kind of buffer-reading routine */
int test(void)
{
    char buf[12];
    char *p = buf;
    strcpy(buf, "0123456789");
    substring(&p, '4');
    printf("Result:%s", p);
    return 0;
}

Result:56789

This program is error prone for the following reasons:
1. All parameters in C are passed by value. The only way for a function to change a variable is to pass a pointer to it. When the variable is itself a pointer, we have to pass a pointer to a pointer, here char **pstr. This quickly becomes a headache.
2. (*(*pstr)++ != ch) is a very confusing expression. A programmer has to understand the tricky precedence rules very well to write it correctly. If we use ((**pstr)++ != ch), we get the result "5123456789"; and ((**pstr++) != ch) increments pstr itself, causing an unspecified result that depends on what pstr points to after the increment.

Bug 3: Unhygienic macros

A macro definition with arguments can produce unwanted results. In the following case, parentheses must be used around the parameter and around the whole expansion.

#define abs(x) x>0?x:-x
/* abs(a-b) expands to: a-b>0?a-b:-a-b ... which is wrong! */

#define correct_abs(x) ((x) > 0 ? (x) : -(x))

These are some of the most common problems; more can be found in references [3][4][5]. All of them are caused by bad design decisions in C, which show that C is a flawed language.
3. The Dark Side of C

If the discussion above only shows that C is flawed, I will now go further and reveal the dark side of C by illustrating four bad features that are intrinsic to the language.

3.1. Type Checking

C evolved from two typeless languages, B and BCPL, and was designed on the bold principle that "almost everything should be legal". So it is no surprise that C has weak type checking. I can give at least three examples:

1. C treats an array as identical to a pointer. It provides no run-time bounds checking for arrays, and when a pointer walks past the end of an array, nothing prevents it from corrupting other objects.

2. C doesn't tie structure pointers firmly to the structures they point to, and permits programmers to write pointer->member almost without regard to the type of the pointer.

3. Although casts make type conversions explicit, void* still causes problems. If a typed object is assigned to a void* variable, it loses its static type information; the void* can later be assigned to any other typed pointer without a cast, and with no type checking it may end up being treated as the wrong type (see the sketch below).
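
A minimal C sketch of this hazard (the variable names are invented for illustration): the round trip through void* compiles without a cast and without any required diagnostic.

#include <stdio.h>

int main(void)
{
    double d = 3.14;
    void *anything = &d;        /* d's static type is discarded here                  */
    int *wrong = anything;      /* accepted without a cast; no diagnostic required in C */
    printf("%d\n", *wrong);     /* undefined behaviour: the double's bytes are read as an int */
    return 0;
}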

3.2. Garbage Collection

For the advanced, high-level language it claims to be, C should support a good garbage collection mechanism, but it does a poor job here. Although C provides off-stack, dynamic storage allocation routines, it has no automatic garbage collection; the programmer has to release storage explicitly, which is dangerous and error prone. Here is one example.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *f(void) {
    char *p = malloc(30 * sizeof(char));
    sprintf(p, "%s", "this is a string");
    return p;
}

void test(void) {
    char str[30];
    char *p = f();
    strcpy(str, p);
    /* p is never freed: the 30 bytes allocated in f() are leaked */
}

If the programmer forgets to free p explicitly, which is not rare even among expert programmers, the storage allocated to p is never reclaimed after the strcpy. Another example is a linked list: if the head node is freed, the rest of the list becomes unreachable and its storage can never be reclaimed. Such leaks can use up system memory and eventually bring down the whole system. It is strange that a language originally designed for a tiny machine with very limited memory can survive without a good garbage collection mechanism.

3.3. No Modularity

With the ever-growing size of software systems, modularity, which is critical for reusability and maintainability, has become an essential requirement for modern languages. Modularity includes data encapsulation and information hiding: each module takes care of a collection of data, and that data is accessible only through the interfaces the module provides. C has no direct support for modularity. Its naming structure provides only two main levels, 'external' (visible everywhere) and 'internal' (within a single procedure), and the same problem exists with the global validity of #define. There is no way for one function to make its internal variables accessible to another function without making them globally accessible to all other procedures. So project designers have to achieve modularity by creating their own conventions based on good software engineering practice, setting up rigid rules for code and module management [5][6]; a sketch of this conventional workaround follows. This inherent defect makes C unsuitable for today's mostly large-scale software systems.
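
A minimal sketch of the kind of convention the paragraph alludes to (the file names and identifiers are invented): file-scope static is as close as C gets to module-private data, and the header acts as the module interface only by agreement.

/* counter.h : the "interface" clients are supposed to include */
int counter_next(void);

/* counter.c : everything not declared in the header is hidden only by convention */
#include "counter.h"

static int current;                  /* file scope: invisible to other translation units */

static int step(void) { return 1; }  /* internal helper, also file scope */

int counter_next(void)
{
    current += step();
    return current;
}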



3.4. Weak Portability

Although C compilers are available on many platforms, that does not necessarily mean C is highly portable. On the contrary, I argue that C's portability is poor. C was not originally designed with portability as a prime goal, because it is really a typed assembler aimed at the machine level. Many C types and operations are closely tied to the machine and make it very hard to port programs to other kinds of machines. There is no single common standard for C. What's more, even within a standard (for example ANSI C), preprocessors and compilers have unspecified and undefined behavior. This further weakens the portability of C programs. Here are some undefined examples:

1. The size of an int and the order of bytes within it; the representation of floating-point types.
2. The initial value of variables.
3. The order of evaluation; for example, the following statements give different results in different environments:
a[i++] = b[i++];
f(pointer->member, pointer = &buffer[0]);

Given such weak portability, C programmers have to take care of machine-level details that should really be the concern of the target compilation system. The weak portability also gives C programs poor reusability. So C is not an ideal choice for portable programs.
4. Conclusion

In summary, C is a language for some domains but not for all. There are many fields in which C does not excel, such as scientific computing and business applications. C succeeded as a system implementation language in the days when it was designed, but it is no longer a good choice today because of its unsafe type checking, poor modularity, and weak portability and reusability. What's more, with so many unpleasant design flaws and the unreadability of its supposedly concise code, it is not a simple and elegant language. That's why C is not my favorite language.
References

[1] Ritchie, Dennis. "The Development of the C Language." Murray Hill, New Jersey: AT&T Bell Laboratories, 1993.

[2] Moylan, P. J. "The Case Against C." University of Newcastle, July 1992.

[3] Joyner, Ian. "C++??: A Critique of C++." October 1996: http://www.progsoc.uts.edu.au/~g ... v3/sect4/index.html

[4] Love, Tim. "ANSI C for Programmers on Unix Systems." Cambridge University Engineering Department, 1996.

[5] Dyer, Dave. "Top Ten Ways to Be Screwed by C."

[6] Di Caro, Dianni. "Style and Organization Rules for C Programming: How to Enjoy Freedom Without Stress."

[7] Dolenc, A., Lemmke, A., Keppel, D., and Reilly, G. V. "Notes on Writing Portable Programs in C." 1995.

[8] Wharton, Linda. "Should C Replace FORTRAN as the Language of Scientific Programming?" Fall 1995.
Collaborators

None.
OP | Posted 2015-12-11 07:03:08
Why C# Is Not My Favorite Programming Language
1. Default Object Lifetime Is Non-Deterministic

In most object-oriented languages, there is a very specific time when an object constructor is called (namely, when an object is instantiated) and when its destructor is called (namely, when it falls out of scope).

In C#, they have taken the "garbage collection" paradigm one step too far. Not only does memory management rely on it, but even the object destructor is called "somewhen", at an unpredictable time! In a previous version of this document I wrote "somewhen after the object falls out of scope", but it turns out to be even worse, so I'll devote a separate section to the gruesome truth below.

In any case, this means that handy constructs such as an AutoLock can no longer work (example in C++):

class AutoLock
{
public:
    AutoLock(Mutex& m): m_mutex(m)  { m_mutex.Lock(); }
    ~AutoLock()                     { m_mutex.Unlock(); }
private:
    Mutex&   m_mutex;
};

In typical Microsoft fashion, they added a "special case" for this particular example by means of the lock keyword (which, incidentally, would be trivial to simulate in C++ should you like the keyword taste of it). However, other "automatic" resource management tied to object lifetime (for example, for handles, GDI objects, etc.) still won't work.

To resolve this, objects can implement the IDisposable interface, which has a Dispose() method. When object lifetime is important to you, you should put the relevant cleanup code in the Dispose() implementation and remember to call Dispose() on the object yourself. It is good practice to have the destructor call Dispose() on the object too, but as Professional C#, 2nd Edition puts it: "The destructor is only there as a backup mechanism in case some badly behaved client doesn't call Dispose()" (emphasis mine). You see, only badly behaved clients would forget to clean up after themselves, so I guess only badly behaved clients would need a garbage collector in the first place, right?
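
As a hedged sketch of the pattern just described (it reuses the hypothetical Mutex with Lock()/Unlock() from the C++ example above, so this is an illustration rather than real .NET API):

public class AutoLock : IDisposable
{
    private readonly Mutex m_mutex;   // hypothetical Mutex type from the example above

    public AutoLock(Mutex m) { m_mutex = m; m_mutex.Lock(); }

    public void Dispose() { m_mutex.Unlock(); }   // deterministic cleanup, but only if the client calls it
                                                  // (a real implementation would also call GC.SuppressFinalize(this))

    ~AutoLock() { Dispose(); }   // the "backup mechanism": runs at some unpredictable GC time
}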

The proposed "better solutions" for this are the using keyword, like so:

using (AutoLock theLock = new AutoLock(m_lock))
{
    // your protected code here
}

or using the finally clause (which is often recommended over the using statement), like so:

AutoLock theLock = new AutoLock(m_lock);
try
{
    // your protected code here
}
finally
{
    theLock.Dispose();
}

In the former case, it becomes a nuisance if I have more than one object whose lifetime I'd like to be deterministic (see the sketch below), and in the latter case (which, incidentally, is very similar to the code the compiler emits when you use the lock keyword) I still need to remember to type Dispose() by hand.
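
For illustration, a minimal sketch of what "more than one object" looks like with using (FileStream and BinaryWriter are just stand-ins from System.IO; AutoLock is the hypothetical class from above):

using (AutoLock theLock = new AutoLock(m_lock))
using (FileStream log = File.OpenWrite("log.bin"))
using (BinaryWriter writer = new BinaryWriter(log))
{
    // your protected code here, one using per disposable object
}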

And it gets worse! Even program termination doesn't trigger proper cleanup. You can verify this with the following program:

using System;
using System.IO;

class TestClass
{
    static void Main(string[] args)
    {
        StreamWriter sw = File.CreateText("C:\\foo.txt");
        sw.WriteLine("Hello, World?");
        // Note: We "forget" sw.Close().
        // Incidentally, StreamWriter.Dispose(bool) is protected, so we can't call it directly.
    }
}

The foo.txt file will be created, but it will be empty. Note that even C specifies that all unflushed data is written out, and files will be closed, at program termination. And even if I did remember to call Close() myself (I wouldn't want to be a badly behaved client, now would I?), this wouldn't be exception-safe. I am supposed to remember to use using, or litter my code with finally blocks.
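
For contrast, a minimal sketch (not from the original text) of the "well-behaved" version, wrapping the writer in using so that Dispose() runs and the file is flushed and closed even if an exception is thrown:

using (StreamWriter sw = File.CreateText("C:\\foo.txt"))
{
    sw.WriteLine("Hello, World?");
}   // Dispose() is called here, so foo.txt actually gets its contents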
2. Object Lifetime is Not Determined by Scope

I wrote above that I initially thought that objects are destroyed "somewhen after they go out of scope", but in reality it seems to be far, far worse. As it turns out, the JIT compiler can do "lookahead optimization", and may mark any object for collection after what it considers its "last use", ignoring scope!

I have had a colleague ask me about the following code:

{
    ReadAccessor access = new ReadAccessor(image);
    IntPtr p = access.GetPtr();
    // lengthy piece of code here doing stuff with the pixels from the image
}

A ReadAccessor is an object which provides access to the pixel data in an image, which is stored in a memory mapped file for performance reasons. When a ReadAccessor is constructed, it maps in the memory, after which you can call GetPtr() to get at the actual data. Once it goes out of scope, it unmaps the memory again. So, the "validity" of the data is guaranteed for the lifetime of the ReadAccessor.

Incidentally, there is also a WriteAccessor, which makes sure that there be only a single writer at any given time. Of course, people using this code in C# quickly found out that they had to dispose of these WriteAccessors manually, because otherwise they'd get the error that this WriteAccessor would still be sitting in the garbage bin while they were trying to acquire a new one. But that's the problem mentioned in the item above. This one is far, far worse.

The colleague told me that his code crashed somewhere in the pixel-processing code.

It took me a while to figure out what was happening: The JIT optimizer looked ahead a little bit, decided that access wasn't being used after the GetPtr() call, and marked it for collection. Later on in the code, in the same scope, mind you, the GC apparently decided it was a good time to destroy the ReadAccessor, which unmapped the memory still being used by the code.

I still find this hard to believe (even C# can't be this stupid), but the crash went away by modifying the code like so:

{
    ReadAccessor access = new ReadAccessor(image);
    IntPtr p = access.GetPtr();
    // lengthy piece of code here doing stuff with the pixels from the image
    System.GC.KeepAlive(access);
}

This particular item is so mind-boggling that I hope some dear reader can tell me it's just a bad dream and scope is, in fact, honored by the GC.
3. Every Function Must Be A Method

C# imposes an object-oriented paradigm and enforces it by prohibiting the definition of stand-alone functions: every function must be a member of a class.

If you take object-orientation to the extreme, you would not say
float b = sin(a);
but rather
float b = a.sin();

This is clearly impractical. (Ignore the question of how you would take the sine of a number instead of a variable.)

C# (and Java, for that matter) still go about half-way there by making the sine function a member of the Math class (or namespace, I can never tell them apart in C#):
double b = Math.Sin(a);

If I want to add my own mathematical functions, I either have to extend the Math class (which I can't, because it's sealed) or put up with the strange distinction that I need to write
double h = Math.Sqrt(a*a + b*b);
but
double h = MyMath.Hypot(a, b);

It gets even more scary if you look at the OracleNumber class, which also has a sin method. Luckily, it's static, and you can't call static member functions on instances.

This is related to the following item, but that is bad enough that I think it warrants its own item:
4. Containers Have Algorithms As Methods

The popular ArrayList container (an auto-resizing container, comparable to C++'s vector template) has a Sort() method. And a Reverse() method. But not a Randomize() method. Why should some algorithms be member functions, but not others? The answer is that no algorithms should be member functions. What if I wanted to use a different sorting algorithm than the one the original implementers of ArrayList had in mind?

Note that an ArrayList sorts itself, while Array.Sort(...) is a static member function of the Array class.

If I decide, late in a project, at the performance-tuning stage perhaps, that I could better use an ArrayList for some particular collection than the Array I used up to now, I will likely have to modify my code in multiple places.
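
As a hedged sketch of that asymmetry (the values are made up; ArrayList needs using System.Collections), the call site changes shape when the container type changes:

int[] arr = { 3, 1, 2 };
Array.Sort(arr);             // the algorithm is a static method on Array

ArrayList list = new ArrayList();
list.Add(3); list.Add(1); list.Add(2);
list.Sort();                 // the algorithm is an instance method on ArrayList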

Note that this is not a shortcoming of the language, but it is partly a consequence of item number 3, above. Also note that C# shares this problem with some other languages; even C++ has a few quirks here (the string class comes to mind).
5. Default Comparison Behavior Is Dangerous

Given a class Vector, which doesn't overload the comparison operator==, I can still write

Vector a, b;
if (a == b)
{
        ...
}

In C++, the compiler will have the courtesy of telling me there is no operator== defined for Vectors; in C#, this will simply compile, but it means "compare the references a and b", i.e. it is true when a and b are the same Vector, not when their value is equal. Also, because of the following item, you can't add such an operator yourself without altering the Vector class:
6. Operator Overloading Is Severely Broken

In C++, given a class Vector, you can define an operator for adding two Vectors without altering the Vector class itself:

class Vector {};

Vector operator+(const Vector& lhs, const Vector& rhs)
{
    return Vector(/* whatever it means to add two Vectors */);
}

In C#, this is not possible without altering the Vector class itself. Because of the limitation mentioned in item number 3, above, you cannot make this operator a "free-standing" one. Of course, adding this operator has nothing to do with the interface of the Vector class, so you'd probably try something like this:

public class VectorOps
{
    public static Vector operator+(Vector lhs, Vector rhs)
    {
        return new Vector(/* whatever it means to add two Vectors */);
    }
}

but this doesn't work. You'll get the error "One of the parameters of a binary operator must be the containing type". In other words, if someone hands you a Vector class without overloaded operators, you'll have to modify the class itself, also introducing a dependency of your class on the module which happens to implement these operators.

But wait, there's more.

OP | Posted 2015-12-11 07:04:46
Note that when you overload operator==, you also have to overload operator!=; we'll forgive the compiler for not being able to auto-generate it. But it does a similar "helpful" trick with the arithmetic and bitwise assignment operators, where it most definitely shouldn't: you cannot overload the assignment operators +=, -=, etc. yourself. Instead, they are evaluated in terms of the other operators that can be overloaded. This is exactly the wrong way around; most C++ programmers implement operator+ in terms of operator+=.

Suppose you have a class Image, representing an image. Also, suppose you have some kind of image processing library, offering functionality to add two images together. For performance reasons, this library will likely have separate functions for adding one image to another in place, overwriting the old contents, and for returning a new image containing the result of the addition:

public class ImageProcessing
{
    public static Image Add(Image lhs, Image rhs);
    public static void AddInPlace(Image lhs, Image rhs);
}

You may decide that it's a nice service to clients of your Image class to offer operators for this, so they can write code like

Image a, b, c;
c = a + b;   // really c = ImageProcessing.Add(a, b)
a += b;      // really ImageProcessing.AddInPlace(a, b)

(Of course, you'll have to send them a new Image class, because you have to modify it for this; in addition, your Image class can no longer be used without the ImageProcessing class.) You would think you'd overload operator+ for ImageProcessing.Add() and operator+= for ImageProcessing.AddInPlace(), but you can't. Instead, when your client types a += b, a whole temporary Image has to be constructed, holding the result of the addition, after which the left operand is replaced with the result. Goodbye, performance!

Update: In version 3.0 of the language, a new feature called "extension methods" appeared. It is now possible to add methods to classes without modifying the original class file, so you could make img.AddInPlace(otherImage) work (a sketch follows). However, extension methods do not work together with operator overloading.
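
A hedged sketch of what such an extension method could look like (ImageExtensions is an invented name; Image and ImageProcessing are the hypothetical types from the example above):

public static class ImageExtensions
{
    // lets clients write img.AddInPlace(otherImage) without touching the Image class
    public static void AddInPlace(this Image img, Image other)
    {
        ImageProcessing.AddInPlace(img, other);
    }
}
// but there is still no way to make "img += otherImage" call this method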
7. Events Without Subscribers Raise Exceptions

If a tree falls down in the woods and there is nobody there to hear it, does it still make a sound? C# has a very interesting view on this popular Philosophy 101 question.

In C#, there is a concept called delegates. A multicast delegate is a set of methods to be called successively when the delegate is called. When the set of methods is empty, trying to call the delegate raises an exception.

However, events are implemented in terms of multicast delegates, too. You declare a delegate and an event like so:

public delegate void TreeListener();

class Tree
{
    public event TreeListener Fell;
       
    public void Fall()
    {
        // Fall down, and make some noise.  To be discussed.
    }
}

The idea is that clients interested in hearing trees fall can subscribe themselves to the event using a very fancy syntax:

class Client
{
    public Client(Tree tree)
    {
        tree.Fell += new TreeListener(TreeFell);
    }

    private void TreeFell() // This will be called when the tree falls.
    {
        Console.WriteLine("I heard it!");
    }
}

In the Tree.Fall() implementation, you'd simply call the event:

class Tree
{
    public event TreeListener Fell;
       
    public void Fall()
    {
        // Fall down, and make some noise:
        Fell();
    }
}

So, now comes the important question. What if nobody has subscribed to the Tree.Fell event? In that case, the multicast delegate will be empty, and calling it will raise an exception. You heard it right (or did you?): Trees simply aren't supposed to fall over when nobody's around.

The suggested solution is to check whether anybody's listening first (if the event is empty, it will be null):

class Tree
{
    public event TreeListener Fell;
       
    public void Fall()
    {
        if (Fell != null)
            Fell();
    }
}

This, of course, is not thread-safe. To make it thread-safe, you have to define your own event add/remove accessors and take a lock in them, taking the same lock around the if (Fell != null) above (sketched below).
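
A hedged sketch of that suggestion (the field and lock object names are invented): custom add/remove accessors take a lock, and the same lock is taken around reading the delegate before the null check.

class Tree
{
    private readonly object _sync = new object();
    private TreeListener _fell;               // backing field for the event

    public event TreeListener Fell
    {
        add    { lock (_sync) { _fell += value; } }
        remove { lock (_sync) { _fell -= value; } }
    }

    public void Fall()
    {
        TreeListener handler;
        lock (_sync) { handler = _fell; }     // same lock around the read
        if (handler != null)
            handler();                        // raise using the snapshot taken under the lock
    }
}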

Conclusion

C# is very nice for quickly building GUI applications. Especially programmers used to MFC can't seem to praise C# loudly enough. But then again, if you are used to rusty pins being driven under your fingernails daily, the prospect of being kicked in the groin at unpredictable times, but only once a week, must sound really attractive.

By the way: my introductory computer programming book is available here. As you can guess, it doesn't use C#. But I promise I didn't rant against it in the book.
© 2009 Sander Stoks – Last edit: 10 May 2011
OP | Posted 2015-12-11 07:05:34
Why Java is Not My Favorite Programming Language

A newly designed language has recently been the subject of much excitement in the programming world. This language, Java, claims to be safer, more portable, and on the whole better designed than the languages currently used for large-scale applications, and because of these claims many programmers have been quick to pick up the language in anticipation of a new demand for Java programmers. The demand, although growing, has not yet reached the level many programmers first expected, and the reasons are neither coincidental nor trivial. In fact, Java may never be in demand the way C and C++ have been for years, because Java is not suitable for many critical programming tasks.
Speed

By its nature, Java is an interpreted language. User code is compiled into "Java byte code", which does not become native executable code until the program is actually run. The most important benefit of interpreted languages like Java is that the code can execute on any computer architecture equipped with a Java run-time environment, which gives the code its portability and spawned the popular phrase "write once, run anywhere."

However, the portability does not come without a price. The need to compile code at run time causes a tremendous slow-down in execution speed. Java doesn’t improve on the execution speed of most older languages; in fact, Java doesn’t even come close.

Some proponents of Java argue that execution time is less of an issue these days, as processor speeds continue to double every 18 months or so, in line with Moore's law. This is a ludicrous claim: speed always has been and always will be an issue in programming languages. As long as processor speeds keep doubling, there will be applications that take full advantage of the new processing power. Additionally, the current trend towards embedded applications gives even more reason for fast code, as most embedded processors will not match the speed and power of today's fastest CPUs.

I cannot deny that there are places where interpreted code is more appropriate for the situation. When speed is not an issue, and portability is key, there is nothing wrong with using Java, or one of many other preexisting interpreted languages for the task. However, the severe decrease in performance should be enough for many serious applications programmers to steer clear of interpreted languages.
Object-Oriented Programming

No one can deny the value of Object-Oriented programming (OOP) in its contribution to programming methods. OOP encourages all the fundamental coding practices that make large-scale applications programming possible (encapsulation, data hiding, etc.). However, Java’s take on OOP differs in significant ways from its predecessors, and some of these differences introduce fundamental problems in Java.

Java supports only a limited version of class inheritance, and this limitation does more than just create confusion; it forces Java to lose generality. Firstly, if a class is to inherit from a base class, the base class must be declared "abstract". However, in large-scale applications, it is often cumbersome to keep tabs on which classes have been declared as such, and this can reduce readability and code reuse significantly.

Furthermore, Java does not support any type of multiple inheritance. This is a fundamental design decision that limits OOP and Object-Oriented design (OOD) for programmers, and reduces Java’s generality.

Operator overloading is not possible for user-defined classes in Java. Again, readability is hindered when, for instance, the summation of an enumeration or record type must be calculated from a summation of each element in the list, rather than with an overloaded "+" operator.

Finally, Java class implementations cannot be separated from class specifications. Packaging both together goes against a sound principle of OOD: information hiding. A library user may not want to have to know the implementation in order to understand how to use the library's classes, but without a specification file, the use of these classes may not be clear. Even worse, the implementation might be proprietary information that needs to be hidden from its users. A company might regard its implementation as code that should remain in-house, but there is no easy way in Java to supply the class specifications without including the implementation with them.
Garbage Collection

One of the most popular claims of Java proponents is that Java eliminates the use of pointers. This statement is partly true and partly mistaken. It is true that the programmer does not have the ability to explicitly use pointer indirection to manipulate variables by their address in memory. However, Java uses memory addressing for objects and variables all the time; this referencing is simply abstracted away from the programmer, rather than remaining exposed. In turn, the memory allocation is abstracted away from the programmer as well, and is instead handled by a built-in garbage collector.

Garbage collectors are not entirely new; in fact, there are garbage collector libraries freely available for C++. However, Java included garbage collection as part of its core language, and by doing so, has introduced a fundamental flaw in its language design.

Firstly, garbage collection greatly increases the overhead of maintaining allocated memory. Each cell in the heap must include an extra indicator bit for the garbage collection algorithm, requiring more space in memory [1]. Additionally, the garbage collector must trace every pointer in the program back to the heap, to mark the heap as used (non-garbage) storage. Thus, garbage collection requires more space and time to implement.

However, the Java garbage collector does more than just increase time and space requirements—it adds the element of unpredictable performance in program code. For reasons beyond the programmer’s control, the garbage collector may choose to begin performing its costly collection operations at any time during the execution of the program. This is not a problem for code without performance requirements, but it makes consistent performance impossible to predict. Consider the effect this has on real-time computing applications: how could you possibly guarantee a deadline will be met when the timeliness of garbage collection is nondeterministic? Thus, Java is completely unsuitable for any type of real-time computing.

Some people are willing to grant Java code this unpredictable behavior, and receive increased safety in memory references in return. Java won’t allow programmers to misuse pointers because it doesn’t allow pointers in the first place. This is good for beginning programmers who make memory referencing errors, but not for application programmers who demand serious performance in their running apps. With all its safety features, perhaps Java is best suited to serve as an instructional language for beginning programmers, replacing Pascal in that respect [2], and it should leave the programming of large-scale, reliable code to others (*).



[1] Sebesta, Robert. Concepts of Programming Languages. Reading: Addison-Wesley, 1999, pp. 250-251.

[2] Kernighan, Brian W. "Why Pascal is Not My Favorite Programming Language", AT&T Bell Laboratories, 1981.

* The opinions expressed in this paper are somewhat exaggerated to make a point, but I have to admit (at least) that the absence of pointers in Java can be an added security feature useful to more than just beginning programmers.
