TheServerSide.net Tech talks with Danny Thorpe


Danny Thorpe - Borland Chief Scientist

Danny Thorpe is a Borland Chief Scientist responsible for strategic research in software development tools for the Microsoft Windows, Microsoft .NET, and Linux platforms. He was a member of the team that created the Borland Delphi rapid application development environment in 1995, a founding member of the Borland Kylix project on the Linux platform, and is now the Delphi compiler architect and lead engineer of the Delphi for .NET development team at Borland. He also had a hand in the creation of "Space Cadet Pinball" distributed by Microsoft, and "Full Tilt! Pinball" distributed by Maxis.

So Danny, why don't you tell us a little bit about yourself, what you do at Borland, and a little bit about some of the work you have been doing over the past couple of years?

I've been with Borland for almost 14 years now and was just recently promoted to Chief Scientist in the .NET tools group. For the past couple of years I have mostly been working on the Delphi compiler, first porting it to Linux and then porting it to the .NET platform, which was a massive undertaking. Going further back, I actually started at Borland in software testing and went over to the dark side, engineering, right around the time Delphi moved to 32-bit.

Now, you said you're part of the Delphi .NET team?

We have the developer tools group within Borland that handles JBuilder, Delphi, C++, and C#. With a fairly recent merge, Java, which used to be separate from the Windows tools, has been put together with them so we can share more ideas and more technology that way. So I'm a chief scientist; we have several chief scientists in the company, usually over technology silos, and I'm sort of trying to figure out what is going on with .NET.

So let me ask the potentially rude question: why Delphi .NET? With C++ on the one side and Visual Basic on the other, Delphi made sense, but now we have C#, which is sort of in that same space. Why Delphi .NET?

Well, from two perspectives: one is the Delphi tradition, and the other is the Delphi language model per se. C# is really in the C language family; C#, Java, and C++ all have very similar patterns, as far as constructors with no names, the squigglies syntax and so forth, and the Delphi Pascal modular-type languages are quite a bit different in the way that you approach that. From a linguistic perspective, that actually does frame the way you think about the problem. So certain people relate better to the Delphi programming model in terms of language; other people relate better to the C style of programming. So there is a little bit there.

In general, why would we want to do Delphi for .NET? Because when we were first seeing the .NET platform, even before Microsoft was brave enough to say this, the Borlanders were saying we need to get on this because this is the replacement for Windows. And the Microsoft guys were saying, whoa, we don't want to scare anybody about this. But we had to work two to three years in advance, so we had to jump on this early, and yes, it is a scary playing field for a tools vendor, because .NET provides such great interoperability between languages that you lose a lot of the firewalls between vendors. So if you define yourself by how you prevent people from moving around, then that is scary, but if you define yourself by how you innovate and how you push the envelope and add value and still interoperate with everybody else, then it is exciting. There were massive discussions and debates and raging arguments within Borland about why we would put ourselves in the position of injecting ourselves into a Microsoft platform with Microsoft tools and Microsoft domination. Why would we do that? The simple answer is, we already do: we're on the Windows platform, and before that we were on the DOS platform, so it is not really that much different. The rules have changed a little bit, but really it is all about Borland having to innovate, stay ahead, and push the envelope.

Now, we see Delphi .NET, we see C#Builder, and obviously you guys have JBuilder. One question is, are there any plans from Borland to do some kind of Java-like crossover to .NET along the lines of J#, or is that something you are just going to leave to them?

It is a possibility. Actually, we were a little bit taken aback by some of the Microsoft Java announcements recently. We are still sort of processing that to find out what it really means.

TSS: The whole Microsoft Sun agreement? You and everybody else!

But given that J# is a snapshot of Java from way back when and is not very current, sure, Borland would be an excellent candidate to sort of take on that role and bring that .NET implementation of Java up to speed. There is nothing further I can really say on that, but it would make very much sense to do it. We are also looking at multi-language IDEs. We have the JBuilder IDE technology platform for Java, and that's also what we used to implement the C++ Mobile Edition IDE, so there's a trend there. Then on the other side we have another IDE technology base, the Galileo code base, which is Delphi and C++Builder, and we are advancing that forward with the .NET stuff. The Java IDE is going deeper into Java, and the Delphi IDE is going deeper into Win32 and .NET. That's also the basis for C#Builder.

So, all IDEs become one?

Almost. We have actually debated having an uber-IDE that has Java and Win32 and .NET, but do you really want to have two or three different VMs running around in the same process?

TSS: Well, there is a certain argument from the standpoint of interoperability.

But if we can get the same amount of interoperability, for example in debugging, you can have the .NET debugger that we implement hand off to the Java debugger and then hand back to the .NET debugger as you step through code, seamlessly. It doesn't necessarily have to be in the same process. It would be nice to have consistent features across the products, which we are working toward; we're borrowing ideas from the JBuilder team for our next implementation of the .NET tools. Specifically, we are sort of following their lead on ALM integration: source control and Caliber requirements management are to be integrated into the IDEs.

TSS: Is this the Team System stuff that Microsoft just announced?

Yeah, similar to that. So within the IDE the developer is basically one of the control points in this large web, and you can also talk about having enterprise-wide or department-wide measurement. So you have Caliber, you have requirements management, spec tracking: how far along you are on the spec, how far off the spec, late arrivals, early completions, things like this, and that plugs into Gantt charts and things in Project. And then on source control, we are actually upping the ante to integrate it further into the IDE, and again the Delphi customers can look at JBuilder to see the direction we are going in, because JBuilder actually got that implemented last release. So we are doing a little bit of catch-up there.

Interesting. Then, more on the languages front: the story we are getting out of Redmond is that managed C++ is going to become an ECMA standard for mapping the C++ language to the .NET Framework, and of course Borland has historically been one of the big, well, big three, now big two, C++ compiler vendors in the world. Is that something you guys are participating in?

Actually, it is. We were wondering when to announce this, so let's do it now. We filed for membership in ECMA, and we have had Borlanders going to ECMA committee meetings since last October. They just recently had a meeting in New Jersey about managed C++, and our C++ compiler architect drove down from Boston to sit in on it. He said it was very interesting and very educational; you would probably say it was entertaining too. But he was also very excited about it, because it is a participation level that is very engaging and lets us contribute. We not only see the spec after the fact but also understand the arguments behind why it wound up that way: the information behind the scenes, the lore that led up to a particular standard. When you see a standard by itself, it is cold and static, and you ask why they did it this way; the answer is, well, I was there when we argued about it, and this was the agreement over here and that was the argument over there, and that is very valuable. So we are participating in ECMA, and not only in managed C++ but also in the CLI and C# language standards.

Is that potentially a CLI implementation from Borland?

No, but we definitely want to be on board for things like the discussions about the generic classes and the standard type libraries that they are implementing for .NET, because there are debates within that group about how these should be represented for multiple languages, and we have not only the C# and C++ languages but also our own Delphi language. It's a different view, and along with ??? from the COBOL space, these are the folks that are participating. It makes a nice resonance, and things actually click; when all these people are agreeing, okay, this must mean something.

So how much of the experience of doing Delphi and C++, I think that was Kylix, on Linux do you think will factor into your discussions about the CLI, in terms of keeping some of the principles pure and not getting bogged down in implementation details?

I guess the one thing I'd say about the Kylix experience is that we were building a non-C language on an operating system that was entirely built in C and C++, and our goal was to use the libc C and C++ standard libraries. We got a lot of exposure to language bias: the Delphi language works this way, and we would like to have things represented a certain way, and we couldn't get there in libc, so we would have to turn things around and figure out how to conform and wedge it in there. We're able to spot that kind of thing very quickly in .NET, in the few places where it shows up. There are a couple of weak spots in the CLR standards and so forth, and we can harp on those, but overall the difference between working in the Linux space and working in the .NET framework and architecture space is night and day. Linux is great, Linux is widely used and a standard operating system, but Microsoft has this acceleration factor where they can push things through, and the .NET stuff evolved very quickly and has been thought through to a high level. I mean, there are people like Anders Hejlsberg who sit there and look at the whole story arc and say, this thing is poking up strangely, why is this strange, why is it weird, and try to get it all put together. Linux really hasn't had that much enforcement of conformity and so forth. So we are still working on Linux tools, and on Win32 tools, and on .NET tools. So we get to complain about everything.

A true cross-platform experience. One question that sort of has to come out of your experience developing for the .NET platform: you've mentioned Microsoft and its acceleration, and there is an Anders Hejlsberg, and there is a Jim Miller on the CLI side, and so forth, guys who very definitely have directions and opinions. How much influence does Borland really have with these guys, or are you just taking whatever Microsoft gives you? A standard is supposed to be, at least ideally, a community; how much of this is just Microsoft saying it shall be this way, and if you don't like it, make your own standard?

I wouldn't say so much that Borland can swagger in like a cowboy and say, well, that sucks and you've got to change it. Of course, just in terms of personal relationships, that is going to put people off right there. But we do have good opportunities, so that when Microsoft is rolling out an idea, they go to their partners and bounce ideas off people, and being able to participate in that cycle frequently means helping Microsoft decide between the lesser of two evils. Or they are not really sure, A or B, and we can say, well, we would much prefer to have B. So there is a guidance kind of thing going on there. I wouldn't say Borland is going in there and saying that is absolutely stupid, but we can say this is missing something, this isn't solid, this isn't well cooked, you need to go back and think about it some more, and in two cases in the last year that I know of, that's what happened. Microsoft said, okay, thanks for your feedback, we'll have to recook this a little bit. It's a matter of: it might not be that difficult for Microsoft to roll out a solution, but if they have to go back and correct it later, that's pain for everybody. In the last three to five years I would say Microsoft appears to have become a lot more engaged in that kind of feedback.

If you see something that you think is fundamentally wrong with the CLR, you've got no qualms about bringing it up with them and saying, guys, I think this is half-baked?

Yeah, especially if it is in an early review cycle. If it has already shipped, and we had access to the early review and didn't catch it then, well, shame on us. But if we are in the early review cycle and Microsoft has invited us to provide feedback, or we ask for access to certain areas for feedback, then we try to do our due diligence and actually participate, because there is actually a pretty good chance that our feedback will be incorporated somehow. Of course, we do butt heads occasionally and don't make any progress, but...

TSS: It wouldn't be a committee if you didn't.

So, Whidbey, .NET 2.0: a lot of interesting features are coming as part of that. Generics, of course, are being baked into the runtime, and there is additional support coming in the runtime for some of the C# features. First question, which I assume the answer to but will phrase in the form of a question: there will be a Delphi .NET 2.0, or whatever version you're up to, to correspond with the Whidbey release?

Absolutely.

TSS: The second question is what sort of features is Delphi getting as part of the .NET 2.0 release?

Right off the top, basically, for a language to be a first-class citizen in the Whidbey .NET 2.0 world, you really have to have generics. So we are fully committed to implementing generic syntax in the Delphi language. We have actually had various permutations of the syntax sitting on our whiteboards for a couple of years now, and if we weren't chasing all these different platforms, we probably would have implemented them already. So the plan now is that we will implement generic syntax in Delphi to coincide with the Whidbey release, because trying to do it on the .NET 1.1 framework would be very, very difficult. Might as well be lazy and let the CLR do it. Now, the interesting aspect of that is that because we have been thinking about it for so long, there is a subset of that syntax that we believe could be implemented on native Win32, without having to go into garbage collection or runtime code generation or anything like that. So it is an interesting play for our Win32-to-.NET bridge people, the people sitting on both sides of the fence, who could get a little bit of the benefit out of some of the research and development effort that is going into Win32-to-.NET development.
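
To give a flavor of what he is describing, here is a minimal sketch of what parameterized types in Delphi might look like. The container name and the details of the syntax are illustrative assumptions; the syntax Borland actually ships may differ.

    program GenericSketch;

    {$APPTYPE CONSOLE}

    type
      // Hypothetical generic container: the type parameter T stands in for the
      // element type, which the Whidbey CLR can instantiate at runtime.
      TStack<T> = class
      private
        FItems: array of T;
        FCount: Integer;
      public
        procedure Push(const Item: T);
        function Pop: T;
      end;

    procedure TStack<T>.Push(const Item: T);
    begin
      if FCount = Length(FItems) then
        SetLength(FItems, FCount * 2 + 4);   // grow the backing array
      FItems[FCount] := Item;
      Inc(FCount);
    end;

    function TStack<T>.Pop: T;
    begin
      Dec(FCount);
      Result := FItems[FCount];
    end;

    var
      Names: TStack<string>;
    begin
      Names := TStack<string>.Create;
      try
        Names.Push('Delphi');
        Names.Push('.NET');
        WriteLn(Names.Pop);   // .NET
        WriteLn(Names.Pop);   // Delphi
      finally
        Names.Free;
      end;
    end.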

Now, when I met you a couple of years ago, you were giving a talk on the mapping of Delphi, the Pascal-based language, to the .NET framework. I remember you were saying that there were some pain points in doing that?

If you thought I was frustrated then, you should hear me now.

Well, give us a taste of it. Most of us do not build our own languages, but we hear this interoperability story; give us a taste of what it's like to build a language on top of it.

The interoperability thing is the easiest part. The pain points for a language implementer are frequently the aspects of the language that are not already implemented by the CLR. Number one on the Delphi list, and actually also for C++, is the lack of deterministic finalization. These languages have a contract with the programmer saying this is how you bring things up and this is how you bring things down, and it is very difficult to get that second part done in .NET.

TSS: You have been talking with Chris Sells?

Yes, and Chris Brumme and a couple of other folks, and basically, after much hair pulling and so forth, we did find ways to implement the language contract for initialization semantics, where we have module-level initialization as well as type-level and instance-level initialization. But on deterministic finalization, we kind of had to bail. We advise our Delphi customers that their unit finalization blocks will be run when they are in an EXE, but they might not be run in a DLL loaded by a different language, because there is nothing we can plug into. It is frustrating that the process exit event is protected; there is a security policy restriction that if you're running off a network share you can't hook that event.
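
The unit-level contract he refers to looks like this in Delphi source; a minimal sketch (the unit name and the Configured flag are made up for illustration):

    unit LifecycleDemo;

    interface

    procedure UseResource;

    implementation

    uses
      SysUtils;

    var
      Configured: Boolean = False;

    procedure UseResource;
    begin
      // Relies on the initialization section having run when the module loaded.
      if not Configured then
        raise Exception.Create('LifecycleDemo was never initialized');
    end;

    initialization
      // Module-level bring-up: runs when the module containing this unit loads.
      Configured := True;

    finalization
      // Tear-down: reliable in a Delphi EXE but, as described above, it may never
      // run when this unit sits in a DLL loaded by a host written in another
      // language, because there is no process-exit hook to plug into.
      Configured := False;

    end.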

In fact, that process exit event caught us by surprise: we actually had a couple of betas that used it and they blew up, so we backed away from that and are looking for other techniques. So the number one frustration for me in the CLR in general is deterministic finalization. We have destructors in Delphi syntax, and because we have existing source code that we want to migrate to .NET as easily as possible, and we know there are tons of destructors out there, mapping those directly onto finalizers in .NET would be a bad idea; finalizers execute in a weird context, everything about them is special. So instead we mapped destructors onto the Dispose pattern. If we see this pattern in source code, the compiler will implement it as IDisposable, with the multiple-call checking on the instance. And of course we do all that work for you. So we implement that pattern, and that seems to be working out fairly well.
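
Concretely, the familiar Win32 Delphi idiom below gives deterministic cleanup, and per the mapping he describes, a destructor written this way becomes the Dispose pattern on Delphi for .NET rather than a CLR finalizer. The class name here is a made-up example.

    uses
      Classes, SysUtils;

    type
      TResourceHolder = class
      private
        FData: TStringList;
      public
        constructor Create;
        // On Delphi for .NET this destructor pattern is mapped onto
        // IDisposable.Dispose instead of onto a CLR finalizer.
        destructor Destroy; override;
      end;

    constructor TResourceHolder.Create;
    begin
      inherited Create;
      FData := TStringList.Create;
    end;

    destructor TResourceHolder.Destroy;
    begin
      FData.Free;          // deterministic release of the owned object
      inherited Destroy;
    end;

    procedure UseHolder;
    var
      Holder: TResourceHolder;
    begin
      Holder := TResourceHolder.Create;
      try
        // ... work with Holder ...
      finally
        Holder.Free;       // the destructor (and hence Dispose) runs right here,
                           // not at some later garbage collection pass
      end;
    end;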

TSS: Yeah, I know. Stan Lippman has said some of the same things about C++ destructors mapping to finalizers and so forth.

The second pain point. The CLS, the Common Language Specification, is very admirable in that it has not only hard rules, things that must be implemented a certain way in the metadata, but also soft rules, patterns that you should implement so that you can be interoperable. Not every language may understand the notion of a property, so you should follow the name pattern of get_foo and set_foo so that the property can still be reached functionally, Java-like, procedurally, and that's great.
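
For example, a Delphi property such as the one below surfaces in the .NET metadata as a property plus the CLS-style accessor pair get_Celsius / set_Celsius, which a consumer with no property syntax can call directly. The TTemperature class is a made-up illustration.

    type
      TTemperature = class
      private
        FCelsius: Double;
        function GetCelsius: Double;
        procedure SetCelsius(const Value: Double);
      public
        // Other .NET languages see a Celsius property backed by the
        // conventional get_Celsius / set_Celsius accessor methods.
        property Celsius: Double read GetCelsius write SetCelsius;
      end;

    function TTemperature.GetCelsius: Double;
    begin
      Result := FCelsius;
    end;

    procedure TTemperature.SetCelsius(const Value: Double);
    begin
      FCelsius := Value;
    end;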

The one chink in the armor that we've run into is that the CLS does not require case sensitivity, but you have to be case preserving, and all types have to have a name except for one: arrays. Delphi is a very strongly named, strongly typed system where everything you can refer to in program code generally has to have a name. So types have names. Arrays in .NET cannot have names. So we are forced into a situation where, in Delphi, if you declare an array of integers over here and give it a name, that is not the same type as an array of integers over there with a different name. You do name comparison, name compatibility.

TSS: So like how enums separate themselves from straight ints.

Delphi programmers will know this pain point within the language: if you want to pass something by reference, the variable type you pass in has to match the declared reference type exactly. It can't just be a compatible type or a similar shape; it has to be the exact same name. We cannot do that in .NET. You just cannot represent an array with a name. So we have to bend our language rules and accept that in some situations we have to do structural compatibility analysis rather than name identity.

It is almost like arrays in .NET 1.x are a very early version of generics, because an array describes this thing that has an element type, and that element type and the shape of the array determine whether it is compatible with others. That is exactly what generics are. So we had to bend the rules a little bit for the Delphi syntax to make that work, and we'll be doing structural analysis for generic type compatibility in future versions of Delphi as well.
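
A small sketch of the name-versus-structure issue he describes, using made-up type names; the commented-out call is the one Win32 Delphi rejects:

    program ArrayNames;

    type
      TScoresA = array of Integer;
      TScoresB = array of Integer;   // same element type and shape, different name

    procedure Normalize(var Scores: TScoresA);
    begin
      // ... adjust the scores in place ...
    end;

    var
      A: TScoresA;
      B: TScoresB;
    begin
      SetLength(A, 3);
      Normalize(A);      // fine: the type names match exactly

      SetLength(B, 3);
      // Normalize(B);
      // The call above is rejected by Win32 Delphi: a var parameter demands name
      // identity, not just an identical element type. On .NET an array type has
      // no name at all, so the compiler falls back to structural compatibility.
    end.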

Let me ask this question; I have asked it of others before, including Anders. How important is this mixed-language compatibility thing? We talk about it in the abstract sense, the academic sense: it is a good thing, we love it, the VB team loves the fact that they can inherit from C# code, and blah blah blah. But in your experience, as somebody who I am sure has seen a lot of customers who are currently using Delphi while looking at a lot of samples written in C# and so forth, do you think this is something people need, or are we just fooling ourselves and creating technology for technology's sake?

I think we are fooling ourselves when we make arguments that language interop is not important. I think it is critical, because if you want to have this argument in .NET that is about freedom of choice, that you can use the tools that feel right to you and still be able to use all the other stuff as well, then the interop is critical. Our interop experience at Borland goes back further than .NET, because we have had the C++ and Delphi products on Win32 with a common object model. We made sure the vtables are in the same format and that they use the same parameter-passing conventions and things like this. Even the runtime type information had to be synchronized; the exceptions had to be synchronized. These kinds of things were pain points on Linux, because the Linux platform doesn't define an exception model; it is done by convention in the C language. So there is all this ABI-type stuff, and they are still debating ABI issues, the binary interface. For a language today to sit on its own and be isolated from everything else is really fairly naive, because there is so much code you need to make use of. You're standing on the shoulders of giants, and you just can't stand in your own little bubble anymore. There is always something out there that somebody else did with a different mindset and a different language, really cool functionality that you want to be able to get to, and without interoperability you are cut off.

So I guess I want to bring it back to this: I am a developer working on an enterprise application, and at the end of the day I just want to ship code and go home. Is it reasonable or unreasonable for me to consider using C# for part of my project, Delphi for part of my project, and JScript, the typeless language, for part of my project? Is that feasible, or is that just...

It is feasible, but it needs to be balanced with a modicum of reason. I would not advise somebody to use 17 different languages in one project, but if you have client-side scripting requirements, JScript makes sense there. It is not yet at the point where it would be reasonable to say you ought to have client-side scripting in C# or Delphi syntax in your browser; that infrastructure just isn't there yet. So JScript, or ECMAScript, makes sense there. C++ has certain strengths, particularly for device drivers and low-level bit-twiddler stuff. It also needs to match your developers' skill set: toss C++ people in and tell them they have to implement something in VB and you could have a riot on your hands, not for any technical reason, just for philosophical reasons.

TSS: Right, the two languages are constructed so differently.

So it is a matter of picking the right tool for the job, including tools that suit not only the problem but also the developers.

What if Bill Gates were to show up at your office one day and say, Danny, I want you to do what needs to be done to the CLR to make all this language interop stuff work seamlessly. What do you tell him needs to be done? Where are the pain points that need to be addressed and fixed to make all these different languages happy citizens?

That's a rough one. If we could roll back time and fix this from the start, then I would make some more arguments in the areas of deterministic finalization, and at least have some sort of mechanism so that an array type could be named and have a name identity associated with it. But since we are sitting here with .NET 1.1 already implemented and everybody has built on it, even in the .NET 2.0 timeframe making that kind of change would probably be breaking the philosophy even more than just breaking binaries. So I am not sure those things specifically can be changed now.

TSS: We are just stuck.

Yeah. So we'll figure out how to deal with it and move on. In terms of other stuff, what can we do to improve language interop? Being able to identify the language a piece of code is written in: it is not very clear in the PDB debug format how you're supposed to register yourself as a language. There's a GUID in there; you can just make something up and put it in there, but where does that go and who consumes it? There may be standards for that I am not aware of, but I looked into the API and it wants a GUID, and I found two GUIDs that identify VB and C#, so we go off and Ctrl+Shift+G, we get our own GUID, we throw it in there, and we have our own language. But on the other hand, the debugger guys who implement our debugger are actually just now discovering that and saying, use that to determine evaluators. So how do you get evaluators plugged in? Every debugger has a different mechanism for that: Visual Studio has one particular model, Delphi has one model, JBuilder has another model. So the next level of language integration is how you get these things to cross-pollinate and propagate to each other. How do you plug a scripting environment into your runtime? That is not well defined across the board.

Speaking of scripting languages, there has been a rise in interest, within the Java community certainly, but across programming languages in general, in loosely typed, untyped, loosely bound, we use different terms, but basically languages that don't have a strong sense of a type system. What are your thoughts on this? Is it just a reaction to strongly typed systems? Is there some goodness here, or is this the clear future and we should all just stop fussing around with type systems and get back to work?

No. Again, I think it has to do with the role the developer is in when they are writing that code. I would not recommend loose typing for a major architecture, but I would recommend it for an ad hoc programmer who needs to hop in, do something, and hop out. These are the folks who tend to be most frustrated by formal requirements, and in the Delphi language even your variables have to be declared in advance of their use, as opposed to C#, where the things you are referring to could be just about anywhere. So Delphi is top-down, which is one of our strengths. But the looseness is a strength for people who need to get in and get out and are not, I don't want to use the word professional, but are not career programmers, let's say. This is where VB and Delphi actually got started, in the departmental areas where people needed to get some programming done but were not programmers by job title. I also think it is possible, and Chuck J and I have discussed this quite a bit over the last year or so, he has some great ideas on this that I am hoping I can follow through on, to have a language with the flexibility and looseness of what looks like a loosely typed language, where the compiler still has strong type information that it infers as you go. The fact that you use this variable in a certain way here means we carry that on down the line. So you build up information about how the thing is being used by observation. As far as language evolution goes, it might be possible to evolve a strongly typed language into a mode where it is still strongly typed, but you're not required to state as much up front about the only thing this could be.

TSS: Interesting. So some kind of variability effect.

Not so much variability as the compiler being more observant of how you are using this stuff. Programming constraints come into this. If the compiler can observe that you have already checked a variable for null, then it should be able to pass it through to anything that has a non-null assertion, because the constraint has already been met. That actually becomes a type modification: the variable has now mutated types to say this is not just an integer, this is an integer that is not null, because that has already been established by the code path. This is one of the areas.

TSS: To sort of bake some deeper structural analysis into the compiler.

Yeah, types would be a lot more numerous in this kind of system, and the compiler would have to be a lot smarter about stuff.

TSS: Does this create type bloat?

It could if it were just slapped together, but if we look at these as type annotations, then it shouldn't bloat that much.
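
A hypothetical sketch of the flow-based narrowing discussed above; today's Delphi compiler does not do this, and the comments simply spell out what such a compiler could infer.

    procedure Describe(Obj: TObject);
    begin
      if Obj = nil then
        Exit;

      // From here on, a flow-aware compiler could treat Obj as the narrower type
      // "TObject known to be non-nil" and let it flow into routines that assert a
      // non-nil argument without inserting another runtime check.
      WriteLn(Obj.ClassName);
    end;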

So, switching gears for a second, what do you see post-Whidbey in terms of evolution for (a) Delphi, and (b) what sort of evolution do you want to see for the CLR as a whole? Where do you think they need to go next?

So, evolution beyond Whidbey for Delphi. Certainly the Longhorn timeframe is going to be very interesting, with a lot of new user interface models and modes, and the .NET languages will be able to get at all of that stuff. How does that dovetail with legacy code? That is a big question. Is there a way for VCL, our WinForms sibling? VCL comes from Win32; it is an abstraction above the Windows API, and it has actually had an evolution much like WinForms, and there has been some confusion about VCL.

Let's ask the question: what is going on with the VCL with respect to Delphi .NET?

So there has been a little bit of concern in the industry and in our customer base about why we did VCL for .NET, because there is already WinForms, which is very similar to what VCL set out to do. That's true. WinForms does have a common ancestry with VCL; I mean, it was derived from, I think, WFC in J++, and some of the people who worked on that are actually the same people who worked on VCL at Borland. So, as Anders Hejlsberg says, good ideas don't just go away; these systems build on top of each other even though they come from different companies and timeframes.

TSS: Plagiarism is the sure sign of respect, or something like that.

Greatest form of flattery.

TSS: There you go.

When Delphi people first started looking at .NET, it was, oh my God, look at all this stuff that has been taken, and after we sorted that out internally at Borland, we said, well, actually no; what this means is that .NET was built for Delphi. There are so many things we had been building up and figuring out, what works and what doesn't, in the Delphi space over the eight years prior. Then when we look at .NET, okay, C# is a new language, but what does it do that is new in terms of language philosophy? Not really that much: properties and events, the exception model, a unified object model, and delegation as a principal programming technique. These are all new to the C and Java folks and some others, but not to the Delphi folks. So we take that in stride and just keep going. In the Whidbey timeframe, and looking out beyond Whidbey, I see the Delphi space looking into the Longhorn areas: can we evolve VCL one more time, or many more times? In the Longhorn timeframe, the question for me, and I have to do the research on this, is whether we can evolve VCL to adapt to Avalon and the Longhorn UI model. It's radically different from Win32, but then again, VCL was specifically built to transition between platforms. It was actually built with the Win32 migration in mind, because we knew that OWL could not survive the transition. Another quote from Anders is, "If you are going to break something, let's break it good, break it for a reason, and do it once." So we decided to break OWL, do the VCL stuff, and carry that over to Win32, and then we carried it into Linux and we carried it into .NET. So the question is, is that enough of an abstraction, can we evolve it again into the Avalon timeframe? It is early to say whether that is going to happen or not, but I would certainly be very happy if we could.

Now, you know, why would we continue VCL for .NET when WinForms is sitting right there? Yes, they do compete against each other a little bit, in that they both have window representations and so forth. For folks who are starting a new application, WinForms is really the de facto choice, so you have to decide for yourself: is it that you have VCL familiarity, is that your skill set, and do you want to keep going without having to retrain on WinForms? Then VCL for .NET might be the better choice. So you have these choices. In terms of the platform beyond Whidbey, it's all about the devices. So many different devices have been opened up by this virtual machine idea; Java was going after this before, and .NET has made it even bigger. It is kind of odd that we have this symbiotic relationship between Intel and Microsoft. They love to hate each other; Intel would love to get Microsoft off their backs, and Microsoft would love to break free of Intel, and yet they can't. They are just deadlocked.

So, you know, Intel is funding all of this Linux stuff and Microsoft is developing .NET for other chips, and it will be interesting to see how the fracas turns out. But in the Longhorn timeframe, beyond Whidbey, I think a lot of the growth of the Microsoft platform will probably be in places that are not Win32: the SPOT watches, whatever those end up being; the car systems; or, from the recent press announcements, the TV stuff picked up by Comcast. These are really kind of cool, particularly if it means you can take your .NET IL code, 100% managed code like Delphi produces, drop it on all these different platforms, and have some sort of consistent execution model. It has been kind of interesting that Microsoft specifically does not claim "write once, run anywhere"; they ridicule Java for that. But I think there is a hybrid that we will wind up with.

TSS: Write once, run many places?

Run many places, or run in various places under compatibility boxes. And talking with Microsoft about their 64-bit .NET plans and the betas that are going on, there is this debate within Microsoft: given a 32-bit .NET application, should we bubble it up to native 64-bit or not? There are still some code patterns in .NET that would not survive that; it could have side effects. As a Win32 and .NET developer, I would love to be able to take my Win32 apps and my .NET apps, run them on 64-bit, and have them really be 64-bit code, but it is still up in the air whether Microsoft will favor safety and actually run that .NET code in a 32-bit box on the 64-bit platform. I can understand that safety measure, their conservative stance, but I still want a button that says "Go for it." Perhaps something like an application manifest that the vendor can distribute, saying this application binary is okay to run in 64-bit mode.

TSS: You know, something in a config file.

Yeah, manifests, config files, these things should be produced by the vendor, but they could also be produced by an administrator who knows what they are doing. Hopefully not my grandma.

TSS: Yes, yes. Let's hope a lot.

So you get into similar questions about the Compact Framework. Frankly, I was really disappointed that the Compact Framework is so different from .NET. There are so many pieces missing, and I understand they had to restrict everything down to get to a certain footprint size. Their number one goal was to be smaller than J2ME, and they did it, but in paring things down they also lost things like ListBox.BeginUpdate. BeginUpdate and EndUpdate is one of the critical patterns in your code for performance; now you have to skip it, or say, well, we are not really worried about the performance of the code that goes between the two platforms. It's a little bit frustrating. On the other hand, you know, does it make sense to take an application that was built for 1600 x 1200 and try to cram it into a one-inch square? That is not going to work. So you do have to tailor the application.

TSS: The presentation layer to the target device?

And even if you talk about abstracting the presentation layer, there is such a big difference there that you just can't talk about having menus and things like that on the same scale. So the UI itself needs to be rethought. You might have common code behind that, your engines and so forth, but unfortunately right now in CF you have to build the engine for that specific platform as well, and it can't be shared with the desktop.
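
As an aside, the BeginUpdate/EndUpdate idiom mentioned above is the standard VCL way to batch changes to a list control so it repaints once instead of once per item; a minimal sketch (the 10,000-item loop is just for illustration):

    uses
      Classes, SysUtils;

    procedure FillList(Items: TStrings);
    var
      I: Integer;
    begin
      Items.BeginUpdate;    // suspend change notifications and repaints
      try
        Items.Clear;
        for I := 1 to 10000 do
          Items.Add(IntToStr(I));
      finally
        Items.EndUpdate;    // a single repaint at the end instead of thousands
      end;
    end;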

You brought up Longhorn, which then brings up the discussion of XAML. Is that something you guys are looking at? Is that something Microsoft considers proprietary, or will you be able to participate in XAML as well?

Microsoft has actually already invited us into discussions about XAML, and there is very exciting stuff there. It is basically a persistence description, and ironically enough, of course this is our own ego talking, the XAML description model is very similar to what we developed for VCL, which is the DFM file. It is a data representation of the object network at runtime. This is different from the Java/C# model, where you have this init-controls method that you override, and you have to instantiate the controls and set their properties in code. The Delphi model has always been that that gets done by the designer and stored as data in your resources. Then at runtime the VCL streaming system opens that data up again, constructs the object instances, and assigns their properties based on RTTI. That is very similar to what XAML is doing. XAML is just doing it in spades, with XML, possibly embedded scripting, and lots of other cool stuff on top. So we are very excited about that one.
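
For readers who have not seen one, a text-form DFM looks roughly like this (an illustrative fragment, not taken from a real project): the form designer records the object network and its property values, and the VCL streaming system reconstructs the objects from this data at runtime.

    object MainForm: TMainForm
      Caption = 'Hello'
      ClientWidth = 320
      ClientHeight = 200
      object OKButton: TButton
        Left = 16
        Top = 16
        Width = 75
        Height = 25
        Caption = 'OK'
        OnClick = OKButtonClick
      end
    end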

Very cool. Danny, I want to thank you for your time. It has been a blast, and I wish you all the luck in the world.

Thank you very much. It has been a pleasure. Appreciate the invitation.

(http://www.theserverside.net/talks/index.tss)

 
 
 