Telerik RadNumericTextBox Parser Error: System.Web.HttpException: Cannot create an object of type 'System.Type' from its string representation 'System.Int64' for the 'DataType' property
System.Web.HttpParseException (0x80004005): Cannot create an object of type 'System.Type' from its string representation 'System.Int64' for the 'DataType' property. ---> System.Web.HttpParseException (0x80004005): Cannot create an object of type 'System.Type' from its string representation 'System.Int64' for the 'DataType' property. ---> System.Web.HttpException (0x80004005): Cannot create an object of type 'System.Type' from its string representation 'System.Int64' for the 'DataType' property.
   at System.Web.UI.PropertyConverter.ObjectFromString(Type objType, MemberInfo propertyInfo, String value)
   at System.Web.UI.ControlBuilder.AddProperty(String filter, String name, String value, Boolean mainDirectiveMode)
   at System.Web.UI.ControlBuilder.PreprocessAttributes(ParsedAttributeCollection attribs)
   at System.Web.UI.ControlBuilder.Init(TemplateParser parser, ControlBuilder parentBuilder, Type type, String tagName, String id, IDictionary attribs)
   at System.Web.UI.ControlBuilder.CreateBuilderFromType(TemplateParser parser, ControlBuilder parentBuilder, Type type, String tagName, String id, IDictionary attribs, Int32 line, String sourceFileName)
   at System.Web.UI.ControlBuilder.CreateChildBuilder(String filter, String tagName, IDictionary attribs, TemplateParser parser, ControlBuilder parentBuilder, String id, Int32 line, VirtualPath virtualPath, Type& childType, Boolean defaultProperty)
   at System.Web.UI.TemplateParser.ProcessBeginTag(Match match, String inputText)
   at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)
   at System.Web.UI.TemplateParser.ProcessException(Exception ex)
   at System.Web.UI.TemplateParser.ParseStringInternal(String text, Encoding fileEncoding)
   at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
   at System.Web.UI.TemplateParser.ParseString(String text, VirtualPath virtualPath, Encoding fileEncoding)
   at System.Web.UI.TemplateParser.ParseFile(String physicalPath, VirtualPath virtualPath)
   at System.Web.UI.TemplateParser.Parse()
   at System.Web.Compilation.BaseTemplateBuildProvider.get_CodeCompilerType()
   at System.Web.Compilation.BuildProvider.GetCompilerTypeFromBuildProvider(BuildProvider buildProvider)
   at System.Web.Compilation.BuildProvidersCompiler.ProcessBuildProviders()
   at System.Web.Compilation.BuildProvidersCompiler.PerformBuild()
   at System.Web.Compilation.BuildManager.CompileWebFile(VirtualPath virtualPath)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile, Boolean throwIfNotFound, Boolean ensureIsUpToDate)
   at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean throwIfNotFound)
   at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp)
   at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath)
   at System.Web.HttpApplication.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
This page was working perfectly on 2 development environments and 1 staging environment, all of which had the same version of the controls installed (BIN deployed). But when deployed to the production server, the page fails with this stupid error.
Telerik’s explanation is that Telerik.Web.UI.dll and Telerik.Web.Design.dll have different versions, but this was not the case: I verified the versions were correct, rolled back to the DLLs from before the deployment, etc. I cleared the .NET temporary file cache, restarted IIS, restarted the web service, re-referenced the controls in the web.config, and even tried updating the GAC. Nothing worked.
I followed Telerik’s instructions (different from the above) as well, to no avail.
Since I was working against a deadline for this deploy, my only option was to remove the RadNumericTextBox, replace it with a RadTextBox, and handle validation of the input on the server side.
A great solution? No. A proper fix? Definitely not. But I wasn’t about to wait to hear back from Telerik with their generic responses and then start shooting in the dark all over again.
This is an update to my previous post regarding the READ ONLY / WRITE PROTECTED VOLUMES IN SERVER 2008.
So the work-around presented in the previous post will get you by, but it’s not a solution. I set up a script to run the diskpart script every hour, and still I found that it was happening almost randomly. I noticed that it only happened on my Disk 2, which was on the built-in SATA controller using the JMicron chipset. Interesting. After I installed the new RAID array (here), my system drive was now listed as Disk 2, it was in fact being flagged as read-only, and my script no longer worked. What a pain in the butt. It was especially annoying since SQL Server and IIS would start failing because they couldn’t write to the system drive. What a mess.
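For reference, the hourly work-around boils down to a diskpart script along these lines (a sketch only; the disk number and file names here are specific to my setup):

```
rem -- clear-readonly.txt: diskpart script; "2" was the affected disk on my box
select disk 2
attributes disk clear readonly
```

I ran it non-interactively with `diskpart /s C:\scripts\clear-readonly.txt`, scheduled hourly via Task Scheduler. As the rest of this post explains, once the disk numbering shifted, the script started clearing the wrong disk, which is exactly why this is a band-aid and not a fix.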
At this point you have to step back and consider the situation. I had just upgraded some hardware and the situation changed, but after numerous software changes the issue remained. So what’s hardware related but lives in software, such that it can tell the OS that a disk is read-only? The answer: a driver.
Thus began my quest to find driver issues with the JMicron chipset. Lo and behold, it’s a known issue. Once I installed an updated driver, the issue that would bring my server to a halt VANISHED.
Link to the JMB36X Windows driver.
Make sure you research your set-up first before installing a random driver. You’ll only make a bad situation worse.
I purchased 2 WDC WD20EARS-00M drives and put them in a RAID 0 configuration (124 KB stripe) for performance on non-crucial operations. Meaning, anything I have on there I can live with losing OR have backed up at least twice elsewhere. This includes virtual machines, movies, and music. Anyway, I wanted to post these benchmarks from HD Tach, as there have been a ton of reports that these drives are no good in RAID configurations. Those reports are probably true, especially since these drives have variable spin rates, which fluctuate independently of each other and can pose problems.
When I first set them up I noticed HUGE fluctuations and large differences in transfer speeds, from 200+ MB/s down to ~80 MB/s. I have not been able to reproduce them (not yet, at least), and the HD Tach results are promising. Let’s see how this works out. I will update if I have any problems.
The first 3 images are the RAID configuration, with the last being a single drive.
NOTE: I was unable to utilize HD Tune Pro 3.5 to test the raid configuration as it only showed the drives at 2199 GB and reported read speeds of 12460.9 MB/s. There’s obviously something wrong there, probably caching on the RAID controller and within Windows Server 2008, and the fact it’s over 2TB.
UPDATE: I upgraded to HD Tune Pro 3.6, and it is able to benchmark the configuration. It shows performance ranging from 250 MB/s down to 80 MB/s at the end of the drives. That’s great: roughly twice the performance of a single drive, which is what we expected. I also posted the Random Access benchmarks for the single and RAIDed drives. You can tell which is which by the drop-down list in the top-left corner of HD Tune Pro.
The low IOPS on the RAID configuration show that these drives are not intended for high-I/O environments, such as a web server or SQL Server. They do, however, work just fine for low- to medium-I/O file servers, as good sequential read speeds are perfect for that kind of work.
I will be doing this configuration on my test machine very soon since my previous guide (here) is a bit outdated. For now you can follow the previous instructions and modify them per the instructions below:
The problem is known and posted on many forums.
My solution was:
- Encrypt the Windows 7 system partition using TrueCrypt, selecting single boot and letting TrueCrypt overwrite the GRUB2 loader with its own loader
- Boot Debian from a rescue CD and install the GRUB2 bootloader NOT on the MBR but on /dev/sda3, which is the Debian / partition (so the TrueCrypt loader is not overwritten)
Now the TrueCrypt boot menu is shown at startup. If I want Windows 7, I enter my password; if I want Debian (via GRUB2), I hit the Esc key, and the TrueCrypt loader searches the other partitions for a boot loader, finds GRUB2 on /dev/sda3, and loads the system properly.
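For anyone reproducing the second step, the partition install from the rescue environment looks roughly like this (a sketch; device names are from my layout, and --force is needed because GRUB2 normally refuses to install to a partition):

```
# From the rescue shell, with the Debian root mounted and chrooted into:
grub-install --force /dev/sda3   # install to the partition, leaving the MBR (TrueCrypt loader) untouched
update-grub                      # regenerate /boot/grub/grub.cfg
```

The whole trick is that grub-install never touches the MBR, so the TrueCrypt loader stays in charge of the boot sequence.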
I think it’s the best way to do this for now (until somebody finds a way for GRUB2 to read /boot/truecrypt.mbr without errors).
So for the longest time I’ve been drinking beer while studying. I’d drink until I felt a little buzz and then space out the beers to try to maintain my productivity and creativity while either programming for work or studying for school. It turns out I had rediscovered an already well-known phenomenon called the Ballmer Peak. Go figure… at least I’m justified in my drinking!
Borrowed from xkcd.
I have always loved this tablet / notebook (click here to read about my experience ordering it). Not only was it very portable, but it was sufficiently powerful for my needs as a technology professional. Unfortunately, approximately two years after I purchased it, this laptop is dead.
Once the power button is pressed, the screen remains black, the LEDs flash, all lights come on, and the CPU / GPU fan spins. That’s the extent of the laptop’s operation. Before searching the internet I began troubleshooting it myself, swapping out the HDD, RAM, etc. Nothing worked.
Then I started searching for a cause, since this was obviously a hardware issue, specifically the motherboard. To my horror, I found my answers here:
Here’s HP’s weak attempt at assisting with the troubleshooting (link).
I’ve tried all the fixes, including pressing down on J, K, and L while booting, pushing down with my palm on the Enter key, and even going through the trouble of fixing the heat sink gap (link, and link) that seems to be the culprit behind all of this. Unfortunately, and though there are no signs of warping or burning on the board, it appears the damage was too great. The laptop is dead… and HP stole hundreds of dollars from me.
There are a few motherboards available on eBay, ranging from $140–180 (which is cheap, considering the motherboard costs $300+ from HP). Unfortunately, in my case, I’m not sure a new motherboard will completely remedy the situation, since the GPU and / or CPU may also be damaged. If that’s the case, then that’s even more time and money to invest in this worthless laptop from a company whose integrity mimics that of Hitler.
In conclusion, I will save the money I would have used to repair this laptop and purchase a higher-quality laptop from Dell, Toshiba, or some other reputable company. HP has been added to my BLACK LIST.
This article is pretty amazing. The world is changing: a computer program is now used as a weapon against an enemy. What impresses me the most is the strategic exploitation of social customs, human behavior, and secure systems alike. Definitely worth a read.
Use this simple script to truncate the log file of your database, where yourDB is the database name. By default in MSSQL 2008 the log file name is the same as the database file name, with _log appended at the end. If your database deviates from this (possible if the server was upgraded from 2005 or the file name was intentionally changed), use the next snippet of code to find the name of the log file.
TRUNCATE DATABASE LOG FILE
ALTER DATABASE [yourDB] SET RECOVERY SIMPLE WITH NO_WAIT
DBCC SHRINKFILE (N'yourDB_log', 1)
ALTER DATABASE [yourDB] SET RECOVERY FULL WITH NO_WAIT
FIND LOG FILE NAME
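A query along these lines will do it (my own sketch, assuming SQL Server 2005 or later, where the sys.database_files catalog view is available); the name column is the logical file name you need:

```sql
USE [yourDB];  -- run in the context of the database in question
SELECT name, physical_name
FROM sys.database_files
WHERE type_desc = 'LOG';
```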
Robert L. Glass
This month’s column is simply a collection of what I consider to be facts—truths, if you will—about software engineering. I’m presenting this software engineering laundry list because far too many people who call themselves software engineers, or computer scientists, or programmers, or whatever nom du jour you prefer, either aren’t familiar with these facts or have forgotten them.
I don’t expect you to agree with all these facts; some of them might even upset you. Great! Then we can begin a dialog about which facts really are facts and which are merely figments of my vivid loyal opposition imagination! Enough preliminaries. Here are the most frequently forgotten fundamental facts about software engineering. Some are of vital importance—we forget them at considerable risk.
C1. For every 10-percent increase in problem complexity, there is a 100-percent increase in the software solution’s complexity. That’s not a condition to try to change (even though reducing complexity is always desirable); that’s just the way it is. (For one explanation of why this is so, see RD2 in the section “Requirements and design.”)
P1. The most important factor in attacking complexity is not the tools and techniques that programmers use but rather the quality of the programmers themselves.
P2. Good programmers are up to 30 times better than mediocre programmers, according to “individual differences” research. Given that their pay is never commensurate, they are the biggest bargains in the software field.
Tools and techniques
T1. Most software tool and technique improvements account for about a 5- to 30-percent increase in productivity and quality. But at one time or another, most of these improvements have been claimed by someone to have “order of magnitude” (factor of 10) benefits. Hype is the plague on the house of software.
T2. Learning a new tool or technique actually lowers programmer productivity and product quality initially. You achieve the eventual benefit only after overcoming this learning curve.
T3. Therefore, adopting new tools and techniques is worthwhile, but only if you (a) realistically view their value and (b) use patience in measuring their benefits.
Q1. Quality is a collection of attributes. Various people define those attributes differently, but a commonly accepted collection is portability, reliability, efficiency, human engineering, testability, understandability, and modifiability.
Q2. Quality is not the same as satisfying users, meeting requirements, or meeting cost and schedule targets. However, all these things have an interesting relationship: User satisfaction = quality product + meets requirements + delivered when needed + appropriate cost.
Q3. Because quality is not simply reliability, it is about much more than software defects.
Q4. Trying to improve one quality attribute often degrades another. For example, attempts to improve efficiency often degrade modifiability.
RE2. There are certain kinds of software errors that most programmers make frequently. These include off-by-one indexing, definition or reference inconsistency, and omitting deep design details. That is why, for example, N-version programming, which attempts to create multiple diverse solutions through multiple programmers, can never completely achieve its promise.
RE3. Software that a typical programmer believes to be thoroughly tested has often had only about 55 to 60 percent of its logic paths executed. Automated support, such as coverage analyzers, can raise that to roughly 85 to 90 percent. Testing at the 100-percent level is nearly impossible.
RE4. Even if 100-percent test coverage (see RE3) were possible, that criterion would be insufficient for testing. Roughly 35 percent of software defects emerge from missing logic paths, and another 40 percent are from the execution of a unique combination of logic paths. These will not be caught by 100-percent coverage (100-percent coverage can, therefore, potentially detect only about 25 percent of the errors!).
RE5. There is no single best approach to software error removal. A combination of several approaches, such as inspections and several kinds of testing and fault tolerance, is necessary.
RE6. (corollary to RE5) Software will always contain residual defects, after even the most rigorous error removal. The goal is to minimize the number and especially the severity of those defects.
EF1. Efficiency is more often a matter of good design than of good coding. So, if a project requires efficiency, efficiency must be considered early in the life cycle.
EF2. High-order language (HOL) code, with appropriate compiler optimizations, can be made about 90 percent as efficient as the comparable assembler code. But that statement is highly task dependent; some tasks are much harder than others to code efficiently in HOL.
EF3. There are trade-offs between size and time optimization. Often, improving one degrades the other.
M1. Quality and maintenance have an interesting relationship (see Q3 and Q4).
M2. Maintenance typically consumes about 40 to 80 percent (60 percent average) of software costs. Therefore, it is probably the most important life cycle phase.
M3. Enhancement is responsible for roughly 60 percent of software maintenance costs. Error correction is roughly 17 percent. So, software maintenance is largely about adding new capability to old software, not about fixing it.
M4. The previous two facts constitute what you could call the “60/60” rule of software.
M5. Most software development tasks and software maintenance tasks are the same—except for the additional maintenance task of “understanding the existing product.” This task is the dominant maintenance activity, consuming roughly 30 percent of maintenance time. So, you could claim that maintenance is more difficult than development.
Requirements and design
RD1. One of the two most common causes of runaway projects is unstable requirements. (For the other, see ES1.)
RD2. When a project moves from requirements to design, the solution process’s complexity causes an explosion of “derived requirements.” The list of requirements for the design phase is often 50 times longer than the list of original requirements.
RD3. This requirements explosion is partly why it is difficult to implement requirements traceability (tracing the original requirements through the artifacts of the succeeding life cycle phases), even though everyone agrees this is desirable.
RD4. A software problem seldom has one best design solution. (Bill Curtis has said that in a room full of expert software designers, if any two agree, that’s a majority!) That’s why, for example, trying to provide reusable design solutions has so long resisted significant progress.
Reviews and inspections
RI1. Rigorous reviews commonly remove up to 90 percent of errors from a software product before the first test case is run. (Many research findings support this; of course, it’s extremely difficult to know when you’ve found 100 percent of a software product’s errors!)
RI2. Rigorous reviews are more effective, and more cost effective, than any other error-removal strategy, including testing. But they cannot and should not replace testing (see RE5).
RI3. Rigorous reviews are extremely challenging to do well, and most organizations do not do them, at least not for 100 percent of their software artifacts.
RI4. Post-delivery reviews are generally acknowledged to be important, both for determining customer satisfaction and for process improvement, but most organizations do not perform them. By the time such reviews should be held (three to 12 months after delivery), potential review participants have generally scattered to other projects.
REU1. Reuse-in-the-small (libraries of subroutines) began nearly 50 years ago and is a well-solved problem.
REU2. Reuse-in-the-large (components) remains largely unsolved, even though everyone agrees it is important and desirable.
REU3. Disagreement exists about why reuse-in-the-large is unsolved, although most agree that it is a management, not technology, problem (will, not skill). (Others say that finding sufficiently common subproblems across programming tasks is difficult. This would make reuse-in-the-large a problem inherent in the nature of software and the problems it solves, and thus relatively unsolvable).
REU4. Reuse-in-the-large works best in families of related systems, and thus is domain dependent. This narrows its potential applicability.
REU5. Pattern reuse is one solution to the problems inherent in code reuse.
ES1. One of the two most common causes of runaway projects is optimistic estimation. (For the other, see RD1.)
ES2. Most software estimates are performed at the beginning of the life cycle. This makes sense until we realize that this occurs before the requirements phase and thus before the problem is understood. Estimation therefore usually occurs at the wrong time.
ES3. Most software estimates are made, according to several researchers, by either upper management or marketing, not by the people who will build the software or by their managers. Therefore, the wrong people are doing estimation.
ES4. Software estimates are rarely adjusted as the project proceeds. So, those estimates done at the wrong time by the wrong people are usually not corrected.
ES5. Because estimates are so faulty, there is little reason to be concerned when software projects do not meet cost or schedule targets. But everyone is concerned anyway!
ES6. In one study of a project that failed to meet its estimates, the management saw the project as a failure, but the technical participants saw it as the most successful project they had ever worked on! This illustrates the disconnect regarding the role of estimation, and project success, between management and technologists. Given the previous facts, that is hardly surprising.
ES7. Pressure to achieve estimation targets is common and tends to cause programmers to skip good software process. This constitutes an absurd result done for an absurd reason.
RES1. Many software researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth less than their advocates believe and (b) there is a shortage of evaluative research to help determine the actual value of new tools and techniques.
There, that’s my two cents’ worth of software engineering fundamental facts. What are yours? I expect, if we can get a dialog going here, that there are a lot of similar facts that I have forgotten—or am not aware of. I’m especially eager to hear what additional facts you can contribute.
And, of course, I realize that some will disagree (perhaps even violently!) with some of the facts I’ve presented. I want to hear about that as well.
Robert L. Glass is the editor of Elsevier’s Journal of Systems and Software and the publisher and editor of The Software Practitioner newsletter. Contact him at rglass@indiana (dot) education; he’d be pleased to hear from you.
Reprinted from IEEE Software, vol. 18, no. 3, 2001, pp. 112, 110–111.