Q: Is it possible to embed Gecko or WebKit in a Windows Form just like a WebView? I'd love to know if there is such a thing as a Gecko.NET ;) I mean, just like we can embed a WebView that is an "instance" of IE7 inside any Windows Forms application (and tell it to navigateto(fancy_url);), I'd love to use Firefox or WebKit.
Anybody tried this?
UPDATE: Please bear in mind that although it is possible to embed Gecko using the mentioned controls, it is still impossible to print while using Gecko.
UPDATE March 2010: It’s still not possible to print natively using GeckoFX, however a couple of methods exist that may be enough, depending upon what you’re trying to do.
See: http://geckofx.org/viewtopic.php?id=796 for more information.
UPDATE October 2013: I am no longer doing Windows development so I have no interest in this, but it seems the development of GeckoFX can now be found here: https://bitbucket.org/geckofx and it appears to be recently updated. Leaving this here for future Windows devs ;)
UPDATE January 2017: I have gotten an email from a company called TeamDev. They created a Chromium-based .NET browser component called "DotNetBrowser" which can be used to display modern web pages in Windows Forms applications.
To quote the email directly:
Here are some details about the component, which might be helpful:
* DotNetBrowser is based on Chromium, thus supporting HTML5, CSS3, JS and the latest web standards. The underlying Chromium version of the library is regularly updated.
* The component is suitable for WPF as well as Windows Forms desktop applications, and works both for C# and VB.NET.
* The library is licensed commercially, however free licences are provided for Open Source and academic projects.
Disclaimer: I have not used DotNetBrowser myself, as I no longer do Windows development, but it may be worth checking out if you're looking for a solution to this.
A: GeckoFX is no longer being updated. The alternative is the MozNet XulRunner wrapper by Se7en Soft. MozNet has a ton of features that GeckoFX doesn't and is being actively updated and maintained.
A: I'd just like to point out, to all looking to embed Gecko into their applications, that the GeckoFX project appears to have been abandoned by its creators (Skybound Software). MozNET, while previously based on GeckoFX, sorta' picked up the ball and ran with it. It has the full ability to print, do print previews and allows you to set it all up via the native Windows print dialog, even - and a whole lot more.
A: http://code.google.com/p/geckofx/
This is a nice .NET-wrapped version of Gecko
A: OpenWebKitSharp is a wrapper around the WebKit engine (nightly) and is very advanced. Take a look here (OpenWebKitSharp section): http://code.google.com/p/open-webkit-sharp/
A: Update 2016:
BrowseEmAll.Gecko
A .NET component which can be used to integrate the Firefox engine into your .NET application. This is based on GeckoFX, but unlike the current version of GeckoFX, it will work with a normal release build of Firefox. To use GeckoFX you will need to build Firefox yourself. Again, commercial support is available, but the component itself is fully open source.
(Full disclosure: I work for this company so take everything I say with a grain of salt)
A: @Martin: Yes, the Adam Locke version is outdated. But that's because a separate distribution is not necessary. It's built with the rest of the Mozilla codebase now.
If you download Prism (ie XulRunner), that will give you a base that you can customize to your needs, and this includes the most recent version of the control (in the \Prism\xulrunner directory, you'll find mozctlx.dll).
@Greg: Actually, it is an ActiveX control. Incidentally, all ActiveX controls are COM controls. ActiveX is built on COM.
A: As of October 30, 2011, there is new information to add since the time of the previous posts. Specifically, while Skybound stopped maintaining their version, there is at least one actively maintained, free, open-source fork available.
I'm using Hindle's fork at BitBucket which, by virtue of his tool that parses XPCOM IDLs and creates C# wrappers, is rapidly updated with support for each new version of Firefox/Gecko.
See this post for an overview of other choices.
A: Additionally, if you find yourself using Gtk instead of Windows.Forms, there is a tarball of webkit-sharp available that allows for easy embedding of WebViews into Gtk# applications.
A: I believe "GeckoFX"[1] is the thing you need.
To quote from the web site:
"""
GeckoFX is a Windows Forms control written in clean, commented C# that embeds the Mozilla Gecko browser control in any Windows Forms Application. It also contains a simple class model providing access to the HTML and CSS DOM.
"""
1) I can't post a link as "new users aren't allowed to add hyperlinks" Search for "geckofx" on google code.
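For illustration, here's a minimal sketch of what embedding GeckoFX in a Windows Forms app typically looks like. The XulRunner path is a placeholder you'd point at your own runtime, and the page URL is illustrative; the type and method names are as documented by the GeckoFX project:
using System;
using System.Windows.Forms;
using Gecko; // GeckoFX assembly

class BrowserForm : Form
{
    public BrowserForm()
    {
        // The GeckoWebBrowser control docks like any other WinForms control.
        var browser = new GeckoWebBrowser { Dock = DockStyle.Fill };
        Controls.Add(browser);
        browser.Navigate("https://example.com");
    }

    [STAThread]
    static void Main()
    {
        // Placeholder path: point this at the XulRunner/Firefox runtime you ship.
        Xpcom.Initialize(@"C:\xulrunner");
        Application.Run(new BrowserForm());
    }
}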
A: It certainly is possible. All you need to do is register the Mozilla ActiveX control (mozctlx.dll I believe), and you can drag it onto your form as any ActiveX control. The programming interface is similar (though not identical) to the IE one, and you can even use the Microsoft.MSHTML.dll managed library for control in some cases.
I believe this is packaged with Firefox. If not, you can get just the embeddable bits from Mozilla as well. Just do a Google search for Mozilla ActiveX control or Mozilla Embedding C# and that should take you down the right path.
{
"language": "en",
"url": "https://stackoverflow.com/questions/26147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "71"
}
Q: Windows Service Increasing CPU Consumption At my job, I have a clutch of six Windows services that I am responsible for, written in C# 2003. Each of these services contain a timer that fires every minute or so, where the majority of their work happens.
My problem is that, as these services run, they start to consume more and more CPU time through each iteration of the loop, even if there is no meaningful work for them to do (ie, they're just idling, looking through the database for something to do). When they start up, each service uses an average of (about) 2-3% of 4 CPUs, which is fine. After 24 hours, each service will be consuming an entire processor for the duration of its loop's run.
Can anyone help? I'm at a loss as to what could be causing this. Our current solution is to restart the services once a day (they shut themselves down, then a script sees that they're offline and restarts them at about 3AM). But this is not a long term solution; my concern is that as the services get busier, restarting them once a day may not be sufficient... but as there's a significant startup penalty (they all use NHibernate for data access), as they get busier, exactly what we don't want to be doing is restarting them more frequently.
@akmad: True, it is very difficult.
* Yes, a service run in isolation will show the same symptom over time.
* No, it doesn't. We've looked at that. This can happen at 10AM or 6PM or in the middle of the night. There's no consistency.
* We do; and they are. The services are doing exactly what they should be, and nothing else.
* Unfortunately, that requires foreknowledge of exactly when the services are going to be maxing out CPUs, which happens on an unpredictable schedule, and never very quickly... which makes things doubly difficult, because my boss will run and restart them when they start having problems without thinking of debug issues.
* No, they're using a fairly consistent amount of RAM (approx. 60-80MB each, out of 4GB on the machine).
Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving. My boss' solution (which I emphatically don't want to implement) is to put a field in the database which holds multiple times for the services to restart during the day, so that he can make the problem go away and not think about it. I'm desperately seeking the cause of the real problem so that I can fix it, because that solution will become a disaster in about six months.
@Yaakov Ellis: They each have a different function. One reads records out of an Oracle database somewhere offsite; another one processes those records and transfers files belonging to those records over to our system; a third checks those files to make sure they're what we expect them to be; another is a maintenance service that constantly checks things like disk space (that we have enough) and polls other servers to make sure they're alive; one is running only to make sure all of these other ones are running and doing their jobs, monitors and reports errors, and restarts anything that's failed to keep the whole system going 24 hours a day.
So, if you're asking what I think you're asking, no, there isn't one common thing that all these services do (other than database access via NHibernate) that I can point to as a potential problem. Unfortunately, if that turns out to be the actual issue (which wouldn't surprise me greatly), the whole thing might be screwed -- and I'll end up rewriting all of them in simple SQL. I'm hoping it's a garbage collector problem or something easier to deal with than NHibernate.
@Joshdan: No secret. As I said, we've tried all the usual troubleshooting. Profiling was unhelpful: the profiler we use was unable to point to any code that was actually executing when the CPU usage was high. These services were torn apart about a month ago looking for this problem. Every section of code was analyzed to attempt to figure out if our code was the issue; I'm not here asking because I haven't done my homework. Were this a simple case of the services doing more work than anticipated, that's something that would have been caught.
The problem here is that, most of the time, the services are not doing anything at all, yet still manage to consume 25% or more of four CPU cores: they're finding no work to do, and exiting their loop and waiting for the next iteration. This should, quite literally, take almost no CPU time at all.
Here's an example of the behaviour we're seeing, on a service with no work to do for two days (in an unchanging environment). This was captured last week:
Day 1, 8AM: Avg. CPU usage approx 3%
Day 1, 6PM: Avg. CPU usage approx 8%
Day 2, 7AM: Avg. CPU usage approx 20%
Day 2, 11AM: Avg. CPU usage approx 30%
Having looked at all of the possible mundane reasons for this, I've asked this question here because I figured (rightly, as it turns out) that I'd get more innovative answers (like Ubiguchi's), or pointers to things I hadn't thought of (like Ian's suggestion).
So does the CPU spike happen immediately preceding the timer callback, within the timer callback, or immediately following the timer callback?
You misunderstand. This is not a spike. If it were, there would be no problem; I can deal with spikes. But it's not... the CPU usage is going up generally. Even when the service is doing nothing, waiting for the next timer hit. When the service starts up, things are nice and calm, and the graph looks like what you'd expect... generally, 0% usage, with spikes to 10% as NHibernate hits the database or the service does some trivial amount of work. But this increases to an across-the-board 25% (more if I let it go too far) usage at all times while the process is running.
That made Ian's suggestion the logical silver bullet (NHibernate does a lot of stuff when you're not looking). Alas, I've implemented his solution, but it hasn't had an effect (I have no proof of this, but I actually think it's made things worse... average usage is seeming to go up much faster now). Note that stripping out the NHibernate "sections" (as you recommend) is not feasible, since that would strip out about 90% of the code in the service, which would let me rule out the timer as a problem (which I absolutely intend to try), but can't help me rule out NHibernate as the issue, because if NHibernate is causing this, then the dodgy fix that's implemented (see below) is just going to have to become The Way The System Works; we are so dependent on NHibernate for this project that the PM simply won't accept that it's causing an unresolvable structural problem.
I just noted a sense of desperation in the question -- that your problems would continue barring a small miracle
Don't mean for it to come off that way. At the moment, the services are being restarted daily (with an option to input any number of hours of the day for them to shutdown and restart), which patches the problem but cannot be a long-term solution once they go onto the production machine and start to become busy. The problems will not continue, whether I fix them or the PM maintains this constraint on them. Obviously, I would prefer to implement a real fix, but since the initial testing revealed no reason for this, and the services have already been extensively reviewed, the PM would rather just have them restart multiple times than spend any more time trying to fix them. That's entirely out of my control and makes the miracle you were talking about more important than it would otherwise be.
That is extremely intriguing (insofar as you trust your profiler).
I don't. But then, these are Windows services written in .NET 1.1 running on a Windows 2000 machine, deployed by a dodgy Nant script, using an old version of NHibernate for database access. There's little on that machine I would actually say I trust.
A: You mentioned that you're using NHibernate - are you closing your NHibernate sessions at appropriate points (such as at the end of each iteration)?
If not, then the size of the object map loaded into memory will be gradually increasing over time, and each session flush will take increasingly more CPU time.
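As a sketch of what that looks like (assuming an ISessionFactory built once at service startup; the work inside the block is a placeholder), each timer tick opens and disposes its own session:
private void OnTimerElapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    // One short-lived session per iteration; Dispose closes it and releases
    // the first-level cache, so the object map cannot grow between ticks.
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        // ... look for work in the database and process it ...
        tx.Commit();
    }
}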
A: Here's where I'd start:
* Get Process Explorer and show %Time in JIT, %Time in GC, CPU Cycles Delta, CPU Time, CPU %, and Threads.
* You'll also want kernel and user time, and a couple of representative stack traces, but I think you have to hit Properties to get snapshots.
* Compare before and after shots.
A couple of thoughts on possibilities:
* excessive GC (% Time in GC going up; the Perfmon GC and CPU counters would also correspond)
* excessive threads and associated context switches (# of threads going up)
* polling (stack traces are consistently caught in a single function)
* excessive kernel time (kernel times are high - Task Manager shows large kernel time numbers when CPU is high)
* exceptions (PE .NET tab "Exceptions thrown" is high and getting higher; there's also a Perfmon counter)
* virus/rootkit (OK, this is a last-ditch scenario - but it is possible to construct a rootkit that hides from Task Manager. I'd suspect that you could then attribute your inevitable CPU usage to another process if you were cunning enough. Besides, if you've ruled out all of the above, I'm out of ideas right now)
A: It's obviously pretty difficult to remotely debug your unknown application... but here are some things I'd look at:
* What happens when you only run one of the services at a time? Do you still see the slow-down? This may indicate that there is some contention between the services.
* Does the problem always occur around the same time, regardless of how long the service has been running? This may indicate that something else (a backup, virus scan, etc.) is causing the machine (or db) as a whole to slow down.
* Do you have logging or some other mechanism to be sure that the service is only doing work as often as you think it should?
* If you can see the performance degradation over a short time period, try running the service for a while and then attach a profiler to see exactly what is pegging the CPU.
* You don't mention anything about memory usage. Do you have any of this information for the services? It's possible that you're using up most of the RAM and causing the disk to thrash, or some similar problem.
Best of luck!
A: I suggest hacking the problem into pieces.
First, find a way to reproduce the problem 100% of the time, and quickly. Lower the timer so that the services fire up more frequently (for example, 10 times quicker than normal). If the problem arises 10 times quicker, then it's related to the number of iterations and not to real time or to the real work done by the services. And you will be able to do the next steps quicker than once a day.
Second, comment out all the real work code, and leave only the services, the timers and the synchronization mechanism. If the problem still shows up, then it will be in that part of the code.
If it doesn't, then start adding back the code you commented out, one piece at a time. Eventually, you should find out what part of the code is causing the problem.
A: 'Fraid this answer is only going to suggest some directions for you to look in, but having seen similar problems in .NET Windows Services I have a couple of thoughts you might find helpful.
My first suggestion is that your services might have some bugs in either the way they handle memory, or perhaps in the way they handle unmanaged memory. The last time I tracked down a similar issue, it turned out a 3rd party OSS library we were using stored handles to unmanaged objects in static memory. The longer the service ran, the more handles the service picked up, which caused the process' CPU performance to nose-dive very quickly. The way to try and resolve this sort of issue is to ensure your services store nothing in memory in between the timer invocations, although if your 3rd party libraries use static memory you might have to do something clever like create an app domain for the timer invocation and ditch the app domain (and its static memory) once processing is complete.
The other issue I've seen in similar circumstances was with the timer synchronization code being suspect, which in effect allowed more than one thread to be running the processing code at once. When we debugged the code we found the 1st thread was blocking the 2nd, and by the time the 2nd kicked off there was a 3rd being blocked. Over time the blocking was lasting longer and longer and the CPU usage was therefore heading to the top. The solution we used to fix the issue was to implement proper synchronization code so the timer only kicked off another thread if it wouldn't be blocked.
Hope this helps, but apologies up front if both my thoughts are red herrings.
A: Sounds like a threading issue with the timer. You might have one unit of work blocking another running on different worker threads, causing them to stack up every time the timer fires. Or you might have instances living and working longer than you expect.
I'd suggest refactoring out the timer. Replace it with a single thread that queues up work on the ThreadPool. You can Sleep() the thread to control how often it looks for new work. Make sure this is the only place where your code is multithreaded. All other objects should be instantiated as work is readied for processing and destroyed after that work is completed. STATE IS THE ENEMY in multithreaded code.
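A rough sketch of that shape (FindPendingWork and Process are hypothetical placeholders for your own lookup and handler, and the interval is arbitrary):
private volatile bool running = true;

// Started once on a single dedicated thread; the only multithreaded entry point.
private void WorkLoop()
{
    while (running)
    {
        foreach (object item in FindPendingWork())
        {
            // Hand each unit of work to the pool; the worker creates and
            // destroys whatever objects it needs, holding no shared state.
            ThreadPool.QueueUserWorkItem(new WaitCallback(Process), item);
        }
        Thread.Sleep(TimeSpan.FromMinutes(1)); // controls how often we look for new work
    }
}

private void Process(object item) { /* ... do one unit of work ... */ }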
Another area where the design is lacking appears to be that you have multiple services that are polling resources to do something. I'd suggest unifying them under a single service. They might do separate things, but they're working in unison; you're just using the filesystem, database, etc. as a substitute for method calls. Also, 2003? I feel bad for you.
A:
Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving.
My feeling is that no matter how bizarre the underlying cause, the usual troubleshooting steps are your best bet for locating the issue.
Since this is a performance issue, good measurements are invaluable. The overall process CPU usage is far too broad a measurement. Where is your service spending its time? You could use a profiler to measure this, or just log various section start and stops. If you aren't able to do even that, then use Andrea Bertani's suggestion -- isolate sections by removing others.
Once you've located the general area, then you can make even finer-grained measurements, until you sort out the source of the CPU usage. If it's not obvious how to fix it at that point, you at least have ammunition for a much more specific question.
If you have in fact already done all this usual troubleshooting, please do let us in on the secret.
{
"language": "en",
"url": "https://stackoverflow.com/questions/26148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
Q: Template typedefs - What's your work around? C++0x has template aliases (sometimes referred to as template typedefs). See here. The current C++ spec does not.
What do you like to use as a workaround? Container objects or macros?
Do you feel it's worth it?
A:
template <typename T>
struct my_string_map : public std::map<std::string,T>
{
};
You shouldn't inherit from classes that do not have a virtual destructor. It's related to destructors in derived classes not being called when they should be, and you could end up with leaked memory.
That being said, you could probably get away with it in the instance above because you're not adding any more data to your derived type. Note that this is not an endorsement; I still advise you don't do it. The fact that you can do it doesn't mean you should.
EDIT: Yes, this is a reply to ShaChris23's post. I probably missed something because it showed up above his/her message instead of below.
A:
What do you like to use as a workaround? Container objects or macros? Do you feel it's worth it?
The canonical way is to use a metafunction like this:
template <typename T>
struct my_string_map {
typedef std::map<std::string, T> type;
};
// Invoke:
my_string_map<int>::type my_str_int_map;
This is also used in the STL (allocator::rebind<U>) and in many libraries including Boost. We use it extensively in a bioinformatics library.
It's bloated, but it's the best alternative 99% of the time. Using macros here is not worth the many downsides.
(EDIT: I've amended the code to reflect Boost/STL conventions as pointed out by Daniel in his comment.)
A: Sometimes you can just explicitly write out the untemplated typedefs for all the necessary types. If the base class is templated on multiple template arguments, with only one type desired to be typedef'd, you can inherit from a specialization, with the typedef effectively included in the inherited class name. This approach is less abstruse than the metafunction approach.
{
"language": "en",
"url": "https://stackoverflow.com/questions/26151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "73"
}
Q: How does a "stack overflow" occur and how do you prevent it? How does a stack overflow occur and what are the ways to make sure it doesn't happen, or ways to prevent one?
A: Infinite recursion is a common way to get a stack overflow error. To prevent - always make sure there's an exit path that will be hit. :-)
Another way to get a stack overflow (in C/C++, at least) is to declare some enormous variable on the stack.
char hugeArray[100000000];
That'll do it.
A: Aside from the form of stack overflow that you get from a direct recursion (eg Fibonacci(1000000)), a more subtle form of it that I have experienced many times is an indirect recursion, where a function calls another function, which calls another, and then one of those functions calls the first one again.
This can commonly occur in functions that are called in response to events but which themselves may generate new events, for example:
void WindowSizeChanged(Size& newSize) {
// override window size to constrain width
newSize.width=200;
ResizeWindow(newSize);
}
In this case the call to ResizeWindow may cause the WindowSizeChanged() callback to be triggered again, which calls ResizeWindow again, until you run out of stack. In situations like these you often need to defer responding to the event until the stack frame has returned, eg by posting a message.
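In WinForms terms, for example, one way to defer is to post back through the message loop rather than resizing synchronously. A hedged C# sketch (ResizeWindow and the width constraint are illustrative, as in the snippet above):
private bool resizePending; // guards against re-entering the handler

void WindowSizeChanged(Size newSize)
{
    if (resizePending) return;
    resizePending = true;
    // BeginInvoke posts to the message queue, so the resize runs only
    // after the current event handler has returned and the stack unwound.
    BeginInvoke(new MethodInvoker(delegate
    {
        ResizeWindow(new Size(200, newSize.Height)); // constrain width
        resizePending = false;
    }));
}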
A: Usually a stack overflow is the result of an infinite recursive call (given the usual amount of memory in standard computers nowadays).
When you make a call to a method, function or procedure, the "standard" way of making the call consists of:
* Pushing the return address for the call onto the stack (that's the next statement after the call)
* Usually, reserving space for the return value on the stack
* Pushing each parameter onto the stack (the order diverges and depends on each compiler; some of them are also sometimes stored in CPU registers for performance)
* Making the actual call.
So, usually this takes a few bytes depending on the number and type of the parameters, as well as the machine architecture.
You'll see then that if you start making recursive calls, the stack will begin to grow. Now, the stack is usually reserved in memory in such a way that it grows in the opposite direction to the heap, so, given a big number of calls without "coming back", the stack begins to fill up.
Now, in older times a stack overflow could occur simply because you exhausted all available memory, just like that. With the virtual memory model (up to 4GB on an x86 system) that is out of the picture, so usually, if you get a stack overflow error, look for an infinite recursive call.
A: I have recreated the stack overflow issue while computing the familiar Fibonacci sequence, i.e. 1, 1, 2, 3, 5..., so fib(1) = 1, fib(3) = 2, ... fib(n) = ??
For n, let's say we are interested in: if n = 100,000, what will the corresponding Fibonacci number be?
The single-loop approach is as below:
package com.company.dynamicProgramming;

import java.math.BigInteger;

public class FibonacciByBigDecimal {

    public static void main(String... args) {
        int n = 100000;
        System.out.println("fibonacci of " + n + " is : " + fibByLoop(n));
    }

    static BigInteger fibByLoop(int n) {
        if (n == 1 || n == 2) {
            return BigInteger.ONE;
        }
        BigInteger fib = BigInteger.ONE; // fib(i)
        BigInteger fip = BigInteger.ONE; // fib(i - 1)
        for (int i = 3; i <= n; i++) {
            BigInteger p = fib;
            fib = fib.add(fip);
            fip = p;
        }
        return fib;
    }
}
This is quite straightforward, and the result is:
fibonacci of 100000 is : 25974069347221724166155034021275915414880485386517696584724770703952534543511273686265556772836716744754637587223074432111638399473875091030965697382188304493052287638531334921353026792789567010512765782716356080730505322002432331143839865161378272381247774537783372999162146340500546698603908627509966393664092118901252719601721050603003505868940285581036751176582513683774386849364134573388343651587754253719124105003321959913300622043630352137565254218239986908485563740801792517616293917549634585586163007628199160811098365263529954406942842065710460449038056471363460330005208522777075544467947237090309790190148604328468198579610159510018506082649192345873133991501339199323631023018641725364771362664750801339824312317034314529641817900511879573167668349799016820118499077566864568450662873924856039140476051995500662888263458771894106803700918793650017330117100283104739474562560914449328213748555738640805798130282666402703542944121049199958031318768058991865134251759599115205631553377039969410355182752749199598022575079020377981030899229849963044962558140455170002502997643221934621653662108418767454282982613982344783665815880408190033073829395000821320093747154851310272208173054322648669496309879147143629255542526240439996153269798768075106468190687921182991679644091782718685617029181022126792674013626504997849688436809752547001310045741864064482994858725517447466956518791269169932445648176733222571493149677633458466238303338202397024368594782876418757885729107101337003000942293335972927791914092128049015459762627910570552481588840517794181929052167695766087488155678601288183543542923073978101547857013284386127286201766539534449930019800629538936985500723286651317181135886613537472684585432548981137176605194616937916884425342594781263103889520479565943807153019112539648471126389007133628569101551453423329441284357220996286746119420951661002309740709965531900508158669911445442647882872642845017253320486483194578920399848938236367456182203750973485668474338872490493370316338265717607297788917989136673251906232471180372801739215723908227692280772924566627505383375006926077210593619421268920302567443565378008318306375933345023502569729065152853271943677560156660399164048825639676930792905029514886934137991251748566670747175149389790386533381395346848378086126737554383821108448976538368483182588363399173104558509056638462025014631311831087429077292622159430204291594740306101839816855066950261973761508571761199475875722129872053120607918649803615960923395941041186351688548839119185179061511562752936158490008721501922265117853150892510275280451512386037921846921215338292871369243215273327141574788295902601571954853164447945467502858402360002383447905203451080332820138038807089807348326201227952633606773669875783326254859449060219173688677862411205621098369850197290177157801120404586491539351157834995461006366357454485082418882790675313599505192062229760153765297973085881648731173082370598284894044874039320535929359764541655607954724778620299692329561389719894679422187273605123365595211331087787582288795975803204596084790245063851941743126163775104599211024868794963417068620929088930685252348056925998333775103901013166178123051145719327066291671254465121517468025481903583516889717075706778656188008220346836321018130262329960275994035799977740462449521145315883703579044832931500072461734173558055678321534543411700202585608091662941986374015145695722728369219632295111877625307534025947814482046574602884855000628069348113982760168555840795421620575435572915106415375929390228843561
207926437055600623679865443824643739469724719459965557955058380348255978396827760847315302517889517186307227611036305093600742622617173630586132915440246954329046162586917746305785076749374879923291817501634840688134655343709975893536074051729094126976575932951568186247471276364688365517570183534172746626073065104511957628663499228486787805910851189856535554349587616640164475880286336297040462890970677362565843002353147494612339120686321466370878446992104275415694109122465685712047172411333784898167640969249816334211768571503116710400681753031921154156119580425706586931272762137106974722260296555246110537155545324997508432752001992143019105053629960070429632978051030666506387862681576587726837451289768507963663710593809112254288358391941211547737599813019216509521401333060709873137329265181692268450634439540567298120315463923249817937804691037934221694952291007930299492375072993250630509428139027930841344730614116433556147640931044259184813639305423693789765205264563476483182726333715121120306292338892864879492097378478618848682608046473195392008403983080088038690495574197562192939221108257663976813610444900247209483403267967688376213967440757138872928630798218493143438797780887379588968409461434159271317578365114578289355818599029235343888888465874521308381377794436361197628390368945957601203165022798579015453447473527069728514545998614229027372911314637820455162254475353567736227936485450357102086445412089842350389087702230398493802147348096874333362254491501174117515707045610508952740002063804979679604026178186644812485472696308234733772455433905198413087697812765659167642290229481817630757102557933650081522863836344931380899717850870708636322058690189383777660630060667577324272729292474212952650007066467227300099561241914091389846752249557907293984956087504566942177715511073466304566039441362358884436762152739285970722879373559667239246138274687032178584599482575147454064364609970593161205968415604732343966524572316503177928338605903883604176914287327357039868033426046700717173635730911229813069032861371225979370966057751729645282637574340757922821807443529086696068540217185978911663338638585897362091142484321786450394791954242081916260885710691104339948014730131008698488664307212167624731196181907378207665829682807960794822595490363282665780069948568253005364366748225346037051345036031521542969439918662368576380623512098844487411386001711736476321260299614085619255997075668278667787323774194444622753999092910446977164761511186723272386792081333673061819448493966071233452718565202536436219641987827529788130600803131418170693144682211892757849782810943677515407101063505537980038422190455084822393869932969266592211127426981330623000734656284980936366930494468016285537126334126203784919194986000972008367278766507868863069334189952257683143908324848863403189401941610369798438333466086767094316436535384309121578155435128520777208580989020995864496024794919706872307656871092343807195098248144731578137800806393584187566550985013218828528401849814076907385073695353777118803885289353476009303385986916082893354211477229365619072762646037260272393209911878204070674122722581207667290400719242379303309721323641840939561029959712917998282900095391473824378027790511120309545825328887211461701334403859396540478061993332245473178034073409025121302172795957538631581488103929524754109438805550983826276331276067181261710220113561818007754002275167341441692164249731756213631285882819780057888324545345815224349372681334339977105125320814783450671398350383329013139459864818202723220433419309290119078328965692
228783374973543015617228291156273294688148532819221007523736268276431526857354932230280181014496490090155292486383388856648930022509743436012008143651536253691994467097111269519667257800618912154402224875646015546328120919458246535574320476442126507906552082083379760714651275083204871652715774723258872757611283575921325539344462894332581050286335836692918285668947362235082502949640657986308096143416968304675951743553132243626642071976084590242630174733922252912483663164280065528709750519975049130098594680710136023364401644001791886108532307649917143720544678235972117604651532001630853363193515896458906817223728123103202718979179512727996560536940321112428465909945563802154613161062675216338056643943188812681994940055370686976218552318589211009634410129335357339184596681975398342846968228894600763520316889220020219313183697575569620611157743058263055358620156378912460312206729339926173783796251509999354036487314232088739779689089083699962929953919772177965334212492919783837514600620549673416628334873410110977705358980664981360113955715843283087139405825352740560810115039079416880791972129331483030726386786314110384431282159949368243429981887197686376044963425975242568861886889789808883158650762626048564650043228968561492550639688114044004295038942458723822335431010786915173283336047792627277656860761777056168740502577437499837758301438561354272738385897741335269491654839297215195547935789238667625027453701046609093824496266269353213037445388924792161611888897020779104485631995148266308028795495464535838663073444237533197122791588617072896520901498483054359832007713266534072906620167757064096901837712013068232453334779666605253254908736019614803782415660712716503835822572892157082093695109958901328594907243061833257552012080900071750220229497428018234454137119162984499147222541965946822214682606449618392542496709031040075814888579716722463228870164384039084638567311643081695373267903031145836805750211196399056151691547085104597005420985717973180155647414061723341458471112685479298924430013914682891036791792169786165824890073220335913767065276765213071439853027609884780562169946596554613791749856597392273794167264953778019920983554278661791231266993747307777305693244301668393330115545155426568649374921286870491217542459678311329692484924667442619990339728256748734602011504422287804661243201830161082321839086547710423982285313165596856880052265714744288233175394565438819286244326625033453881995900851052113831244918618026244321955404339857228413412544094117717221568670862917421240531106205228429861992736294062088347548536451281232796090972139537753600230767656942082199430346487833485444927135394502245913343746649377016556057633846970629187257454265058794146301766397604574743110815567470916527087481252671599137932405273046136939611698925898083119063225107779285620719994594877006118010022961323045882945584409524966111583428049086438608807964405577636918577437540258968559272525145634043852178258905995539546274513854544529167610429692679708935800562345019185714890304184957674008193599732187119574963570959678251710962647520688908064076514458931328707674541696071079316927042851680934133110463535062422098103632167719104207861621842137639381946256972867814136363896201239769104654189568061973231484142245500716172158513213020306841760872158927020988791089380810459033972765473264169168454456276007595613671035845756490944306924525320850030910687831575615198475675691912847846546925586651115579134612724253360836351313421839051771545112284644551360160135132289485432715047608393075561009087860966638706122786902
748318193316067014849571630047052622282384062668184487883745481319943803876138301288598852642019922861882084995886408885213525014576153964826474510259025307431729568996364996157075518558371659353671254485150893629045677366300355624573747791009879924991469672240414816012895309440154889426137831400878043114317418580718261851490511387448313584390672289494082582860216502889272283874264327861686903819605301558944594518087351972460082215293439808282541261282571572093509853828007385604729109411840060844852353778335033068619777245018863640703449733664731006020181287928869918618244184539689947772594821691371336474704531729798092458443611296189975956962409718455640205114325895918447249209429303016514887130798021023790655365251547802980594075294405131458075515377948616358799011581920198088796949671874482241568364635343261602426329347616344581638901638051238941845239734218414968892623984896486420934098166814947711551770095626690298501015135375998012725012419711198715265937474847789354887778151929311714311674447738829410646150287513277094745047639228748906629898415402593508340351420351361688192482389980277066669163421334243120545073593886166876911881857761181357713324839652098820859823912986063868228047543624089565229214108598520373305446259532613402348646892750605268937551484032985420869912210525970056285767077025676953009789700464089200098521069802954196998021380532957981594782899344432454915653278452238405512404452082264354206563133107029407223715527705042634820739844548895892488613976570791454144276535845729513297190919476944119109667974742626755909538320391696734942613600322630774286841050400613510521944137781580950057145268460098103521092490400279580507364369610212411377397171648695254931148050401265683512688295984139832226763778045006265072417317573952197968907548251993292596498016270686656580301788774056151671597319273204793762473755058550528396602945669925221736008740812120142090710419375985717214313380174251415824918247109050847159772494170493202541652393232332588515888933370971363108925715314177619783260337501090262840664158013713593565292780884563059517700814439941146742918503607488523666547448699280832305168157116029118363741479584921008605289814695477508123388969431528610212027367470499039304170351713421269234867005666275062290586369118822289031705103054068820969708755453293694340639812976964780318254516421783473477164710584232385945801830527562139101869976043058440686657123468696794560441557421000391797583489799358827518815246759308789281592434921975453876683056846684207754098217812470533545231947973989533201759886402810588255576980043971205383124594289573776960018574973352499650135093689259580218638117259065064368821271568157510217129007659927503702282839639629159732511734185867210234973177659694542836255193715560091436803293119628425466284031424443706484323903749064108113007928489557672434812000903098884572709077508736388732996425550504738125289759629348228789176199207251383093882882925104168376227582040819189336036538752841167857037209897188329869219278166296758445801749118091196630481874341550677908639488314892415043004767045279712834822115222028370628573142441078237925136450866775666228049772113971406216641163247567842166129614771090188260946773776864061767214842938949766713801227889413090265535110961183470125651975408070953840609168639369066737866272094294342642604029021583173450037274625889926220498771211784055633484924903260035085690993823927772974984135656148307882623633223683807098223460122742413790364734517359252157547571609342709351929017239549214264906911152715233381091240428
121028937384881673589539345089306977155229891996989038858832754090443003219868340034702712200201596993716906503305475770953987485806700244910455048900617271891680313945280361656339415713346372225504775474607560550241087643821216888489169403712589019484906853797222445620094838194915327245022762185891695074057949837598210066044819965193601102615769471762025717020486849146168940684041408335875621183192108380056321445620189415059457800253187474719116048406779977654148306221790693308538751292989830095802775541454350587689849441791365358916200987252220490551835546037065331831767161107380097866252474886914760776644701471930744763024116603356717655648744405779905319962716329720091094492492164560306188277729477507647774464525863289191591074442523200829182095180210837003538813309832158946086801279542247520719241346483349639150948130975414332442092999307514810779190023461281223301617994299306188005334145506339321393396468616164169552202164479954172431711657444713641977332048993650747678441499295480730258564429423817876415064928783617679786771585107842357026402133880188756019892340568684232155856285086455252583770106205322242449879906252634840107743224881725586022333020763999338541520153438477254429178951306370503204449177977523708719582779767996861136265322911186296311646851599346606934605575459560631558300336976340002766851512938436388860908283761411577320035275651587459065670254394379311048385713132944906049265823631089495350900826731544972263966480886180415739778884728921746189741897217007700098624496537590127270152276345108749069480122106849520630025190116559635805524291802055869042596852610474128345184667369385800277002529653563667216198836724282269339503259303909945831686655422346548570208755046175205218537215672826799034181355206029998953664701065579005321295413369244724922124363245230428951884617791223380696742339806948872705875033892283950951352091231092581590069603951563677360671090505662996035718764232479207528361608055976977787564767672105212223271848214844466312614875842260926088757643317310232637688648225946912110323677375581221334705568059580083101274816739620195835980239674144898672768458698193767837571679367232130815861910459950589709910646869194634480385741438296295471313721736698361845581445057486761243224515199433621829161914680260911217930018647880500613516031443500761892134416024880917410512322903571792054979279709245024799408426961588184426161637800447594782122408732041244211691998055726491182436619218357147628914258057718717436880003241130087048193739622950171430900984769272374988759386399425305953316078916188108635059824445789427993465149159528848697574880258233535716778648268280511408854297327881977657369660057277001625924043016886599468629837172705958098087309018201209310034300587965526947880498092054843054676110346547480672906743997636125924346377199958438628123919854702024148800768808188480878923915913694632931132768493297772016466417275872591223547844808134333280500877588552646861195769621722393086937957571658218524162043419723839899327348034292623407223381551022091012629492497424232716988420232973032601617905756731112354658902982983131151236076067739689981538122869996420146098525797936912460163460887623212862056342159014791886321946596374834825642916162785329482393132294402310432772887681395502133482663886874532592815878545038909915619496324788550350902893909737189880039990261320158726786378730956781096253110080544894188579835659020636806996431650339120299443277267708693052407184165920700961392864019667257500870122181497331336958096003697517649513500402859262492033981110
14953227533621844500744331562434532484217986108346261345897591234839970751854223281677187215956827243245910829019886390369784542622566912542747056097567984857136623679023878478161201477982939080513150258174523773529510165296934562786122241150783587755373348372764439838082000667214740034466322776918936967612878983488942094688102308427036452854504966759697318836044496702853190637396916357980928865719935397723495486787180416401415281489443785036291071517805285857583987711145474240156416477194116391354935466755593592608849200546384685403028080936417250583653368093407225310820844723570226809826951426162451204040711501448747856199922814664565893938488028643822313849852328452360667045805113679663751039248163336173274547275775636810977344539275827560597425160705468689657794530521602315939865780974801515414987097778078705357058008472376892422189750312758527140173117621279898744958406199843913365680297721208751934988504499713914285158032324823021340630312586072624541637765234505522051086318285359658520708173392709566445011404055106579055037417780393351658360904543047721422281816832539613634982525215232257690920254216409657452618066051777901592902884240599998882753691957540116954696152270401280857579766154722192925655963991820948894642657512288766330302133746367449217449351637104725732980832812726468187759356584218383594702792013663907689741738962252575782663990809792647011407580367850599381887184560094695833270775126181282015391041773950918244137561999937819240362469558235924171478702779448443108751901807414110290370706052085162975798361754251041642244867577350756338018895379263183389855955956527857227926155524494739363665533904528656215464288343162282921123290451842212532888101415884061619939195042230059898349966569463580186816717074818823215848647734386780911564660755175385552224428524049468033692299989300783900020690121517740696428573930196910500988278523053797637940257968953295112436166778910585557213381789089945453947915927374958600268237844486872037243488834616856290097850532497036933361942439802882364323553808208003875741710969289725499878566253048867033095150518452126944989251596392079421452606508516052325614861938282489838000815085351564642761700832096483117944401971780149213345335903336672376719229722069970766055482452247416927774637522135201716231722137632445699154022395494158227418930589911746931773776518735850032318014432883916374243795854695691221774098948611515564046609565094538115520921863711518684562543275047870530006998423140180169421109105925493596116719457630962328831271268328501760321771680400249657674186927113215573270049935709942324416387089242427584407651215572676037924765341808984312676941110313165951429479377670698881249643421933287404390485538222160837088907598277390184204138197811025854537088586701450623578513960109987476052535450100439353062072439709976445146790993381448994644609780957731953604938734950026860564555693224229691815630293922487606470873431166384205442489628760213650246991893040112513103835085621908060270866604873585849001704200923929789193938125116798421788115209259130435572321635660895603514383883939018953166274355609970015699780289236362349895374653428746875
Now another approach I have applied is Divide and Conquer via recursion,
i.e. fib(n) = fib(n-1) + fib(n-2), recursing further for n-1 and n-2 ... down to 2 and 1, which is programmed as:
package com.company.dynamicProgramming;

import java.math.BigInteger;

public class FibonacciByBigDecimal {

    public static void main(String... args) {
        int n = 100000;
        BigInteger[] fibOfnS = new BigInteger[n + 1]; // memoization cache
        System.out.println("fibonacci of " + n + " is : " + fibByDivCon(n, fibOfnS));
    }

    static BigInteger fibByDivCon(int n, BigInteger[] fibOfnS) {
        if (fibOfnS[n] != null) {
            return fibOfnS[n]; // already computed
        }
        if (n == 1 || n == 2) {
            fibOfnS[n] = BigInteger.ONE;
            return BigInteger.ONE;
        }
        // creates 2 further entries in stack
        BigInteger fibOfn = fibByDivCon(n - 1, fibOfnS).add(fibByDivCon(n - 2, fibOfnS));
        fibOfnS[n] = fibOfn;
        return fibOfn;
    }
}
When I ran the code for n = 100,000, the result was as below:
Exception in thread "main" java.lang.StackOverflowError
at com.company.dynamicProgramming.FibonacciByBigDecimal.fibByDivCon(FibonacciByBigDecimal.java:29)
at com.company.dynamicProgramming.FibonacciByBigDecimal.fibByDivCon(FibonacciByBigDecimal.java:29)
at com.company.dynamicProgramming.FibonacciByBigDecimal.fibByDivCon(FibonacciByBigDecimal.java:29)
Above you can see the StackOverflowError being created. The reason for this is too much recursion:
// creates 2 further entries in stack
BigInteger fibOfn = fibByDivCon(n-1, fibOfnS).add( fibByDivCon(n-2, fibOfnS)) ;
So each entry in the stack creates 2 more entries, and so on... Eventually so many entries are created that the system is unable to handle them in the stack, and a StackOverflowError is thrown.
For prevention:
From the above example's perspective:
* Avoid the recursive approach, or reduce/limit the recursion depth (e.g., if n is too large, split it up so the system can handle it within its limits).
* Use another approach, like the loop approach I used in the 1st code sample. (I don't at all intend to degrade Divide & Conquer or recursion, as they are legendary approaches in many of the most famous algorithms; my intention is to limit or stay away from recursion if I suspect stack overflow issues.)
A: Considering this was tagged with "hacking", I suspect the "stack overflow" he's referring to is a call stack overflow, rather than a higher level stack overflow such as those referenced in most other answers here. It doesn't really apply to any managed or interpreted environments such as .NET, Java, Python, Perl, PHP, etc, which web apps are typically written in, so your only risk is the web server itself, which is probably written in C or C++.
Check out this thread:
https://stackoverflow.com/questions/7308/what-is-a-good-starting-point-for-learning-buffer-overflow
A: Stack
A stack, in this context, is the last in, first out buffer you place data while your program runs. Last in, first out (LIFO) means that the last thing you put in is always the first thing you get back out - if you push 2 items on the stack, 'A' and then 'B', then the first thing you pop off the stack will be 'B', and the next thing is 'A'.
When you call a function in your code, the next instruction after the function call is stored on the stack, and any storage space that might be overwritten by the function call. The function you call might use up more stack for its own local variables. When it's done, it frees up the local variable stack space it used, then returns to the previous function.
Stack overflow
A stack overflow is when you've used up more memory for the stack than your program was supposed to use. In embedded systems you might only have 256 bytes for the stack, and if each function takes up 32 bytes then you can only have function calls 8 deep - function 1 calls function 2 who calls function 3 who calls function 4 .... who calls function 8 who calls function 9, but function 9 overwrites memory outside the stack. This might overwrite memory, code, etc.
Many programmers make this mistake by calling function A that then calls function B, that then calls function C, that then calls function A. It might work most of the time, but just once the wrong input will cause it to go in that circle forever until the computer recognizes that the stack is overblown.
Recursive functions are also a cause for this, but if you're writing recursively (ie, your function calls itself) then you need to be aware of this and use static/global variables to prevent infinite recursion.
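As a sketch of that kind of guard (the Node type and the depth limit are made up for illustration), a static depth counter errors out instead of blowing the stack:
private static int depth;
private const int MaxDepth = 1000; // arbitrary cap, tune to your stack size

static void Traverse(Node node) // Node is a hypothetical tree type
{
    if (++depth > MaxDepth)
        throw new InvalidOperationException("Recursion too deep - likely an unintended cycle.");
    try
    {
        foreach (Node child in node.Children)
            Traverse(child);
    }
    finally
    {
        depth--; // unwind the counter on the way back out
    }
}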
Generally, the OS and the programming language you're using manage the stack, and it's out of your hands. You should look at your call graph (a tree structure that shows from your main what each function calls) to see how deep your function calls go, and to detect cycles and recursion that are not intended. Intentional cycles and recursion need to be artificially checked to error out if they call each other too many times.
Beyond good programming practices, static and dynamic testing, there's not much you can do on these high level systems.
Embedded systems
In the embedded world, especially in high reliability code (automotive, aircraft, space) you do extensive code reviews and checking, but you also do the following:
* Disallow recursion and cycles - enforced by policy and testing
* Keep code and stack far apart (code in flash, stack in RAM, and never the twain shall meet)
* Place guard bands around the stack - empty areas of memory that you fill with a magic number (usually a software interrupt instruction, but there are many options here); hundreds or thousands of times a second you look at the guard bands to make sure they haven't been overwritten.
* Use memory protection (i.e., no execute on the stack, no read or write just outside the stack)
* Interrupts don't call secondary functions - they set flags, copy data, and let the application take care of processing it (otherwise you might get 8 deep in your function call tree, have an interrupt, and then go out another few functions inside the interrupt, causing the blowout). You have several call trees - one for the main processes, and one for each interrupt. If your interrupts can interrupt each other... well, there be dragons...
High-level languages and systems
But in high level languages run on operating systems:
* Reduce your local variable storage (local variables are stored on the stack - although compilers are pretty smart about this and will sometimes put big locals on the heap if your call tree is shallow)
* Avoid or strictly limit recursion
* Don't break your programs up too far into smaller and smaller functions - even without counting local variables, each function call consumes as much as 64 bytes on the stack (32-bit processor, saving half the CPU registers, flags, etc.)
* Keep your call tree shallow (similar to the above statement)
Web servers
It depends on the 'sandbox' you have whether you can control or even see the stack. Chances are good you can treat web servers as you would any other high level language and operating system - it's largely out of your hands, but check the language and server stack you're using. It is possible to blow the stack on your SQL server, for instance.
A: A stack overflow in real code occurs very rarely. Most situations in which it occurs are recursions where the termination has been forgotten. It might however rarely occur in highly nested structures, e.g. particularly large XML documents. The only real help here is to refactor the code to use an explicit stack object instead of the call stack.
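A sketch of that refactoring (Node, Children and Process are hypothetical placeholders): the pending work lives in a heap-allocated stack object instead of on the call stack, so depth is bounded by memory rather than stack size:
static void Visit(Node root)
{
    var pending = new Stack<Node>(); // System.Collections.Generic
    pending.Push(root);
    while (pending.Count > 0)
    {
        Node current = pending.Pop();
        Process(current); // placeholder for the real per-node work
        foreach (Node child in current.Children)
            pending.Push(child);
    }
}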
A: Most people will tell you that a stack overflow occurs with recursion without an exit path - while mostly true, if you work with big enough data structures, even a proper recursion exit path won't help you.
Some options in this case:
* Breadth-first search
* Tail recursion - a .NET-specific great blog post (sorry, 32-bit .NET)
A: Stack overflow occurs when your program uses up the entire stack. The most common way this happens is when your program has a recursive function which calls itself forever. Every new call to the recursive function takes more stack until eventually your program uses up the entire stack.
{
"language": "en",
"url": "https://stackoverflow.com/questions/26158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129"
}
Q: Is there a Scheduler/Calendar JS Widget library? I am looking for some JavaScript based component to be used as a course scheduler which would be a cross between Google Calendar and the login time. I do not know if the right term for this is Course Scheduler but I shall describe this in more detail here.
Course Scheduler
The widget would be used to enter the dates and times of a course. As an example, say I run a programming course 3 days a week, on Mon, Tue and Wed, from 7:00 am to 9:00 am (2 hours every day), from 1st September to 30th November. I could answer various questions and the course data would be displayed in the calendar. It would also allow for non-pattern-based timings, where each week is different from the others, etc.
Question
So would I end up creating something from scratch? Would it be sensible to use Google Calendar API for this? I did a Google search for some widgets, but I believe I need better keywords, as I could not find anything close to what I am looking for. Any tips? Commercial libraries would also work for me. Thanks.
A: Try the following open-source one.
wdCalendar is a jQuery-based Google Calendar clone. It covers most Google Calendar features.
* Day/week/month views provided.
* Create/update/remove events by drag & drop.
* Easy way to integrate with a database.
* All-day events / multi-day events provided.
A: This could be what you're looking for:
dhtmlxScheduler
* It has day/week/month views
* It is free
* Data can be loaded in XML or iCal formats
You can populate the calendar using any server-side scripting language. If you wanted to, you could just get your google calendar's xml data as per Mickey's example in the accepted post above, process it in your server-side language of choice and feed the calendar control with that data.
EDIT
I also found this project on Google code recently:
JQuery Frontier Calendar
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Why is Harvest being purchased at all? Does your work environment use Harvest SCM? I've used this now at two different locations and find it appalling. In one situation I wrote a conversion script so I could use CVS locally and then daily import changes to the Harvest system while I was sleeping. The corp was fanatical about using Harvest, despite 80% of the programmers crying for something different. It was needlessly complicated, slow and heavy. It is now a job requirement for me that Harvest is not in use where I work.
Has anyone else used Harvest before? What's your experience? As bad as mine? Did you employ other, different workarounds? Why is this product still purchased today?
A: I used Harvest during a short gig in the banking industry a few years ago. I agree that it was practically unusable, but the people in charge of QA seemed to love it.
A: I worked for a company that had two choices; ClearCase or Harvest. Subversion hadn't ever been considered, and the reason was that ClearCase (IBM) and Harvest (CA) both had longstanding mainframe contracts already.
A: We've used Harvest for about ten years (2000-2010) and even though we are now looking at replacing it, I believe it has served us very well.
Harvest (let's stick with that name even though it's no longer its official name) was the first major tool we implemented to support us in R&D, and at the time none of us knew much about the many aspects of the application lifecycle (versioning of code, branching, automated testing, regression testing, quality assurance, deployment to numerous runtime environments and production, rollback, emergency fixes, maintenance updates etc.); today we know a lot more and our development processes serve us very well (not that there isn't room for many improvements).
We do not have a very hierarchical organisation (we don't have a lot of inspectors that need to approve changes) but it's very helpful to have support for "checkpoints" - points in the development process where something needs to happen (e.g. functional testing or integration testing).
The drawback (for us) with Harvest in regard to usability has been "what a programmer needs to do to change x lines of code". Today there are a lot of easier and more efficient ways than Harvest to get write access to source code files, make your updates and then return the files / move them to another stage of the development process (testing, deployment etc.). Another drawback is the price tag; it's expensive.
Gains we've had with Harvest:
It supports workflow, and therefore we've been able to have a single system to manage code versioning, workflow and process automation. It's easier to maintain and improve a single system than many.
In addition to providing command-line access to internal processes (making it possible to script special solutions when your processes require them), Harvest is also easily configured through a graphical interface.
It has the concept of a "Package", which makes it easy to attach plenty of metadata to code changes and to handle the changes independently of other changes (versioning at the file level rather than change sets containing the complete code mass). This is helpful for handling independent emergency and maintenance changes.
If a developer is only a programmer and thinks only about the coding aspect of software development, then I imagine he or she might get very frustrated with Harvest.
If a developer understands that software development is a lot more than coding, and that coding is only the very beginning of the lifecycle of software, then I believe he would see a lot of benefits in Harvest.
A: I had the benefit of using Harvest at a bank, and you'll never find a more wretched hive of scum and villainy: backwards, triple-forking, undocumented check-in gauntlets that require 15 steps to make one simple change. Never mind that they weren't even using branching. This is an evil tool - don't let it get you in its clutches.
A: Chances are, your company has some sort of contract with CA - are you using a lot of other CA software in-house?
Edit: Guess so!
A: OK, I'm going to answer this in a couple of episodes, because it's late here and Harvest is a big topic.
Firstly, CA Harvest (which is what version 7 of the product is called; version 5 was CCC, whose expansion I can't recall, and version 12 is called CA SCM) is a lot more than just an SCM tool - in the same way ClearCase is a lot more than an SCM tool. SVN, CVS, git and hg are all base-standard SCM and little more.
What you get with Harvest is SCM + Policy. It gives you a place to store and version your code and wrap it all in a policy of how that code matures through your organization from dev to prod. Do you have a policy in your organization that a Lead Developer needs to sign off on the code before it's released to QA? Harvest allows you to define the signoff as a policy, and enforces it - you can't migrate the code from the "Dev" state to the "QA" state until one of the people in the project designated as a Lead Dev does exactly that. Do you have a policy that any SQL code needs signoff by a DBA before it progresses? Harvest allows you to define that policy, and enforces it - so you might need both Lead Dev and DBA signoff before code migrates.
Harvest is by no means a tool for most software organizations - it is typically used in the finance industry, or in businesses where a very strong regulatory framework governs what they can do. Banks need to comply with Sarbanes-Oxley, which has very strong auditing requirements. Harvest provides the ability to define all kinds of controls and process around how changes to the bank's assets move through their lifecycle. I know large public transport organizations that are responsible for the safety and punctuality of millions of people every day, that need the tightly defined control mechanisms that a tool like Harvest provides. I have also seen Harvest used in environments where 1000s of developers use it every day - yes, I'm not exaggerating, literally 1000s of devs in one organization, writing code for a worldwide retailer, pushing IT solutions out their door every day to the stores around the world.
Harvest is not perfect, though version 12 is much better. It has too many "that's just stupid" moments; it does per-file versioning a la CVS, and CVS-like branching and directory versioning (or lack thereof), with all the fun we've come to know and fear. Once you know it and accept it, though, it isn't inherently slower than any other SCM I've used. It just has a bigger job to do than just versioning your code.
Another big win, and it's even bigger with version 12, is its integration with other CA tools (and its ability to integrate with non-CA tools, though not many at the moment) - defect tracking with Quality Centre, trouble ticketing with Unicenter Service Desk, software deployment to the desktop with SDM. You can define bridges between these apps that result in a lot tighter integration of these concerns, with the usual positive effects on accuracy and timeliness.
If you're dealing with getting software out to a worldwide enterprise, with thousands of desktops and servers, mainframe/midrange/middleware systems, iron-clad change control processes, regulations, contracts, auditors - just a whole bunch of complexity - Harvest is just one tool in a whole suite of tools you're going to need. If you just want a simple SCM for a team of 10 devs supporting a few hundred customers, it's not a great way to go.
I'll try to add something about how Harvest actually works next time - repositories, projects, views, packages, forms, processes etc. That might help explain why some organizations use it, and why it's not for everyone.
A: I have been using Harvest for the last 4 years and I love it. The kind of support it gives you for controlling code movement is really fantastic. We use Harvest to deploy applications onto WebSphere. It also does an amazing job of deploying the plugins into the web server along with the application. When you want to have a process in place for moving code in a big enterprise environment, I don't think any other tool even comes close to Harvest.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: vimdiff and CVS integration I've always wanted to be able to get a reasonably elegant way of getting vimdiff to work with a CVS controlled file. I've found numerous (somewhat hacky) scripts around the internet (best example here) that basically check out the file you are editing from CVS to a temp file, and vimdiff the two. None of these take into account branches, and always assume you're working from MAIN, which for me is completely useless.
So, my question is this: has anyone out there found a decent solution for this that does more than this script?
Or failing that, does anyone have any ideas of how they would implement this, or suggestions for what features you would consider vital for something that does this? My intention is that, if no one can suggest an already built solution to either use or build from, we start building one from here.
A: I've been working on a similar script here: http://github.com/ghewgill/vim-scmdiff (in fact, they may have the same ancestry). I haven't used scmdiff with cvs, but it should do a diff against the branch you have checked out. You can also specify that you want to diff against a particular revision (with :D revision). Hopefully this helps, and feel free to contribute if you've got improvements!
A: @Greg Hewgill:
Thanks for the script! I had a couple of issues with it, though, so here's what I'd change:
line 21:
< map <silent> <C-d> :call <SID>scmToggle()<CR>
--
> map <silent> <C-h> :call <SID>scmToggle()<CR>
I use Ctrl-d for page-down (too lazy to move all that way over to PgDn), so I had to switch to Ctrl-h.
line 112:
< let cmd = 'cd ' . g:scmBufPath . ' && ' . g:scmDiffCommand . ' diff ' . g:scmDiffRev . ' ' . expand('%:p') . ' > ' . tmpdiff
--
> if g:scmDiffUseAbsPaths
> let cmd = 'cd ' . g:scmBufPath . ' && ' . g:scmDiffCommand . ' diff ' . g:scmDiffRev . ' ' . expand('%:p') . ' > ' . tmpdiff
> else
> let cmd = g:scmDiffCommand . ' diff ' . g:scmDiffRev . ' ' . bufname('%') . ' > ' . tmpdiff
> endif
I had issues with not being able to use absolute paths with CVS. I don't know if this is a weirdness of our local set up here, or if it's a global CVS thing. So, I've made a configurable variable that you can put in your .vimrc to use relative path instead.
It now seems to work exactly how I wanted, so I'll keep bashing away and see if I can find anything else that breaks, posting fixes as I go.
Edit: Forgot to add: please feel free to add these changes to your script on github if you feel they're worthwhile.
A: You could change the call to cvs to take branches into account. That shouldn't be too hard. It would be a bit harder to change the whole function and make the branch you're working on a variable (argument, session, global or otherwise).
A: VCSCommand is another actively maintained vim script for VCS integration. It has support for CVS/SVN/SVK/git.
I use it all the time for SVN and never had any complaints. The shortcuts use mapleader, so it is unlikely that they will overwrite existing mappings.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Filtering collections in C# I am looking for a very fast way to filter down a collection in C#. I am currently using generic List<object> collections, but am open to using other structures if they perform better.
Currently, I am just creating a new List<object> and looping thru the original list. If the filtering criteria matches, I put a copy into the new list.
Is there a better way to do this? Is there a way to filter in place so there is no temporary list required?
A: You can use IEnumerable to eliminate the need for a temp list.
public IEnumerable<T> GetFilteredItems<T>(IEnumerable<T> collection) // note the <T> type parameter
{
    foreach (T item in collection)
        if (Matches<T>(item))
        {
            yield return item;
        }
}
where Matches is the name of your filter method. And you can use this like:
IEnumerable<MyType> filteredItems = GetFilteredItems(myList);
foreach (MyType item in filteredItems)
{
// do sth with your filtered items
}
This will only call the GetFilteredItems function when needed, and in cases where you do not use all the items in the filtered collection, it may provide a good performance gain.
A: To do it in place, you can use the RemoveAll method of the "List<>" class along with a custom "Predicate" delegate... but all that does is clean up the code - under the hood it's doing the same thing you are... but yes, it does it in place, so you do save the temp list.
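A minimal sketch of that in-place approach, assuming a hypothetical Matches predicate - note that RemoveAll deletes the items for which the predicate returns true, so it is negated here in order to keep the matches:
myList.RemoveAll(x => !Matches(x)); // removes the non-matches, keeping matches in place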
A: You can use the FindAll method of the List, providing a delegate to filter on. Though, I agree with @IainMH that it's not worth worrying yourself too much unless it's a huge list.
A:
If you're using C# 3.0 you can use LINQ
Or, if you prefer, use the special query syntax provided by the C# 3 compiler:
var filteredList = from x in myList
where x > 7
select x;
A: Using LINQ is relatively much slower than using a predicate supplied to the List's FindAll method. Also be careful with LINQ, as the enumeration of the list is not actually executed until you access the result (deferred execution). This can mean that, when you think you have created a filtered list, the contents may differ from what you expected when you actually read it.
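A small sketch of that deferred-execution pitfall (the list contents are arbitrary):
List<int> numbers = new List<int> { 1, 9, 12 };
IEnumerable<int> filtered = numbers.Where(x => x > 7); // nothing has executed yet

numbers.Add(100); // mutate the source after defining the query

// Enumeration happens here, so the output includes 100: 9, 12, 100
foreach (int n in filtered)
    Console.WriteLine(n);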
A: If you're using C# 3.0 you can use linq, which is way better and way more elegant:
List<int> myList = GetListOfIntsFromSomewhere();
// This will filter ints that are not > 7 out of the list; Where returns an
// IEnumerable<T>, so call ToList to convert back to a List<T>.
List<int> filteredList = myList.Where(x => x > 7).ToList();
If you can't find the .Where, that means you need to import using System.Linq; at the top of your file.
A: Here is a code example I put together showing the same list filtering done three different ways: iteratively, with a lambda, and with a LINQ query.
using System;
using System.Collections.Generic;
using System.Linq;

#region List Filtering
static void Main(string[] args)
{
ListFiltering();
Console.ReadLine();
}
private static void ListFiltering()
{
var PersonList = new List<Person>();
PersonList.Add(new Person() { Age = 23, Name = "Jon", Gender = "M" }); //Non-Constructor Object Property Initialization
PersonList.Add(new Person() { Age = 24, Name = "Jack", Gender = "M" });
PersonList.Add(new Person() { Age = 29, Name = "Billy", Gender = "M" });
PersonList.Add(new Person() { Age = 33, Name = "Bob", Gender = "M" });
PersonList.Add(new Person() { Age = 45, Name = "Frank", Gender = "M" });
PersonList.Add(new Person() { Age = 24, Name = "Anna", Gender = "F" });
PersonList.Add(new Person() { Age = 29, Name = "Sue", Gender = "F" });
PersonList.Add(new Person() { Age = 35, Name = "Sally", Gender = "F" });
PersonList.Add(new Person() { Age = 36, Name = "Jane", Gender = "F" });
PersonList.Add(new Person() { Age = 42, Name = "Jill", Gender = "F" });
//Logic: Show me all males that are less than 30 years old.
Console.WriteLine("");
//Iterative Method
Console.WriteLine("List Filter Normal Way:");
foreach (var p in PersonList)
if (p.Gender == "M" && p.Age < 30)
Console.WriteLine(p.Name + " is " + p.Age);
Console.WriteLine("");
//Lambda Filter Method
Console.WriteLine("List Filter Lambda Way");
foreach (var p in PersonList.Where(p => (p.Gender == "M" && p.Age < 30))) //.Where is an extension method
Console.WriteLine(p.Name + " is " + p.Age);
Console.WriteLine("");
//LINQ Query Method
Console.WriteLine("List Filter LINQ Way:");
foreach (var v in from p in PersonList
where p.Gender == "M" && p.Age < 30
select new { p.Name, p.Age })
Console.WriteLine(v.Name + " is " + v.Age);
}
private class Person
{
public Person() { }
public int Age { get; set; }
public string Name { get; set; }
public string Gender { get; set; }
}
#endregion
A: List<T> has a FindAll method that will do the filtering for you and return a subset of the list.
MSDN has a great code example here: http://msdn.microsoft.com/en-us/library/aa701359(VS.80).aspx
EDIT: I wrote this before I had a good understanding of LINQ and the Where() method. If I were to write this today I would probably use the method Jorge mentions above. The FindAll method still works if you're stuck in a .NET 2.0 environment, though.
A: If your list is very big and you are filtering repeatedly, you can sort the original list on the filter attribute and binary search to find the start and end points.
Initial time O(n*log(n)) then O(log(n)).
Standard filtering will take O(n) each time.
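A sketch of that idea in C#, assuming a List<int> sorted ascending, distinct values, and a half-open range [lo, hi) to keep. Note that List<T>.BinarySearch returns the bitwise complement of the insertion point when the value is absent, and does not guarantee the first of several equal elements, hence the distinctness assumption:
static int LowerBound(List<int> sorted, int value)
{
    int i = sorted.BinarySearch(value);
    return i < 0 ? ~i : i; // index of the first element >= value
}

static List<int> FilterRange(List<int> sorted, int lo, int hi)
{
    int start = LowerBound(sorted, lo);         // O(log n)
    int end = LowerBound(sorted, hi);           // O(log n)
    return sorted.GetRange(start, end - start); // O(k) copy of the k matches
}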
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "184"
} |
Q: Visio and Forward-Engineering Do you know if I can use Visio to forward-engineer a sequence diagram into code (c#)?
Can it be done with Visio alone or do I need a plugin?
What about other diagrams?
A: You have to get the version of Visio made for users of Visual Studio. See this link: Visio for Enterprise Architects for more details. The code generation capabilities are fairly weak and you might end up getting Visio into an inconsistent state. I know that Visio will let you forward and reverse engineer both code and databases, but both capabilities are very limited and I don't recommend doing it.
In my opinion, Visio is a diagramming tool and it should be treated as such.
A: Looks like the latest version of Sparx Systems Enterprise Architect can forward engineer sequence diagrams
Sparx Systems Enterprise Architect
A: To the best of my knowledge, Visio can only forward-engineer code from class models.
As sequence diagrams only really show paths of communication between objects, I suspect that they do not contain the necessary information, except perhaps in trivial cases, for generating code.
Objects with any sort of complex behaviour patterns are likely to involve changing run-time states, which sequence diagrams aren't really capable of capturing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Revoke shared folders in windows Over the last few months/years, I have shared a folder or two with numerous people on my domain. How do I easily revoke those shares to keep access to my system nice and tidy?
A: Using computer management (an MMC snap-in. See Control Panel Administrative tools) you can see a list of all folders that are shared. You could delete the shares or change the permissions on the share to only allow access for certain people or groups.
A: You can also achieve this via the command line:
C:\>net share share-name /d
A: On Windows XP, go to:
Administrative Tools > Computer Management > System Tools > Shared Folders > Shares
This page lists all shares and lets you remove them easily, in one place.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Fastest C# Code to Download a Web Page Given a URL, what would be the most efficient code to download the contents of that web page? I am only considering the HTML, not associated images, JS and CSS.
A: Here is my answer: a method that takes a URL and returns a string.
public static string downloadWebPage(string theURL)
{
    //### download a web page to a string; using blocks dispose the client and streams
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
        using (Stream data = client.OpenRead(theURL))
        using (StreamReader reader = new StreamReader(data))
        {
            return reader.ReadToEnd();
        }
    }
}
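For example (the URL here is hypothetical):
string html = downloadWebPage("http://www.example.com/");
Console.WriteLine(html.Length);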
A: public static void DownloadFile(string remoteFilename, string localFilename)
{
WebClient client = new WebClient();
client.DownloadFile(remoteFilename, localFilename);
}
A: I think this is the fastest solution for downloading (in terms of download speed with low latency).
// WebClient vs HttpClient vs HttpWebRequest vs RestSharp
// In the end, I think the method below is the fastest.
HttpWebRequest Request = (HttpWebRequest)WebRequest.Create(url);
Request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
Request.Proxy = null;
Request.Method = "GET";
using (WebResponse Response = Request.GetResponse())
{
using (StreamReader Reader = new StreamReader(Response.GetResponseStream()))
{
return Reader.ReadToEnd();
}
}
A: System.Net.WebClient
From MSDN:
using System;
using System.Net;
using System.IO;
public class Test
{
public static void Main (string[] args)
{
if (args == null || args.Length == 0)
{
throw new ApplicationException ("Specify the URI of the resource to retrieve.");
}
WebClient client = new WebClient ();
// Add a user agent header in case the
// requested URI contains a query.
client.Headers.Add ("user-agent", "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; .NET CLR 1.0.3705;)");
Stream data = client.OpenRead (args[0]);
StreamReader reader = new StreamReader (data);
string s = reader.ReadToEnd ();
Console.WriteLine (s);
data.Close ();
reader.Close ();
}
}
A: Use the WebClient class from System.Net; on .NET 2.0 and higher.
WebClient Client = new WebClient ();
Client.DownloadFile("http://mysite.com/myfile.txt", @"C:\myfile.txt");
A: WebClient.DownloadString
public static void DownloadString (string address)
{
WebClient client = new WebClient ();
string reply = client.DownloadString (address);
Console.WriteLine (reply);
}
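Since one of the comments above compares WebClient, HttpClient, HttpWebRequest and RestSharp: as a rough sketch (not from the original answers), on .NET 4.5 and later an HttpClient-based version would look something like this:
using System.Net.Http;
using System.Threading.Tasks;

static readonly HttpClient http = new HttpClient(); // intended to be created once and reused

static async Task<string> DownloadPageAsync(string url)
{
    // GetStringAsync downloads the response body as a string asynchronously.
    return await http.GetStringAsync(url);
}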
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "64"
} |
Q: Reserved keywords in JavaScript What JavaScript keywords (function names, variables, etc) are reserved?
A: To supplement benc's answer, see Standard ECMA-262. These are the official reserved words, but only a pedant ignores the implementation to respect the standard. For the reserved words of the most popular implementations, that is, Firefox and Internet Explorer, see benc's answer.
The reserved words in ECMAScript-262 are the Keywords, Future Reserved Words, NullLiteral, and BooleanLiterals, where the Keywords are
break do instanceof typeof
case else new var
catch finally return void
continue for switch while
debugger function this with
default if throw
delete in try
the Future Reserved Words are
abstract export interface static
boolean extends long super
byte final native synchronized
char float package throws
class goto private transient
const implements protected volatile
double import public
enum int short
the NullLiteral is
null
and the BooleanLiterals are
true
false
A: Here is a browser and language version agnostic way to determine if a particular string is treated as a keyword by the JavaScript engine. Credits to this answer which provides the core of the solution.
function isReservedKeyword(wordToCheck) {
var reservedWord = false;
if (/^[a-z]+$/.test(wordToCheck)) {
try {
eval('var ' + wordToCheck + ' = 1');
} catch (error) {
reservedWord = true;
}
}
return reservedWord;
}
A: None of the current answers warn that regardless of ES-Dialect, browsers tend to have their own lists of reserved keywords, methods etc on top of what ES dictates.
For example, IE9 prohibits use of logical names like: addFilter, removeFilter (they, among others, are reserved methods).
See http://www.jabcreations.com/blog/internet-explorer-9 for a more extensive 'currently known' list specific to IE9. I have yet to find any official reference to them on MSDN (or elsewhere).
A: I was just reading about this in JavaScript & jQuery: The Missing Manual:
Not all of these reserved words will cause problems in all browsers, but it’s best to steer clear of these names when naming variables.
JavaScript keywords: break, case, catch, continue, debugger, default, delete, do, else, false, finally, for, function, if, in, instanceof, new, null, return, switch, this, throw, true, try, typeof, var, void, while, with.
Reserved for future use: abstract, boolean, byte, char, class, const, double, enum, export, extends, final, float, goto, implements, import, int, interface, let, long, native, package, private, protected, public, short, static, super, synchronized, throws, transient, volatile, yield.
Pre-defined global variables in the browser: alert, blur, closed, document, focus, frames, history, innerHeight, innerWidth, length, location, navigator, open, outerHeight, outerWidth, parent, screen, screenX, screenY, statusbar, window.
A: Here is a list from Eloquent JavaScript book:
*
*break
*case
*catch
*class
*const
*continue
*debugger
*default
*delete
*do
*else
*enum
*export
*extend
*false
*finally
*for
*function
*if
*implements
*import
*in
*instanceof
*interface
*let
*new
*null
*package
*private
*protected
*public
*return
*static
*super
*switch
*this
*throw
*true
*try
*typeof
*var
*void
*while
*with
*yield
A: Here is my poem, which includes all of the reserved keywords in JavaScript, and is dedicated to those who remain honest in the moment, and not just try to score:
Let this long package float,
Goto private class if short.
While protected with debugger case,
Continue volatile interface.
Instanceof super synchronized throw,
Extends final export throws.
Try import double enum?
- False, boolean, abstract function,
Implements typeof transient break!
Void static, default do,
Switch int native new.
Else, delete null public var
In return for const, true, char
…Finally catch byte.
A: We should be linking to the actual sources of info, rather than just the top google hit.
http://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Reserved_Words
JScript 8.0:
http://msdn.microsoft.com/en-us/library/ttyab5c8.aspx
A: benc's answer is excellent, but for my two cents, I like the w3schools' page on this:
http://www.w3schools.com/js/js_reserved.asp
In addition to listing the keywords reserved by the standard, it also has a long list of keywords you should avoid in certain contexts; for example, not using the name alert when writing code to be run in a browser. It helped me figure out why certain words were highlighting as keywords in my editor even though I knew they weren't keywords.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "169"
} |
Q: Asp.net website first start is very slow The first time I load the website on the production web server, it starts very slowly; subsequent pages load very quickly (including the home page).
I precompiled the site, but nothing changes.
I don't have any code at Application start.
I don't have cached items.
Any ideas? How can I find out what is happening?
A: When you published the site, did you choose to make the website "updatable" in the publish website settings or not? If I remember correctly, the aspx/ascx files need to be compiled as well, and if they are "updatable" then the first start will cause a recompile of those resources.
A: It's just your app domain loading up and loading any binaries into memory. Also, it's initializing static variables, so if you have a static variable that loads up a lot of data from the db, it might take a bit.
A: Have you turned on tracing in your web.config?
A: Try clearing your event log?
A: Use http://www.iis.net/expand/ApplicationWarmUp for warming up your app.
This is for IIS 7.5, so it will work if you are running on Windows Server 2008 R2.
A: Make sure you publish your application in 'release' and not 'debug'. I've noticed this decreases loading time considerably. The web.config file will be updated.
A: This sounds very much like background compiling; though if you're precompiling, that shouldn't be an issue.
First thing I would look at is your ORM (if any). NHibernate, in particular, has a serious startup penalty, as it runs multiple compilers in the background at startup to turn each class in your data layer into its own in-memory assembly.
A: Just a quick nod at Darren. That's typical behavior of a .NET app after a DLL update is made. After the initial load everything should zip along just fine.
A: When you say "precompile" the site, are you using the aspnet_compiler utility to precompile, or simply using the "Build site" option in Visual Studio?
If you are not carrying out the former, I recommend giving it a spin. Coupled with Web Deployment Projects, you should have an easier time deploying your site for each release.
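For reference, a minimal aspnet_compiler invocation that precompiles a site to a target directory looks something like the following (the paths and virtual directory name here are hypothetical):
aspnet_compiler -v /MySite -p "C:\source\MySite" "C:\deploy\MySite"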
A: The initial slowness is a couple things:
*
*The appDomain is being setup
*ASP.NET is parsing and compiling the ASPX pages.
*Global Contexts are being initialized.
This is normal behavior for ASP.NET.
A:
@Mickey: No, it is turned off. Do I need to turn it on to find out?
The trace log will show you how long each action takes. It could help you find what is taking so long.
Here is a link that might help you get it set up.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Your favourite algorithm and the lesson it taught you What algorithm taught you the most about programming or a specific language feature?
We have all had those moments where all of a sudden we know, just know, we have learned an important lesson for the future based on finally understanding an algorithm written by a programmer a couple of steps up the evolutionary ladder. Whose ideas and code had the magic touch on you?
A: Quicksort. It showed me that recursion can be powerful and useful.
A: Bresenham's line drawing algorithm got me interested in realtime graphics rendering. This can be used to render filled polygons, like triangles, for things like 3D model rendering.
A: Recursive Descent Parsing - I remember being very impressed how such simple code could do something so seemingly complex.
A: Quicksort in Haskell:
qsort [] = []
qsort (x:xs) = qsort (filter (< x) xs) ++ [x] ++ qsort (filter (>= x) xs)
Although I couldn'd write Haskell at the time, I did understand this code and with it recursion and the quicksort algorithm. It just made click and there it was...
A: General algorithms:
*
*Quicksort (and it's average complexity analysis), shows that randomizing your input can be a good thing!;
*balanced trees (AVL trees for example), a neat way to balance search/insertion costs;
*Dijkstra and Ford-Fulkerson algorithms on graphs (I like the fact that the second one has many applications);
*the LZ* family of compression algorithms (LZW for example), data compression sounded kind of magic to me until I discovered it (a long time ago :) );
*the FFT, ubiquitous (re-used in so many other algorithms);
*the simplex algorithm, ubiquitous as well.
Numerical related:
*
*Euclid's algorithm to compute the gcd of two integers: one of the first algorithms, simple and elegant, powerful, has lots of generalizations;
*fast multiplication of integers (Cooley-Tukey for example);
*Newton iterations to invert / find a root, a very powerful meta-algorithm.
Number theory-related:
*
*AGM-related algorithms (examples): leads to very simple and elegant algorithms to compute pi (and much more!), though the theory is quite profound (Gauss introduced elliptic functions and modular forms from it, so you can say that it gave birth to algebraic geometry...);
*the number field sieve (for integer factorization): very complicated, but quite a nice theoretical result (this also goes for the AKS algorithm, which proved that PRIMES is in P).
I also enjoyed studying quantum computing (Shor and Deutsch-Josza algorithms for example): this teaches you to think out of the box.
As you can see, I'm a bit biased towards maths-oriented algorithms :)
A: The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.
To elaborate: the "fib(10) = fib(9) + fib(8)" approach means that fib(9) will be evaluated as fib(8) + fib(7). So fib(8) (and therefore fib(7), fib(6)) will be evaluated twice.
The iterative method (curr = prev1 + prev2 in a for loop) does not tree out this way, nor does it take as much memory, since it needs only 3 transient variables instead of n frames on the recursion stack.
I tend to strive for simple, elegant code when I'm programming, but this is the algorithm that helped me realize that this isn't the end-all-be-all for writing good software, and that ultimately the end users don't care how your code looks.
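A minimal C# sketch of that iterative version, assuming the convention fib(0) = 0, fib(1) = 1:
static long Fib(int n)
{
    long prev1 = 0, prev2 = 1;
    for (int i = 0; i < n; i++)
    {
        long curr = prev1 + prev2;
        prev1 = prev2;   // only three transient variables,
        prev2 = curr;    // no recursion stack frames
    }
    return prev1;
}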
A: Minimax taught me that chess programs aren't smart, they can just think more moves ahead than you can.
A: For some reason I like the Schwartzian transform
@sorted = map { $_->[0] }
sort { $a->[1] cmp $b->[1] }
map { [$_, foo($_)] }
@unsorted;
Where foo($_) represents a compute-intensive expression that takes $_ (each item of the list in turn) and produces the corresponding value that is to be compared in its stead.
A: I don't know if this qualifies as an algorithm, or just a classic hack. In either case, it helped to get me to start thinking outside the box.
Swap 2 integers without using an intermediate variable (in C++)
void InPlaceSwap (int& a, int &b) {
a ^= b;
b ^= a;
a ^= b;
}
A: Quicksort: Until I got to college, I had never questioned whether brute force Bubble Sort was the most efficient way to sort. It just seemed intuitively obvious. But being exposed to non-obvious solutions like Quicksort taught me to look past the obvious solutions to see if something better is available.
A: For me it's the weak-heapsort algorithm because it shows (1) how much a wise chosen data structure (and the algorithms working on it) can influence the performance and (2) that fascinating things can be discovered even in old, well-known things. (weak-heapsort is the best variant of all heap sorts, which was proven eight years later.)
A: "To iterate is human, to recurse divine" - quoted in 1989 at college.
P.S. Posted by Woodgnome while waiting for invite to join
A: Floyd-Warshall all-pairs shortest paths algorithm
procedure FloydWarshall ()
for k := 1 to n
for i := 1 to n
for j := 1 to n
path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm that solves single-source shortest path. It's quite complicated at first, but then you get over it, and you fully understood it.
Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".
Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. It only uses a simple array for its data structure.
The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.
A: This one might sound trivial but it was a revelation for me at the time.
I was in my very first programming class(VB6) and the Prof had just taught us about random numbers and he gave the following instructions: "Create a virtual lottery machine. Imagine a glass ball full of 100 ping pong balls marked 0 to 99. Pick them randomly and display their number until they have all been selected, no duplicates."
Everyone else wrote their program like this: Pick a ball, put its number into an "already selected list" and then pick another ball. Check to see if its already selected, if so pick another ball, if not put its number on the "already selected list" etc....
Of course by the end they were making hundreds of comparisons to find the few balls that had not already been picked. It was like throwing the balls back into the jar after selecting them. My revelation was to throw balls away after picking.
I know this sounds mind-numbingly obvious but this was the moment that the "programming switch" got flipped in my head. This was the moment that programming went from trying to learn a strange foreign language to trying to figure out an enjoyable puzzle. And once I made that mental connection between programming and fun there was really no stopping me.
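A C# sketch of that "throw the ball away" idea (assuming System.Linq is imported for Enumerable.Range):
var balls = Enumerable.Range(0, 100).ToList(); // 100 balls marked 0 to 99
var rng = new Random();
while (balls.Count > 0)
{
    int i = rng.Next(balls.Count);  // pick a random remaining ball
    Console.WriteLine(balls[i]);
    balls.RemoveAt(i);              // thrown away - no duplicate checks needed
}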
A: Huffman coding would be mine. I had originally made my own dumb version by reducing the number of bits used to encode text from 8 down to fewer, but had not thought about using a variable number of bits depending on frequency. Then I found Huffman coding described in a magazine article and it opened up lots of new possibilities.
A: This is a slow one :)
I learned lots about both C and computers in general by understanding Duff's Device and XOR swaps
EDIT:
@Jason Z, that's my XOR swap :) cool isn't it.
A: For some reason Bubble Sort has always stood out to me. Not because it's elegant or good - just because it had/has a goofy name, I suppose.
A:
The iterative algorithm for Fibonacci, because for me it nailed down the fact that the most elegant code (in this case, the recursive version) is not necessarily the most efficient.
The iterative method, (curr = prev1 + prev2 in a forloop) does not tree out this way, nor does it take as much memory since it's only 3 transient variables, instead of n frames in the recursion stack.
You know that fibonacci has a closed form solution that allows direct computation of the result in a fixed number of steps, right? Namely, (phi^n - (1 - phi)^n) / sqrt(5). It always strikes me as somewhat remarkable that this should yield an integer, but it does.
phi is the golden ratio, of course; (1 + sqrt(5)) / 2.
A: I don't have a favourite -- there are so many beautiful ones to pick from -- but one I've always found intriguing is the Bailey–Borwein–Plouffe (BBP) formula, which enables you to calculate an arbitrary digit of pi without knowledge about the preceding digits.
A: RSA introduced me to the world of modular arithmetic, which can be used to solve a surprising number of interesting problems!
A: Hasn't taught me much, but the Johnson–Trotter Algorithm never fails to blow my mind.
A: Binary decision diagrams, though formally not an algorithm but a datastructure, lead to elegant and minimal solutions for various sorts of (boolean) logic problems. They were invented and developped to minimise the gate count in chip-design, and can be viewed as one of the fundaments of the silicon revolution. The resulting algorithms are amazingly simple.
What they taught me:
*
*a compact representation of any problem is important; small is beautiful
*a small set of constraints/reductions applied recursively can be used to accomplish this
*for problems with symmetries, transformation to a canonical form should be the first step to consider
*not every piece of literature is read. Knuth found out about BDD's several years after their invention/introduction. (and spent almost a year investigating them)
A: For me, the simple swap in Kelly & Pohl's A Book on C to demonstrate call-by-reference flipped me out when I first saw it. I looked at that, and pointers snapped into place. Verbatim. . .
void swap(int *p, int *q)
{
int temp;
temp = *p;
*p = *q;
*q = temp;
}
A: The Towers of Hanoi algorithm is one of the most beautiful algorithms. It shows how you can use recursion to solve a problem in a much more elegant fashion than the iterative method.
Alternatively, the recursive algorithms for the Fibonacci series and for calculating powers of a number demonstrate the reverse situation: a recursive algorithm being used for the sake of recursion instead of providing good value.
A: @Krishna Kumar
The bitwise solution is even more fun than the recursive solution.
A: An algorithm that generates a list of primes by comparing each number to the current list of primes, adding it if it's not found, and returning the list of primes at the end. Mind-bending in several ways, not the least of which being the idea of using the partially-completed output as the primary search criteria.
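A minimal C# sketch of that algorithm (the limit of 100 is chosen arbitrarily):
var primes = new List<int>();
for (int n = 2; n <= 100; n++)
{
    bool composite = false;
    foreach (int p in primes)        // the partially-completed output drives the search
        if (n % p == 0) { composite = true; break; }
    if (!composite)
        primes.Add(n);
}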
A: Storing two pointers in a single word for a doubly linked list taught me the lesson that you can do very bad things in C indeed (and that a conservative GC will have lots of trouble with them).
A: The most proud I've been of a solution was writing something very similar to the DisplayTag package. It taught me a lot about code design, maintainability, and reuse. I wrote it well before DisplayTag, and it was sunk into an NDA agreement, so I couldn't open source it, but I can still speak gushingly about that one in job interviews.
A: Not my favorite, but the Miller Rabin Algorithm for testing primality showed me that being right almost all the time, is good enough almost all the time. (i.e. Don't mistrust a probabilistic algorithm just because it has a probability of being wrong.)
A: Map/Reduce. Two simple concepts that fit together to make a load of data-processing tasks easier to parallelize.
Oh... and it's only the basis of massively-parallel indexing:
http://labs.google.com/papers/mapreduce.html
A: Binary search must be the most simple and elegant algorithm. Of course, the data needs to be sorted, and for this the merge sort algorithm is also most simple and elegant.
A: *
*Using matrix arithmetic for computing Fibonacci in O(log N)
*Using Fourier transform for big numbers multiplication
The lesson is that a solution can be found in very unexpected areas and that there are very surprising connections between different areas of algorithms and mathematics.
A: Some algorithms, principles or tricks that have not yet been cited:
*
*Union-Find algorithm to quickly extract groups from a dataset.
*Zobrist hashcodes are quick, iterative and resilient to collisions.
*Constraint programming is a programming paradigm that leads to surprisingly good results in a variety of cases.
*Monte Carlo tree search is a set of wonderful techniques mixing tree walking, randomization, optimization.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How can I play sound in Java? I want to be able to play sound files in my program. Where should I look?
A: For whatever reason, the top answer by wchargin was giving me a null pointer error when I was calling this.getClass().getResourceAsStream().
What worked for me was the following:
void playSound(String soundFile) throws Exception { // getAudioInputStream and open throw checked exceptions
File f = new File("./" + soundFile);
AudioInputStream audioIn = AudioSystem.getAudioInputStream(f.toURI().toURL());
Clip clip = AudioSystem.getClip();
clip.open(audioIn);
clip.start();
}
And I would play the sound with:
playSound("sounds/effects/sheep1.wav");
sounds/effects/sheep1.wav was located in the base directory of my project in Eclipse (so not inside the src folder).
A: For playing sound in Java, you can refer to the following code.
import java.io.*;
import java.net.URL;
import javax.sound.sampled.*;
import javax.swing.*;
// To play sound using Clip, the process need to be alive.
// Hence, we use a Swing application.
public class SoundClipTest extends JFrame {
public SoundClipTest() {
this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
this.setTitle("Test Sound Clip");
this.setSize(300, 200);
this.setVisible(true);
try {
// Open an audio input stream.
URL url = this.getClass().getClassLoader().getResource("gameover.wav");
AudioInputStream audioIn = AudioSystem.getAudioInputStream(url);
// Get a sound clip resource.
Clip clip = AudioSystem.getClip();
// Open audio clip and load samples from the audio input stream.
clip.open(audioIn);
clip.start();
} catch (UnsupportedAudioFileException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (LineUnavailableException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new SoundClipTest();
}
}
A: I created a game framework some time ago that works on Android and desktop; the desktop part that handles sound may be useful as inspiration for what you need.
https://github.com/hamilton-lima/jaga/blob/master/jaga%20desktop/src-desktop/com/athanazio/jaga/desktop/sound/Sound.java
Here is the code for reference.
package com.athanazio.jaga.desktop.sound;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.DataLine;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.SourceDataLine;
import javax.sound.sampled.UnsupportedAudioFileException;
public class Sound {
AudioInputStream in;
AudioFormat decodedFormat;
AudioInputStream din;
AudioFormat baseFormat;
SourceDataLine line;
private boolean loop;
private BufferedInputStream stream;
// private ByteArrayInputStream stream;
/**
* recreate the stream
*
*/
public void reset() {
try {
stream.reset();
in = AudioSystem.getAudioInputStream(stream);
din = AudioSystem.getAudioInputStream(decodedFormat, in);
line = getLine(decodedFormat);
} catch (Exception e) {
e.printStackTrace();
}
}
public void close() {
try {
line.close();
din.close();
in.close();
} catch (IOException e) {
}
}
Sound(String filename, boolean loop) {
this(filename);
this.loop = loop;
}
Sound(String filename) {
this.loop = false;
try {
InputStream raw = Object.class.getResourceAsStream(filename);
stream = new BufferedInputStream(raw);
// ByteArrayOutputStream out = new ByteArrayOutputStream();
// byte[] buffer = new byte[1024];
// int read = raw.read(buffer);
// while( read > 0 ) {
// out.write(buffer, 0, read);
// read = raw.read(buffer);
// }
// stream = new ByteArrayInputStream(out.toByteArray());
in = AudioSystem.getAudioInputStream(stream);
din = null;
if (in != null) {
baseFormat = in.getFormat();
decodedFormat = new AudioFormat(
AudioFormat.Encoding.PCM_SIGNED, baseFormat
.getSampleRate(), 16, baseFormat.getChannels(),
baseFormat.getChannels() * 2, baseFormat
.getSampleRate(), false);
din = AudioSystem.getAudioInputStream(decodedFormat, in);
line = getLine(decodedFormat);
}
} catch (UnsupportedAudioFileException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (LineUnavailableException e) {
e.printStackTrace();
}
}
private SourceDataLine getLine(AudioFormat audioFormat)
throws LineUnavailableException {
SourceDataLine res = null;
DataLine.Info info = new DataLine.Info(SourceDataLine.class,
audioFormat);
res = (SourceDataLine) AudioSystem.getLine(info);
res.open(audioFormat);
return res;
}
public void play() {
try {
boolean firstTime = true;
while (firstTime || loop) {
firstTime = false;
byte[] data = new byte[4096];
if (line != null) {
line.start();
int nBytesRead = 0;
while (nBytesRead != -1) {
nBytesRead = din.read(data, 0, data.length);
if (nBytesRead != -1)
line.write(data, 0, nBytesRead);
}
line.drain();
line.stop();
line.close();
reset();
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
}
A: It works for me. Simple variant
public void makeSound(){
File lol = new File("somesound.wav");
try{
Clip clip = AudioSystem.getClip();
clip.open(AudioSystem.getAudioInputStream(lol));
clip.start();
} catch (Exception e){
e.printStackTrace();
}
}
A: A bad example (it relies on the internal, unsupported sun.audio package):
import sun.audio.*; //import the sun.audio package
import java.io.*;
//** add this into your application code as appropriate
// Open an input stream to the audio file.
InputStream in = new FileInputStream(Filename);
// Create an AudioStream object from the input stream.
AudioStream as = new AudioStream(in);
// Use the static class member "player" from class AudioPlayer to play
// clip.
AudioPlayer.player.start(as);
// Similarly, to stop the audio.
AudioPlayer.player.stop(as);
A: There is an alternative to importing the sound files which works in both applets and applications: convert the audio files into .java files and simply use them in your code.
I have developed a tool which makes this process a lot easier. It simplifies the Java Sound API quite a bit.
http://stephengware.com/projects/soundtoclass/
A: I'm surprised nobody suggested using Applet. Use Applet. You'll have to supply the beep audio file as a wav file, but it works. I tried this on Ubuntu:
package javaapplication2;
import java.applet.Applet;
import java.applet.AudioClip;
import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
public class JavaApplication2 {
public static void main(String[] args) throws MalformedURLException {
File file = new File("/path/to/your/sounds/beep3.wav");
URL url = null;
if (file.canRead()) {url = file.toURI().toURL();}
System.out.println(url);
AudioClip clip = Applet.newAudioClip(url);
clip.play();
System.out.println("should've played by now");
}
}
//beep3.wav was available from: http://www.pacdv.com/sounds/interface_sound_effects/beep-3.wav
A: I wrote the following code that works fine. But I think it only works with .wav format.
public static synchronized void playSound(final String url) {
new Thread(new Runnable() {
// The wrapper thread is unnecessary, unless it blocks on the
// Clip finishing; see comments.
public void run() {
try {
Clip clip = AudioSystem.getClip();
AudioInputStream inputStream = AudioSystem.getAudioInputStream(
Main.class.getResourceAsStream("/path/to/sounds/" + url));
clip.open(inputStream);
clip.start();
} catch (Exception e) {
System.err.println(e.getMessage());
}
}
}).start();
}
A: I didn't want to have so many lines of code just to play a simple damn sound. This can work if you have the JavaFX package (already included in my jdk 8).
private static void playSound(String sound){
// cl is the ClassLoader for the current class, ie. CurrentClass.class.getClassLoader();
URL file = cl.getResource(sound);
final Media media = new Media(file.toString());
final MediaPlayer mediaPlayer = new MediaPlayer(media);
mediaPlayer.play();
}
Note: you need to initialize JavaFX. A quick way to do that is to call the constructor of JFXPanel() once in your app:
static{
JFXPanel fxPanel = new JFXPanel();
}
A: import java.net.URL;
import java.net.MalformedURLException;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.Clip;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.UnsupportedAudioFileException;
import java.io.IOException;
import java.io.File;
public class SoundClipTest{
//plays the sound
public static void playSound(final String path){
try{
final File audioFile=new File(path);
AudioInputStream audioIn=AudioSystem.getAudioInputStream(audioFile);
Clip clip=AudioSystem.getClip();
clip.open(audioIn);
clip.start();
long duration=getDurationInSec(audioIn);
//System.out.println(duration);
//We need to delay it otherwise function will return
//duration is in seconds we are converting it to milliseconds
Thread.sleep(duration*1000);
}catch(LineUnavailableException | UnsupportedAudioFileException | MalformedURLException | InterruptedException exception){
exception.printStackTrace();
}
catch(IOException ioException){
ioException.printStackTrace();
}
}
//Gives duration in seconds for audio files
public static long getDurationInSec(final AudioInputStream audioIn){
final AudioFormat format=audioIn.getFormat();
double frameRate=format.getFrameRate();
return (long)(audioIn.getFrameLength()/frameRate);
}
////////main//////
public static void main(String $[]){
//SoundClipTest test=new SoundClipTest();
SoundClipTest.playSound("/home/dev/Downloads/mixkit-sad-game-over-trombone-471.wav");
}
}
A: This thread is rather old but I have determined an option that could prove useful.
Instead of using the Java AudioStream library you could use an external program like Windows Media Player or VLC and run it with a console command through Java.
String command = "\"C:/Program Files (x86)/Windows Media Player/wmplayer.exe\" \"C:/song.mp3\"";
try {
    Process p = Runtime.getRuntime().exec(command);
} catch (IOException e) {
    e.printStackTrace();
}
This will also create a separate process that can be controlled from the program.
p.destroy();
Of course this will take longer to execute than using an internal library but there may be programs that can start up faster and possibly without a GUI given certain console commands.
If time is not of the essence then this is useful.
A: I faced many issues playing the mp3 file format, so I converted it to .wav using an online converter and then used the code below (this was easier than adding mp3 support).
try
{
Clip clip = AudioSystem.getClip();
clip.open(AudioSystem.getAudioInputStream(GuiUtils.class.getResource("/sounds/success.wav")));
clip.start();
}
catch (Exception e)
{
LogUtils.logError(e);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "190"
} |
Q: Regex to Parse Hyperlinks and Descriptions C#: What is a good Regex to parse hyperlinks and their description?
Please consider case insensitivity, white-space and use of single quotes (instead of double quotes) around the HREF tag.
Please also consider obtaining hyperlinks which have other tags within the <a> tags such as <b> and <i>.
A: As long as there are no nested tags (and no line breaks), the following variant works well:
<a\s+href=(?:"([^"]+)"|'([^']+)').*?>(.*?)</a>
As soon as nested tags come into play, regular expressions are unfit for parsing. However, you can still use them by applying more advanced features of modern interpreters (depending on your regex machine). E.g. .NET regular expressions use a stack; I found this:
(?:<a.*?href=[""'](?<url>.*?)[""'].*?>)(?<name>(?><a[^<]*>(?<DEPTH>)|</a>(?<-DEPTH>)|.)+)(?(DEPTH)(?!))(?:</a>)
Source: http://weblogs.asp.net/scottcate/archive/2004/12/13/281955.aspx
A: See this example from StackOverflow: Regular expression for parsing links from a webpage?
Using The HTML Agility Pack you can parse the html, and extract details using the semantics of the HTML, instead of a broken regex.
A: I found this but apparently these guys had some problems with it.
Edit: (It works!)
I have now done my own testing and found that it works, I don't know C# so I can't give you a C# answer but I do know PHP and here's the matches array I got back from running it on this:
<a href="pages/index.php" title="the title">Text</a>
array(3) { [0]=> string(52) "Text" [1]=> string(15) "pages/index.php" [2]=> string(4) "Text" }
A: I have a regex that handles most cases, though I believe it does match HTML within a multiline comment.
It's written using the .NET syntax, but should be easily translatable.
A: Just going to throw this snippet out there now that I have it working... this is a less greedy version of the one suggested earlier. The original wouldn't work if the input had multiple hyperlinks. The code below will allow you to loop through all the hyperlinks:
static Regex rHref = new Regex(@"<a.*?href=[""'](?<url>[^""^']+[.]*?)[""'].*?>(?<keywords>[^<]+[.]*?)</a>", RegexOptions.IgnoreCase | RegexOptions.Compiled);
public void ParseHyperlinks(string html)
{
MatchCollection mcHref = rHref.Matches(html);
foreach (Match m in mcHref)
AddKeywordLink(m.Groups["keywords"].Value, m.Groups["url"].Value);
}
A: Here is a regular expression that will match the balanced tags.
(?:""'[""'].*?>)(?(?>(?)|(?<-DEPTH>)|.)+)(?(DEPTH)(?!))(?:)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Print a barcode to a Intermec PB20 via the LinePrinter API Does anyone know how to print a barcode to the Intermec PB20 bluetooth printer from a Windows Compact Framework application? We are currently using the Intermec LinePrinter API but have been unable to find a way to print a barcode.
A: Thank you all for your thoughts. Printing directly to the serial port is likely the most flexible method. In this case we didn't want to replicate all of the work that was already built into the Intermec dll for handling the port, printer errors, etc. We were able to get this working by sending the printer the appropriate codes to switch it into a different mode and then pass direct printer commands that way.
Here was our solution in case anyone else happens to encounter a similar issue working with Intermec Printers. The following code is a test case that doesn't catch printer errors and retry, etc. (See Intermec code examples.)
// requires: using System.Globalization; for NumberStyles
Intermec.Print.LinePrinter lp;
int escapeCharacter = int.Parse("1b", NumberStyles.HexNumber); // ESC (0x1B)
char[] toEzPrintMode = new char[] { Convert.ToChar(escapeCharacter), 'E', 'Z' };
lp = new Intermec.Print.LinePrinter("Printer_Config.XML", "PrinterPB20_40COL");
lp.Open();
lp.Write(toEzPrintMode); // switch to EZ Print mode
string testBarcode = "{PRINT:@75,10:PD417,YDIM 6,XDIM 2,COLUMNS 2, SECURITY 3|ABCDEFGHIJKL|}";
lp.Write(testBarcode);
lp.Write("{LP}"); // switch from EZ Print mode back to line printer mode
lp.NewLine();
lp.Write("Test"); // verify line printer mode is working
There is a technical document on Intermec's support site called the "Technical Manual" that describes the code for directly controlling the printer. The section about Easy Print describes how to print a variety of barcodes.
A: Last time I had to print a barcode (regardless of the printer or framework), I resorted to using a TrueType font with the barcode I needed. (In my case it was EAN-13, a European barcode.)
There are fonts where you simply write numbers (and/or letters, when supported) and you get a perfect barcode any scanner can read :)
Google is your friend. I don't know if there are free ones.
A: Thank you for your answer. There are free fonts available -- However, the PB20 is a handheld printer with a few built-in fonts. It has the capability to print barcodes and can be manipulated directly via the serial port. Intermec provides a .Net CF API to make printing "easy", and it is using this API that we have been unable to figure out how to tell the printer to print a barcode.
A: Ditch all API's and use a serial port API directly.
Talk the printers language and you can get decent results.
Every other approach leads to frustration.
Not so pretty, but that is the way my old factory worked.
4k print jobs per day, and none ever missed.
A: Free 3 of 9
This is 3 of 9 (sometimes called "code 39"), a widely used barcode standard that includes capital letters, numbers, and several symbols. This is not the barcode for UPCs (universal price codes) found on products at the store. However, most kinds of barcode scanners will recognize 3 of 9 just fine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Compact Framework - Is there an MVC framework/library available? I've found an article on this subject by a Microsoft employee, but has anyone implemented a more robust framework for this? Is there a lightweight framework for WinForms that could be ported easily? I'd like to get up to speed fairly quickly and avoid producing a framework/library of my own to handle this when someone smarter has already done this.
I haven't looked at the Mobile Software Factory from the P&P group, but I suspect it's kind of heavy. Is it worth a look?
Edit: I'm not looking for information on the ASP.NET MVC project. I'm asking about the compact framework 'WinForms' implementation, and how to implement MVC with that.
A: I personally think that the Mobile Software Factory doesn't hold much joy for CF.
We still use one part of it (EventBroker) at work and I'd like to even remove that part if possible (as it doesn't support generic events and you have to cast the arguments into their strong types from EventArgs). A sister project at work used it for part of their UI but had to rip it out due to performance issues (another big project, although that has additional performance issues of its own as well).
The issue I find with the MVP framework that the P&P lib offers is that Forms and Controls OWN presenters instead of Presenters/Controllers owning Forms (didn't they read "It's just a view" in The Pragmatic Programmer?).
This fits beautifully with MS's "Form First" rapid application development mantra, but it sucks when you consider how expensive window handles can be in CE (if you have a lot of them).
We run a very large CF application at work and we've rolled our own MVC framework. It's not hard to roll your own, just make sure you separate everything out into Controllers, Views, Business Objects and Services and have a UIController that controls the interactions between the controllers.
We actually go one step further and re-use forms/controls by using a Controller->View->Layout pattern.
The controller is the same as usual, the view is the object that customises a layout into a particular view and the layout is the actual UserControl. We then swap these in and out of a single Form. This reduces the amount of Windows Controls we use dramatically.
This, plus initialising all of the forms on start-up, means that we eradicate the noticeable pause that you get when creating new Windows controls "on-demand".
Obviously it only really pays to do this kind of thing if you are rolling a large application. We have roughly 20+ different types of View which use in total about 7 different layouts. This hurts our initialisation routine (as we load the forms at start up) by about 10 seconds, but psychologically most users are willing to accept such a hit at start up as opposed to noticeable pauses during run-time. A minimal sketch of the layout-swapping idea described above follows.
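Purely as illustration, the idea looks something like this (all class and member names here are hypothetical, not taken from our actual framework):
using System.Windows.Forms;

// A layout is a reusable UserControl; a view customises it for one screen.
public class ListLayout : UserControl
{
    public ListView List = new ListView();
    public ListLayout()
    {
        List.Dock = DockStyle.Fill;
        Controls.Add(List);
    }
}

// A view configures a shared layout rather than owning its own form.
public class CustomerListView
{
    private ListLayout layout;
    public CustomerListView(ListLayout shared) { layout = shared; }

    public Control Render()
    {
        layout.List.Items.Clear();
        layout.List.Items.Add(new ListViewItem("Customer 1"));
        return layout;
    }
}

// The UIController swaps views in and out of a single host form,
// so only one set of window handles ever exists.
public class UIController
{
    private Form host;
    public UIController(Form form) { host = form; }

    public void Show(Control view)
    {
        host.Controls.Clear();
        view.Dock = DockStyle.Fill;
        host.Controls.Add(view);
    }
}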
The main issue with the P&P library in my book is that it is a Full Framework -> Compact Framework port, and due to certain incompatibility and performance differences between the two platforms you lose a lot of useful functionality.
Btw, this is by far the most comprehensive article I've ever read on MVC/MVP.
For Windows applications (desktop or CE) I'd recommend using the Taligent Model-View-Presenter version without the interactions, commands and selections (i.e. the controller/presenter performs all the work).
A: Neither of you (davidg or Kevin Pang) paid attention to the fact that he's interested in WinForms, not Web Forms. He wants a framework that pushes the Model-View-Controller design pattern (davidg, MVC isn't just the name of an ASP.NET framework) in a WinForms project using the .NET Compact Framework. He asked his question just fine.
A: There's also the OpenNETCF IoC framework (which I don't think existed when this question was asked) which is much lighter, but similar in object model to the P&P's Mobile Software Factory.
A: @DavidG and @KevenPang
MVC is not limited to a web technology, in fact the original smalltalk MVC was for desktop applications.
It works like this:
*
*View = Client Form
*Controller = Wraps up Client Events and marshals between View and Model
*Model = Application Data and Business Logic
In pure Smalltalk MVC, the View is not limited to being a form, but can be any representation of Model Data...For example, if we had a Model that represented a spreadsheet, we could have the following views:
*
*SpreadSheet View
*Printer Friendly View
*Icon View
etc. The Model would be the same, but the View would create a different output object in each case (a minimal sketch follows).
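For illustration, here is a hedged sketch with hypothetical class names, where the same Model is rendered by two different Views:
using System.Text;

public class SpreadsheetModel
{
    public double[] Cells = { 1.0, 2.0, 3.0 };
}

public interface IView
{
    string Render(SpreadsheetModel model);
}

// One view per representation of the same model data.
public class PrinterFriendlyView : IView
{
    public string Render(SpreadsheetModel model)
    {
        StringBuilder sb = new StringBuilder();
        foreach (double cell in model.Cells)
            sb.AppendLine(cell.ToString("F2")); // one cell per line for printing
        return sb.ToString();
    }
}

public class IconView : IView
{
    public string Render(SpreadsheetModel model)
    {
        return "[spreadsheet: " + model.Cells.Length + " cells]"; // compact summary
    }
}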
All that said, I don't know if such a framework exists for the .NET Compact framework, I just wanted to point out that MVC does not mean WebApp.
A: Take a look at mFly's Mobile MVC. I've never used it, but it's pitched as a reasonable MVC framework for the CF.
A: @davidg: "Why would you want MVC on Compact Framework?"
Why not? It's not like it's reserved for web dev, it's a pattern.
A: Edit: The above posters are correct. I saw MVC and immediately thought of web forms. My apologies. Feel free to disregard this. I'll leave my original message in place just in case anyone who is interested in web forms MVC needs the links. :-)
There are a couple MVC frameworks out there, neither of which are very "lightweight", but MVC is a pretty big shift away from web forms so that is expected:
*
*ASP.NET MVC - This is Microsoft's attempt at an MVC framework. It is still in preview mode so use it at your own discretion, but several people are already using it in their production applications. You will find ample documentation on this with a simple Google search as it is becoming very popular amongst the .NET crowd.
*Castle MonoRail - The MonoRail framework is an open-source MVC framework that has been around for quite some time and is in use on several production applications. It is definitely more fleshed out than the ASP.NET MVC framework, but considering the amount of effort Microsoft is throwing at their MVC offering, I think that will change relatively soon.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Using ItemizedOverlay and OverlayItem In Android Beta 0.9 Has anyone managed to use ItemizedOverlays in Android Beta 0.9? I can't get it to work, but I'm not sure if I've done something wrong or if this functionality isn't yet available.
I've been trying to use the ItemizedOverlay and OverlayItem classes. Their intended purpose is to simulate map markers (as seen in Google Maps Mashups) but I've had problems getting them to appear on the map.
I can add my own custom overlays using a similar technique, it's just the ItemizedOverlays that don't work.
Once I've implemented my own ItemizedOverlay (and overridden createItem), creating a new instance of my class seems to work (I can extract OverlayItems from it) but adding it to a map's Overlay list doesn't make it appear as it should.
This is the code I use to add the ItemizedOverlay class as an Overlay on to my MapView.
// Add the ItemizedOverlay to the Map
private void addItemizedOverlay() {
Resources r = getResources();
MapView mapView = (MapView)findViewById(R.id.mymapview);
List<Overlay> overlays = mapView.getOverlays();
MyItemizedOverlay markers = new MyItemizedOverlay(r.getDrawable(R.drawable.icon));
overlays.add(markers);
OverlayItem oi = markers.getItem(0);
markers.setFocus(oi);
mapView.postInvalidate();
}
Where MyItemizedOverlay is defined as:
public class MyItemizedOverlay extends ItemizedOverlay<OverlayItem> {
public MyItemizedOverlay(Drawable defaultMarker) {
super(defaultMarker);
populate();
}
@Override
protected OverlayItem createItem(int index) {
Double lat = (index+37.422006)*1E6;
Double lng = -122.084095*1E6;
GeoPoint point = new GeoPoint(lat.intValue(), lng.intValue());
OverlayItem oi = new OverlayItem(point, "Marker", "Marker Text");
return oi;
}
@Override
public int size() {
return 5;
}
}
A: For the sake of completeness I'll repeat the discussion on Reto's post over at the Android Groups here.
It seems that if you set the bounds on your drawable it does the trick:
Drawable defaultMarker = r.getDrawable(R.drawable.icon);
// You HAVE to specify the bounds! It seems like the markers are drawn
// through Drawable.draw(Canvas) and therefore must have its bounds set
// before drawing.
defaultMarker.setBounds(0, 0, defaultMarker.getIntrinsicWidth(),
defaultMarker.getIntrinsicHeight());
MyItemizedOverlay markers = new MyItemizedOverlay(defaultMarker);
overlays.add(markers);
By the way, the above is shamelessly ripped from the demo at MarcelP.info. Also, here is a good howto.
A: try :
Drawable defaultMarker = r.getDrawable(R.drawable.icon);
defaultMarker.setBounds(0, 0, defaultMarker.getIntrinsicWidth(),
defaultMarker.getIntrinsicHeight());
MyItemizedOverlay markers = new MyItemizedOverlay(defaultMarker);
overlays.add(markers);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "58"
} |
Q: Beyond design patterns? For the past 10 years or so there have been a smattering of articles and papers referencing Christopher Alexander's newer work "The Nature of Order" and how it can be applied to software.
Unfortunately, the only works I can find are from James Coplien and Richard Gabriel; there is nothing beyond that, at least from my attempts to find such things through google.
Is this kind of discussion happening anywhere?
MSN
@Georgia
My question isn't about design patterns or pattern languages; it's about trying to see if more of Christopher Alexander's work can be applied to software (which it probably can, since it has even fewer physical constraints than architecture and building).
Design patterns and pattern languages seem to have embraced the structure of Alexander's design patterns, but not many capture the essence, which is something beyond solving a problem in a particular context.
It's difficult to explain without using some of Alexander's later works as a reference point.
Edit: No, I take that back.
For example, there's an architectural design pattern that is called Alcoves. The pattern has a context that isn't just rooted in the circumstances of the situation but also rooted in fundamentals about the purpose of buildings: that they are structures to be lived in and must promote living in them. In the case of the Alcove pattern, the context is that you want an area that allows for multiple people to be in the same area doing different things, because it is important for family members to be physically together as well as to be able to do things that tend to distract other family members.
Most software design patterns describe a problem in a context, but they make no deeper statement about why the problem is important, or why the problem is something that is fundamental to software. It makes it very easy to apply design patterns inappropriately or blithely, which is the exact opposite of the intent of design patterns to begin with.
MSN
A: Try starting at http://c2.com/cgi/wiki?NatureOfOrder or http://c2.com/cgi/wiki?HowNatureOfOrderAppliesToSoftware
A: Your question brings to mind some of the comments made by Eric Evans in his book "Domain-Driven Design". He points out that design patterns in software development have often been described as strictly technical solutions to technical problems. But sometimes there is an opportunity to apply a pattern that not only gives structure to the software implementation, but is also meaningful in the business model.
For example, consider the use of the STRATEGY pattern as merely an implementation detail, versus the case where it actually makes sense for programmers and the business to talk about how STRATEGIES are selected and used, i.e. where it is part of the UBIQUITOUS LANGUAGE of the system:
When we use the technical design pattern in the domain layer, we have to add an additional motivation, another layer of meaning. When the STRATEGY corresponds to an actual business strategy or policy, the pattern becomes more than just a useful implementation technique (though that too is valuable as far as it goes). [Chapter 12]
Evans argues that aligning the software model with the deep model of the business domain is a difficult goal to achieve, but one that provides a huge amount of value. If he's right, then perhaps the "deeper statement" that a software design pattern needs to make is: how does the pattern fit into the wider problem context, beyond the narrow technical scope of the software system itself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: What is the best way to store user settings for a .NET application? I have a .NET 2.0 Windows Forms application. Where is the best place the store user settings (considering Windows guidelines)?
Some people pointed to Application.LocalUserAppDataPath. However, that creates a folder structure like:
C:\Documents and Settings\user_name\Local Settings\Application
Data\company_name\product_name\product_version\
If I release version 1 of my application and store an XML file there, then release version 2, that would change to a different folder, right? I'd prefer to have a single folder, per user, to store settings, regardless of the application version.
A: .NET applications have a built-in settings mechanism that is easy to use. The problem with it, in my opinion, is that it stores those settings off into a rather obscure directory and end users will not be able to find it. Moreover, just switching from debug to release build changes the location of this directory, meaning that any settings saved in one configuration are lost in the other.
For these and other reasons, I came up with my own settings code for Windows Forms. It's not quite as slick as the one that comes with .NET, but it's more flexible, and I use it all the time.
A: I love using the built-in Application Settings. Then you have built in support for using the settings designer if you want at design-time, or at runtime to use:
// read setting
string setting1 = (string)Properties.Settings.Default["MySetting1"];
// save setting
Properties.Settings.Default["MySetting2"] = "My Setting Value";
// you can force a save with
Properties.Settings.Default.Save();
It does store the settings in a similar folder structure as you describe (with the version in the path). However, with a simple call to:
Properties.Settings.Default.Upgrade();
The app will pull all of the previous version's settings in for you.
A: Or write your settings to an XML file and save it using isolated storage. Depending on the store you use, it saves it in the Application Data folder. You can also choose a roaming-enabled store, which means that when the user logs on to a different computer, the settings move with them.
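Here is a minimal sketch of writing a settings file to isolated storage (the file name is illustrative):
using System.IO;
using System.IO.IsolatedStorage;

class IsoSettingsStore
{
    static void Save(string xml)
    {
        // GetUserStoreForAssembly scopes the store per user and assembly;
        // IsolatedStorageFile.GetStore can request a roaming-enabled scope instead.
        using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForAssembly())
        using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("settings.xml", FileMode.Create, store))
        using (StreamWriter writer = new StreamWriter(stream))
        {
            writer.Write(xml);
        }
    }
}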
A: One approach that has worked for me in the past has been to create a settings class and use XML serialization to write it to the file system. You could extend this concept by creating a collection of settings objects and serializing it. You would have all of your settings for all users in one place without having to worry about managing the file system.
Before anyone gives me any flak for partially re-inventing the wheel, let me say a few things. For one, it is only a few lines of code to serialize and write the file. Secondly, if you have an object that contains your settings, you don't have to make multiple calls to the appSettings object when you load your app. And lastly, it is very easy to add items that represent your applications state, thereby allowing you to resume a long-running task when the application loads next.
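A minimal sketch of that approach (the UserSettings class and file path are made up for illustration):
using System.IO;
using System.Xml.Serialization;

public class UserSettings
{
    public string Theme = "Default";
    public int WindowWidth = 800;
}

public class SettingsStore
{
    public static void Save(UserSettings settings, string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(UserSettings));
        using (StreamWriter writer = new StreamWriter(path))
        {
            serializer.Serialize(writer, settings); // writes the whole object as XML
        }
    }

    public static UserSettings Load(string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(UserSettings));
        using (StreamReader reader = new StreamReader(path))
        {
            return (UserSettings)serializer.Deserialize(reader);
        }
    }
}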
A: I tried several methods of storing my settings in a simple text file, and this is the best way I found:
The file is stored in the application folder as settings.txt. Usage:
(comments are allowed inside the settings file, e.g. //comment)
//to get settings value
Settings.Get("name", "Ivan");
//to set settings value
Settings.Set("name", "John");
using:
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Text;
using System.Windows.Forms;
// you can also store values under a section name: use Set(section_name, name, value) and Get(section_name, name, defVal)
public static class Settings
{
private static string SECTION = typeof(Settings).Namespace;//"SETTINGS";
private static string settingsPath = Application.StartupPath.ToString() + "\\settings.txt";
[DllImport("kernel32")]
private static extern long WritePrivateProfileString(string section, string key, string val, string filePath);
[DllImport("kernel32")]
private static extern int GetPrivateProfileString(string section, string key, string def, StringBuilder retVal, int size, string filePath);
public static String GetString(String name)
{
StringBuilder temp = new StringBuilder(255);
int i = GetPrivateProfileString(SECTION,name,"",temp,255,settingsPath);
return temp.ToString();
}
public static String Get(String name, String defVal)
{
return Get(SECTION,name,defVal);
}
public static String Get(string _SECTION, String name, String defVal)
{
StringBuilder temp = new StringBuilder(255);
int i = GetPrivateProfileString(_SECTION, name, "", temp, 255, settingsPath);
return temp.ToString();
}
public static Boolean Get(String name, Boolean defVal)
{
return Get(SECTION, name, defVal);
}
public static Boolean Get(string _SECTION, String name, Boolean defVal)
{
StringBuilder temp = new StringBuilder(255);
int i = GetPrivateProfileString(_SECTION,name,"",temp,255,settingsPath);
bool retval=false;
if (bool.TryParse(temp.ToString(),out retval))
{
return retval;
} else
{
return defVal; // fall back to the default when the stored value can't be parsed
}
}
public static int Get(String name, int defVal)
{
return Get(SECTION, name, defVal);
}
public static int Get(string _SECTION, String name, int defVal)
{
StringBuilder temp = new StringBuilder(255);
int i = GetPrivateProfileString(_SECTION,name,"",temp,255,settingsPath); // was SECTION, which ignored the section argument
int retval=0;
if (int.TryParse(temp.ToString(),out retval))
{
return retval;
} else
{
return defVal; // fall back to the default when the stored value can't be parsed
}
}
public static void Set(String name, String val)
{
Set(SECTION, name,val);
}
public static void Set(string _SECTION, String name, String val)
{
WritePrivateProfileString(_SECTION, name, val, settingsPath);
}
public static void Set(String name, Boolean val)
{
Set(SECTION, name, val);
}
public static void Set(string _SECTION, String name, Boolean val)
{
WritePrivateProfileString(_SECTION, name, val.ToString(), settingsPath);
}
public static void Set(String name, int val)
{
Set(SECTION, name, val);
}
public static void Set(string _SECTION,String name, int val)
{
WritePrivateProfileString(_SECTION, name, val.ToString(), settingsPath); // was SECTION, which ignored the section argument
}
}
A: Settings are standard key-value pairs (string-string). I could wrap them in an XML file, if that helps.
I'd rather use the file system instead of the registry. It seems to be easier to maintain. In support scenarios, if the user needs to manually open/change the settings, that would be easier if it's in the file system.
A: I'd go down the folder list you posted except for the product version. You don't want the settings reset after an update is released.
I'm actually moving away from the registry for user settings because of the debug/footprint factor. I'm currently only storing a few basic settings (window size, position, version of a data file) in the registry, and I've run into more problems when an update goes bad or a user loses the second monitor that the application was set to open on. A few of them are savvy enough to understand regedit, but the rest have to do a reinstall, which is quick, but I think they grumble a bit. With the file-based version, all I'd have to do is have them open up an XML file in Notepad and make a quick tweak.
In addition, I'm looking to make my application runnable off a USB flash drive, and having the settings tied into the file seems much friendlier to that process. I'm sure I can do some code to check/clean the registry, but I think most of us are already tired of the registry clutter that seems to eat up our machines nowadays.
I know there are some security tradeoffs to this, but none of the data I'm storing is that critical, and I'm not suffering any performance hits due to the size of the application.
A: Isolated storage is primarily used for applications distributed using ClickOnce that run in a secure sandbox. The base path is decided for you, and you won't be able to infer it in your code. The path will be something like "\LocalSettings\ApplicationData\IsolatedStorage\ejwnwe.302\kfiwemqi.owx\url.asdaiojwejoieajae....", not all that friendly. Your storage space is also limited.
Ryan Farley has it right.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63"
} |
Q: Exception handling: Contract vs Exceptional approach I know two approaches to exception handling; let's have a look at them.
*
*Contract approach.
When a method does not do what it says it will do in the method header, it will throw an exception. Thus the method "promises" that it will do the operation, and if it fails for some reason, it will throw an exception.
*Exceptional approach.
Only throw exceptions when something truly weird happens. You should not use exceptions when you can resolve the situation with normal control flow (If statements). You don't use Exceptions for control flow, as you might in the contract approach.
Lets use both approaches in different cases:
We have a Customer class that has a method called OrderProduct.
contract approach:
class Customer
{
public void OrderProduct(Product product)
{
if((m_credit - product.Price) < 0)
throw new NoCreditException("Not enough credit!");
// do stuff
}
}
exceptional approach:
class Customer
{
public bool OrderProduct(Product product)
{
if((m_credit - product.Price) < 0)
return false;
// do stuff
return true;
}
}
if (!customer.OrderProduct(product))
Console.WriteLine("Not enough credit!");
else
// go on with your life
Here I prefer the exceptional approach, as it is not truly exceptional that a customer has no money (assuming he did not win the lottery).
But here is a situation where I lean toward the contract style.
Exceptional:
class CarController
{
// returns null if car creation failed.
public Car CreateCar(string model)
{
// something went wrong, wrong model
return null;
}
}
When I call a method called CreateCar, I damn well expect a Car instance instead of some lousy null pointer, which can ravage my running code a dozen lines later. Thus I prefer the contract approach for this one:
class CarController
{
public Car CreateCar(string model)
{
// something went wrong, wrong model
throw new CarModelNotKnownException("Model unknown");
return new Car();
}
}
Which style do you use? What do you think is the best general approach to exceptions?
A: I favor what you call the "contract" approach. Returning nulls or other special values to indicate errors isn't necessary in a language that supports exceptions. I find it much easier to understand code when it doesn't have a bunch of "if (result == NULL)" or "if (result == -1)" clauses mixed in with what could be very simple, straightforward logic.
A: My usual approach is to use the contract style to handle any kind of error due to "client" invocation, that is, due to an external error (e.g. ArgumentNullException).
Errors in the arguments are not handled locally: an exception is raised and the "client" is in charge of handling it. On the other hand, for internal errors always try to correct them (e.g. if you can't get a database connection for some reason) and only if you can't handle it reraise the exception.
It's important to keep in mind that most unhandled exceptions at that level will not be able to be handled by the client anyway, so they will most likely go up to the most general exception handler. If such an exception occurs, you are probably FUBAR anyway.
A: I believe that if you are building a class which will be used by an external program (or will be reused by other programs) then you should use the contract approach. A good example of this is an API of any kind.
A: If you are actually interested in exceptions and want to think about how to use them to construct robust systems, consider reading Making reliable distributed systems in the presence of software errors.
A: Both approaches are right. What that means is that a contract should be written so that, for all cases that are not truly exceptional, it specifies behavior that does not require throwing an exception.
Note that some situations may or may not be exceptional based upon what the caller of the code is expecting. If the caller is expecting that a dictionary will contain a certain item, and absence of that item would indicate a severe problem, then failure to find the item is an exceptional condition and should cause an exception to be thrown. If, however, the caller doesn't really know if an item exists, and is equally prepared to handle its presence or its absence, then absence of the item would be an expected condition and should not cause an exception. The best way to handle such variations in caller expectation is to have a contract specify two methods: a DoSomething method and a TryDoSomething method, e.g.
TValue GetValue(TKey Key);
bool TryGetValue(TKey Key, ref TValue value);
Note that, while the standard 'try' pattern is as illustrated above, some alternatives may also be helpful if one is designing an interface which produces items:
// In case of failure, set ok false and return default<TValue>.
TValue TryGetResult(ref bool ok, TParam param);
// In case of failure, indicate particular problem in GetKeyErrorInfo
// and return default<TValue>.
TValue TryGetResult(ref GetKeyErrorInfo errorInfo, ref TParam param);
Note that using something like the normal TryGetResult pattern within an interface will make the interface invariant with respect to the result type; using one of the patterns above will allow the interface to be covariant with respect to the result type. Also, it will allow the result to be used in a 'var' declaration:
var myThingResult = myThing.TryGetSomeValue(ref ok, whatever);
if (ok) { do_whatever }
Not quite the standard approach, but in some cases the advantages may justify it.
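For instance, here is a minimal sketch of the paired contract on a hypothetical dictionary-like wrapper (using out rather than ref, which is the more conventional .NET shape of the Try pattern):
using System.Collections.Generic;

public class Repository<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> items = new Dictionary<TKey, TValue>();

    // Contract style: absence is exceptional, so throw.
    public TValue GetValue(TKey key)
    {
        TValue value;
        if (!items.TryGetValue(key, out value))
            throw new KeyNotFoundException("No item for key: " + key);
        return value;
    }

    // Try style: absence is expected, so report it through the return value.
    public bool TryGetValue(TKey key, out TValue value)
    {
        return items.TryGetValue(key, out value);
    }
}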
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What is a selector engine? I've seen news of John Resig's fast new selector engine named Sizzle pop up in quite a few places, but I don't know what a selector engine is, nor have any of the articles given an explanation of what it is. I know Resig is the creator of jQuery, and that Sizzle is something in Javascript, but beyond that I don't know what it is. So, what is a selector engine?
Thanks!
A: A selector engine is a way to traverse the DOM looking for a specific element.
An example of a built in selector engine:
var foo = document.getElementById('foo');
A: A selector engine is used to query a page's DOM for particular elements, based on some sort of query (usually CSS syntax or similar).
For example, this jQuery:
$('div')
Would search for and return all of the <div> elements on the page. It uses jQuery's selector engine to do that.
Optimizing the selector engine is a big deal because almost every operation you perform with these frameworks is based on some sort of DOM query.
A: Also, Sizzle is the engine John Resig is working on currently to replace jQuery's already fantastic selector engine.
A: A selector engine is used to find elements in a document, in the same way CSS stylesheets do. Currently only Safari has the built-in querySelectorAll function, which does just that. With other browsers you have to use external JavaScript implementations such as LlamaLab Selector or Sizzle instead.
A: A selector engine is a JavaScript library that lets you select elements in the DOM tree using some kind of string for identifying them (think regular expressions for DOM elements). Most selector engines use some variation of the CSS3 selectors syntax so, for example, you can write something like:
var paragraphs = selectorengine.select('p.firstParagraph')
to select all P elements in the document with class firstParagraph.
Some selector engines also support a partial implementation of XPath, and even some custom syntaxes. For example, jQuery lets you write:
var checkedBoxes = jQuery('form#login input:checked')
To select all checked check boxes in the login form in the document.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Third party Visual Studio snippets Do you know where I could find some useful third party (free) code snippets for VS 2008?
A: http://gotcodesnippets.com/
http://www.codekeep.net/ has a VS add-in for their snippets, too
A: bdukes site has more options, but here are the ones MSDN has published...
http://msdn.microsoft.com/en-us/vstudio/aa718338.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Bash One Liner: copy template_*.txt to foo_*.txt? Say I have three files (template_*.txt):
*
*template_x.txt
*template_y.txt
*template_z.txt
I want to copy them to three new files (foo_*.txt).
*
*foo_x.txt
*foo_y.txt
*foo_z.txt
Is there some simple way to do that with one command, e.g.
cp --enableAwesomeness template_*.txt foo_*.txt
A: [01:22 PM] matt@Lunchbox:~/tmp/ba$
ls
template_x.txt template_y.txt template_z.txt
[01:22 PM] matt@Lunchbox:~/tmp/ba$
for i in template_*.txt ; do mv $i foo${i:8}; done
[01:22 PM] matt@Lunchbox:~/tmp/ba$
ls
foo_x.txt foo_y.txt foo_z.txt
A: My preferred way:
for f in template_*.txt
do
cp $f ${f/template/foo}
done
The "I-don't-remember-the-substitution-syntax" way:
for i in x y z
do
cp template_$i foo_$i
done
A: This should work:
for file in template_*.txt ; do cp $file `echo $file | sed 's/template_\(.*\)/foo_\1/'` ; done
A:
for f in template_*.txt; do cp $f foo_${f#template_}; done
A: I don't know of anything in bash or on cp, but there are simple ways to do this sort of thing using (for example) a perl script:
($op = shift) || die "Usage: rename perlexpr [filenames]\n";
for (@ARGV) {
$was = $_;
eval $op;
die $@ if $@;
rename($was,$_) unless $was eq $_;
}
Then:
rename s/template/foo/ *.txt
A: for i in template_*.txt; do cp -v "$i" "`echo $i | sed 's%^template_%foo_%'`"; done
Probably breaks if your filenames have funky characters in them. Remove the '-v' when (if) you get confidence that it works reliably.
A: The command mmv (available in Debian or Fink or easy to compile yourself) was created precisely for this task. With the plain Bash solution, I always have to look up the documentation about variable expansion. But mmv is much simpler to use, quite close to "awesomeness"! ;-)
Your example would be:
mcp "template_*.txt" "foo_#1.txt"
mmv can handle more complex patterns as well and it has some sanity checks, for example, it will make sure none of the files in the destination set appear in the source set (so you can't accidentally overwrite files).
A: Yet another way to do it:
$ ls template_*.txt | sed -e 's/^template\(.*\)$/cp template\1 foo\1/' | ksh -sx
I've always been impressed with the ImageMagick convert program that does what you expect with image formats:
$ convert rose.jpg rose.png
It has a sister program that allows batch conversions:
$ mogrify -format png *.jpg
Obviously these are limited to image conversions, but they have interesting command line interfaces.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Hibernate saveOrUpdate with another object in the session Is there any way to save an object using Hibernate if there is already an object using that identifier loaded into the session?
*
*Doing session.contains(obj) seems to only return true if the session contains that exact object, not another object with the same ID.
*Using merge(obj) throws an exception if the object is new
A: Have you tried calling .SaveOrUpdateCopy()?
It should work in all instances, whether there is an entity with the same id in the session or no entity at all. This is basically the catch-all method, as it converts a transient object into a persistent one (Save), updates the object if it already exists (Update), or even handles the case where the entity is a copy of an already existing object (Copy).
Failing that, you may have to identify and .Evict() the existing object before Attaching (.Update()) your "new" object.
This should be easy enough to do:
IPersistable entity = Whatever(); // This is the object we're trying to update
// (IPersistable has an id field)
session.Evict(session.Get(entity.GetType(), entity.Id));
session.SaveOrUpdate(entity);
Although the above code could probably do with some null checking for the .Get() call.
A: How about:
session.replicate(entity, ReplicationMode.OVERWRITE);
?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Visual Studio 2005 Shortcuts I'm trying to bind the following shortcut: Ctrl + W to close tabs
How can you customize VS to add/change shortcuts? Also, what are the most useful shortcuts you guys have found?
A: Tools > Options > (Show all settings), then Environment > Keyboard.
Here, rebind the key “File.Close” to Ctrl+W.
A: I keep a link to Jeff's shortcuts page, and refer to it to learn the shortcuts for all tasks I find myself regularly doing. I also use VisualAssist, and use a lot of:
*
*toggling between .h and .cpp files (yes, I code in C++ :) ) (Alt-o);
*going to the definition of something (Alt-g).
A: VS 2005/2008 Keybinding posters:
*
*Visual C# 2008 Keybinding Reference
Poster
*Visual C# 2005 Keyboard
Shortcut Reference Poster
*Visual Basic 2008 Keybinding
Reference Poster
*Visual Basic
2005 Keyboard Shortcut Reference
Poster
These don't cover customizations but they're good reference materials and definitely helpful for finding new shortcuts.
A: Tools > Options > Environment > Keyboard
I find most of them useful TBH!
Commenting, Bookmarking, Incremental Search, etc etc.
The one you want to override by the way is Window.CloseDocumentWindow which defaults to CTRL+F4
A: Ctrl-Shift-Space shows the syntax/overloads for the current function you are typing parameters for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Does Design By Contract Work For You? Do you use Design by Contract professionally? Is it something you have to do from the beginning of a project, or can you change gears and start to incorporate it into your software development lifecycle? What have you found to be the pros/cons of the design approach?
I came across the Design by Contract approach in a grad school course. In the academic setting, it seemed to be a pretty useful technique. But I don't currently use Design by Contract professionally, and I don't know any other developers that are using it. It would be good to hear about its actual usage from the SO crowd.
A: If you look into STL, Boost, MFC, ATL and many open source projects, you can see there are many ASSERTION statements, and they make the project progress more safely.
Design-By-Contract! It really works in real products.
A: Frank Krueger writes:
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue.
I have two responses to this:
*
*Null was just an example. For square(x), I'd want to test that the square root of the result is (approximately) the value of the parameter. For setters, I'd want to test that the value actually changed. For atomic operations, I'd want to check that all component operations succeeded or all failed (really one test for success and n tests for failure). For factory methods in weakly-typed languages, I want to check that the right kind of object is returned. The list goes on and on. Basically, anything that can be tested in one line of code is a very good candidate for a code contract in a prologue comment.
*I disagree that you shouldn't test things because they generate runtime exceptions. If anything, you should test things that might generate runtime exceptions. I like runtime exceptions because they make the system fail fast, which helps debugging. But the null in the example was a result value for some possible input. There's an argument to be made for never returning null, but if you're going to, you should test it. (A sketch of such one-line checks follows this list.)
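For instance, a couple of those one-line checks might look like this (a hedged sketch in C#; the methods are illustrative, and note that Debug.Assert only fires in debug builds):
using System;
using System.Diagnostics;

static class ContractChecks
{
    static double Square(double x)
    {
        double result = x * x;
        // postcondition: the square root of the result is (approximately) |x|
        Debug.Assert(Math.Abs(Math.Sqrt(result) - Math.Abs(x)) < 1e-9);
        return result;
    }

    static int currentValue;
    static void SetValue(int newValue)
    {
        currentValue = newValue;
        // postcondition: the setter really took effect
        Debug.Assert(currentValue == newValue);
    }
}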
A: It's absolutely foolish to not design by contract when doing anything in an SOA realm, and it's always helpful if you're working on any sort of modular work, where bits & pieces might be swapped out later on, especially if any black boxen are involved.
A: I can't recommend it highly enough. It's particularly nice if you have a suite that takes inline documentation contract specifications, like so:
// @returns null iff x = 0
public foo(int x) {
...
}
and turns them into generated unit tests, like so:
public void test_foo_returns_null_iff_x_equals_0() {
    assertNull(foo(0));
}
That way, you can actually see the tests you're running, but they're auto-generated. Generated tests shouldn't be checked into source control, by the way.
A: You really get to appreciate design by contract when you have an interface between two applications that have to talk to each other.
Without contracts this situation quickly becomes a game of blame tennis. The teams keep knocking accusations back and forth and huge amounts of time get wasted.
With a contract, the blame is clear.
Did the caller satisfy the preconditions? If not the client team need to fix it.
Given a valid request, did the receiver satisfy the post conditions? If not the server team need to fix that.
Did both parties adhere to the contract, but the result is unsatisfactory? The contract is insufficient and the issue needs to be escalated.
For this you don't need to have the contracts implemented in the form of assertions, you just need to make sure they are documented and agreed on by all parties.
A: In lieu of more expressive type systems, I would absolutely use design by contract on military grade projects.
For weakly typed languages or languages with dynamic scope (PHP, JavaScript), functional contracts are also very handy.
For everything else, I would toss it aside and rely upon beta testers and unit tests.
Gaius: A Null Pointer exception gets thrown for you automatically by the runtime, there is no benefit to testing that stuff in the function prologue. If you are more interested in documentation, then I would use annotations that can be used with static analyzers and the like (to make sure the code isn't breaking your annotations for example).
A stronger type system coupled with Design by Contract seems to be the way to go. Take a look at Spec# for an example:
The Spec# programming language. Spec# is an extension of the object-oriented language C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.
A: Both unit testing and Design by Contract are valuable test approaches in my experience.
I have tried using Design by Contract in a system automatic-testing framework, and my experience is that it gives a flexibility and possibilities not easily obtained by unit testing. For example, it's possible to run longer sequences and verify that the response times are within limits every time an action is executed.
Looking at the presentations at InfoQ, it appears that Design by Contract is a valuable addition to conventional unit tests in the integration phase of components. For example, it's possible to create a mock interface first and then use the component afterwards, or when a new version of a component is released.
I have not found a toolkit covering all my design requirements for design-by-contract testing on the .NET/Microsoft platform.
A: I find it telling that the Go programming language has no constructs that make design by contract possible. panic/defer/recover aren't exactly that, as defer and recover logic make it possible to ignore panic, IOW to ignore a broken contract. What's needed at the very least is some form of unrecoverable panic, which is really assert. Or, at best, direct language support for design-by-contract constructs (pre- and post-conditions, implementation and class invariants). But given the strong-headedness of the language purists at the helm of the Go ship, I give little chance of any of this being done.
One can implement assert-like behaviour by checking for a special assert error in the last deferred function in the panicking function and calling runtime.Breakpoint() to dump the stack during recovery. To be assert-like, that behaviour needs to be conditional. Of course, this approach falls apart when a new deferred function is added after the one doing the assert. Which will happen in a large project at exactly the wrong time, resulting in missed bugs.
My point is that assert is useful in so many ways that having to dance around it may be a headache.
A: I don't actually use Design by Contract, on a daily basis. I do, however know that it has been incorporated into the D language, as part of the language.
A: Yes, it does! Actually, a few years ago, I designed a little framework for argument validation. I was doing an SOA project, in which the different back-end systems did all kinds of validation and checking. But to increase response times (in cases where the input was invalid, and to reduce the load on those back-end systems), we started to validate the input parameters of the provided services. Not only for not-null, but also for string patterns, or values from within sets, and also the cases where parameters had dependencies between them.
Now I realize we implemented at that time a small design by contract framework :)
Here is the link for those who are interested in the small Java argument-validation framework, which is implemented as a plain Java solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: IDE for Swing applications development Is there any IDE that simplifies creating Swing applications (ideally something along the lines of Visual Studio)
A: Netbeans has some GUI-building support, and it's one of the most popular Java IDEs on the market. Give it a look.
A: Try Instantiations' Windows Builder Pro. It includes Swing Designer, which is a Swing UI builder. It is based on Eclipse.
A: Abeille is very good and is based on the JGoodies FormLayout. Unlike almost every other Java GUI builder, Abeille does not generate code by default. In the project I used it on, it was wonderful to avoid reading or scrolling through the layout code (because that code no longer existed). Most of our hand-written code concerned itself with connecting events to actions, simply asking the layout for the relevant controls.
It's a crime that code generation is the default way to layout code in Java because better ways of doing GUIs have been around for decades. I have used Matisse, the NetBeans GUI code generator. While Matisse (now known as "Swing GUI Builder") makes it pleasant to layout components, it is similar to all other code generation tools because when you use Matisse you must live in constant fear that someone else edited the "you cannot edit this in NetBeans" GUI sections outside of NetBeans. As soon as you touch the layout builder again it could destroy their work and then you have a broken GUI. There might be some simple task like re-ordering a variable initialization and its use or re-naming a variable (this was especially a problem when using Matisse's database feature). You know how to do this by editing the un-editable source code but may waste time trying to figure out how to do the same thing in the GUI builder. Like most code generation tools, it might get you started, but eventually you will have to maintain the generated code yourself.
A: WindowBuilder Pro for Eclipse
*
*Free!
*It works with existing code and doesn't lock you in (as opposed to NetBeans)
*It works with MiGLayout
*It does have some conventions that your view classes have to follow, though.
Installing in Eclipse v4.2 (Juno):
*
*Goto - menu Help → Install New Software...
*Select - Work With: Juno - http://download.eclipse.org/releases/juno.
*The WindowBuilder items are under "General Purpose Tools" (or use the filter).
Older versions and zips are available at http://www.eclipse.org/windowbuilder/download.php.
A: The latest version of NetBeans includes a very nice and simple visual editor for Swing called Matisse.
A: I have a very good experience with NetBeans. It's so easy if you know every minor parts of these applications.
The most complicated part is using the layouts (if you cannot handle complicated parts), but everything else is almost plug & play.
In addition, you can put a JFrame into other frames without creating another frame class for it, which I think is good.
A: NetBeans is the simplest to use (http://netbeans.org/). However, it does not allow you to edit (fine tune) the generated code.
JDeveloper (http://www.oracle.com/technetwork/developer-tools/jdev/overview/index.html) is another solution and does allow you to edit the code... but I feel NetBeans is more intuitive.
A: I recommend WindowBuilder plugin for Eclipse IDE 3.7.2 Indigo / 24 February 2012.
Here's for the step-by-step installation: Create Java GUI as Easy as Visual Basic
A: Like others have mentioned, NetBeans' visual editor is pretty good, but it's based pretty heavily on the Swing Application Framework, so you'd need to get an understanding of how it works to properly use it (although you don't need to dig in to just test things).
Other than that there are also:
*
*the IntelliJ IDEA visual editor (flash demo of the features)
*and Eclipse's Visual Editor
Personally I've used NetBeans' and IDEA's visual editors. Both are nice, but I thought NetBeans had a leg up, because it doesn't use any proprietary way of saving the GUI structure and instead does something similar to what Visual Studio does - auto-generating the code that you can then add to. IDEA stores the information in a separate file which means you have to use IDEA to edit the layout visually later.
I have not used Eclipse's Visual Editor.
My vote is for NetBeans' visual editor. I think it satisfies what most people are looking for in a visual editor and leaves it flexible enough to plug the holes manually through code without affecting the visual editor (so you can switch back and forth between code and design views without breaking either).
A: I have switched between several IDEs and the one that I believe has the best GUI builder in terms of use and performance would have to be NetBeans.
A: Frankly, I've never seen an editor which comes even close to what I can do manually in a text editor. All the visual editors are nice if you only have very simple needs, like putting a few buttons in a window. When things become more complex, visual editors quickly lose their competitive edge.
I usually use a bunch of high-level classes built from more basic widgets and wire my UI from that. This also allows me to easily test my UI with automated JUnit tests (because I can control what the source looks like).
Lastly, changes to the UI won't generate unnecessary noise in the version control system.
A: I have tried a few and the closest I have found that comes close to Visual Studio is NetBeans. Version 6.5 is excellent and really improved over version 5.
A: Eclipse Visual Editor is pretty dull in my experience. I had more luck with JBuilder, which is also based on Eclipse, simply adding a few plugins to it as many other commercial products do. It is still not able to parse arbitrary Swing code (I doubt any Swing WYSIWYG editor can), but if you start with it, it gives you a relatively seamless experience.
You need to pay for it though.
At the end of the day, I have worked with different similar UI tools, Flash Builder, Delphi etc., but unless you do some relatively trivial UI design, not including much business logic and communication with other layers, you'll have to accept that what you are capable of creating in code once you learn to do it properly is much more powerful than what any editor is capable of providing you with.
A: I like the Swing GUI Builder from the NetBeans IDE.
A: I'm a big fan of JetBrains, and when it comes to Java, IntelliJ is the best IDE I have used.
For Swing, they have a fully interactive UI builder. And, for actual coding, their intellisense can't be beat.
A: Of course you should use NetBeans for building a Java Swing GUI. The drag and drop features and auto-code generation are quite mature.
For Eclipse, I am not sure. But because IBM has its own SWT package for GUIs, I am not sure whether it supports Swing.
A: JFormDesigner.
I used NetBeans extensively in the past for GUI design, but I am now using IntelliJ with the JFormDesigner plugin. I have tried several other solutions, and this is the one I am sticking with.
JFormDesigner also works with JBuilder and Eclipse, so you are not locking your projects to one particular IDE.
A: For me, the best visual Swing editor is JFormDesigner, which you can run standalone or as a plugin for IntelliJ IDEA and Eclipse.
It generates proper (actually readable) source code, it's very ergonomic and intuitive and, above all, very extensible. That last point is really important, because if you want to build a decent Swing application, you'll have to extend the base components or use some third-party libraries and it must be easy to integrate those in the visual editor.
It's not free, but it's a bargain for the power you get (129 EUR / 159 USD). I've been using it for a few years and love it.
A: There are two that you can use (I've used them both, and they are both very powerful, and easy to use):
*
*NetBeans which has a built in GUI Builder.
Or you can use:
*
*Eclipse with the Windowbuilder plugin
*
*(it can be downloaded here and here)
Personally, I prefer Eclipse with Windowbuilder, but that's just me. You can use either one.
A: I used to use MyEclipse quite a bit. It had a decent IDE for making Swing forms and such. I assume it has improved in the past year - they seem to add features in gobs and heaps, quite often.
http://www.myeclipseide.com/
A: As I'm using Eclipse, I use the Visual Editor plugin. It generates clean source code, with good patterns and easy to patch/modify/extend.
Unfortunately, it is not very stable. But it's worth trying.
A: I like Eclipse's Visual Editor (VE), and some time ago I tried to switch to another editor, but I found it impossible. Visual Editor generates manageable, readable, editable, and easy-to-understand code.
Unlike both the NetBeans editor and WindowBuilder mentioned earlier, it uses the lazy initialization pattern to separate the initialization of components. Also, it does not need to lock down parts of code that you can't edit; you may edit code by hand, and VE is still able to work with your changes.
The only disadvantage of VE is that it uses Eclipse v3.2 (Callisto) (there is no official build for Eclipse v3.4 (Ganymede), or Eclipse v3.3 (Europa)), so effectively you have to use two Eclipses instances, one for VE and one for the rest of the development.
I took it from recent discussion on comp.lang.java.gui (I was the author of this post, so I could do it rightfully). Here is the link to the whole discussion.
A: We have been doing Swing development for nearly the past 10 years. There are some nice GUI builders available (e.g. JFormDesigner), but all restrict us too much in different kinds.
For example, we have a lot of components without public no-arg constructor (e.g. a JTable subclass which requires the model in the constructor) or we have component factories.
Desktop applications usually have to be obfuscated. Obfuscation very easily breaks user interfaces created with a GUI designer or requires much work to avoid obfuscating such classes.
Another common case is that, for example, a panel should only contain some components depending on some condition. Simply hiding them would make the GUI look bad; they should rather not be added at all. I never found a GUI editor which provides this flexibility, and even if there were one, it would be so hard to use that I would definitely be faster with good old Java code.
A: I think the best editor that can exist is the Visual Editor for Eclipse.
The only drawback is that we can't re-edit the visual part once we have modified the source code. I hope one day we will have a tool that rivals Visual Studio in this respect.
A: I have not used anything other than NetBeans for Swing, but I have been extremely happy with it. I used it for 18 months on a $25M application and to develop a prototype application to replace a Windows Forms app.
Up until Microsoft came out with WPF, in my opinion, there was not a better toolkit for traditional desktop applications. (I always found Windows Forms too limiting.)
A: Use NetBeans; I have also successfully developed an application using it.
It is really awesome; it helps you while writing the code.
Since Swing generates some code on its own, it is really helpful to use NetBeans.
Go through it, and you can always ask questions about problems.
It will be good if you go for the latest version release.
A: I personally suggest the NetBeans Swing builder. Yet, if you want total control and to gain an in-depth understanding of the Swing framework, I have noticed that doing it freehand is the ultimate choice.
A: I have always coded my UIs by hand. The frustration of dealing with screen builders and filling out all those property sheets is too much for me. After a couple of screens and a little research I am just as productive.
A: Window Builder Pro is a good option, and it is also free.
A: As others have mentioned, my best experience with Java Swing applications is with NetBeans. NetBeans has a WYSIWYG editor, and the code is automatically generated for you and then protected; however, you can add custom code for listeners and other events, and for widgets the end user may be interested in using, such as buttons, text forms and areas, and other nice GUI tools.
A: When I use Swing, I usually use NetBeans.
If you prefer the Eclipse IDE, there is the Visual Editor plugin.
Q: Inconsistency between MS SQL 2k and 2k5 with columns as function arguments I'm having trouble getting the following to work in SQL Server 2k, but it works in 2k5:
--works in 2k5, not in 2k
create view foo as
SELECT usertable.legacyCSVVarcharCol as testvar
FROM usertable
WHERE rsrcID in
( select val
from
dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, default)
)
--error message:
Msg 170, Level 15, State 1, Procedure foo, Line 4
Line 25: Incorrect syntax near '.'.
So, legacyCSVVarcharCol is a column containing comma-separated lists of INTs. I realize that this is a huge WTF, but this is legacy code, and there's nothing that can be done about the schema right now. Passing "testvar" as the argument to the function doesn't work in 2k either. In fact, it results in a slightly different (and even weirder) error:
Msg 155, Level 15, State 1, Line 8
'testvar' is not a recognized OPTIMIZER LOCK HINTS option.
Passing a hard-coded string as the argument to fnSplitStringToInt works in both 2k and 2k5.
Does anyone know why this doesn't work in 2k? Is this perhaps a known bug in the query planner? Any suggestions for how to make it work? Again, I realize that the real answer is "don't store CSV lists in your DB!", but alas, that's beyond my control.
Some sample data, if it helps:
INSERT INTO usertable (legacyCSVVarcharCol) values ('1,2,3');
INSERT INTO usertable (legacyCSVVarcharCol) values ('11,13,42');
Note that the data in the table does not seem to matter since this is a syntax error, and it occurs even if usertable is completely empty.
EDIT: Realizing that perhaps the initial example was unclear, here are two examples, one of which works and one of which does not, which should highlight the problem that's occurring:
--fails in sql2000, works in 2005
SELECT t1.*
FROM usertable t1
WHERE 1 in
(Select val
from
fnSplitStringToInt(t1.legacyCSVVarcharCol, ',')
)
--works everywhere:
SELECT t1.*
FROM usertable t1
WHERE 1 in
( Select val
from
fnSplitStringToInt('1,4,543,56578', ',')
)
Note that the only difference is the first argument to fnSplitStringToInt is a column in the case that fails in 2k and a literal string in the case that succeeds in both.
A: Passing column values to a table-valued user-defined function is not supported in SQL Server 2000; you can only use constants, so the following (simpler version) would also fail:
SELECT *, (SELECT TOP 1 val FROM dbo.fnSplitStringToInt(usertable.legacyCSVVarcharCol, ','))
FROM usertable
It will work on SQL Server 2005, though, as you have found out.
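If you are stuck on SQL Server 2000, one common workaround (a sketch only, reusing the table and column names from the question) is to skip the function call entirely and test list membership with string matching; wrapping both sides in commas guards against partial matches such as '1' matching inside '11':
-- 2000-compatible: no column is passed to a table-valued function
SELECT usertable.legacyCSVVarcharCol AS testvar
FROM usertable
WHERE ',' + usertable.legacyCSVVarcharCol + ','
      LIKE '%,' + CAST(usertable.rsrcID AS varchar(12)) + ',%'
Note that this cannot use an index on the CSV column, but neither could the original function-based filter.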
A: I don't think function parameters can have default values in SS2K.
What happens when you run this SQL in SS2K?
select val
from dbo.fnSplitStringToInt('1,2,3', default)
Q: What workshops / user groups / conventions do you attend? I haven't been to enough of these "live" events to really determine which, if any, are worth the time / money. Which ones do you attend and why?
A: For conventions, if you're still in university, and can make it to Montreal, Canada, the Canadian Undergraduate Software Engineering Conference (CUSEC) has been extremely enjoyable. See the 2009 site for the next event, and for a take on what previous years have been like, take a look at the 2008 speakers (note: it included Jeff Atwood).
I attend CUSEC primarily because our software engineering society on campus makes a point of organizing a trip to it, but also because of the speakers that present there, and the career fair.
A: I used to belong to my local Linux User Group, which I co-founded, but I treated it more as a social event than anything else. Obviously, though, a social event full of geeks is still a great way to get a good debate going :)
I've not got much out of conventions and the like, other than being pestered by businesses who can offer me nothing. The exceptions are a bunch of Linux and hacker events where I've met loads of people whom I consider friends offline; again, great for the social aspect but pretty worthless to me in other respects.
That's not to say I never got any business out of attending various events; it's just that treating them as social occasions meant any business that did come my way was a bonus, so I never left an event feeling like it was a waste of time.
Q: Can I add an event listener to a databinding action in Flex? I have a ComboBox that I bind to a standard HTTPService, I would like to add an event listener so that I can run some code after the ComboBox is populated from the data provider.
How can I do this?
A: Flex doesn't have specific data-binding events in the way that, say, ASP.NET does. You have to watch the dataProvider property like John says in the first answer, but not simply on the combobox or its dataProvider property. Let's say you have a setup like this:
<!-- Assume you have extracted an XMLList out of the result
and attached it to the collection -->
<mx:HttpService id="svc" result="col.source = event.result.Project"/>
<mx:XMLListCollection id="col"/>
<mx:ComboBox id="cbProject" dataProvider="{col}"/>
Now if you set a changewatcher like this:
// Strategy 1
ChangeWatcher.watch(cbProject, "dataProvider", handler) ;
your handler will not get triggered when the data comes back. Why? Because the dataProvider itself didn't change - its underlying collection did. To trigger that, you have to do this:
// Strategy 2
ChangeWatcher.watch(cbProject, ["dataProvider", "source"], handler) ;
Now, when your collection has updated, your handler will get triggered. If you want to make it work using Strategy 1, don't set your dataProvider in MXML. Rather, handle the collectionChange event of your XMLListCollection and, in ActionScript, overwrite the dataProvider of the ComboBox.
Are these exactly the same as a databound event? No, but I've used them and never had an issue. If you want to be absolutely sure your data has bound, just put a changeWatcher on the selectedItem property of your combobox and do your processing there. Just be prepared to have that event trigger multiple times and handle that appropriately.
A: You can use a mx.binding.utils.ChangeWatcher as described here.
A: You can use BindingUtils to get notified when the dataProvider property of the combo box changes:
BindingUtils.bindSetter(comboBoxDataProviderChanged, comboBox, "dataProvider");
BindingUtils lives in the mx.binding.utils package.
I have a longer description of how to work with BindingUtils here: Does painless programmatic data binding exist?
A: You can also listen for the ResultEvent.RESULT on the HTTPService, that would be called slightly before the combo box got populated I guess, but it might be good enough.
A: Where are you adding the listener compared to the loading of the data? Is it possible the data is being loaded, and the event fired, before you've added your listener?
A: @Herms
The listener is definitely added before the web service call; here is an example of what my code looks like (I simplified lots of things...):
I have this flex component:
public class FooComboBox extends ComboBox
{
private var service:HTTPService = null;
public function FooComboBox()
{
service = new HTTPService();
service.url = Application.application.poxmlUrl;
service.addEventListener(FaultEvent.FAULT,serviceFault);
service.addEventListener(ResultEvent.RESULT,resultReturned);
this.addEventListener(FlexEvent.DATA_CHANGE,dataChange);
}
public function init():void
{
var postdata:Object = {};
postdata["key"] = "ProjectName";
postdata["accountId"] = Application.application.accountId
service.send(postdata);
}
private function resultReturned(event:ResultEvent):void
{
this.dataProvider = service.lastResult.Array.Element;
// thought I could do it here...but no luck...
}
private function dataChange(e:FlexEvent):void
{
// combobox has been databound
mx.controls.Alert.show("databound!");
}
...
}
and then in a mxml file I have the FooComboBox with id "foo" and I call:
foo.init();
I need to execute some code after the combobox is completely databound...any ideas?
A: Maybe the event doesn't trigger when the data provider is first set? Try setting the data provider to an empty array in the constructor, so that it's definitely changing instead of just being initially assigned later in your resultReturned() method. I've no clue if that will help, but it's worth a shot.
Also, you're setting the provider to lastResult.Array.Element. That looks a little suspicious to me, as the data provider should probably be an array. Granted, I have no clue what your data looks like, so what you have could very well be correct, but it's something I noticed that might be related. Maybe it should just be lastResult.Array?
A: In your example code, try running validateNow() in the resultReturned method. That will force the combo box to commit its properties. The thing is that even though the property is set the new value isn't used until commitProperties is run, which it will do at the earliest on the next frame, validateNow() forces it to be done at once.
Q: Using MySQLi - which is better for closing queries I have a habit of keeping my variable usage to a bare minimum. So I'm wondering if there is any advantage to be gained by the following:
$query = $mysqli->query('SELECT * FROM `people` ORDER BY `name` ASC LIMIT 0,30');
// Example 1
$query = $query->fetch_assoc();
// Example 2
$query_r = $query->fetch_assoc();
$query->free();
Now, if I'm right, Example 1 should be more efficient, as $query is unset when I reassign it, which should free any memory associated with it. However, there is a method (MySQLi_Result::free()) which frees associated memory - is this the same thing?
If I don't call ::free() to free any memory associated with the result but unset it by reassigning the variable am I doing the same thing? I don't know how to log this sort of thing - does anyone have some ideas?
A: The manual seems to suggest that you should still be using free() to release the memory. I believe the reasoning is that free() is freeing the memory in MySQL, not in PHP. Since PHP can't garbage-collect for MySQL, you need to call free().
A: Example 1 dissociates the $query variable from the MySQL result. The MySQL result still exists in memory, and will continue to exist and waste memory until garbage collection occurs.
Example 2 frees the MySQL result immediately, releasing the used resources.
However, since PHP pages are generally short-lived with small result-sets, the memory saved is trivial. You will not notice a slowdown unless you leave a ton of results in memory over a long period of time on pages that run for a long time.
Brian,
PHP can garbage-collect the MySQL result; it just doesn't happen immediately.
The result lives in PHP's memory pool, not in the MySQL server's.
(the locality of memory when using unbuffered queries is slightly different, but they're so rarely used in PHP as to not be worth mentioning)
Q: .NET Multi Dimensional Array Printing Let's say I have an n-dimensional .NET Array. I would like to foreach through the elements and print out something like:
[0, 0, 0] = 2
[0, 0, 1] = 32
And so on. I could write a loop using some of the Rank and dimension functions to come up with the indices. Is there a built-in function instead?
A: Thanks for the answer, here is what I wrote while I waited:
public static string Format(Array array)
{
var builder = new StringBuilder();
builder.AppendLine("Count: " + array.Length);
var counter = 0;
var dimensions = new List<int>();
for (int i = 0; i < array.Rank; i++)
{
dimensions.Add(array.GetUpperBound(i) + 1);
}
foreach (var current in array)
{
var index = "";
var remainder = counter;
// Walk the dimensions from last to first, since the last
// dimension varies fastest in the enumeration order.
for (int i = dimensions.Count - 1; i >= 0; i--)
{
index = remainder % dimensions[i] + ", " + index;
remainder = remainder / dimensions[i];
}
index = index.Substring(0, index.Length - 2);
builder.AppendLine(" [" + index + "] " + current);
counter++;
}
return builder.ToString();
}
A: Take a look at this; it might be helpful for you.
Q: ActiveX Control JavaScript My coworker and I have encountered a nasty situation where we have to use an ActiveX control to manipulate a web camera on a page.
Is it possible to assign a JavaScript event handler to a button in the ActiveX control so that it would fire an action on the page when clicked, or do we have to create a button on the HTML page itself that manipulates the ActiveX control and then can fire any necessary actions on the page?
A: Please just use an existing ActiveX control. Like Flash or Silverlight. Flash has built-in webcam support and is controllable via JavaScript. Silverlight doesn't have built-in camera support, but its JavaScript integration is fantastic.
If you must write your own then fret not, it is trivial to get it to interact with JavaScript. You just have to expose the IDispatch interface.
For events, you need to learn about Connection Points.
A: Yes! You can throw events in C++/ActiveX land which makes the JavaScript code run an event handler function. I was even able to make an entire invisible ActiveX control (same color as page background) with no buttons or visual feedback that did all of its GUI work through JavaScript and CSS.
edit: Frank's advice is right on. Here's the link on scripting events.
My strategy was to call a C++ function called MyUpdate (which implements IConnectionPoint) when I wanted to force updates in the browser.
(Also, I made sure to pump Windows messages in the Fire_MyUpdate method because sometimes JavaScript code would call back into C++ land by calling methods on the ActiveX control; this avoids freezing up the browser and ensures that the JavaScript GUI stays responsive, e.g. for a Cancel button.)
On the browser side, the JavaScript code has the global variable referencing the object, followed by "::", followed by the method name:
function Uploader::MyUpdate()
{
// ... code to fetch the current state of various
// properties from the Uploader object and do something with it
// for example check Uploader.IsActive and show or hide an HTML div
}
Q: How to detect which blog API Let's say that you want to create a dead simple BlogEditor and, one of your ideas, is to do what Live Writer does and ask only for the URL of the person's blog. How can you detect what type of blog it is?
Basic detection can be done with the URL itself, such as “http://myblog.blogger.com” etc. But what if it's self hosted?
I'm mostly interested on how to do this in Java, but this question could be also used as a reference for any other language.
A: Many (most?) blogs will have a meta tag for "generator" which will list the blog engine. For example a blogger blog will contain the following meta tag:
<meta name="generator" content="Blogger" />
My Subtext blog shows the following generator meta tag:
<meta name="Generator" content="Subtext Version 1.9.5.177" />
This meta tag would be the first place to look. For blogs that don't set this meta tag in the source, you'd have to resort to looking for patterns to determine the blog type.
A: Some blogs provide a Generator meta tag - e.g. Wordpress - you could find out if there are any exceptions to this.
You'll have to be careful how you detect it though, Google surprised me with this line:
<meta content='blogger' name='generator'/>
Single quotes are blasphemy.
A: To determine other patterns to look for in determining the blogging engine (for those that don't have a generator meta tag), you'd basically just look through the source to determine something specific to that blog type. You'd also need to compare this across multiple blogs of that type as you want to make sure that it's not something specific to the skin or theme in use on the blog only.
Another thought would be to read the docs of the various common blogging engines to learn how to discover the locations of their paths to things like the MetaWebLog API, etc. IIRC, Live Writer has built-in support for the most common types; the rest are categorized as "MetaWebLog API Blog" or something.
Q: How can I pass arguments to a batch file? I need to pass an ID and a password to a batch file at the time of running rather than hardcoding them into the file.
Here's what the command line looks like:
test.cmd admin P@55w0rd > test-log.txt
A:
To loop over all the arguments, in pure batch:
Note: this works for arguments that do not contain: ? * & | < >
@echo off && setlocal EnableDelayedExpansion
for %%Z in (%*)do set "_arg_=%%Z" && set/a "_cnt+=1+0" && (
call set "_arg_[!_cnt!]=!_arg_!" && for /l %%l in (!_cnt! 1 !_cnt!
)do echo/ The argument n:%%l is: !_arg_[%%l]!
)
goto :eof
Your code is then ready to do something with the numbered arguments wherever it needs to, like...
@echo off && setlocal EnableDelayedExpansion
for %%Z in (%*)do set "_arg_=%%Z" && set/a "_cnt+=1+0" && call set "_arg_[!_cnt!]=!_arg_!"
fake-command /u !_arg_[1]! /p !_arg_[2]! > test-log.txt
A: Yep, and just don't forget to use variables like %%1 when using if and for and the gang.
If you forget the double %, then you will be substituting in (possibly null) command line arguments and you will receive some pretty confusing error messages.
A: A friend was asking me about this subject recently, so I thought I'd post how I handle command-line arguments in batch files.
This technique has a bit of overhead as you'll see, but it makes my batch files very easy to understand and quick to implement. As well as supporting the following structures:
>template.bat [-f] [--flag] [--namedvalue value] arg1 [arg2][arg3][...]
The gist of it is having the :init, :parse, and :main functions.
Example usage
>template.bat /?
test v1.23
This is a sample batch file template,
providing command-line arguments and flags.
USAGE:
test.bat [flags] "required argument" "optional argument"
/?, --help shows this help
/v, --version shows the version
/e, --verbose shows detailed output
-f, --flag value specifies a named parameter value
>template.bat <- throws missing argument error
(same as /?, plus..)
**** ****
**** MISSING "REQUIRED ARGUMENT" ****
**** ****
>template.bat -v
1.23
>template.bat --version
test v1.23
This is a sample batch file template,
providing command-line arguments and flags.
>template.bat -e arg1
**** DEBUG IS ON
UnNamedArgument: "arg1"
UnNamedOptionalArg: not provided
NamedFlag: not provided
>template.bat --flag "my flag" arg1 arg2
UnNamedArgument: "arg1"
UnNamedOptionalArg: "arg2"
NamedFlag: "my flag"
>template.bat --verbose "argument #1" --flag "my flag" second
**** DEBUG IS ON
UnNamedArgument: "argument #1"
UnNamedOptionalArg: "second"
NamedFlag: "my flag"
template.bat
@::!/dos/rocks
@echo off
goto :init
:header
echo %__NAME% v%__VERSION%
echo This is a sample batch file template,
echo providing command-line arguments and flags.
echo.
goto :eof
:usage
echo USAGE:
echo %__BAT_NAME% [flags] "required argument" "optional argument"
echo.
echo. /?, --help shows this help
echo. /v, --version shows the version
echo. /e, --verbose shows detailed output
echo. -f, --flag value specifies a named parameter value
goto :eof
:version
if "%~1"=="full" call :header & goto :eof
echo %__VERSION%
goto :eof
:missing_argument
call :header
call :usage
echo.
echo **** ****
echo **** MISSING "REQUIRED ARGUMENT" ****
echo **** ****
echo.
goto :eof
:init
set "__NAME=%~n0"
set "__VERSION=1.23"
set "__YEAR=2017"
set "__BAT_FILE=%~0"
set "__BAT_PATH=%~dp0"
set "__BAT_NAME=%~nx0"
set "OptHelp="
set "OptVersion="
set "OptVerbose="
set "UnNamedArgument="
set "UnNamedOptionalArg="
set "NamedFlag="
:parse
if "%~1"=="" goto :validate
if /i "%~1"=="/?" call :header & call :usage "%~2" & goto :end
if /i "%~1"=="-?" call :header & call :usage "%~2" & goto :end
if /i "%~1"=="--help" call :header & call :usage "%~2" & goto :end
if /i "%~1"=="/v" call :version & goto :end
if /i "%~1"=="-v" call :version & goto :end
if /i "%~1"=="--version" call :version full & goto :end
if /i "%~1"=="/e" set "OptVerbose=yes" & shift & goto :parse
if /i "%~1"=="-e" set "OptVerbose=yes" & shift & goto :parse
if /i "%~1"=="--verbose" set "OptVerbose=yes" & shift & goto :parse
if /i "%~1"=="--flag" set "NamedFlag=%~2" & shift & shift & goto :parse
if /i "%~1"=="-f" set "NamedFlag=%~2" & shift & shift & goto :parse
if not defined UnNamedArgument set "UnNamedArgument=%~1" & shift & goto :parse
if not defined UnNamedOptionalArg set "UnNamedOptionalArg=%~1" & shift & goto :parse
shift
goto :parse
:validate
if not defined UnNamedArgument call :missing_argument & goto :end
:main
if defined OptVerbose (
echo **** DEBUG IS ON
)
echo UnNamedArgument: "%UnNamedArgument%"
if defined UnNamedOptionalArg echo UnNamedOptionalArg: "%UnNamedOptionalArg%"
if not defined UnNamedOptionalArg echo UnNamedOptionalArg: not provided
if defined NamedFlag echo NamedFlag: "%NamedFlag%"
if not defined NamedFlag echo NamedFlag: not provided
:end
call :cleanup
exit /B
:cleanup
REM The cleanup function is only really necessary if you
REM are _not_ using SETLOCAL.
set "__NAME="
set "__VERSION="
set "__YEAR="
set "__BAT_FILE="
set "__BAT_PATH="
set "__BAT_NAME="
set "OptHelp="
set "OptVersion="
set "OptVerbose="
set "UnNamedArgument="
set "UnNamedArgument2="
set "NamedFlag="
goto :eof
A: There is no need to complicate it. You simply use command %1 %2 parameters, for example,
@echo off
xcopy %1 %2 /D /E /C /Q /H /R /K /Y /Z
echo copied %1 to %2
pause
The "pause" displays what the batch file has done and waits for you to hit the ANY key. Save that as xx.bat in the Windows folder.
To use it, type, for example:
xx c:\f\30\*.* f:\sites\30
This batch file takes care of all the necessary parameters, like copying only files, that are newer, etc. I have used it since before Windows. If you like seeing the names of the files, as they are being copied, leave out the Q parameter.
A: Simple solution (even though the question is old)
Test1.bat
echo off
echo "Batch started"
set arg1=%1
echo "arg1 is %arg1%"
echo on
pause
CallTest1.bat
call "C:\Temp\Test1.bat" pass123
output
YourLocalPath>call "C:\Temp\test.bat" pass123
YourLocalPath>echo off
"Batch started"
"arg1 is pass123"
YourLocalPath>pause
Press any key to continue . . .
Where YourLocalPath is the current directory path.
To keep things simple, store the command param in a variable and use the variable for comparison.
It's not just simple to write, but also simple to maintain, so if some other person (or you) reads your script after a long period of time, it will be easy to understand and maintain.
To write code inline : see other answers.
A: In batch file
set argument1=%1
set argument2=%2
echo %argument1%
echo %argument2%
%1 and %2 return the first and second argument values respectively.
And in command line, pass the argument
Directory> batchFileName admin P@55w0rd
Output will be
admin
P@55w0rd
A: Here's how I did it:
@fake-command /u %1 /p %2
Here's what the command looks like:
test.cmd admin P@55w0rd > test-log.txt
The %1 applies to the first parameter, and the %2 (and here's the tricky part) applies to the second. You can have up to 9 parameters passed in this way.
A: @ECHO OFF
:Loop
IF "%1"=="" GOTO Continue
SHIFT
GOTO Loop
:Continue
Note: IF "%1"=="" will cause problems if %1 is enclosed in quotes itself.
In that case, use IF [%1]==[] or, in NT 4 (SP6) and later only, IF "%~1"=="" instead.
A: Everyone has answered with really complex responses, however it is actually really simple. %1 %2 %3 and so on are the arguments passed to the file. %1 is argument 1, %2 is argument 2 and so on.
So, if I have a bat script containing this:
@echo off
echo %1
and when I run the batch script, I type in this:
C:> script.bat Hello
The script will simply output this:
Hello
This can be very useful for certain variables in a script, such as a name and age. So, if I have a script like this:
@echo off
echo Your name is: %1
echo Your age is: %2
When I type in this:
C:> script.bat Oliver 1000
I get the output of this:
Your name is: Oliver
Your age is: 1000
A: Make a new batch file (example: openclass.bat) and write this line in the file:
java %~n1
Then place the batch file in, let's say, the system32 folder, go to your Java class file, right click, Properties, Open with..., then find your batch file, select it and that's that...
It works for me.
PS: I can't find a way to close the cmd window when I close the Java class. For now...
A: Let's keep this simple.
Here is the .cmd file.
@echo off
rem this file is named echo_3params.cmd
echo %1
echo %2
echo %3
set v1=%1
set v2=%2
set v3=%3
echo v1 equals %v1%
echo v2 equals %v2%
echo v3 equals %v3%
Here are 3 calls from the command line.
C:\Users\joeco>echo_3params 1abc 2 def 3 ghi
1abc
2
def
v1 equals 1abc
v2 equals 2
v3 equals def
C:\Users\joeco>echo_3params 1abc "2 def" "3 ghi"
1abc
"2 def"
"3 ghi"
v1 equals 1abc
v2 equals "2 def"
v3 equals "3 ghi"
C:\Users\joeco>echo_3params 1abc '2 def' "3 ghi"
1abc
'2
def'
v1 equals 1abc
v2 equals '2
v3 equals def'
C:\Users\joeco>
A: FOR %%A IN (%*) DO (
REM Now your batch file handles %%A instead of %1
REM No need to use SHIFT anymore.
ECHO %%A
)
This loops over the batch parameters (%*) whether they are quoted or not, then echoes each parameter.
A: I wrote a simple read_params script that can be called as a function (or external .bat) and will put all variables into the current environment. It won't modify the original parameters because the function is being called with a copy of the original parameters.
For example, given the following command:
myscript.bat some -random=43 extra -greeting="hello world" fluff
myscript.bat would be able to use the variables after calling the function:
call :read_params %*
echo %random%
echo %greeting%
Here's the function:
:read_params
if not %1/==/ (
if not "%__var%"=="" (
if not "%__var:~0,1%"=="-" (
endlocal
goto read_params
)
endlocal & set %__var:~1%=%~1
) else (
setlocal & set __var=%~1
)
shift
goto read_params
)
exit /B
Limitations
*
*Cannot load arguments with no value such as -force. You could use -force=true but I can't think of a way to allow blank values without knowing a list of parameters ahead of time that won't have a value.
Changelog
*
*2/18/2016
*
*No longer requires delayed expansion
*Now works with other command line arguments by looking for - before parameters.
A: Paired arguments
If you prefer passing the arguments as key-value pairs you can use something like this:
@echo off
setlocal enableDelayedExpansion
::::: assigning arguments as key-value pairs :::::::::::::
set counter=0
for %%# in (%*) do (
set /a counter=counter+1
set /a even=counter%%2
if !even! == 0 (
echo setting !prev! to %%#
set "!prev!=%%~#"
)
set "prev=%%~#"
)
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
:: showing the assignments
echo %one% %two% %three% %four% %five%
endlocal
And an example :
c:>argumentsDemo.bat one 1 "two" 2 three 3 four 4 "five" 5
1 2 3 4 5
Predefined variables
You can also set some environment variables in advance. It can be done by setting them in the console or via My Computer (system environment variables):
@echo off
if defined variable1 (
echo %variable1%
)
if defined variable2 (
echo %variable2%
)
and calling it like:
c:\>set variable1=1
c:\>set variable2=2
c:\>argumentsTest.bat
1
2
File with listed values
You can also point to a file where the needed values are preset.
If this is the script:
@echo off
setlocal
::::::::::
set "VALUES_FILE=E:\scripts\values.txt"
:::::::::::
for /f "usebackq eol=: tokens=* delims=" %%# in ("%VALUES_FILE%") do set "%%#"
echo %key1% %key2% %some_other_key%
endlocal
and values file is this:
:::: use EOL=: in the FOR loop to use it as a comment
key1=value1
key2=value2
:::: do not leave spaces around the =
:::: or at the beginning of the line
some_other_key=something else
and_one_more=more
the output of calling it will be:
value1 value2 something else
Of course you can combine all approaches. Check also arguments syntax , shift
A: If you want to intelligently handle missing parameters you can do something like:
IF %1.==. GOTO No1
IF %2.==. GOTO No2
... do stuff...
GOTO End1
:No1
ECHO No param 1
GOTO End1
:No2
ECHO No param 2
GOTO End1
:End1
A: Inspired by an answer elsewhere by @Jon, I have crafted a more general algorithm for extracting named parameters, optional values, and switches.
Let us say that we want to implement a utility foobar. It requires an initial command. It has an optional parameter --foo which takes an optional value (which cannot be another parameter, of course); if the value is missing it defaults to default. It also has an optional parameter --bar which takes a required value. Lastly it can take a flag --baz with no value allowed. Oh, and these parameters can come in any order.
In other words, it looks like this:
foobar <command> [--foo [<fooval>]] [--bar <barval>] [--baz]
Here is a solution:
@ECHO OFF
SETLOCAL
REM FooBar parameter demo
REM By Garret Wilson
SET CMD=%~1
IF "%CMD%" == "" (
GOTO usage
)
SET FOO=
SET DEFAULT_FOO=default
SET BAR=
SET BAZ=
SHIFT
:args
SET PARAM=%~1
SET ARG=%~2
IF "%PARAM%" == "--foo" (
SHIFT
IF NOT "%ARG%" == "" (
IF NOT "%ARG:~0,2%" == "--" (
SET FOO=%ARG%
SHIFT
) ELSE (
SET FOO=%DEFAULT_FOO%
)
) ELSE (
SET FOO=%DEFAULT_FOO%
)
) ELSE IF "%PARAM%" == "--bar" (
SHIFT
IF NOT "%ARG%" == "" (
SET BAR=%ARG%
SHIFT
) ELSE (
ECHO Missing bar value. 1>&2
ECHO:
GOTO usage
)
) ELSE IF "%PARAM%" == "--baz" (
SHIFT
SET BAZ=true
) ELSE IF "%PARAM%" == "" (
GOTO endargs
) ELSE (
ECHO Unrecognized option %1. 1>&2
ECHO:
GOTO usage
)
GOTO args
:endargs
ECHO Command: %CMD%
IF NOT "%FOO%" == "" (
ECHO Foo: %FOO%
)
IF NOT "%BAR%" == "" (
ECHO Bar: %BAR%
)
IF "%BAZ%" == "true" (
ECHO Baz
)
REM TODO do something with FOO, BAR, and/or BAZ
GOTO :eof
:usage
ECHO FooBar
ECHO Usage: foobar ^<command^> [--foo [^<fooval^>]] [--bar ^<barval^>] [--baz]
EXIT /B 1
*
*Use SETLOCAL so that the variables don't escape into the calling environment.
*Don't forget to initialize the variables SET FOO=, etc. in case someone defined them in the calling environment.
*Use %~1 to remove quotes.
*Use IF "%ARG%" == "" and not IF [%ARG%] == [] because [ and ] don't play will at all with values ending in a space.
*Even if you SHIFT inside an IF block, the current args such as %~1 don't get updated because they are determined when the IF is parsed. You could use %~1 and %~2 inside the IF block, but it would be confusing because you had a SHIFT. You could put the SHIFT at the end of the block for clarity, but that might get lost and/or confuse people as well. So "capturing" %~1 and %~2 outside the block seems best.
*You don't want to use a parameter in place of another parameter's optional value, so you have to check IF NOT "%ARG:~0,2%" == "--".
*Be careful only to SHIFT when you use one of the parameters.
*The duplicate code SET FOO=%DEFAULT_FOO% is regrettable, but the alternative would be to add an IF "%FOO%" == "" SET FOO=%DEFAULT_FOO% outside the IF NOT "%ARG%" == "" block. However because this is still inside the IF "%PARAM%" == "--foo" block, the %FOO% value would have been evaluated and set before you ever entered the block, so you would never detect that both the --foo parameter was present and also that the %FOO% value was missing.
*Note that ECHO Missing bar value. 1>&2 sends the error message to stderr.
*Want a blank line in a Windows batch file? You gotta use ECHO: or one of the variations.
A: Another useful tip is to use %* to mean "all". For example:
echo off
set arg1=%1
set arg2=%2
shift
shift
fake-command /u %arg1% /p %arg2% %*
When you run:
test-command admin password foo bar
The above batch file will run:
fake-command /u admin /p password admin password foo bar
I may have the syntax slightly wrong, but this is the general idea.
A: Accessing batch parameters can be simple with %1, %2, ... %9 or also %*,
but only if the content is simple.
There is no simple way for complex contents like "&"^&, as it's not possible to access %1 without producing an error.
set var=%1
set "var=%1"
set var=%~1
set "var=%~1"
The lines expand to
set var="&"&
set "var="&"&"
set var="&"&
set "var="&"&"
And each line fails, as one of the & is outside of the quotes.
It can be solved by reading a remarked version of the parameter back from a temporary file.
@echo off
SETLOCAL DisableDelayedExpansion
SETLOCAL
for %%a in (1) do (
set "prompt="
echo on
for %%b in (1) do rem * #%1#
@echo off
) > param.txt
ENDLOCAL
for /F "delims=" %%L in (param.txt) do (
set "param1=%%L"
)
SETLOCAL EnableDelayedExpansion
set "param1=!param1:*#=!"
set "param1=!param1:~0,-2!"
echo %%1 is '!param1!'
The trick is to enable echo on and expand the %1 after a rem statement (works also with %2 .. %*).
So even "&"& could be echoed without producing an error, as it is remarked.
But to be able to redirect the output of the echo on, you need the two for-loops.
The extra characters * # are used to be safe against contents like /? (would show the help for REM).
Or a caret ^ at the line end could work as a multiline character, even after a rem.
Then the rem parameter output is read back from the file, carefully.
The FOR /F should run with delayed expansion off, else contents with "!" would be destroyed.
After removing the extra characters in param1, you've got it.
And to use param1 in a safe way, enable delayed expansion.
A: To refer to a set variable on the command line you need to use %a%, for example:
set a=100
echo %a%
rem output = 100
Note: This works for Windows 7 pro.
A: If you're worried about security or password theft (which may be why you designed this solution to take login credentials at execution time instead of hard-coding them, without needing a database), you could store the API key, the decryption key, or half of the password-decryption code in the program file. At run time the user then types the username/password in the console (read via set /p) to be hashed/decrypted before being passed to the program code for execution, if you're looking at the user entering credentials at run time.
If you're running a script to run your program with various user/password combinations, then command-line args will suit you.
If you're making a test file to see the output/effects of different logins, you could store all the logins in an encrypted file to be passed as an arg to test.cmd, unless you want to sit at the command line and type all the logins until finished.
The number of args that can be supplied is limited by the total number of characters on the command line. To overcome this limitation, the trick in the previous paragraph is a workaround that doesn't risk exposing user passwords.
Q: What are the advantages and disadvantages of turning NOCOUNT off in SQL server queries? What are the advantages and disadvantages of turning NOCOUNT off in SQL server queries?
A: And it's not just the network traffic that is reduced. There is an internal boost in SQL Server as well, because the execution plan can be optimized thanks to the elimination of the extra work of figuring out how many rows were affected.
A: It simply stops the message that shows the # of rows affected from being sent/displayed, which provides a performance benefit, especially if you have many statements that will return the message. It improves performance since less data is being sent over the network (between the SQL server and the front end).
More at BOL: SET NOCOUNT
A: From SQL BOL:
SET NOCOUNT ON prevents the sending of
DONE_IN_PROC messages to the client
for each statement in a stored
procedure. For stored procedures that
contain several statements that do not
return much actual data, setting SET
NOCOUNT to ON can provide a
significant performance boost, because
network traffic is greatly reduced.
See http://msdn.microsoft.com/en-us/library/ms189837.aspx for more details.
Also, this article on SQLServerCentral is great on this subject:
Performance Effects of NOCOUNT
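For reference, the usual pattern is to make SET NOCOUNT ON the first statement of a stored procedure. A minimal sketch (the procedure, table, and column names here are made up for illustration):
CREATE PROCEDURE dbo.usp_ExpirePasswords
AS
BEGIN
    SET NOCOUNT ON  -- suppress the DONE_IN_PROC row-count messages for this procedure

    UPDATE dbo.Users SET MustReset = 1 WHERE PasswordExpired = 1

    -- @@ROWCOUNT is still maintained even with NOCOUNT ON,
    -- so you can report counts when you actually need them
    SELECT @@ROWCOUNT AS RowsFlagged
END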
A: I always have it set to ON for the reasons above, but if you have more than 1 result set in your proc it could mess up client code
A: I personally like to turn NOCOUNT on for queries that get run in a manual fashion and use a lot of Print statements to output debugging messages. In this way, your output would look less like:
Updating usernames
(287 rows updated)
Done
Updating passwords
(287 rows updated)
Done
Doing the next thing
(1127 rows updated)
Done
And more like
Updating usernames
Done
Updating passwords
Done
Doing the next thing
Done
Depending on the sensitivity of what you're updating, sometimes it is helpful to include the counts; however, for complex scripts with a lot of output I usually like to leave them out.
A: SET NOCOUNT ON is a one-line statement. By default, SQL Server sends a row-count message back to the client for every operation (i.e. SELECT, INSERT, UPDATE, DELETE). If you suppress this message, you can improve overall performance for your database and also reduce network traffic.
For EX:
declare @a table(id int)
set nocount on
insert @a select 1 union select 2
set nocount off
A: Stops the message indicating the number of rows affected by a Transact-SQL statement from being returned as part of the results.
Q: Any negative impacts when using Mod-Rewrite? I know there are a lot of positive things mod-rewrite accomplishes. But are there any negatives? Obviously, if you have poorly written rules you're going to have problems. But what if you have a high-volume site and you're constantly using mod-rewrite; is it going to have a significant impact on performance? I did a quick search for some benchmarks on Google and didn't find much.
A: To echo what Ryan says above, rules in a .htaccess can really hurt your load times on a busy site in comparison to having the rules in your config file. We initially tried this (~60million pages/month) but didn't last very long until our servers started smoking :)
The obvious downside to having the rules in your config is you have to reload the config whenever you modify your rules.
The last flag ("L") is useful for speeding up execution of your rules, once your more frequently-accessed rules are towards the top and assessed first. It can make maintenance much trickier if you've a long set of rules though - I wasted a couple of very frustrating hours one morning as I was editing mid-way down my list of rules and had one up the top that was trapping more than intended!
We had difficulty finding relevant benchmarks also, and ended up working out our own internal suite of tests. Once we got our rules sorted out, properly ordered and into our Apache conf, we didn't find much of a negative performance impact.
A: I've used mod_rewrite on sites that get millions/hits/month without any significant performance issues. You do have to know which rewrites get applied first depending on your rules.
Using mod_rewrite is most likely faster than parsing the URL with your current language.
If you are really worried about performance, don't use .htaccess files, those are slow. Put all your rewrite rules in your Apache config, which is only read once on startup. .htaccess files get re-parsed on every request, along with every .htaccess file in parent folders.
A: If you're worried about apache's performance, one thing to consider if you have a lot of rewrite rules is to use the "skip" flag. It is a way to skip matching on rules. So, whatever overhead would have been spent on matching is saved.
Be careful though, I was on a project which utilized the "skip" flag a lot, and it made maintenance painful, since it depends on the order in which things are written in the file.
Q: How do I set a textbox to multi-line in SSRS? I have a report with many fields that I'm trying to get down to 1 page horizontally (I don't care whether it's 2 or 200 pages vertically... I just don't want to have to deal with a 2-pages-wide-by-x-pages-long train wreck). That said, it deals with contact information.
My idea was to do:
Name: Address: City: State: ...
Jon Doe Addr1 ThisTown XX ...
Addr2
Addr3
-----------------------------------------------
Jane Doe Addr1 ThisTown XX ...
Addr2
Addr3
-----------------------------------------------
Is there some way to set a textbox to be multi-line (or the SQL result)? Have I missed something bloody obvious?
The CanGrow Property is on by default, and I've double checked that this is true. My problem is that I don't know how to force a line-break. I get the 3 address fields that just fills a line, then wraps to another. I've tried /n, \n (since I can never remember which is the correct slash to put), <br>, <br /> (since the report will be viewed in a ReportViewer control in an ASP.NET website). I can't think of any other ways to wrap the text.
Is there some way to get the results from the database as 3 lines of text/characters?
A: I had an additional problem after putting the chr(10) into the database.
In the field (within the report) add in:
=REPLACE(Fields!Addr1.Value, CHR(10), vbCrLf)
A: My data was captured in an SL (Silverlight) application; I needed this for the field expression:
=REPLACE(Fields!Text.Value, CHR(13), vbCrLf)
A: Hitting Shift+Enter while typing in the textbox creates a line break.
A: Alter the report's text box to:
= Fields!Addr1.Value + VbCrLf +
Fields!Addr2.Value + VbCrLf +
Fields!Addr3.Value
A: I believe you need to set the CanGrow property to true on the Textbox. See http://msdn.microsoft.com/en-us/library/ms159116(SQL.90).aspx for some details.
A: link break do this
chr(10)
A: Try this one :
= Fields!Field1.Value + System.Environment.NewLine + Fields!Field2.Value
A: In RDLC reports, you can convert a textbox to a placeholder.
Then right-click that textbox placeholder, select placeholder properties, and select HTML. Then, for multiline to take effect, you have to insert a <br/> tag between those lines.
Q: sizeof() equivalent for reference types? I'm looking for a way to get the size of an instance of a reference type. sizeof is only for value types. Is this possible?
A: I had a similar question recently and wanted to know the size of Object and LinkedListNode in C#. To solve the problem, I developed a program that would:
*
*Measure the program's "Working Set"
*Allocate a lot of objects.
*Measure the "Working Set" again.
*Divide the difference by the number of allocated objects.
On my computer (64-bit), I got the following data:
Measuring Object:
iter working set size estimate
-1 11190272
1000000 85995520 74.805248
2000000 159186944 73.998336
3000000 231473152 73.4276266666667
4000000 306401280 73.802752
5000000 379092992 73.580544
6000000 451387392 73.3661866666667
7000000 524378112 73.3125485714286
8000000 600096768 73.613312
9000000 676405248 73.9127751111111
Average size: 73.7577032239859
Measuring LinkedListNode<Object>:
iter working set size estimate
-1 34168832
1000000 147959808 113.790976
2000000 268963840 117.397504
3000000 387796992 117.876053333333
4000000 507973632 118.4512
5000000 628379648 118.8421632
6000000 748834816 119.110997333333
7000000 869265408 119.299510857143
8000000 993509376 119.917568
9000000 1114038272 119.985493333333
Average size: 118.296829561905
Estimated Object size: 29.218576886067
Estimated LinkedListNode<reference type> size: 44.5391263379189
Based on the data, the average size of allocating millions of Objects is approximately 29.2 bytes. A LinkedListNode object is approximately 44.5 bytes. This data illustrates two things:
*
*It's very unlikely that the system is allocating a partial byte. The fractional measure of bytes indicates the overhead the CLR requires to allocate and track millions of reference types.
*If we simply round-down the number of bytes, we're still unlikely to have the proper byte count for reference types. This is clear from the measure of Objects. If we round down, we assume the size is 29 bytes which, while theoretically possible, is unlikely because of padding. In order to improve performance, object allocations are usually padded for alignment purposes. I would guess that CLR objects will be 4 byte aligned.
Assuming CLR overhead and 4-byte alignment, I'd estimate an Object in C# is 28 bytes and a LinkedListNode is 44 bytes.
BTW Jon Skeet had the idea for the method above before I did and stated it in this answer to a similar question.
A: You need Marshal.SizeOf
Edit: This is for unsafe code, but then, so is sizeof().
A: If you don't mind it being a little less accurate than perfect, and for comparative purposes, you could serialize the object/s and measure that (in bytes for example)
EDIT (I kept thinking after posting): Because it's a little more complicated than sizeof for value types, for example: reference types can have references to other objects and so on... there's not an exact and easy way to do it that I know of...
A: Beware that Marshal.SizeOf is for unsafe code...
I don't think it's possible for managed code though, maybe you can explain your problem, there may be another way to solve it
A: If you can - Serialize it!
Dim myObjectSize As Long
Dim ms As New IO.MemoryStream
Dim bf As New Runtime.Serialization.Formatters.Binary.BinaryFormatter()
bf.Serialize(ms, myObject)
myObjectSize = ms.Position
A: Please refer to my answer at the link below.
It is possible via the sos.dll debugger extension:
Find out the size of a .net object
Q: Previewing HTML in Java What libraries/methods do you know of that can do some basic HTML representation in Swing? Can you comment on your experience?
A: A good pure Java solution is JWebEngine. It renders HTML 4 very well.
A: I haven't tried this in a while, but a quick google search shows some possibilities:
*
*Java Sketchbook: The HTML Renderer Shootout, Part 1
*Cobra: Java HTML Renderer & Parser
Are you trying to do this in an applet, or an application? If it's an application (or signed applet) you could potentially instantiate IE or Firefox within your application. Webrenderer acts as a Swing wrapper for this.
A: Swing has a built-in component called BasicHTML. I've never used it, but I think it should be sufficient for the basic stuff.
A: Many of the Swing controls (like JLabel) can render basic HTML content. JEditorPane can be used to display HTML pages. However, these controls are limited to HTML 3.2 support.
For a richer experience, I would use the JDesktop Integration Components.
JDIC provides Java applications with
access to functionalities and
facilities provided by the native
desktop. It consists of a collection
of Java packages and tools. JDIC
supports a variety of features such as
embedding the native browser,
launching the desktop applications,
creating tray icons on the desktop,
registering file type associations,
creating JNLP installer packages, etc.
A: This has historically been a major weak point for Java, IMO. There are numerous ways to display limited markup, but very few that offer full featured HTML capabilities. The previously mentioned JDIC component is one option, however it is considered a "heavyweight" component and therefore does not always integrate well with Swing applications.
I am hopeful, however, that the new Webkit based JWebPane project will provide more advanced capabilities without all of the issues that we've had to deal with in the past. And, of course, there are several commercial options as well (IceBrowser is pretty good as an example).
A: I've just used SwingBox to display a quite simple HTML page, with good results.
The project includes a simple demo application which compares its BrowserPane component to JEditorPane, showing a far better result on complex pages (but still not comparable with a modern web browser).
The only issue I had is about unwanted scrollbars from the wrapping JScrollPane. The demo application seems to have the same problem. I can't tell where the issue originates. I'm using version 1.0.
Here a code fragment to show how simple is to use the component:
BrowserPane browserPane = new BrowserPane();
JScrollPane scrollPane = new JScrollPane(browserPane);
someContainer.add(scrollPane);
browserPane.setText("<html><b>Some HTML here</b></html>");
// or...
browserPane.setPage(new URL("http://en.wikipedia.org"));
A: I came across the Lobo Java web browser the other day.
Lobo is being actively developed with the aim to fully support HTML 4, Javascript and CSS2.
I have no experience with it myself, but I thought it might fit the bill for you.
Q: Is there any difference between "foo is None" and "foo == None"? Is there any difference between:
if foo is None: pass
and
if foo == None: pass
The convention that I've seen in most Python code (and the code I myself write) is the former, but I recently came across code which uses the latter. None is an instance (and the only instance, IIRC) of NoneType, so it shouldn't matter, right? Are there any circumstances in which it might?
A: is tests for identity, not equality. For your statement foo is None, Python simply compares the memory addresses of the objects. It means you are asking the question "Do I have two names for the same object?"
== on the other hand tests for equality as determined by the __eq__() method. It doesn't care about identity.
In [102]: x, y, z = 2, 2, 2.0
In [103]: id(x), id(y), id(z)
Out[103]: (38641984, 38641984, 48420880)
In [104]: x is y
Out[104]: True
In [105]: x == y
Out[105]: True
In [106]: x is z
Out[106]: False
In [107]: x == z
Out[107]: True
None is a singleton object, so None is None is always true.
In [101]: None is None
Out[101]: True
A: You may want to read this: object identity and equivalence.
The 'is' operator is used for object identity; it checks whether objects refer to the same instance (same address in memory).
The '==' operator checks for equality (same value).
A: For None there shouldn't be a difference between equality (==) and identity (is). The NoneType probably returns identity for equality. Since None is the only instance you can make of NoneType (I think this is true), the two operations are the same. In the case of other types this is not always the case. For example:
list1 = [1, 2, 3]
list2 = [1, 2, 3]
if list1==list2: print "Equal"
if list1 is list2: print "Same"
This would print "Equal" since lists have a comparison operation that is not the default returning of identity.
A: @Jason:
I recommend using something more along the lines of
if foo:
#foo isn't None
else:
#foo is None
I don't like using "if foo:" unless foo truly represents a boolean value (i.e. 0 or 1). If foo is a string or an object or something else, "if foo:" may work, but it looks like a lazy shortcut to me. If you're checking to see if x is None, say "if x is None:".
A: Some more details:
*
*The is clause actually checks whether the two objects are at the same
memory location or not, i.e. whether they both point to the same
memory location and have the same id.
*As a consequence of 1, is tells you whether the two lexically represented objects are the very same object, and therefore whether they have identical attributes (attributes-of-attributes...)
*Instances of primitive types like bool, int, str (with some exceptions), and NoneType that have the same value will usually be at the same memory location.
E.g.
>>> int(1) is int(1)
True
>>> str("abcd") is str("abcd")
True
>>> bool(1) is bool(2)
True
>>> bool(0) is bool(0)
True
>>> bool(0)
False
>>> bool(1)
True
And since NoneType can only have one instance of itself in Python's "look-up" table, the former and the latter are more a matter of the programming style of the developer who wrote the code (maybe for consistency) rather than there being any subtle logical reason to choose one over the other.
A: A word of caution:
if foo:
# do something
Is not exactly the same as:
if x is not None:
# do something
The former is a boolean value test and can evaluate to false in different contexts. There are a number of things that represent false in boolean value tests, for example empty containers and boolean values. None also evaluates to false in this situation, but other things do too.
A: is always returns True if it compares the same object instance
Whereas == is ultimately determined by the __eq__() method
i.e.
>>> class Foo(object):
def __eq__(self, other):
return True
>>> f = Foo()
>>> f == None
True
>>> f is None
False
A: (ob1 is ob2) equal to (id(ob1) == id(ob2))
A: The reason foo is None is the preferred way is that you might be handling an object that defines its own __eq__, and that defines the object to be equal to None. So, always use foo is None if you need to see if it is in fact None.
A: There is no difference because objects which are identical will of course be equal. However, PEP 8 clearly states you should use is:
Comparisons to singletons like None should always be done with is or is not, never the equality operators.
A: John Machin's conclusion that None is a singleton is bolstered by this code.
>>> x = None
>>> y = None
>>> x == y
True
>>> x is y
True
>>>
Since None is a singleton, x == None and x is None would have the same result. However, in my aesthetic opinion, x == None is best.
A: a is b # returns True if a and b are aliases of the same object
a == b # returns True if they are aliases or they have values that are deemed equivalent
a = [1,3,4]
b = a[:] # creating a copy of the list
a is b # gives False
False
a == b # gives True
True
Q: How to set encoding in .getJSON jQuery In my web app, I submit some form fields with jQuery's $.getJSON() method. I am having some problems with the encoding. The character-set of my app is charset=ISO-8859-1, but I think these fields are submitted with UTF-8.
How can I set the encoding used in $.getJSON calls?
A: You need to analyze the JSON calls using Wireshark; then you will see whether or not the charset is included in the response for the JSON page. For example:
*
*If the page is a simple text/html page:
0000 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d HTTP/1.1 200 OK.
0010 0a 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3a 20 74 .Content -Type: t
0020 65 78 74 2f 68 74 6d 6c 0d 0a 43 61 63 68 65 2d ext/html ..Cache-
0030 43 6f 6e 74 72 6f 6c 3a 20 6e 6f 2d 63 61 63 68 Control: no-cach
*
*If the page is of a type that includes custom JSON with the MIME "charset=ISO-8859-1":
0000 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d HTTP/1.1 200 OK.
0010 0a 43 61 63 68 65 2d 43 6f 6e 74 72 6f 6c 3a 20 .Cache-C ontrol:
0020 6e 6f 2d 63 61 63 68 65 0d 0a 43 6f 6e 74 65 6e no-cache ..Conten
0030 74 2d 54 79 70 65 3a 20 74 65 78 74 2f 68 74 6d t-Type: text/htm
0040 6c 3b 20 63 68 61 72 73 65 74 3d 49 53 4f 2d 38 l; chars et=ISO-8
0050 38 35 39 2d 31 0d 0a 43 6f 6e 6e 65 63 74 69 6f 859-1..C onnectio
Why is that? Because we cannot put a meta tag on a JSON page the way we can on an HTML page.
In my case I use a Connect Me 9210 from the manufacturer Digi:
*
*I had to use a flag to indicate that a non-standard MIME type would be used:
p-> theCgiPtr-> = fDataType eRpDataTypeOther;
*It added the new MIME in the variable:
strcpy (p-> theCgiPtr-> fOtherMimeType, "text / html;
charset = ISO-8859-1 ");
It worked for me without having to convert the data passed by JSON for UTF-8 and then redo the conversion on the page ...
A: If you want to use $.getJSON() you can add the following before the call :
$.ajaxSetup({
scriptCharset: "utf-8",
contentType: "application/json; charset=utf-8"
});
You can use the charset you want instead of utf-8.
The options are explained here.
contentType : When sending data to the server, use this content-type. Default is application/x-www-form-urlencoded, which is fine for most cases.
scriptCharset : Only for requests with jsonp or script dataType and GET type. Forces the request to be interpreted as a certain charset. Only needed for charset differences between the remote and local content.
You may need one or both ...
A: I think that you'll probably have to use $.ajax() if you want to change the encoding; see the contentType param below (the success and error callbacks assume you have <div id="success"></div> and <div id="error"></div> in the html):
$.ajax({
type: "POST",
url: "SomePage.aspx/GetSomeObjects",
contentType: "application/json; charset=utf-8",
dataType: "json",
data: "{id: '" + someId + "'}",
success: function(json) {
$("#success").html("json.length=" + json.length);
itemAddCallback(json);
},
error: function (xhr, textStatus, errorThrown) {
$("#error").html(xhr.responseText);
}
});
I actually just had to do this about an hour ago, what a coincidence!
A: Use encodeURI() in the client-side JavaScript and URLDecoder.decode() on the Java server side. This works.
Example:
*
*Javascript:
$.getJSON(
url,
{
"user": encodeURI(JSON.stringify(user))
},
onSuccess
);
*Java:
java.net.URLDecoder.decode(params.user, "UTF-8");
A: Use this function to recover the UTF-8 characters:
function decode_utf8(s) {
return decodeURIComponent(escape(s));
}
Example:
var new_Str=decode_utf8(str);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: What HTML parsing libraries do you recommend in Java I want to parse some HTML in order to find the values of some attributes/tags etc.
What HTML parsers do you recommend? Any pros and cons?
A: I have tried HTML Parser which is dead simple.
A: NekoHTML, TagSoup, and JTidy will allow you to parse HTML and then process with XML tools, like XPath.
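As a minimal sketch of that approach (assuming the TagSoup jar is on the classpath; the URL is a placeholder), you can feed TagSoup's SAX parser through an identity transform to get a DOM, then query it with XPath:
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.sax.SAXSource;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.ccil.cowan.tagsoup.Parser;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class TagSoupXPath {
    public static void main(String[] args) throws Exception {
        // TagSoup's Parser is a SAX XMLReader that tolerates messy real-world HTML
        SAXSource source = new SAXSource(new Parser(),
                new InputSource("http://example.com/"));

        // Run the SAX events through an identity transform to build a DOM tree
        DOMResult result = new DOMResult();
        TransformerFactory.newInstance().newTransformer().transform(source, result);
        Document doc = (Document) result.getNode();

        // TagSoup places elements in the XHTML namespace; local-name() avoids
        // having to configure a NamespaceContext for this quick query
        XPath xpath = XPathFactory.newInstance().newXPath();
        String title = xpath.evaluate("//*[local-name()='title']", doc);
        System.out.println("title = " + title);
    }
}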
A: Do you need to do a full parse of the HTML? If you're just looking for specific values within the contents (a specific tag/param), then a simple regular expression might be enough, and could very well be faster.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is there a way to make a TSQL variable constant? Is there a way to make a TSQL variable constant?
A: There is no built-in support for constants in T-SQL. You could use SQLMenace's approach to simulate it (though you can never be sure whether someone else has overwritten the function to return something else…), or possibly write a table containing constants, as suggested over here. Perhaps write a trigger that rolls back any changes to the ConstantValue column?
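A minimal sketch of that constants-table idea, with a trigger that rejects changes (all names here are illustrative):
CREATE TABLE dbo.Constants (
    ConstantName  sysname PRIMARY KEY,
    ConstantValue sql_variant NOT NULL
);
INSERT dbo.Constants VALUES ('MaxRetries', 5);
GO
CREATE TRIGGER dbo.trConstantsReadOnly ON dbo.Constants
AFTER UPDATE, DELETE
AS
BEGIN
    RAISERROR('Constants are read-only.', 16, 1);
    ROLLBACK TRANSACTION;
END
GO
-- Usage: cast the sql_variant back to the expected type
SELECT CAST(ConstantValue AS int) FROM dbo.Constants WHERE ConstantName = 'MaxRetries';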
A: Prior to using a SQL function, run the following script to see the differences in performance:
IF OBJECT_ID('fnFalse') IS NOT NULL
DROP FUNCTION fnFalse
GO
IF OBJECT_ID('fnTrue') IS NOT NULL
DROP FUNCTION fnTrue
GO
CREATE FUNCTION fnTrue() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN 1
END
GO
CREATE FUNCTION fnFalse() RETURNS INT WITH SCHEMABINDING
AS
BEGIN
RETURN ~ dbo.fnTrue()
END
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = dbo.fnTrue()
IF @Value = 1
SELECT @Value = dbo.fnFalse()
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using function'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
DECLARE @FALSE AS BIT = 0
DECLARE @TRUE AS BIT = ~ @FALSE
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = @TRUE
IF @Value = 1
SELECT @Value = @FALSE
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using local variable'
GO
DECLARE @TimeStart DATETIME = GETDATE()
DECLARE @Count INT = 100000
WHILE @Count > 0 BEGIN
SET @Count -= 1
DECLARE @Value BIT
SELECT @Value = 1
IF @Value = 1
SELECT @Value = 0
END
DECLARE @TimeEnd DATETIME = GETDATE()
PRINT CAST(DATEDIFF(ms, @TimeStart, @TimeEnd) AS VARCHAR) + ' elapsed, using hard coded values'
GO
A: No, but you can create a function and hardcode it in there and use that.
Here is an example:
CREATE FUNCTION fnConstant()
RETURNS INT
AS
BEGIN
RETURN 2
END
GO
SELECT dbo.fnConstant()
A: If you are interested in getting an optimal execution plan for a value in the variable, you can use dynamic SQL. It makes the variable a constant.
DECLARE @var varchar(100) = 'some text'
DECLARE @sql varchar(MAX)
SET @sql = 'SELECT * FROM table WHERE col = '''+@var+''''
EXEC (@sql)
A: For enums or simple constants, a view with a single row has great performance and compile-time checking / dependency tracking (because it's a column name).
See Jared Ko's blog post https://blogs.msdn.microsoft.com/sql_server_appendix_z/2013/09/16/sql-server-variables-parameters-or-literals-or-constants/
create the view
CREATE VIEW ShipMethods AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
, CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
use the view
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE ShipMethodID = ( select [OVERNIGHT J-FAST] from ShipMethods )
A: One solution, offered by Jared Ko is to use pseudo-constants.
As explained in SQL Server: Variables, Parameters or Literals? Or… Constants?:
Pseudo-Constants are not variables or parameters. Instead, they're simply views with one row, and enough columns to support your constants. With these simple rules, the SQL Engine completely ignores the value of the view but still builds an execution plan based on its value. The execution plan doesn't even show a join to the view!
Create like this:
CREATE SCHEMA ShipMethod
GO
-- Each view can only have one row.
-- Create one column for each desired constant.
-- Each column is restricted to a single value.
CREATE VIEW ShipMethod.ShipMethodID AS
SELECT CAST(1 AS INT) AS [XRQ - TRUCK GROUND]
,CAST(2 AS INT) AS [ZY - EXPRESS]
,CAST(3 AS INT) AS [OVERSEAS - DELUXE]
,CAST(4 AS INT) AS [OVERNIGHT J-FAST]
,CAST(5 AS INT) AS [CARGO TRANSPORT 5]
Then use like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
JOIN ShipMethod.ShipMethodID const
ON h.ShipMethodID = const.[OVERNIGHT J-FAST]
Or like this:
SELECT h.*
FROM Sales.SalesOrderHeader h
WHERE h.ShipMethodID = (SELECT TOP 1 [OVERNIGHT J-FAST] FROM ShipMethod.ShipMethodID)
A: Okay, let's see:
Constants are immutable values which are known at compile time and do not change for the life of the program
that means you can never have a constant in SQL Server
declare @myvalue as int
set @myvalue = 5
set @myvalue = 10 -- oops, we just changed it
the value just changed
A: My workaround for the missing constants is to give hints about the value to the optimizer.
DECLARE @Constant INT = 123;
SELECT *
FROM [some_relation]
WHERE [some_attribute] = @Constant
OPTION( OPTIMIZE FOR (@Constant = 123))
This tells the query compiler to treat the variable as if it was a constant when creating the execution plan. The down side is that you have to define the value twice.
A: No, but good old naming conventions should be used.
declare @MY_VALUE as int
A: Since there is no built-in support for constants, my solution is very simple.
Since this is not supported:
Declare Constant @supplement int = 240
SELECT price + @supplement
FROM what_does_it_cost
I would simply convert it to
SELECT price + 240/*CONSTANT:supplement*/
FROM what_does_it_cost
Obviously, this relies on the whole thing (the value without trailing space and the comment) to be unique. Changing it is possible with a global search and replace.
A: There is no such thing as "creating a constant" in database literature. Constants exist as they are and are often called values. One can declare a variable and assign a value (constant) to it. From a scholastic view:
DECLARE @two INT
SET @two = 2
Here @two is a variable and 2 is a value/constant.
A: SQL Server 2022 (currently only available as a preview) is now able to inline the function proposed by SQLMenace, which should prevent the performance hit described in some comments.
CREATE FUNCTION fnConstant() RETURNS INT AS BEGIN RETURN 2 END GO
SELECT is_inlineable FROM sys.sql_modules WHERE [object_id]=OBJECT_ID('dbo.fnConstant');
is_inlineable
1
SELECT dbo.fnConstant()
(screenshot: execution plan)
To test whether it also uses the value coming from the function, I added a second function returning the value 1:
CREATE FUNCTION fnConstant1()
RETURNS INT
AS
BEGIN
RETURN 1
END
GO
Create a temp table with about 500k rows with value 1 and 4 rows with value 2:
DROP TABLE IF EXISTS #temp ;
create table #temp (value_int INT)
DECLARE @counter INT;
SET @counter = 0
WHILE @counter <= 500000
BEGIN
INSERT INTO #temp VALUES (1);
SET @counter = @counter +1
END
SET @counter = 0
WHILE @counter <= 3
BEGIN
INSERT INTO #temp VALUES (2);
SET @counter = @counter +1
END
create index i_temp on #temp (value_int);
Using the estimated execution plan, we can see that the optimizer expects 500k rows for
select * from #temp where value_int = dbo.fnConstant1(); --Returns 500001 rows
(screenshot: estimated plan for fnConstant1)
and 4 rows for
select * from #temp where value_int = dbo.fnConstant(); --Returns 4rows
(screenshot: estimated plan for fnConstant)
A: Robert's performance test is interesting. And even in late 2022, scalar functions are much slower (by an order of magnitude) than variables or literals. A view (as suggested by mbobka) is somewhere in between when used for this same test.
That said, using a loop like that in SQL Server is not something I'd ever do, because I'd normally be operating on a whole set.
In SQL 2019, if you use schema-bound functions in a set operation, the difference is much less noticeable.
I created and populated a test table:
create table #testTable (id int identity(1, 1) primary key, value tinyint);
And changed the test so that instead of looping and changing a variable, it queries the test table and returns true or false depending on the value in the test table, e.g.:
insert @testTable(value)
select case when value > 127
then @FALSE
else @TRUE
end
from #testTable with(nolock)
I tested 5 scenarios:
*
*hard-coded values
*local variables
*scalar functions
*a view
*a table-valued function
running the test 10 times, yielded the following results:
scenario               min  max  avg
scalar functions       233  259  240
hard-coded values      236  265  243
local variables        235  278  245
table-valued function  243  272  253
view                   244  267  254
This suggests to me that for set-based work in (at least) 2019 and later, there's not much in it.
set nocount on;
go
-- create test data table
drop table if exists #testTable;
create table #testTable (id int identity(1, 1) primary key, value tinyint);
-- populate test data
insert #testTable (value)
select top (1000000) convert(binary (1), newid())
from sys.all_objects a
, sys.all_objects b
go
-- scalar function for True
drop function if exists fnTrue;
go
create function dbo.fnTrue() returns bit with schemabinding as
begin
return 1
end
go
-- scalar function for False
drop function if exists fnFalse;
go
create function dbo.fnFalse () returns bit with schemabinding as
begin
return 0
end
go
-- table-valued function for booleans
drop function if exists dbo.tvfBoolean;
go
create function tvfBoolean() returns table with schemabinding as
return
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- view for booleans
drop view if exists dbo.viewBoolean;
go
create view dbo.viewBoolean with schemabinding as
select convert(bit, 1) as true, convert(bit, 0) as false
go
-- create table for results
drop table if exists #testResults
create table #testResults (id int identity(1,1), test int, elapsed bigint, message varchar(1000));
-- define tests
declare @tests table(testNumber int, description nvarchar(100), sql nvarchar(max))
insert @tests values
(1, N'hard-coded values', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then 0
else 1
end
from #testTable t')
, (2, N'local variables', N'
declare @FALSE as bit = 0
declare @TRUE as bit = 1
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then @FALSE
else @TRUE
end
from #testTable t'),
(3, N'scalar functions', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when t.value > 127
then dbo.fnFalse()
else dbo.fnTrue()
end
from #testTable t'),
(4, N'view', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable t with(nolock), viewBoolean b'),
(5, N'table-valued function', N'
declare @testTable table (id int, value bit);
insert @testTable(id, value)
select id, case when value > 127
then b.false
else b.true
end
from #testTable with(nolock), dbo.tvfBoolean() b')
;
declare @testNumber int, @description varchar(100), @sql nvarchar(max)
declare @testRuns int = 10;
-- execute tests
while @testRuns > 0 begin
set @testRuns -= 1
declare testCursor cursor for select testNumber, description, sql from @tests;
open testCursor
fetch next from testCursor into @testNumber, @description, @sql
while @@FETCH_STATUS = 0 begin
declare @TimeStart datetime2(7) = sysdatetime();
execute sp_executesql @sql;
declare @TimeEnd datetime2(7) = sysdatetime()
insert #testResults(test, elapsed, message)
select @testNumber, datediff_big(ms, @TimeStart, @TimeEnd), @description
fetch next from testCursor into @testNumber, @description, @sql
end
close testCursor
deallocate testCursor
end
-- display results
select test, message, count(*) runs, min(elapsed) as min, max(elapsed) as max, avg(elapsed) as avg
from #testResults
group by test, message
order by avg(elapsed);
A: The best answer is SQLMenace's, if the requirement is to create a temporary constant for use within scripts, i.e. across multiple GO statements/batches.
Just create the function in tempdb; then you have no impact on the target database.
One practical example of this is a database create script which writes a control value at the end of the script containing the logical schema version. At the top of the file are some comments with change history etc... But in practice most developers will forget to scroll down and update the schema version at the bottom of the file.
Using the above code allows a visible schema version constant to be defined at the top of the file, before the database script (copied from the Generate Scripts feature of SSMS) creates the database, but used at the end. This puts it right in the face of the developer, next to the change history and other comments, so they are very likely to update it.
For example:
use tempdb
go
create function dbo.MySchemaVersion()
returns int
as
begin
return 123
end
go
use master
go
-- Big long database create script with multiple batches...
print 'Creating database schema version ' + CAST(tempdb.dbo.MySchemaVersion() as NVARCHAR) + '...'
go
-- ...
go
-- ...
go
use MyDatabase
go
-- Update schema version with constant at end (not normally possible as GO puts
-- local @variables out of scope)
insert MyConfigTable values ('SchemaVersion', tempdb.dbo.MySchemaVersion())
go
-- Clean-up
use tempdb
drop function MySchemaVersion
go
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "100"
} |
Q: What's your favorite profiling tool (for C++) So far, I've only used Rational Quantify. I've heard great things about Intel's VTune, but have never tried it!
Edit: I'm mostly looking for software that will instrument the code, as I guess that's about the only way to get very fine results.
See also:
What are some good profilers for native C++ on Windows?
A: My only experience profiling C++ code is with AQTime by AutomatedQA (now SmartBear Software). It has several types of profilers built in (performance, memory, Windows handles, exception tracing, static analysis, etc.), and instruments the code to get the results.
I enjoyed using it - it was always fun to find those spots where a small change in code could make a dramatic improvement in performance.
A: I have never done profiling before. Yesterday I programmed a ProfilingTimer class with a static timetable (a map<std::string, long long>) for time storage.
The constructor stores the starting tick, and the destructor calculates the elapsed time and adds it to the map:
ProfilingTimer::ProfilingTimer(std::string name)
: mLocalName(name)
{
sNestedName += mLocalName;
sNestedName += " > ";
if(sTimetable.find(sNestedName) == sTimetable.end())
sTimetable[sNestedName] = 0;
mStartTick = Platform::GetTimerTicks();
}
ProfilingTimer::~ProfilingTimer()
{
long long totalTicks = Platform::GetTimerTicks() - mStartTick;
sTimetable[sNestedName] += totalTicks;
sNestedName.erase(sNestedName.length() - mLocalName.length() - 3);
}
In every function (or {block}) that I want to profile I need to add:
ProfilingTimer _ProfilingTimer("identifier");
This line is a bit cumbersome to add in all functions I want to profile since I have to guess which functions take a lot of time. But it works well and the print function shows time consumed in %.
(Is anyone else working with any similar "home-made profiling"? Or is it just stupid? But it's fun! Does anyone have improvement suggestions?
Is there some way to auto-add a line to all functions?)
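One way to cut the boilerplate, sketched here on the assumption that the ProfilingTimer class above is in scope (the macro name is illustrative), is a macro that uses the compiler's __FUNCTION__ identifier so every function body needs only one fixed line:
// Expands to e.g. ProfilingTimer _profilingTimer("Update") inside Update()
#define PROFILE_SCOPE() ProfilingTimer _profilingTimer(__FUNCTION__)

void Update()
{
    PROFILE_SCOPE(); // the same line can be pasted into every function unchanged
    // ... work to be measured ...
}
It still has to be pasted into each function by hand, but the identifier stays correct when a function is renamed.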
A: I've used Glowcode extensively in the past and have had nothing but positive experiences with it. Its Visual Studio integration is really nice, and it is the most efficient/intuitive profiler that I've ever used (even compared to profilers for managed code).
Obviously, that's useless if you're not running on Windows, but the question leaves it unclear to me exactly what your requirements are.
A: oprofile, without a doubt; it's simple, reliable, does the job, and can give all sorts of nice breakdowns of data.
A: The profiler in Visual Studio 2008 is very good: fast, user friendly, clear and well integrated in the IDE.
A: For Windows, check out Xperf. It uses sampled profile, has some useful UI, & does not require instrumentation. Quite useful for tracking down performance problems. You can answer questions like:
*
*Who is using the most CPU? Drill down to function name using call stacks.
*Who is allocating the most memory?
*Who is doing the most registry queries?
*Disk writes? etc.
You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!
A: Since you don't mention the platform you're working on, I'll say cachegrind under Linux. Definitely. It's part of the Valgrind toolset.
http://valgrind.org/info/tools.html
I've never used its sub-feature Callgrind, since most of my code optimization is for inside functions.
Note that there is a frontend KCachegrind available.
A: For Windows, I've tried AMD Codeanalyst, Intel VTune and the profiler in Visual Studio Team Edition.
Codeanalyst is buggy (crashes frequently) and on my code, its results are often inaccurate. Its UI is unintuitive. For example, to reach the call stack display in the profile results, you have to click the "Processes" tab, then click the EXE filename of your program, then click a toolbar button with the tiny letters "CSS" on it. But it is freeware, so you may as well try it, and it works (with fewer features) without an AMD processor.
VTune ($700) has a terrible user interface IMO; in a large program, it's hard to find the particular call tree you want, and you can only look at one "node" in a program at a time (a function with its immediate callers and callees)--you cannot look at a complete call tree. There is a call graph view, but I couldn't find a way to make the relative execution times appear on the graph. In other words, the functions in the graph look the same regardless of how much time was spent in them--it's as though they totally missed the point of profiling.
Visual Studio's profiler has the best GUI of the three, but for some reason it is unable to collect samples from the majority of my code (samples are only collected for a few functions in my entire C++ program). Also, I couldn't find a price or way to buy it directly; but it comes with my company's MSDN subscription. Visual Studio supports managed, native, and mixed code; I'm not sure about the other two profilers in that regard.
In conclusion, I don't know of a good profiler yet! I'll be sure to check out the other suggestions here.
A: For linux development (although some of these tools might work on other platforms). These are the two big names I know of, there's plenty of other smaller ones that haven't seen active development in a while.
*
*Valgrind
*TAU - Tuning and Analysis Utilities
A: There are different requirements for profiling. Is instrumented code ok, or do you need to profile optimized code (or even already compiled code)? Do you need line-by-line profile information? Which OS are you running? Do you need to profile shared libraries as well? What about trace into system calls?
Personally, I use oprofile for everything I do, but that might not be the best choice in every case. Vtune and Shark are both excellent as well.
A: For Windows development, I've been using Software Verification's Performance Validator - it's fast, reasonably accurate, and reasonably priced. Best yet, it can instrument a running process, and lets you turn data collection on and off at runtime, both manually and based on the callstack - great for profiling a small section of a larger program.
A: I use DevPartner for the PC platform.
A: For Linux:
Google Perftools
*
*Faster than valgrind (yet, not so fine grained)
*Does not need code instrumentation
*Nice graphical output (--> kcachegrind)
*Does memory-profiling, cpu-profiling, leak-checking
A: I have tried Quantify and AQTime, and Quantify won because of its invaluable 'focus on sub tree' and 'delete sub tree' features.
A: The only sensible answer is PTU from Intel. Of course it's best to use it on an Intel processor, and you'll get even more valuable results on at least a C2D machine, as the architecture itself makes it easier to produce meaningful profiles.
A: I've used VTune under Windows and Linux for many years with very good results. Later versions have gotten worse, when they outsourced that product to their Russian development crew quality and performance both went down (increased VTune crashes, often 15+ minutes to open an analysis file).
Regarding instrumentation, you may find out that it's less useful than you think. In the kind of applications I've worked on, adding instrumentation often slows the product down so much that it doesn't work anymore (true story: start app, go home, come back next day, app still initializing). Also, with non-instrumented profiling you can react to live problems. For example, with the VTune remote data collector I can start up a sampling session against a live server with hundreds of simultaneous connections that is experiencing performance problems, and catch issues that happen in production that I'd never be able to replicate in a test environment.
A: ElectricFence works nicely for malloc debugging
A: IMHO, sampling using a debugger is the best method. All you need is an IDE or debugger that lets you halt the program. It nails your performance problems before you even get the profiler installed.
A: My favorite tool is Easy Profiler : http://code.google.com/p/easyprofiler/
It's a compile-time profiler: the source code must be manually instrumented using a set of routines that describe the target regions.
However, once the application is run and the measures are automatically written to an XML file, it is only a matter of opening the Observer application and doing a few clicks on the analysis/compare tools before you can see the result in a qualitative chart.
A: The Visual Studio 2010 profiler under Windows. VTune had a great call graph tool, but it got broken as of Windows Vista/7. I don't know if they fixed it.
A: Let me give a plug for EQATEC... just what I was looking for... simple to learn and use and gives me the info I need to find the hotspots quickly. I much prefer it to the one built in to Visual Studio (though I haven't tried the VS 2010 one yet, to be fair).
The ability to take snapshots is HUGE. I often get an extra analysis and optimization done while waiting for the real target analysis to run... love it.
Oh, and its base version is free!
http://www.eqatec.com/Profiler/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "91"
} |
Q: Why is my PDF footer text invisible? I'm creating PDFs on-demand with ColdFusion's CFDocument tag, like so:
<cfdocument format="PDF" filename="#attributes.fileName#" overwrite="true">
<cfdocumentitem type="footer">
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<tr>
<td align="left"><font face="Tahoma" color="black"><strong>My Client's Corporation</strong><br/>Street address<br/>City, ST 55555</font></td>
<td align="right"><font face="Tahoma" color="black">Phone: 555.555.5555<br/>Fax: 555.555.5555<br/>Email: [email protected]</font></td>
</tr>
</table>
</cfdocumentitem>
<html>
<body>
<table border="0" cellpadding="0" cellspacing="0" width="100%">
<!--- some content here ... --->
</table>
</body>
</html>
</cfdocument>
The problem I'm having is that sometimes (actually, most of the time, but not always) some of the footer text is there, but invisible. I can highlight it and copy/paste it into notepad, where I can see it all -- but in the generated PDF only the first line of the left column of the footer is visible, the rest is invisible. Hence why I added the font color of black in the code.
Any ideas on how to correct this?
A: A PDF is what I'm after, so I'm not sure how outputting another format would help.
As it turns out, the footer space just wasn't enough to fit all of this text; verified by the fact that changing the font size to 4pt would fit it all in without a problem.
I spent some time attempting to rewrite the footer code using DDX as outlined here and the CFPDF tag to implement it; but even after several hours of hacking away and finally getting a valid DDX as reported by the new isDDX function, the CFPDF tag reported that it was invalid DDX for some reason.
At this point I decided I had wasted enough of the client's time/money and just reformatted the footer to be 2 lines of centered text, which was good enough.
A: Usually when PDF shows blank text, it's because the font metrics are embedded in the document, but the glyphs are not. I know nothing about ColdFusion, but you might try the following:
*
*Try a font other than Tahoma as a test. All PDF readers must support 14 basic fonts, including 4 Helvetica variants, 4 Times variants, 4 Courier variants, Symbol and ZapfDingbats, so those are always safe choices
*See if ColdFusion offers any control over font embedding
*Try a list of alternatives in your font declaration, like "Tahoma,Helvetica,sans-serif"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What is MVC and what are the advantages of it? I found What are mvp and mvc and what is the difference but it didn't really answer this question.
I've recently started using MVC because it's part of the framework that myself and my work-partner are going to use. We chose it because it looked easy and separated process from display, are there advantages besides this that we don't know about and could be missing out on?
Pros
*
*Display and Processing are seperated
Cons
*
*None so far
A: Jeff has a post about it, otherwise I found some useful documents on Apple's website, in Cocoa tutorials (this one for example).
A: MVC is the separation of model, view and controller — nothing more, nothing less. It's simply a paradigm; an ideal that you should have in the back of your mind when designing classes. Avoid mixing code from the three categories into one class.
For example, while a table grid view should obviously present data once shown, it should not have code on where to retrieve the data from, or what its native structure (the model) is like. Likewise, while it may have a function to sum up a column, the actual summing is supposed to happen in the controller.
A 'save file' dialog (view) ultimately passes the path, once picked by the user, on to the controller, which then asks the model for the data, and does the actual saving.
This separation of responsibilities allows flexibility down the road. For example, because the view doesn't care about the underlying model, supporting multiple file formats is easier: just add a model subclass for each.
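To make the save-file example concrete, here is a minimal, hypothetical C# sketch of that division of responsibilities (all names are illustrative):
// Model: owns the data and the actual saving
class Document
{
    public string Text = "";
    public void SaveTo(string path) { System.IO.File.WriteAllText(path, Text); }
}

// View: only gathers the path from the user; knows nothing about storage
class SaveDialog
{
    public string AskUserForPath() { return "out.txt"; } // stands in for a real dialog
}

// Controller: wires the view's result to the model
class DocumentController
{
    public void OnSaveClicked(Document model, SaveDialog view)
    {
        string path = view.AskUserForPath();
        model.SaveTo(path); // the saving itself happens in the model
    }
}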
A: I think another benefit of using the MVC pattern is that it opens up the doors to other approaches to the design, such as MVP/Presenter first and the many other MV* patterns.
Without this fundamental segregation of the design "components" the adoption of these techniques would be much more difficult.
I think it helps to make your code even more interface-based. Not only within the individual project, but you can almost start to develop common "views", which means you can template a lot more of the "grunt" code used in your applications. For example, a very abstract "data view" which simply takes a bunch of data and throws it to a common grid layout.
Edit:
If I remember correctly, this is a pretty good podcast on MV* patterns (listened to it a while ago!)
A: MVC is just a general design pattern that, in the context of lean web app development, makes it easy for the developer to keep the HTML markup in an app’s presentation layer (the view) separate from the methods that receive and handle client requests (the controllers) and the data representations that are returned within the view (the models). It’s all about separation of concerns, that is, keeping code that serves one functional purpose (e.g. handling client requests) sequestered from code that serves an entirely different functional purpose (e.g. representing data).
It’s the same principle for why anybody who’s spent more than 5 min trying to build a website can appreciate the need to keep your HTML markup, JavaScript, and CSS in separate files: If you just dump all of your code into a single file, you end up with spaghetti that’s virtually un-editable later on.
Since you asked for possible "cons": I’m no authority on software architecture design, but based on my experience developing in MVC, I think it’s also important to point out that following a strict, no-frills MVC design pattern is most useful for 1) lightweight web apps, or 2) as the UI layer of a larger enterprise app. I’m surprised this specification isn’t talked about more, because MVC contains no explicit definitions for your business logic, domain models, or really anything in the data access layer of your app. When I started developing in ASP.NET MVC (i.e. before I knew other software architectures even existed), I would end up with very bloated controllers or even view models chock full of business logic that, had I been working on enterprise applications, would have made it difficult for other devs who were unfamiliar with my code to modify (i.e. more spaghetti).
A: Separation of concerns is the biggy.
Being able to tease these components apart makes the code easier to re-use and independently test. If you don't actually know what MVC is, be careful about trying to understand people's opinions as there is still some contention about what the "Model" is (whether it is the business objects/DataSets/DataTables or if it represents the underlying service layer).
I've seen all sorts of implementations that call themselves MVC but aren't exactly and as the comments in Jeff's article show MVC is a contentious point that I don't think developers will ever fully agree upon.
A good round up of all of the different MVC types is available here.
A: One con I can think of is if you need really fast access to your data in your view (for example, game animation data like bone positions.) It is very inefficient to keep a layer of separation in this case.
Otherwise, for most other applications which are more data driven than graphics driven, it seems like a logical way to drive a UI.
A: If you follow the stackoverflow podcasts you can hear Jeff (and Geoff?) discuss its greatness. https://blog.stackoverflow.com/2008/08/podcast-17/. But remember that using these separate layers means things are easier in the future--and harder now. And layers can make things slower. And you may not need them. But don't let that stop you from learning what it is--when building big, robust, long-lived systems, it's invaluable.
A: It separates the Model and the View, controlled by a Controller.
As far as the Model is concerned, your models have to follow an OO architecture; future enhancements and other maintenance of the code base should be very easy, and the code base should be reusable.
The same model can have any number of views, e.g. the same information can be shown in different graphical views.
The same view can have different models, e.g. different details can be shown as a single graph, say a bar graph.
This is what reusability of both View and Model means.
Enhancements in views, and support for new technologies for building the view, can be implemented easily.
The developer working on the view does not need to know about the underlying Model code base and its architecture, and vice versa for the model.
A: One of the major advantages of MVC that has not been mentioned here is that MVC provides RESTful URLs, which enables SEO. When you name your controllers and actions wisely, it makes it easier for search engines to find your site if they only take a look at your site's URLs. For example, on a car sale website with a page that displays available Lamborghini Veneno cars, instead of having www.MyCarSale.com/product/6548 refer to the page you can choose the www.MyCarSale.com/SportCar/Lamborghini-Veneno URL for SEO purposes.
Here is a good answer to MVC Advantages and here is an article How to create a SEO friendly Url.
A: The main advantage of the MVC architecture is separating the layers of a project into Model, View and Controller, which makes the code reusable and easy to maintain. Best of all, a developer can comfortably add code during later maintenance of the project.
Here you can see some more points on the main advantages of the MVC architecture.
A: (image: MVC architecture diagram)
Model–view–controller (MVC) is a software architectural pattern for implementing user interfaces. It divides a given software application into three interconnected parts, so as to separate internal representations of information from the ways that information is presented to or accepted from the user.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: What PL/SQL Libraries For Auto-Generating JSON Do You Recommend? Are there any good PL/SQL libraries for JSON that you've worked with and found useful?
In PL/SQL, I'm having to tediously hand-code the return of JSON values to JavaScript functions. I found one PL/SQL library for auto-generating JSON, but it doesn't do everything I need it to. For example, I couldn't extend the base functions in the library to return a complex tree-like JSON data structure required by a JavaScript tree component I was using.
Note:
The system, which has been in production for 8+ years, was architected to use PL/SQL for the CRUDs and most of the business logic. The PL/SQL also generates 90% of the presentation layer (HTML/JavaScript), using mod PL/SQL. The other 10% is reported data done via Oracle Reports Builder.
A: @Geoff-
So, there isn't application code like you'd see in more modern, better architected systems. I do want to do things the right way, I just don't have that luxury given organizational constraints.
A: I wonder why you don't want to bring the data from Oracle into some application code and make JSON there?
A: Ouch - generating your interface in PL/SQL. You have my sympathy.
I've never done anything like this, but Googling found this page (which is also referenced from the json.org page).
A: A relatively new library called PLJSON (no slash) is on GitHub. We're using it in a pretty large project in production and have had no troubles with it at all. Parsing is a tad slow, but that is to be expected.
Disclaimer: I wrote it. If you find bugs or have suggestions, let me know.
A: In case anyone is still interested in serving JSON using PL/SQL, I have just completed a PL/SQL data service framework named BackLogic. It is a full REST web service framework. It includes a SQL utility to produce complex JSON structures from a REF CURSOR, including the "complex tree-like JSON data structure required by a JavaScript tree component" mentioned in the original question, which the early PLJSON framework is not quite capable of doing.
I do see a bright future for PL/SQL in creating REST APIs. Until recently, the object-relational impedance mismatch has been handled mainly by ORM frameworks in the middle tier. BackLogic solves this issue in the database, and is thus able to produce the complex JSON structures needed by UI frameworks. Here is a link to the BackLogic User Guide. You may find some non-trivial examples in Section 5.3.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: wxpython: How do I examine dragged data in OnDragOver? I'm a bit perplexed by drag and drop in wxPython (but perhaps this question pertains to drag and drop in other GUI frameworks as well). The framework provides a couple of callbacks (OnEnter and OnDragOver) that purportedly allow me to inform the system whether the current mouse position is a valid place to drop whatever it is that is being dragged. From these methods I can return wx.DragNone, wx.DragCopy, etc. What baffles me is that from within these methods I am not allowed to call GetData, which means I am not allowed to examine the data that the user is dragging. If I cannot see the data, how am I supposed to know whether it is OK for the user to drop here?
A: One solution, which is a hack of limited usefulness: when a drag is initiated, store the dragged data in a global or static reference somewhere. This way, in the OnEnter and OnDragOver handlers, it is possible to get a reference to the data being dragged. This is of course only useful for drags within the same application (the same instance of the application, actually).
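A minimal wxPython sketch of that hack; the module-level current_drag_data name is illustrative, and this only helps for drags that originate in the same application instance:
import wx

current_drag_data = None  # stash for the in-flight drag data

def start_drag(source_window, text):
    global current_drag_data
    current_drag_data = text              # stash it before starting the drag
    data = wx.TextDataObject(text)
    source = wx.DropSource(source_window)
    source.SetData(data)
    source.DoDragDrop(wx.Drag_AllowMove)
    current_drag_data = None              # clear it once the drag finishes

class MyDropTarget(wx.TextDropTarget):
    def OnDragOver(self, x, y, default_result):
        # GetData() is off limits here, but the stash is readable
        if current_drag_data is not None and current_drag_data.startswith("ok:"):
            return default_result
        return wx.DragNone

    def OnDropText(self, x, y, text):
        print("dropped:", text)
        return True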
A: There is no way to see dragged data in OnEnter and OnDragOver methods.
The only solution I found is to store the dragged item in some instance variable that is then readable inside these methods.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Asp.net Reinstalling a DLL into the GAC I'm trying to re-install a DLL in the GAC; everything seems to work fine, but the web application accessing it still seems to be using the old one.
The old DLL is the same version as the new one with only a minor edit, it will be used by 50 different sites so changing the version then changing the reference in the web.config is not a good solution.
Restarting the IIS server or the worker process isn't an option as there are already 50 sites running that must continue to do so.
Does anyone know what I'm doing wrong, or what I can do to remedy this situation?
A: AFAIK, you need to restart IIS for it to get a fresh reference to the updated DLL. Your best bet is to perform the reset at a low traffic time. If you are running multiple servers with load balancing, you can prevent new connections from hitting one server until all connections have been closed. Afterwards, update the DLL, restart IIS, and bring the server back into the connection pool. Repeat for each server with no visible downtime to the end users.
A: Since you don't make a reference to application pools, I'm going to assume you are on the old version of IIS. In that case, what you'll need to do is to "touch" all the DLLs in each site that references the DLL.
The problem is that the code is already loaded and you need to find a non-intrusive way to re-load the application. Recycling app-pools is an effective way to do this. If you are on the old IIS that doesn't have app-pools, then updating the last-modified in the /bin/ folders or web.config files will reload the application without affecting the other sites.
So a script of some kind to do the above is in order. All it needs to do is update the last-modified timestamp on the DLLs in every /bin application directory.
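As a sketch of such a script in C# (the D:\Sites root is a placeholder for wherever the 50 sites live):
using System;
using System.IO;

class TouchBinDlls
{
    static void Main()
    {
        // Bumping the last-modified time makes ASP.NET recycle each app
        // on its next request, without restarting IIS.
        foreach (var dll in Directory.GetFiles(@"D:\Sites", "*.dll",
                                               SearchOption.AllDirectories))
        {
            if (dll.IndexOf(@"\bin\", StringComparison.OrdinalIgnoreCase) >= 0)
            {
                File.SetLastWriteTime(dll, DateTime.Now);
                Console.WriteLine("Touched " + dll);
            }
        }
    }
}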
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: asp.net mvc - subfolders How does the new Microsoft asp.net mvc implementation handle partitioning your application - for example:
--index.aspx
--about.aspx
--contact.aspx
--/feature1
--/feature1/subfeature/action
--/feature2/subfeature/action
I guess what I am trying to say is that it seems everything has to go into the root of the views/controllers folders which could get unwieldy when working on a project that if built with web forms might have lots and lots of folders and sub-folders to partition the application.
I think I get the MVC model and I like the look of it compared to web forms, but I'm still getting my head round how you would build a large project in practice.
A: In terms of how you arrange your views, you can put your views in subfolders if you'd like and create your own view structure. All views can always be referenced by their full path using the ~syntax. So if you put Index.aspx in \Views\Feature1\Home then you could reference that view using ~/Views/Feature1/Home/Index.aspx.
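For example, assuming a released version of ASP.NET MVC, a controller action could return that view by its full virtual path:
public class Feature1Controller : Controller
{
    public ActionResult Index()
    {
        // ~ paths bypass the default view-location convention
        return View("~/Views/Feature1/Home/Index.aspx");
    }
}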
A: Here are two good blog posts that I found that may help other readers:
http://stephenwalther.com/blog/archive/2008/07/23/asp-net-mvc-tip-24-retrieve-views-from-different-folders.aspx
This one talks a little more in-depth about what Haacked described above.
http://haacked.com/archive/2008/11/04/areas-in-aspnetmvc.aspx
This is a nice alternative for grouping your site into "areas."
A: There aren't any issues with organizing your controllers; you just need to set up the routes to take the organization into consideration. The problem you will run into is finding the view for the controller, since you changed the convention. There isn't any built-in functionality for it yet, but it is easy to create a workaround yourself with an ActionFilterAttribute and a custom view locator that inherits from ViewLocator. Then, when creating your controller, you just specify which ViewLocator to use, so the controller knows how to find the view. I can post some code if needed.
This method kind of goes along with some advice I gave another person for separating their views out for a portal using ASP.NET MVC. Here is the link to the question as a reference.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Internalize Class and Methods in .NET Assembly I have a set of multiple assemblies (one assembly is to be used as an API and it depends on other assemblies). I would like to merge all assemblies into one single assembly but prevent all assemblies except the API one to be visible from the outside.
I will then obfuscate this assembly with Xenocode. From what I have seen, it is impossible to internalize assembly with Xenocode.
I have seen ILMerge from Microsoft, but was unable to figure if it can do what I want.
http://research.microsoft.com/~mbarnett/ILMerge.aspx
A: I have used ILMerge from microsoft to internalize DLL's into a single assembled library. There is a useful GUI for using ILMerge called NuGenUnify. You can find it here.
A: I know Xenocode can merge assemblies into one but I am not sure if it will internalize other non-primary assemblies.
I have found the /internalize switch in ILMerge that "internalize" all assemblies except the primary one. Pretty useful!
A: I suggest you look at the InternalsVisibleTo attribute on MSDN.
You can mark everything in all the assemblies (except the API assembly) as internal instead of public, then reshow them to just your API assembly.
Having done that, using ILMerge should give you a single assembly with just the API classes visible.
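A sketch of what that looks like in each non-API assembly's AssemblyInfo.cs (the API assembly name is a placeholder; a strong-named API assembly would also need its public key in the string):
using System.Runtime.CompilerServices;

// Marks this assembly's internal types as visible to the API assembly only
[assembly: InternalsVisibleTo("My.Api.Assembly")]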
A: There are some issues with ILMerge, but I think if you add optimisations + merge + obfuscation you're likely to create a highly complex situation for little benefit.
Why not have just one assembly, and make only your API public?
If you're always distributing them as a single assembly there's no reason not to just compile them as that. You'll get more benefit from compiler optimisations and it will be quicker to compile too.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I make sure scrollbars don't overlap content? When creating scrollable user controls with .NET and WinForms I have repeatedly encountered situations where, for example, a vertical scrollbar pops up, overlapping the control's content, causing a horizontal scrollbar to also be needed. Ideally the content would shrink just a bit to make room for the vertical scrollbar.
My current solution has been to just keep my controls out of the far right 40 pixels or so that the vertical scroll-bar will be taking up. Since this is still effectively client space for the control, the horizontal scroll-bar still comes up when it gets covered by the vertical scroll-bar, even though no controls are being hidden at all. But then at least the user doesn't actually need to use the horizontal scrollbar that comes up.
Is there a better way to make this all work? Some way to keep the unneeded and unwanted scrollbars from showing up at all?
A: You will need your controls to resize slightly to accommodate the width of the vertical scroll bar. One way to achieve this is through docking. Rather than just dropping controls on the form, you'll have to play a bit with panels, padding, min/max sizing and docking.
Here is example code you can place behind a blank new Form1. Resize the form, in the designer or at runtime, and you'll see that the horizontal scrollbar is not shown and the fields are not overlapped. I've also given the fields a max width for good measure:
#region Windows Form Designer generated code
/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent() {
this.textBox1 = new System.Windows.Forms.TextBox();
this.label1 = new System.Windows.Forms.Label();
this.panel1 = new System.Windows.Forms.Panel();
this.panel2 = new System.Windows.Forms.Panel();
this.textBox2 = new System.Windows.Forms.TextBox();
this.label2 = new System.Windows.Forms.Label();
this.panel1.SuspendLayout();
this.panel2.SuspendLayout();
this.SuspendLayout();
//
// textBox1
//
this.textBox1.Dock = System.Windows.Forms.DockStyle.Top;
this.textBox1.Location = new System.Drawing.Point(32, 0);
this.textBox1.MaximumSize = new System.Drawing.Size(250, 0);
this.textBox1.Name = "textBox1";
this.textBox1.Size = new System.Drawing.Size(250, 20);
this.textBox1.TabIndex = 0;
//
// label1
//
this.label1.AutoSize = true;
this.label1.Dock = System.Windows.Forms.DockStyle.Left;
this.label1.Location = new System.Drawing.Point(0, 0);
this.label1.Name = "label1";
this.label1.Padding = new System.Windows.Forms.Padding(0, 3, 0, 0);
this.label1.Size = new System.Drawing.Size(32, 16);
this.label1.TabIndex = 0;
this.label1.Text = "Field:";
//
// panel1
//
this.panel1.Controls.Add(this.textBox1);
this.panel1.Controls.Add(this.label1);
this.panel1.Dock = System.Windows.Forms.DockStyle.Top;
this.panel1.Location = new System.Drawing.Point(0, 0);
this.panel1.Name = "panel1";
this.panel1.Size = new System.Drawing.Size(392, 37);
this.panel1.TabIndex = 2;
//
// panel2
//
this.panel2.Controls.Add(this.textBox2);
this.panel2.Controls.Add(this.label2);
this.panel2.Dock = System.Windows.Forms.DockStyle.Top;
this.panel2.Location = new System.Drawing.Point(0, 37);
this.panel2.Name = "panel2";
this.panel2.Size = new System.Drawing.Size(392, 37);
this.panel2.TabIndex = 3;
//
// textBox2
//
this.textBox2.Dock = System.Windows.Forms.DockStyle.Top;
this.textBox2.Location = new System.Drawing.Point(32, 0);
this.textBox2.MaximumSize = new System.Drawing.Size(250, 0);
this.textBox2.Name = "textBox2";
this.textBox2.Size = new System.Drawing.Size(250, 20);
this.textBox2.TabIndex = 0;
//
// label2
//
this.label2.AutoSize = true;
this.label2.Dock = System.Windows.Forms.DockStyle.Left;
this.label2.Location = new System.Drawing.Point(0, 0);
this.label2.Name = "label2";
this.label2.Padding = new System.Windows.Forms.Padding(0, 3, 0, 0);
this.label2.Size = new System.Drawing.Size(32, 16);
this.label2.TabIndex = 0;
this.label2.Text = "Field:";
//
// Form1
//
this.AutoScaleDimensions = new System.Drawing.SizeF(6F, 13F);
this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Font;
this.AutoScroll = true;
this.ClientSize = new System.Drawing.Size(392, 116);
this.Controls.Add(this.panel2);
this.Controls.Add(this.panel1);
this.Name = "Form1";
this.Text = "Form1";
this.panel1.ResumeLayout(false);
this.panel1.PerformLayout();
this.panel2.ResumeLayout(false);
this.panel2.PerformLayout();
this.ResumeLayout(false);
}
#endregion
private System.Windows.Forms.TextBox textBox1;
private System.Windows.Forms.Label label1;
private System.Windows.Forms.Panel panel1;
private System.Windows.Forms.Panel panel2;
private System.Windows.Forms.TextBox textBox2;
private System.Windows.Forms.Label label2;
A: If your controls are inside a panel, try setting the AutoScroll property of the Panel to False. This will hide the scrollbars. I hope this points you in the right direction.
myPanel.AutoScroll = False
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: java.lang.IllegalArgumentException: Invalid in servlet mapping <servlet>
<servlet-name>myservlet</servlet-name>
<servlet-class>workflow.WDispatcher</servlet-class>
<load-on-startup>2</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>myservlet</servlet-name>
<url-pattern>*NEXTEVENT*</url-pattern>
</servlet-mapping>
Above is the snippet from Tomcat's web.xml. The URL pattern *NEXTEVENT* on startup throws
java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping
It will be greatly appreciated if someone can hint at the error.
A: <url-pattern>*NEXTEVENT*</url-pattern>
The URL pattern is not valid. It can either end in an asterisk or start with one (to denote a file extension mapping).
The url-pattern specification:
*
*
*A string beginning with a ‘/’ character and ending with a ‘/*’ suffix is used for path mapping.
*A string beginning with a ‘*.’ prefix is used as an extension mapping.
*A string containing only the ’/’ character indicates the "default" servlet of the application. In this case the servlet path is the request URI minus the context path and the path info is null.
*All other strings are used for exact matches only.
See section 12.2 of the Java Servlet Specification Version 3.1 for more details.
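For example, either of these would be a valid replacement for the original mapping, depending on the URL scheme you want:
<servlet-mapping>
    <servlet-name>myservlet</servlet-name>
    <url-pattern>/NEXTEVENT/*</url-pattern> <!-- path mapping -->
</servlet-mapping>
<servlet-mapping>
    <servlet-name>myservlet</servlet-name>
    <url-pattern>/NEXTEVENT</url-pattern> <!-- exact match -->
</servlet-mapping>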
A: A workaround that can achieve this is to add a servlet filter that does URL rewrites, e.g. rewriting a URL containing NEXTEVENT to /NEXTEVENT/(the part before NEXTEVENT)/(the part after NEXTEVENT), or something similar.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44"
} |
Q: Getting all types that implement an interface Using reflection, how can I get all types that implement an interface with C# 3.0/.NET 3.5 with the least code, and minimizing iterations?
This is what I want to re-write:
foreach (Type t in this.GetType().Assembly.GetTypes())
if (t is IMyInterface)
; //do stuff
A: Mine would be this in c# 3.0 :)
var type = typeof(IMyInterface);
var types = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(s => s.GetTypes())
.Where(p => type.IsAssignableFrom(p));
Basically, the least amount of iterations will always be:
loop assemblies
loop types
see if implemented.
A: This worked for me. It loops through the classes and checks to see if they are derived from myInterface:
foreach (Type mytype in System.Reflection.Assembly.GetExecutingAssembly().GetTypes()
    .Where(mytype => mytype.GetInterfaces().Contains(typeof(myInterface)))) {
//do stuff
}
A: I see so many overcomplicated answers here, and people always tell me that I tend to overcomplicate things. Also, using the IsAssignableFrom method for the purpose of solving the OP's problem is wrong!
Here is my example, it selects all assemblies from the app domain, then it takes flat list of all available types and checks every single type's list of interfaces for match:
public static IEnumerable<Type> GetImplementingTypes(this Type itype)
=> AppDomain.CurrentDomain.GetAssemblies().SelectMany(s => s.GetTypes())
.Where(t => t.GetInterfaces().Contains(itype));
A: The other answers did not work with a generic interface.
This one does; just replace typeof(ISomeInterface) with typeof(T).
List<string> types = AppDomain.CurrentDomain.GetAssemblies().SelectMany(x => x.GetTypes())
.Where(x => typeof(ISomeInterface).IsAssignableFrom(x) && !x.IsInterface && !x.IsAbstract)
.Select(x => x.Name).ToList();
So with
AppDomain.CurrentDomain.GetAssemblies().SelectMany(x => x.GetTypes())
we get all the assemblies
!x.IsInterface && !x.IsAbstract
is used to exclude the interface and abstract ones and
.Select(x => x.Name).ToList();
to have them in a list.
A: I appreciate this is a very old question but I thought I would add another answer for future users as all the answers to date use some form of Assembly.GetTypes.
Whilst GetTypes() will indeed return all types, it does not necessarily mean you could activate them, and so the call can potentially throw a ReflectionTypeLoadException.
A classic example for not being able to activate a type would be when the type returned is derived from base but base is defined in a different assembly from that of derived, an assembly that the calling assembly does not reference.
So say we have:
class ClassA { }                        // in AssemblyA
class ClassB : ClassA, IMyInterface { } // in AssemblyB
class ClassC { }                        // in AssemblyC, which references AssemblyB but not AssemblyA
If, in ClassC (which is in AssemblyC), we then do something as per the accepted answer:
var type = typeof(IMyInterface);
var types = AppDomain.CurrentDomain.GetAssemblies()
.SelectMany(s => s.GetTypes())
.Where(p => type.IsAssignableFrom(p));
Then it will throw a ReflectionTypeLoadException.
This is because without a reference to AssemblyA in AssemblyC you would not be able to:
var bType = typeof(ClassB);
var bClass = (ClassB)Activator.CreateInstance(bType);
In other words ClassB is not loadable which is something that the call to GetTypes checks and throws on.
So, to safely qualify the result set for loadable types, then as per Phil Haack's article Get All Types in an Assembly and Jon Skeet's code, you would instead do something like:
public static class TypeLoaderExtensions {
public static IEnumerable<Type> GetLoadableTypes(this Assembly assembly) {
if (assembly == null) throw new ArgumentNullException("assembly");
try {
return assembly.GetTypes();
} catch (ReflectionTypeLoadException e) {
return e.Types.Where(t => t != null);
}
}
}
And then:
private IEnumerable<Type> GetTypesWithInterface(Assembly asm) {
var it = typeof (IMyInterface);
return asm.GetLoadableTypes().Where(it.IsAssignableFrom).ToList();
}
A: To find all types in an assembly that implement IFoo interface:
var results = from type in someAssembly.GetTypes()
where typeof(IFoo).IsAssignableFrom(type)
select type;
Note that Ryan Rinaldi's suggestion was incorrect. It will return 0 types. You cannot write
where type is IFoo
because type is a System.Type instance, and will never be of type IFoo. Instead, you check to see if IFoo is assignable from the type. That will get your expected results.
Also, Adam Wright's suggestion, which is currently marked as the answer, is incorrect as well, and for the same reason. At runtime, you'll see 0 types come back, because all System.Type instances weren't IFoo implementors.
A: All the answers posted thus far either take too few or too many assemblies into account. You need to inspect only the assemblies that reference the assembly containing the interface. This minimizes the number of static constructors being run unnecessarily, saves a huge amount of time, and avoids possibly unexpected side effects in the case of third-party assemblies.
public static class ReflectionUtils
{
public static bool DoesTypeSupportInterface(Type type, Type inter)
{
if (inter.IsAssignableFrom(type))
return true;
if (type.GetInterfaces().Any(i => i.IsGenericType && i.GetGenericTypeDefinition() == inter))
return true;
return false;
}
public static IEnumerable<Assembly> GetReferencingAssemblies(Assembly assembly)
{
return AppDomain
.CurrentDomain
.GetAssemblies().Where(asm => asm.GetReferencedAssemblies().Any(asmName => AssemblyName.ReferenceMatchesDefinition(asmName, assembly.GetName())));
}
public static IEnumerable<Type> TypesImplementingInterface(Type desiredType)
{
var assembliesToSearch = new Assembly[] { desiredType.Assembly }
.Concat(GetReferencingAssemblies(desiredType.Assembly));
return assembliesToSearch.SelectMany(assembly => assembly.GetTypes())
.Where(type => DoesTypeSupportInterface(type, desiredType));
}
public static IEnumerable<Type> NonAbstractTypesImplementingInterface(Type desiredType)
{
return TypesImplementingInterface(desiredType).Where(t => !t.IsAbstract);
}
}
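A hypothetical call site for the utility class above:
// all concrete types implementing IMyInterface, searched only in the
// interface's own assembly plus the assemblies that reference it
var types = ReflectionUtils.NonAbstractTypesImplementingInterface(typeof(IMyInterface));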
A: Edit: I've just seen the edit to clarify that the original question was for the reduction of iterations / code and that's all well and good as an exercise, but in real-world situations you're going to want the fastest implementation, regardless of how cool the underlying LINQ looks.
Here's my Utils method for iterating through the loaded types. It handles regular classes as well as interfaces, and the excludeSystemTypes option speeds things up hugely if you are looking for implementations in your own / third-party codebase.
public static List<Type> GetSubclassesOf(this Type type, bool excludeSystemTypes) {
List<Type> list = new List<Type>();
IEnumerator enumerator = Thread.GetDomain().GetAssemblies().GetEnumerator();
while (enumerator.MoveNext()) {
try {
Type[] types = ((Assembly) enumerator.Current).GetTypes();
if (!excludeSystemTypes || (excludeSystemTypes && !((Assembly) enumerator.Current).FullName.StartsWith("System."))) {
IEnumerator enumerator2 = types.GetEnumerator();
while (enumerator2.MoveNext()) {
Type current = (Type) enumerator2.Current;
if (type.IsInterface) {
if (current.GetInterface(type.FullName) != null) {
list.Add(current);
}
} else if (current.IsSubclassOf(type)) {
list.Add(current);
}
}
}
} catch {
}
}
return list;
}
It's not pretty, I'll admit.
A: Even better, narrow things down by choosing the assembly first. If you know all your implementations are within the same assembly, filter its DefinedTypes directly.
// We get the assembly through the base class
var baseAssembly = typeof(baseClass).GetTypeInfo().Assembly;
// we filter the defined classes according to the interfaces they implement
var typeList = baseAssembly.DefinedTypes.Where(type => type.ImplementedInterfaces.Any(inter => inter == typeof(IMyInterface))).ToList();
By Can Bilgin
A: There's no easy way (in terms of performance) to do what you want to do.
Reflection works mainly with assemblies and types, so you'll have to get all the types of the assembly and query them for the right interface. Here's an example:
Assembly asm = Assembly.Load("MyAssembly");
Type[] types = asm.GetTypes();
Type[] result = types.Where(x => x.GetInterface("IMyInterface") != null).ToArray();
That will get you all the types that implement the IMyInterface in the Assembly MyAssembly
A: Other answers here use IsAssignableFrom. You can also use FindInterfaces from the System namespace, as described here.
Here's an example that checks all assemblies in the currently executing assembly's folder, looking for classes that implement a certain interface (avoiding LINQ for clarity).
static void Main() {
const string qualifiedInterfaceName = "Interfaces.IMyInterface";
var interfaceFilter = new TypeFilter(InterfaceFilter);
var path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
var di = new DirectoryInfo(path);
foreach (var file in di.GetFiles("*.dll")) {
try {
var nextAssembly = Assembly.ReflectionOnlyLoadFrom(file.FullName);
foreach (var type in nextAssembly.GetTypes()) {
var myInterfaces = type.FindInterfaces(interfaceFilter, qualifiedInterfaceName);
if (myInterfaces.Length > 0) {
// This class implements the interface
}
}
} catch (BadImageFormatException) {
// Not a .net assembly - ignore
}
}
}
public static bool InterfaceFilter(Type typeObj, Object criteriaObj) {
return typeObj.ToString() == criteriaObj.ToString();
}
You can set up a list of interfaces if you want to match more than one.
A: Loop through all loaded assemblies, loop through all their types, and check if they implement the interface.
Something like:
Type ti = typeof(IYourInterface);
foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies()) {
foreach (Type t in asm.GetTypes()) {
if (ti.IsAssignableFrom(t)) {
// here's your type in t
}
}
}
A: This worked for me (if you wish you could exclude system types in the lookup):
Type lookupType = typeof (IMenuItem);
IEnumerable<Type> lookupTypes = GetType().Assembly.GetTypes().Where(
t => lookupType.IsAssignableFrom(t) && !t.IsInterface);
A: There are many valid answers already, but I'd like to add another implementation as a Type extension and a list of unit tests to demonstrate different scenarios:
public static class TypeExtensions
{
public static IEnumerable<Type> GetAllTypes(this Type type)
{
var typeInfo = type.GetTypeInfo();
var allTypes = GetAllImplementedTypes(type).Concat(typeInfo.ImplementedInterfaces);
return allTypes;
}
private static IEnumerable<Type> GetAllImplementedTypes(Type type)
{
yield return type;
var typeInfo = type.GetTypeInfo();
var baseType = typeInfo.BaseType;
if (baseType != null)
{
foreach (var foundType in GetAllImplementedTypes(baseType))
{
yield return foundType;
}
}
}
}
This algorithm supports the following scenarios:
public static class GetAllTypesTests
{
public class Given_A_Sample_Standalone_Class_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(SampleStandalone);
_expectedTypes =
new List<Type>
{
typeof(SampleStandalone),
typeof(object)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Abstract_Base_Class_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(SampleBase);
_expectedTypes =
new List<Type>
{
typeof(SampleBase),
typeof(object)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Child_Class_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(SampleChild);
_expectedTypes =
new List<Type>
{
typeof(SampleChild),
typeof(SampleBase),
typeof(object)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Base_Interface_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(ISampleBase);
_expectedTypes =
new List<Type>
{
typeof(ISampleBase)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Child_Interface_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(ISampleChild);
_expectedTypes =
new List<Type>
{
typeof(ISampleBase),
typeof(ISampleChild)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Implementation_Class_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
protected override void Given()
{
_sut = typeof(SampleImplementation);
_expectedTypes =
new List<Type>
{
typeof(SampleImplementation),
typeof(SampleChild),
typeof(SampleBase),
typeof(ISampleChild),
typeof(ISampleBase),
typeof(object)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
public class Given_A_Sample_Interface_Instance_Type_When_Getting_All_Types
: Given_When_Then_Test
{
private Type _sut;
private IEnumerable<Type> _expectedTypes;
private IEnumerable<Type> _result;
class Foo : ISampleChild { }
protected override void Given()
{
var foo = new Foo();
_sut = foo.GetType();
_expectedTypes =
new List<Type>
{
typeof(Foo),
typeof(ISampleChild),
typeof(ISampleBase),
typeof(object)
};
}
protected override void When()
{
_result = _sut.GetAllTypes();
}
[Fact]
public void Then_It_Should_Return_The_Right_Type()
{
_result.Should().BeEquivalentTo(_expectedTypes);
}
}
sealed class SampleStandalone { }
abstract class SampleBase { }
class SampleChild : SampleBase { }
interface ISampleBase { }
interface ISampleChild : ISampleBase { }
class SampleImplementation : SampleChild, ISampleChild { }
}
A: I got exceptions in the LINQ code, so I do it this way (without a complicated extension):
private static IList<Type> loadAllImplementingTypes(Type[] interfaces)
{
IList<Type> implementingTypes = new List<Type>();
// find all types
foreach (var interfaceType in interfaces)
foreach (var currentAsm in AppDomain.CurrentDomain.GetAssemblies())
try
{
foreach (var currentType in currentAsm.GetTypes())
if (interfaceType.IsAssignableFrom(currentType) && currentType.IsClass && !currentType.IsAbstract)
implementingTypes.Add(currentType);
}
catch { }
return implementingTypes;
}
A: The OfType LINQ method can be used for exactly this kind of scenario:
https://learn.microsoft.com/fr-fr/dotnet/api/system.linq.enumerable.oftype?view=netframework-4.8
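Note that OfType filters a sequence of object instances rather than Type metadata, so it fits when you already hold the objects. A minimal sketch (Foo and Bar are hypothetical classes, with only Foo implementing IMyInterface):
object[] items = { new Foo(), new Bar(), "plain string" };
// keeps only the elements that can be cast to IMyInterface
List<IMyInterface> implementors = items.OfType<IMyInterface>().ToList();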
A: If it helps anyone, this is what I'm using to make some of my unit tests easier :)
public static Type GetInterfacesImplementation(this Type type)
{
return type.Assembly.GetTypes()
.Where(p => type.IsAssignableFrom(p) && !p.IsInterface)
.SingleOrDefault();
}
A: You could use some LINQ to get the list:
var types = from type in this.GetType().Assembly.GetTypes()
where type is ISomeInterface
select type;
But really, is that more readable?
A: public IList<T> GetClassByType<T>()
{
    return AppDomain.CurrentDomain.GetAssemblies()
        .SelectMany(s => s.GetTypes())
        .Where(p => typeof(T).IsAssignableFrom(p) && !p.IsAbstract && !p.IsInterface)
        .Select(c => (T)Activator.CreateInstance(c))
        .ToList();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "649"
} |
Q: Proper namespace management in .NET XmlWriter I use .NET XML technologies quite extensively in my work. One of the things that I like very much is the XSLT engine, more precisely its extensibility. However, there is one little piece which keeps being a source of annoyance. Nothing major, or something we can't live with, but it is preventing us from producing the beautiful XML we would like to produce.
One of the things we do is transform nodes inline and importing nodes from one XML document to another.
Sadly, when you save nodes to an XmlTextWriter (actually whatever XmlWriter.Create(Stream) returns), the namespace definitions get all thrown in there, regardless of whether they are necessary (previously defined) or not. You get something like the following XML:
<root xmlns:abx="http://bladibla">
<abx:child id="A">
<grandchild id="B">
<abx:grandgrandchild xmlns:abx="http://bladibla" />
</grandchild>
</abx:child>
</root>
Does anyone have a suggestion as to how to convince .NET to be efficient about its namespace definitions?
PS. As an added bonus I would like to override the default namespace, changing it as I write a node.
A: Did you try this?
Dim settings = New XmlWriterSettings With {.Indent = True,
.NamespaceHandling = NamespaceHandling.OmitDuplicates,
.OmitXmlDeclaration = True}
Dim s As New MemoryStream
Using writer = XmlWriter.Create(s, settings)
...
End Using
The interesting part is 'NamespaceHandling.OmitDuplicates'.
A: Use this code:
using (var writer = XmlWriter.Create("file.xml"))
{
const string Ns = "http://bladibla";
const string Prefix = "abx";
writer.WriteStartDocument();
writer.WriteStartElement("root");
// set root namespace
writer.WriteAttributeString("xmlns", Prefix, null, Ns);
writer.WriteStartElement(Prefix, "child", Ns);
writer.WriteAttributeString("id", "A");
writer.WriteStartElement("grandchild");
writer.WriteAttributeString("id", "B");
writer.WriteElementString(Prefix, "grandgrandchild", Ns, null);
// grandchild
writer.WriteEndElement();
// child
writer.WriteEndElement();
// root
writer.WriteEndElement();
writer.WriteEndDocument();
}
This code produced desired output:
<?xml version="1.0" encoding="utf-8"?>
<root xmlns:abx="http://bladibla">
<abx:child id="A">
<grandchild id="B">
<abx:grandgrandchild />
</grandchild>
</abx:child>
</root>
A: I'm not sure this is what you're looking for, but you can use this kind of code when you start writing to the Xml stream:
myWriter.WriteAttributeString("xmlns", "abx", null, "http://bladibla");
The XmlWriter should remember it and not rewrite it anymore. It may not be 100% bulletproof, but it works most of the time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: MS SQL Server 2008 "linked server" to Oracle : schema not showing I have a Windows 2008 Server (x64) running Microsoft SQL 2008 (x64) and I'm creating a Linked Server connection to an Oracle server. I'm able to make the connection, but I cannot see any information regarding which schema a table belongs to.
In SQL 2005, my linked servers show the schema information as I would expect.
Does anyone know how to resolve this issue? Is it an issue with the provider, OraOLEDB.Oracle?
Any help or pointers would be appreciated.
A: @Boojiboy - When you are looking at the tables via a linked server, there used to be a column showing which schema each table belongs to. It appears that the latest Oracle OLEDB drivers no longer show this information.
A: It looks like sp_tables_ex will do the trick, it came from the below article.
-- verify tables OK
exec sp_tables_ex @table_server = 'LINKED_ORA',
                  @table_schema = 'MySchema'
@table_schema is optional. If not provided, you will get a list of all tables in all schemas.
http://it.toolbox.com/blogs/daniel-at-work/linking-sql-server-2005-to-oracle-26791
A: Also, in SQL Server 2008 under Server Objects > Providers, make sure your OraOLEDB.Oracle provider has the 'Allow inprocess' option enabled.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Parse string to TimeSpan I have some strings of xxh:yym format where xx is hours and yy is minutes like "05h:30m". What is an elegant way to convert a string of this type to TimeSpan?
A: DateTime.ParseExact or DateTime.TryParseExact lets you specify the exact format of the input. After you get the DateTime, you can grab the DateTime.TimeOfDay which is a TimeSpan.
In the absence of TimeSpan.TryParseExact, I think an 'elegant' solution is out of the mix.
@buyutec As you suspected, this method would not work if the time spans have more than 24 hours.
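For spans under 24 hours, a minimal sketch of the ParseExact approach might look like this (the format string is an assumption based on the "xxh:yym" shape from the question):
using System;
using System.Globalization;

DateTime dt = DateTime.ParseExact("05h:30m", "HH'h':mm'm'", CultureInfo.InvariantCulture);
TimeSpan span = dt.TimeOfDay; // 05:30:00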
A: This seems to work, though it is a bit hackish:
TimeSpan span;
if (TimeSpan.TryParse("05h:30m".Replace("m","").Replace("h",""), out span))
MessageBox.Show(span.ToString());
A: Here'e one possibility:
TimeSpan.Parse(s.Remove(2, 1).Remove(5, 1));
And if you want to make it more elegant in your code, use an extension method:
public static TimeSpan ToTimeSpan(this string s)
{
TimeSpan t = TimeSpan.Parse(s.Remove(2, 1).Remove(5, 1));
return t;
}
Then you can do
"05h:30m".ToTimeSpan();
A: From another thread:
How to convert xs:duration to timespan
A: Are TimeSpan.Parse and TimeSpan.TryParse not options? If you aren't using an "approved" format, you'll need to do the parsing manually. I'd probably capture your two integer values in a regular expression, then try to parse them into integers; from there you can create a new TimeSpan with its constructor.
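A sketch of that regular-expression approach, assuming the "xxh:yym" shape from the question (this one also tolerates spans over 24 hours, since the TimeSpan constructor accepts any hour count):
using System;
using System.Text.RegularExpressions;

Match m = Regex.Match("05h:30m", @"^(\d+)h:(\d+)m$");
if (m.Success)
{
    // hours, minutes, seconds
    TimeSpan span = new TimeSpan(int.Parse(m.Groups[1].Value),
                                 int.Parse(m.Groups[2].Value), 0);
}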
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |
Q: Perforce in a Microsoft Shop Our dev shop currently uses Visual SourceSafe. We all know how that could end up (badly), so we're investigating other systems. First up is Perforce. Does anyone have experience with using it and its integration into Visual Studio (2003/2005/2008)? Is it as good as any other, or is it pretty solid with good features, comparatively?
A: It's difficult to call $900 per user a good feature.
A: We used Perforce for well over a year before switching to SVN recently. While I did like the tools (for example, visual diff and merge and the admin bits), we had some really tiresome issues with binding, as Chris mentions; otherwise, the VS integration is satisfactory. If anything, I find working with SVN easier and more intuitive than Perforce. TortoiseSVN (the Windows Explorer shell extension) is great, and we bought a couple of VisualSVN licenses for VS integration. Contrary to Perforce, VisualSVN does not work with the MS SCC interface, but rather directly with the SVN client, which I personally see as an advantage. Perforce does have support for many other OSes, but our non-Windows devs feel more comfortable with SVN too. If I were to have to choose again, I'd stick with SVN.
A: Sourcegear Vault is the best SCM for migrating VSS users to.
And it's cheap.
A: I used Perforce at my last 3 jobs (at my current job I'm using Subversion, which I don't like nearly as much). I'm a big fan of Perforce, and moving from SourceSafe it will seem like Nirvana. Just getting atomic checkin will be a big boost for your company. Otherwise, Perforce is fast, it has good tools, and the workflow is simple for doing things like merges and integrations. I wholeheartedly recommend it. It may not be all new and flashy like the latest distributed VCS's, but honestly, I prefer the client/server model for its speed, especially if you're working with people in other countries that may have slow connections to you.
The Visual Studio integration is pretty good, but it has a few irritating issues. If you run another Perforce client at the same time (like P4V), it's very poor at keeping changes from the other client in sync in terms of showing what files are currently checked in/out. You generally have to shut down Visual Studio and load the project again if you want it to sync correctly. But, the sync status doesn't actually affect checkins/checkouts/updates from working correctly, it just means you can be fooled in to thinking something is in a different state than it actually is while you're in Visual Studio. The Perforce clients will always show the correct status as they sync continually with the database.
Also, on occasion you'll find you need to work "offline" (not connected to the Perforce database for some reason) and when you load the project again the next time, your Perforce bindings may be lost and you'll have to rebind each project individually. If you work with a solution that contains many projects this can be a big pain in the patoot. Same goes for when you first check out a solution, binding to Perforce is needed before the integration occurs.
A: Perforce works fine with Visual Studio, including "offline" mode where VS will make your local files writable and sync with the server later.
I tend to use the Perforce GUI for many operations (submits, diffs) just because it's quicker/better, but the process of the IDE checking things out is seamless.
Perforce in my experience is rock-solid and the best mixed (code+data) version control product out there if cost is not a factor.
My biggest gripe is that the performance of the server under Windows is nowhere near as good as under *nix, and if you are using a *nix server they do not officially support the option for case-insensitive filenames (meaning you either forgo support relating to filesystem errors, or set up a trigger that prevents people adding foo.cpp if Foo.cpp exists).
My other main complaint is that for some common operations you have to revert to the command line, often piping commands together. One example would be getting a list of files in a directory that are not under source control.
Both of these are issues that reflect more on the company than the product though. IMO Perforce know they're at the top of the market and thus see no reason to invest in fixing things like this.
A: I have experience using a Perforce derivative.
It seemed hard to manage from the admin's perspective, but it was fine to use from a programmer's perspective.
Then again, I'm big on command line version control so can't speak for VS integration.
A: I've used personally and managed a number of teams for a few years who have been doing Perforce & Visual Studio. It works perfectly well. There can be a couple of binding/rebinding gotchas, but these are generally easy to sort out - Perforce knowledgebase and/or the mailing list is a good source of info.
Never had any problems with using command line, visual clients, and VS IDe simultaneously - refresh normally works fine.
A: We use Perforce extensively in the company, including branching for very large projects, development on Sun Solaris and Windows, and more than 120 users.
It is very fast, and the Windows GUI (P4V) is very nice. The Explorer integration is acceptable. I've disabled the VS integration, and use macros (calling e.g. p4 edit) to edit/revert/diff files. The VS integration is extremely annoying for large projects (our solution has >130 projects), but may work for smaller projects.
A: I haven't used Perforce, but I have found moving to Team Foundation Server as one of the best options while working with Visual Studio.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Rewarding code projects for *complete* beginners Courses for people who are being introduced to programming very often include a code project, which I think is a nice way to learn. However, such projects often feel too artificial, and are thus not very rewarding to work on.
What are your ideas of rewarding code projects? (Preferably easy to begin, and extendable at will for the more advanced!).
Edit:
@Mark: thanks for the link, though I'm more interested in projects for people who are completely new to programming (the link seems to refer more to people who are already proficient in at least one language, and trying to learn a new one -the typical SO audience I'd say :) -).
@Kevin, Vaibhav, gary: I was thinking of people who are learning programming through one language, so at the beginning of the course some don't know anything about control structures (and even less about any kind of syntax). However, I was thinking in quite a large project (typically in the 1k-10k lines of code range, possibly in groups of 2 or 3 students). This is what was done at my school for the complete beginners, and it sure seemed to work for them... except that most of them found their projects quite boring to work on!
A: As has been stated a few times, what you are trying to teach the beginner is very important to the project.
My advice to you for planning something like this:
1) Avoid making a computer game
A computer game, while fun to build, doesn't reward the programmer with results early on (it's very complex). You want to concentrate on small but useful application programs, such as a Port Scanner. The example there is a little complex, but it's one of the best learning projects I've seen on the web.
2) Teach graphics early
It's rewarding to see the fruits of your labors early on, and it motivates you to go further. Whether you're using WinForms, MFC or the Win32 API, OpenGL or DirectX, teach it early.
3) Many small lessons with in depth information
This principle is followed by the above linked Port Scanner project, and it works well. Teach each part thoroughly, and give time for the beginner to absorb the lesson. I think that ZophusX had a good format for giving the information. It's too bad he's mostly abandoned his site.
4) It takes time
Don't rush things. Nobody becomes a stellar programmer in a few weeks. Try and make the lessons simple, but engaging, and keep building from your previous lessons.
5) Get feedback early and often
You might think a project is incredibly interesting, or a particular lesson or such, but you aren't the one learning. Your student(s) will greatly appreciate it when you ask them early on how things are going, and what they'd like to know more about. Be flexible enough that you can accomodate some of those requests.
6) Have fun teaching
Have fun. Passion is contagious, and if your student(s) see how much you enjoy the subject matter, some of that enthusiasm will rub off on them as well.
I hope that helps!
A: Some good rewarding projects, in terms of what you can learn and which are quite scalable in terms of complexity, features are:
*
*Games
*A travel and transportation reservation/booking system
*Encyclopedia or a Dictionary of terms, articles
*Conversion Calculators (Currency, Units, etc.)
The key is to pick a project simple enough that some of its features are immediately apparent when you look at the project title. And when really given a thought, it will reveal more features that you can add to it.
The project should have enough difficulty so that its features seem just beyond the beginner's reach, thereby motivating him to learn something new all the time.
A: If you are training new people in your company, then attaching them as intern resources on a live project is very rewarding.
This increases the work load of the main developers a little (because they have to review all the work that the intern does), but goes a long way in terms of training and development of the person.
A: I do think that games and puzzles are a good place to start as they can give great scope for developing more complex versions. For example a tic-tac-toe program can be built as a simple command line program initially that lets two players play the game.
This step can be used to show how a simple data structure or array can represent the game board, simple input to get user commands/moves, simple output to display the game board and prompts, etc. Then you can start showing how an algorithm can be used to allow a player vs. computer mode. I like the simple magic square math algorithm for tic-tac-toe as it's based on very simple math. After this the sky's the limit: UI improvements, using file I/O to load and save games, more advanced algorithms to get the computer to play better, etc. More complex and satisfying games can still be produced using text mode or simple graphics.
I've used the Sokoban game as a means of showing lots of techniques over the years.
The simplest game I've used is a number list reversing game. This involves a mixed up list of numbers from 1-9. The player can specify a number of digits to reverse on the left of the list. The aim is to get the list sorted. This is great for absolute beginners. Each little part of the game can be written and tested separately.
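As a rough sketch of that number-reversing game (a minimal console version; the starting list and prompts are illustrative):
using System;
using System.Collections.Generic;
using System.Linq;

class ReversalGame
{
    static void Main()
    {
        var numbers = new List<int> { 3, 1, 9, 5, 2, 8, 6, 4, 7 };
        while (!numbers.SequenceEqual(numbers.OrderBy(n => n)))
        {
            Console.WriteLine(string.Join(" ", numbers));
            Console.Write("How many numbers to reverse from the left? ");
            int count = int.Parse(Console.ReadLine());
            numbers.Reverse(0, count); // reverse the leftmost 'count' entries
        }
        Console.WriteLine("Sorted - you win!");
    }
}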
A: It really depends on what you're trying to teach the beginner. If you're trying to teach syntax, then simple "Hello World" programs and ones that spit out every odd number between 1 and 100 are fine to get them started. If you're trying to teach data structures, then maybe something like a 20 questions game or some simple sorting program. If you're trying to teach recursion, then maybe a breadth first search program. If you're trying to teach database manipulation, then something like a order tracking system would be appropriate.
A: Take a look at code examples in the book Python Programming for the Absolute Beginner
A: Text Adventure.
*
*It's a console app
*You'll need to do some useful things, hold inventory, map and room state and parse input
*It's fun and you can give it to others to play! :D
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: UITypeEditor and IExtenderProvider I have an extender (IExtenderProvider) which extends certain types of
controls with additional properties. For one of these properties, I have
written a UITypeEditor. So far, all works just fine.
The extender also has a couple of properties itself, which I am trying to
use as a sort of default for the UITypeEditor. What I want to do is to be
able to set a property on the extender itself (not the extended controls),
and when I open up the UITypeEditor for one of the additional properties on
an extended control, I want to set a value in the UITypeEditor to the value
of the property on the extender.
A simple example: The ExtenderProvider has a property DefaultExtendedValue. On the form I set the value of this property to "My Value". Extended controls have, through the provider, a property ExtendedValue with a UITypeEditor. When I open the editor for the property ExtendedValue the default (initial) value should be set to "My Value".
It seems to me that the best place to do this would be
UITypeEditor.EditValue, just before calling
IWindowsFormsEditorService.DropDownControl or .ShowDialog.
The only problem is that I can't (or I haven't discovered how to) get hold
of the extender provider itself in EditValue, to read the value of the property in question and set it in the UITypeEditor. Context gives me the extended
control, but that is of no use to me in this case.
Is there any way to achieve what I'm trying? Any help appreciated!
Thanks
Tom
@samjudson: That's not a bad idea, but unfortunately it doesn't quite get me there. I'd really like to be able to set this default value individually for each instance of the extender provider. (I might have more than one on a single form with different values for different groups of extended controls.)
A: Could you read the attribute yourself?
DefaultValueAttribute att = context.PropertyDescriptor.Attributes
    .OfType<DefaultValueAttribute>()
    .FirstOrDefault();
object myDefault = null;
if ( att != null )
myDefault = att.Value;
I've used LINQ to simplify the code, but you could do something similar back in .NET 1.
A: I have found this: http://social.msdn.microsoft.com/forums/en-US/winformsdesigner/thread/07299eb0-3e21-42a3-b36b-12e37282af83/
Basically:
var Ctl = context.Instance as Control;
Type t = Type.GetType("System.ComponentModel.ExtendedPropertyDescriptor");
MyOwnExtenderProvider myProvider = GetValueOnPrivateMember(t, context.PropertyDescriptor, "provider") as MyOwnExtenderProvider;
And magically, myProvider holds my IExtenderProvider instance!
where GetValueOnPrivateMember should be implemented this way:
static object GetValueOnPrivateMember(Type type, object dataobject, string fieldname)
{
BindingFlags getFieldBindingFlags = BindingFlags.DeclaredOnly | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField;
return type.InvokeMember(fieldname,
getFieldBindingFlags,
null,
dataobject,
null);
}
A: Have you considered adding the DefaultValue as a static property of the ExtenderProvider, then you can access it without requiring an instance of the provider?
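A sketch of that suggestion (type and member names are illustrative):
public class MyExtenderProvider : System.ComponentModel.IExtenderProvider
{
    // shared default, readable from the UITypeEditor without an instance
    public static string DefaultExtendedValue { get; set; }

    public bool CanExtend(object extendee)
    {
        return extendee is System.Windows.Forms.Control;
    }
}
The UITypeEditor could then read MyExtenderProvider.DefaultExtendedValue inside EditValue, though as the question edit notes, a static property cannot vary between provider instances on the same form.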
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: ASP.Net: Using System.Web.UI.Control.ResolveUrl() in a shared/static function What is the best way to use ResolveUrl() in a Shared/static function in Asp.Net? My current solution for VB.Net is:
Dim x As New System.Web.UI.Control
x.ResolveUrl("~/someUrl")
Or C#:
System.Web.UI.Control x = new System.Web.UI.Control();
x.ResolveUrl("~/someUrl");
But I realize that isn't the best way of calling it.
A: I use System.Web.VirtualPathUtility.ToAbsolute.
A: I tend to use HttpContext.Current to get the page, then run any page/web control methods off that.
A: It's worth noting that although System.Web.VirtualPathUtility.ToAbsolute is very useful here, it is not a perfect replacement for Control.ResolveUrl.
There is at least one significant difference: Control.ResolveUrl handles Query Strings very nicely, but they cause VirtualPathUtility to throw an HttpException. This can be absolutely mystifying the first time it happens, especially if you're used to the way that Control.ResolveUrl works.
If you know the exact structure of the Query String you want to use, this is easy enough to work around, viz:
public static string GetUrl(int id)
{
string path = VirtualPathUtility.ToAbsolute("~/SomePage.aspx");
return string.Format("{0}?id={1}", path, id);
}
...but if the Query String is getting passed in from an unknown source then you're going to need to parse it out somehow. (Before you get too deep into that, note that System.Uri might be able to do it for you).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: How do you back up your development machine? How do you back up your development machine so that in the event of a catastrophic hardware malfunction, you are up and running in the least amount of time possible?
A: I use Mozy, and rarely think about it. That's one weight off my shoulders that I won't ever miss.
A: There's an important distinction between backing up your development machine and backing up your work.
For a development machine your best bet is an imaging solution that offers as near a "one-click-restore" process as possible. TimeMachine (Mac) and Windows Home Server (Windows) are both excellent for this purpose. Not only can you have your entire machine restored in 1-2 hours (depending on HDD size), but both run automatically and store deltas so you can have months of backups in relatively little space. There are also numerous "ghosting" packages, though they usually do not offer incremental/delta backups so take more time/space to backup your machine.
Less good are products such as Carbonite/Mozy/JungleDisk/RSync. These products WILL allow you to retrieve your data, but you will still have to reinstall the OS and programs. Some have limited/no histories either.
In terms of backing up your code and data then I would recommend a sourcecode control product like SVN. While a general backup solution will protect your data, it does not offer the labeling/branching/history functionality that SCC packages do. These functions are invaluable for any type of project with a shelf-life.
You can easily run a SVN server on your local machine. If your machine is backed up then your SVN database will be also. This IMO is the best solution for a home developer and is how I keep things.
A: Virtual machines and CVS.
Desktops are rolled out with ghost and are completely vanilla.
Except they have VirtualBox.
Then developers pull the configured baseline development environment
down from CVS.
They log into the development VM image as themselves, refresh the source and libraries from CVS and they're up and working agian.
This also makes doing develpment and maintenance at the same time a lot easier.
(I know some people won't like CVS or VirtualBox, so feel free to substitute your tools of choice.)
oh, and You check you work into a private branch off Trunk daily.
There you go.
Total time to recover: 1 hour (tops).
Time to "adopt" a shiny new laptop for a customer visit: 1 hour (tops).
And a step towards CMMI Configuration Management.
A: BTW your development machine should not contain anything of value. All your work (and your company's work) should be in central repositories (SVN).
A: I use TimeMachine.
A: For my home and development machines I use Acronis True Image.
In my opinion, with the HD cheap prices nothing replaces a full incremental daily HD backup.
A: *
*All important files are in version control (Subversion)
*
*My subversion layout generally matches the file layout on my web server so I can just do a checkout and all of my library files and things are in the correct places.
*Twice-daily backups to an external hard drive
*Nightly rsync backups to a remote server.
*
*This means that I send stuff on my home server over to my webhost and all files & databases on my webhost back home so I'm not screwed if I lose either my house or my webhost.
A: A little preparation helps:
*
*All my code is kept organized in one single directory (with categorized sub-directories).
*All email is kept in various PSTs.
*All code is also checked into source control at the end of every day.
*All documents are kept in one place as well.
Backup:
*
*Backup your code, email, documents as often as it suits you (daily).
*Keep an image of your development environment always ready.
Failure and Recovery
*
*If everything fails, format and install the image.
*Copy back everything from backup and you are up and running.
Of course there are tweaks here and there (incremental backup, archiving, etc.) which you have to do to make this process real.
A: If you are talking absolute least amount of restore time... I've often setup machines to do Ghost (Symantec or something similar) backups on a nightly basis to either an image or just a direct copy to another drive. That way all you have to do is reimage the machine from the image or just swap the drives. You can be back up in under 10 minutes... The setup I did before was in situation where we had some production servers that were redundant and it was acceptable for them to be offline long enough to clone the drive...but only at night. During the day they had to be up 100%...it saved my butt a couple times when a main drive failed... I just opened the case, swapped the cables so the backup drive was the new master and was back online in 5 minutes.
A: I've finally gotten my "fully automated data back-up strategy" down to a fine art. I never have to manually intervene, and I'll never lose another harddrive worth of data. If my computer dies, I'll always have a full bootable back-up that is no more than 24 hours old, and incremental back-ups no more than an hour old. Here are the details of how I do it.
My only computer is a 160 gig MacBook running OSX Leopard.
On my desk at work I have 2 external 500 gig harddrives.
One of them is a single 500 gig partition called "External".
The other has a 160 gig partition called "Clone" and a 340 gig partition called TimeMachine.
TimeMachine runs whenever I'm at work, constantly backing up my "in progress" files (which are also committed to Version Control throughout the day).
Every weekday at 12:05, SuperDuper! automatically copies my entire laptop harddrive to the "Clone" drive. If my laptop's harddrive dies, I can actually boot directly from the Clone drive and pick up work without missing a beat -- giving me some time to replace the drive (This HAS happened to me TWICE since setting this up!). (Technical Note: It actually only copies over whatever has changed since the previous weekday at 12:05... not the entire drive every time. Works like a charm.)
At home I have a D-Link DNS-323, which is a 1TB (2x500 gig) Network Attached Storage device running a Mirrored RAID, so that everything on the first 500 gig drive is automatically copied to the second 500 gig drive. This way, you always have a backup, and it's fully automated. This little puppy has a built-in Dynamic DNS client, and FTP server.
So, on my WRT54G router, I forward the FTP port (21) to my DNS-323, and leave its FTP server up.
After the SuperDuper clone has been made, rSync runs and synchronizes my "External" drive with the DNS-323 at home, via FTP.
That's it.
Using 4 drives (2 external, 2 in the NAS) I have:
1) An always-bootable complete backup less than 24 hours old, Monday-Friday
2) A working-backup of all my in-progress files, which is never more than 30 minutes old, Monday-Friday (when I'm at work and connected to the external drives)
3) Access to all my MP3s (170GB) and documents at work on the "External" drive and at home on the NAS
4) Two complete backups of all my MP3s and documents on the NAS (External is original copy, both drives on NAS are mirrors via ChronoSync)
Why do I do all of this?
Because:
1) In 2000, I dropped a 40 gig harddrive 1 inch, and it cost me $2500 to get that data back.
2) In the past year, I've had to take my MacBook in for repair 4 times. One dead harddrive, two dead motherboards, and a dead webcam. On the 4th time, they replaced my MacBook with a newer better one at no charge, and I haven't had a problem since.
Thanks to my daily backups, I didn't lose any work, or productivity. If I hadn't had them, though, all my work would have been gone, along with my MP3s, and my writing, and all the photos of my trips to Peru, Croatia, England, France, Greece, Netherlands, Italy, and all my family photos. Can you imagine? I'm sure you can, because I bet you have a pile of digital photos sitting on your computer right now... not backed-up in any way.
A: A combination of RAID1, Acronis, xcopy, DVDs and ftp. See:
http://successfulsoftware.net/2008/02/04/your-harddrive-will-fail-its-just-a-question-of-when/
A: Maybe just a simple hardware hard disk raid would be a good start. This way if one drive fails, you still have the other drive in the raid. If something other than the drives fail you can pop these drives into another system and get your files quickly.
A: I'm just sorting this out at work for the team. An image with all common tools is on Network. (We actually have a hotswap machine ready). All work in progress is on network too.
So a developer's machine goes boom. Use the hotswap machine and continue. Downtime ~15 mins + coffee break.
A: We have a corporate solution pushed down on us called Altiris, which works when it wants to. It depends on whether or not it's raining outside. I think Altiris might be a rain-god, and just doesn't know it. I am actually delighted when it's not working, because it means that I can have my 99% of CPU usage back, thank you very much.
Other than that, we don't have any rights to install other software solutions for backing things up or places we are permitted to do so. We are not permitted to move data off of our machines.
So, I end up just crossing my fingers while laughing at the madness.
A: I don't.
We do continuous integration, submit code often to the central source control system (which is backed up like crazy!).
If my machine dies at most I've lost a couple of days work.
And all I need to do is get a clean disk at setup the dev environment from a ghost image or by spending a day sticking CDs in, rebooting after Windows update, etc. Not a pleasant day but I do get a nice clean machine.
A: At work NetBackup or PureDisk depending on the box, at home rsync.
A: like a few others, I have a clean copy of my virtual pc that I can grab and start fresh at anytime and all code is stored in subversion.
A: I use SuperDuper! and backup my Virtual Machine to another external drive (i have two).
All the code is on a SVN server.
I have a clean VM in case mine fails. But in either case it takes me a couple of hours to install WinXP+Vstudio. i don't use anything else in that box.
A: I use xcopy to copy all my personal files to an external hard drive on startup.
Here's my startup.bat:
xcopy d:\files f:\backup\files /D /E /Y /EXCLUDE:BackupExclude.txt
This recurses directories, only copies files that have been modified and suppresses the message to replace an existing file, the list of files/folders in BackupExclude.txt will not be copied.
A: Windows Home Server. My dev box has two drives with about 750GB of data between them (C: is a 300GB SAS 15K RPM drive with apps and system on it, D: is a mirrored 1TB set with all my enlistments). I use Windows Home Server to back this machine up and have successfully restored it several times after horking it.
A: My development machine is backed up using Retrospect and Acronis. These are nightly backups that run when I'm asleep - one to an external drive and one to a network drive.
All my source code is in SVN repositories, I keep all my repositories under a single directory so I have a scheduled task running a script that spiders a path for all SVN repositories and performs a number of hotcopies (using the hotcopy.py script) as well as an svndump of each repository.
My work machine gets backed up however they handle it, however I also have the same script running to do hotcopies and svndumps onto a couple of locations that get backed up.
I make sure that of the work backups, one location is NOT on the SAN, yes it gets backed up and managed, but when it is down, it is down.
A: I would like a recommendation for an external RAID container, or perhaps just an external drive container, preferably interfacing using FireWire 800.
I also would like a recommendation for a manufacturer of the backup drives to go into the container. I have read so many reviews of drives saying that they failed that I'm not sure what to think.
I don't like backup services like Mozy because I don't want to trust them to not look at my data.
A: *
*SuperDuper complete bootable backups every few weeks
*Time Machine backups for my most important directories daily
*Code is stored in network subversion/git servers
*Mysql backups with cron on the web servers, use ssh/rsync to pull it down onto our local servers also using cron nightly.
A: If you use a Mac, it's a no brainer - just plug in an external hard drive and the built in Time Machine software will back up your whole system, then maintain an incremental backup on the schedule you define. This has got me out of a hole many a time when I've messed up my environment; it also made it super easy to restore my system after installing a bigger hard drive.
For offsite backups, I like JungleDisk - it works on Mac, Windows and Linux and backs up to Amazon S3 (or, added very recently, the Rackspace cloud service). This is a nice solution if you have multiple machines (or even VMs) and want to keep certain directories backed up without having to think about it.
A: Home Server Warning!
I installed Home Server on my development Server for two reasons: Cheap version of Windows Server 2003 and for backup reasons.
The backup software side of things is seriously hit or miss. If you 'Add' a machine to the list of computers to be backed up right at the start of installing Home Server, generally everything is great.
BUT it seems it becomes a WHOLE lot harder to add any other machines after a certain amount of time has passed.
(Case in point: I did a complete rebuild on my laptop, tried to Add it - NOPE!)
So i'm seriously doubting the reliability of this platform for backup purposes. Seems to be defeating the purpose if you can't trust it 100%
A: I have the following backup scenarios and use rsync as a primary backup tool.
*
*(weekly) Windows backup for "bare metal" recovery
Content of System drive C:\ using Windows Backup for quick recovery after physical disk failure, as I don't want to reinstall Windows and applications from scratch. This is configured to run automatically using Windows Backup schedule.
*(daily and conditional) Active content backup using rsync
Rsync takes care of all changed files from the laptop, phone, and other devices. I back up the laptop every night and after significant changes in content, like importing recent photo RAWs from the SD card to the laptop.
I've created a bash script that I run from Cygwin on Windows to start rsync: https://github.com/paravz/windows-rsync-backup
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: XPath and Selecting a single node I'm using XPath in .NET to parse an XML document, along the lines of:
XmlNodeList lotsOStuff = doc.SelectNodes("//stuff");
foreach (XmlNode stuff in lotsOStuff) {
XmlNode stuffChild = stuff.SelectSingleNode("//stuffChild");
// ... etc
}
The issue is that the XPath Query for stuffChild is always returning the child of the first stuff element, never the rest. Can XPath not be used to query against an individual XMLElement?
A: The // you use in front of stuffChild means you're looking for stuffChild elements, starting from the root.
If you want to start from the current node (decendants of the current node), you should use .//, as in:
stuff.SelectSingleNode(".//stuffChild");
A: // at the beginning of an XPath expression starts from the document root. Try ".//stuffChild". . is shorthand for self::node(), which will set the context for the search, and // is shorthand for the descendant axis.
So you have:
XmlNode stuffChild = stuff.SelectSingleNode(".//stuffChild");
which translates to:
XmlNode stuffChild = stuff.SelectSingleNode("self::node()/descendant-or-self::stuffChild");
In the case where the child node could have the same name as the parent, you would want to use the slightly more verbose syntax that follows, to ensure that you don't re-select the parent:
XmlNode stuffChild = stuff.SelectSingleNode("self::node()/descendant::stuffChild");
Also note that if "stuffChild" is a direct descendant of "stuff", you can completely omit the prefixes, and just select "stuffChild".
XmlNode stuffChild = stuff.SelectSingleNode("stuffChild");
The W3Schools tutorial has helpful info in an easy to digest format.
A: If "stuffChild" is a child node of "stuff", then your xpath should just be:
XmlNode stuffChild = stuff.SelectSingleNode("stuffChild");
A: Selecting a single node means you need only the first element. So, the best solution is:
XmlNode stuffChild = stuff.SelectSingleNode("descendant::stuffChild[1]");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: What is the best way to deal with DBNull's I frequently have problems dealing with DataRows returned from SqlDataAdapters. When I try to fill in an object using code like this:
DataRow row = ds.Tables[0].Rows[0];
string value = (string)row;
What is the best way to deal with DBNull's in this type of situation.
A: Add a reference to System.Data.DataSetExtensions, which adds LINQ support for querying data tables.
This would be something like:
string value = (
    from row in ds.Tables[0].AsEnumerable()
    select row.Field<string>(0) ).FirstOrDefault();
A: I always found it clear, concise, and problem free using a version of the If/Else check, only with the ternary operator. Keeps everything on one row, including assigning a default value if the column is null.
So, assuming a nullable Int32 column named "MyCol", where we want to return -99 if the column is null, but return the integer value if the column is not null:
return row["MyCol"] == DBNull.Value ? -99 : Convert.ToInt32(Row["MyCol"]);
It is the same method as the If/Else winner above, but I've found that if you're reading multiple columns in from a DataReader, it's a real bonus having all the column-read lines one under another, lined up, as it's easier to spot errors:
Object.ID = DataReader["ID"] == DBNull.Value ? -99 : Convert.ToInt32(DataReader["ID"]);
Object.Name = DataReader["Name"] == DBNull.Value ? "None" : Convert.ToString(DataReader["Name"]);
Object.Price = DataReader["Price"] == DBNull.Value ? 0.0f : Convert.ToSingle(DataReader["Price"]);
A: If you have control of the query that is returning the results, you can use ISNULL() to return non-null values like this:
SELECT
ISNULL(name,'') AS name
,ISNULL(age, 0) AS age
FROM
names
If your situation can tolerate these magic values to substitute for NULL, taking this approach can fix the issue through your entire app without cluttering your code.
A: Nullable types are good, but only for types that are not nullable to begin with.
To make a type "nullable" append a question mark to the type, for example:
int? value = 5;
I would also recommend using the "as" keyword instead of casting. You can only use the "as" keyword on nullable types, so make sure you're casting things that are already nullable (like strings) or you use nullable types as mentioned above. The reasoning for this is
*
*If a type is nullable, the "as" keyword returns null if a value is DBNull.
*It's ever-so-slightly faster than casting though only in certain cases. This on its own is never a good enough reason to use as, but coupled with the reason above it's useful.
I'd recommend doing something like this
DataRow row = ds.Tables[0].Rows[0];
string value = row["fooColumn"] as string;
In the case above, if the column value comes back as DBNull, then value will become null instead of throwing an exception. Be aware that if your DB query changes the columns/types being returned, using as will cause your code to silently fail and make values simply null instead of throwing the appropriate exception when incorrect data is returned, so it is recommended that you have tests in place to validate your queries in other ways to ensure data integrity as your codebase evolves.
A: DBNull implements .ToString() like everything else. No need to do anything. Instead of the hard cast, call the object's .ToString() method.
DataRow row = ds.Tables[0].Rows[0];
string value;
if (row["fooColumn"] == DBNull.Value)
{
value = string.Empty;
}
else
{
value = Convert.ToString(row["fooColumn"]);
}
this becomes:
DataRow row = ds.Tables[0].Rows[0];
string value = row["fooColumn"].ToString();
DBNull.Value.ToString() returns string.Empty.
I would imagine this is the best practice you're looking for.
A: It is worth mentioning that DBNull.Value.ToString() equals String.Empty.
You can use this to your advantage:
DataRow row = ds.Tables[0].Rows[0];
string value = row["name"].ToString();
However, that only works for strings; for everything else I would use the LINQ way or an extension method. For myself, I have written a little extension method that checks for DBNull and even does the casting via Convert.ChangeType(...):
int value = row.GetValueOrDefault<int>("count");
int value = row.GetValueOrDefault<int>("count", 15);
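The extension method itself is not shown in the answer; a minimal sketch of how it might look (the class and method names are assumptions, and Nullable column types would need extra handling):
using System;
using System.Data;

public static class MyDataRowExtensions
{
    // returns defaultValue when the column holds DBNull, otherwise converts to T
    public static T GetValueOrDefault<T>(this DataRow row, string column, T defaultValue)
    {
        object value = row[column];
        return value == DBNull.Value
            ? defaultValue
            : (T)Convert.ChangeType(value, typeof(T));
    }

    public static T GetValueOrDefault<T>(this DataRow row, string column)
    {
        return row.GetValueOrDefault(column, default(T));
    }
}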
A: Often when working with DataTables you have to deal with this cases, where the row field can be either null or DBNull, normally I deal with that like this:
string myValue = (myDataTable.Rows[i]["MyDbNullableField"] as string) ?? string.Empty;
The 'as' operator returns null for invalid casts, like DBNull to string, and the '??' operator returns the term to the right of the expression if the first is null.
A: If you aren't using nullable types, the best thing to do is check to see if the column's value is DBNull. If it is DBNull, then set your reference to what you use for null/empty for the corresponding datatype.
DataRow row = ds.Tables[0].Rows[0];
string value;
if (row["fooColumn"] == DBNull.Value)
{
value = string.Empty;
}
else
{
value = Convert.ToString(row["fooColumn"]);
}
As Manu said, you can create a convert class with an overloaded convert method per type so you don't have to pepper your code with if/else blocks.
I will however stress that nullable types are the better route to go if you can use them. The reasoning is that with non-nullable types, you are going to have to resort to "magic numbers" to represent null. For example, if you are mapping a column to an int variable, how are you going to represent DBNull? Often you can't use 0 because 0 has a valid meaning in most programs. Often I see people map DBNull to int.MinValue, but that could potentially be problematic too. My best advice is this:
*
*For columns that can be null in the database, use nullable types.
*For columns that cannot be null in the database, use regular types.
Nullable types were made to solve this problem. That being said, if you are on an older version of the framework or work for someone who doesn't grok nullable types, the code example will do the trick.
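For instance, a minimal sketch of that mapping (the column names here are made up for illustration):
DataRow row = ds.Tables[0].Rows[0];

// Nullable column: DBNull maps naturally to null, no magic number needed.
int? age = row["age"] == DBNull.Value ? (int?)null : Convert.ToInt32(row["age"]);

// Non-NULL column: a plain int is fine because DBNull can never occur here.
int id = Convert.ToInt32(row["id"]);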
A: You can also test with Convert.IsDBNull (MSDN).
A: I usually write my own ConvertDBNull class that wraps the built-in Convert class. If the value is DBNull it will return null if it's a reference type or the default value if it's a value type.
Example:
- ConvertDBNull.ToInt64(object obj) returns Convert.ToInt64(obj) unless obj is DBNull in which case it will return 0.
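A trimmed-down sketch of such a wrapper (only two members shown; the real class would carry one overload per type):
using System;

public static class ConvertDBNull
{
    // Value types fall back to their default value when the input is DBNull.
    public static long ToInt64(object obj)
    {
        return obj == DBNull.Value ? 0L : Convert.ToInt64(obj);
    }

    // Reference types return null instead.
    public static string ToString(object obj)
    {
        return obj == DBNull.Value ? null : Convert.ToString(obj);
    }
}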
A: For some reason I've had problems with doing a check against DBNull.Value, so I've done things slightly differently and leveraged the IsNull method on the DataRow object:
if (row.IsNull("fooColumn"))
{
value = string.Empty;
}
else
{
value = row["fooColumn"].ToString();
}
A: Brad Abrams posted something related just a couple of days ago
http://blogs.msdn.com/brada/archive/2009/02/09/framework-design-guidelines-system-dbnull.aspx
In summary: "AVOID using System.DBNull. Prefer Nullable<T> instead."
And here is my two cents (of untested code :) )
// Or if (row["fooColumn"] == DBNull.Value)
if (row.IsNull("fooColumn"))
{
// use a null for strings and a Nullable for value types
// if it is a value type and null is invalid, throw an
// InvalidOperationException here with some descriptive text,
// or don't check for null at all and let the cast exception below bubble
value = null;
}
else
{
// do a direct cast here. don't use "as", "convert", "parse" or "tostring"
// as all of these will swallow the case where it is the incorrect type.
// (unless it is a string in the DB and you really do want to convert it)
value = (string)row["fooColumn"];
}
And one question... Any reason you are not using an ORM?
A: You should also look at the extension methods. Here are some examples to deal with this scenario.
Recommended read
A: If you are concerned with getting DBNull when expecting strings, one option is to convert all the DBNull values in the DataTable into empty strings.
It is quite simple to do, but it would add some overhead, especially if you are dealing with large DataTables. Check this link that shows how to do it if you are interested.
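The basic idea is a double loop over the rows and the string columns, something like this (a sketch of the approach, not the linked article's code):
using System;
using System.Data;

static void ReplaceDBNullWithEmptyString(DataTable table)
{
    foreach (DataRow row in table.Rows)
    {
        foreach (DataColumn col in table.Columns)
        {
            // Only string columns can safely hold string.Empty.
            if (col.DataType == typeof(string) && row.IsNull(col))
                row[col] = string.Empty;
        }
    }
}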
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "46"
} |
Q: How to abort threads created with ThreadPool.QueueUserWorkItem is there a way to abort threads created with QueueUserWorkItem?
Or maybe I don't need to? What happens if the main application exits? Are all thread created from it aborted automatically?
A: The threadpool uses background threads. Hence, they will all be closed automatically when the application exits.
If you want to abort a thread yourself, you'll have to either manage the thread yourself (so you can call Thread.Abort() on the thread object) or you will have to set up some form of notification mechanism which will let you tell the thread that it should abort itself.
A: Yes, they will. However, if you are using unmanaged resources in those threads, you may end up in a lot of trouble.
A: You don't need to abort them. When your application exits, .NET will kill any threads with IsBackground = true. The .NET threadpool has all its threads set to IsBackground = true, so you don't have to worry about it.
Now if you're creating threads by newing up the Thread class, then you'll either need to abort them or set their IsBackground property to true.
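For example, a quick sketch (DoWork is a placeholder for your own method):
using System.Threading;

class Example
{
    static void DoWork()
    {
        // long-running work goes here
    }

    static void Main()
    {
        // A manually created thread is a foreground thread by default and
        // would keep the process alive on exit; marking it as background avoids that.
        Thread worker = new Thread(DoWork);
        worker.IsBackground = true;
        worker.Start();
    }
}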
A:
However, if you are using unmanaged
resources in those threads, you may
end up in a lot of trouble.
That would rather depend on how you were using them - if these unmanaged resources were properly wrapped then they'd be dealt with by their wrapper finalization regardless of the mechanism used to kill threads which had referenced them. And unmanaged resources are freed up by the OS when an app exits anyway.
There is a general feeling that (Windows) applications spend much too much time trying to clean up on app shutdown - often involving paging-in huge amounts of memory just so that it can be discarded again (or paging-in code which runs around freeing unmanaged objects which the OS would deal with anyway).
A: Yeah, they are background threads, but if, for example, you have an application where you use the ThreadPool for multiple downloads or similar work and you want to stop them, how do you stop? My suggestion would be:
exit the thread as soon as possible, for example:
bool stop = false;
void doDownloadWork(object s)
{
if (!stop)
{
DownloadLink((String)s, location);
}
}
and if you set stop = true, the work items still in the queue will exit automatically once the currently running ones finish their work.
A: To build on Lukas Šalkauskas' answer: you should use
volatile bool stop = false;
to tell the compiler this variable is used by several threads.
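Putting the two answers together, a minimal sketch of the whole pattern might look like this (DownloadLink stands in for your own work method):
using System;
using System.Threading;

class Downloader
{
    // volatile so writes to the flag are visible to the pool threads.
    private volatile bool stop = false;

    public void QueueAll(string[] urls)
    {
        foreach (string url in urls)
            ThreadPool.QueueUserWorkItem(DoDownloadWork, url);
    }

    public void StopAll()
    {
        // queued items that haven't started yet will now exit immediately
        stop = true;
    }

    private void DoDownloadWork(object state)
    {
        if (stop)
            return;
        DownloadLink((string)state);
    }

    private void DownloadLink(string url)
    {
        // the actual download work goes here
    }
}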
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Uncollapsible CollapsiblePanelExtender I have a CollapsiblePanelExtender that will not collapse. I have "collapsed" set to true and all the ControlIDs set correctly. I try to collapse and it goes through the animation but then expands almost instantly. This is in a User Control with the following structure.
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
<ContentTemplate>
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
DataSourceID="odsPartners" Width="450px" BorderWidth="0"
ShowHeader="false" ShowFooter="false" AllowSorting="true"
onrowdatabound="GridView1_RowDataBound">
<Columns>
<asp:TemplateField HeaderText="Contract Partners" SortExpression="Name">
<ItemTemplate>
<asp:Panel id="pnlRow" runat="server">
<table>
...Stuff...
</table>
</asp:Panel>
<ajaxToolkit:CollapsiblePanelExtender runat="server" ID="DDE"
Collapsed="true" ImageControlID="btnExpander" ExpandedImage="../Images/collapse.jpg" CollapsedImage="../Images/expand.jpg"
TargetControlID="DropPanel" CollapseControlID="btnExpander" ExpandControlID="btnExpander" />
<asp:Panel ID="DropPanel" runat="server" CssClass="CollapsedPanel">
<asp:Table ID="tblContracts" runat="server">
<asp:TableRow ID="row" runat="server">
<asp:TableCell ID="spacer" runat="server" Width="30"> </asp:TableCell>
<asp:TableCell ID="cellData" runat="server" Width="400">
<uc1:ContractList ID="ContractList1" runat="server" PartnerID='<%# Bind("ID") %>' />
</asp:TableCell>
</asp:TableRow>
</asp:Table>
</asp:Panel>
</ItemTemplate>
</asp:TemplateField>
</Columns>
</asp:GridView>
</ContentTemplate>
<Triggers>
<asp:AsyncPostBackTrigger ControlID="tbFilter" EventName="TextChanged" />
</Triggers>
</asp:UpdatePanel>
A: I am sorry I do not have time to trouble-shoot your code, so this is from the hip.
There is a good chance that this a client-side action that is failing. Make certain that your page has the correct doctype tag if you took it out of your page or masterPage. Furthermore, attempt to set the ClientState as well:
DDE.ClientState = "true";
The issue is that you have that thing wrapped inside of your TemplateField. I have run into issues using the AjaxControlToolkit on repeated fields and usually side with using a lighter-weight client-side option, up to and including rolling your own show/hide method that can be reused just by passing in a DOM id.
A: After checking the AutoExpand (which strangely had no visible effect) I checked the DOCTYPE. Sure enough, that was the culprit.
This is the correct one:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" >
Thanks Ian!
A: Also check that the you have the following property set:
AutoExpand="False"
One of the features of the collapsible panel is that it will auto expand when you put your mouse over it, and this tag will make sure that doesn't happen.
A: It is working fine:
CollapsiblePanelExtender CpeForControls = (CollapsiblePanelExtender)tbl_Form.FindControl("cpe_controls");
CpeForControls.ClientState = "true";
CpeForControls.Collapsed = true;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Modifying SharePoint System Files What is the general feeling amongst developers regarding the changing of files in the 12 hive.
For example, if you were asked to remove the "sign in as a different user" menu item, you would need to modify the relevant user control on the filesystem. Now if you just go and modify it via Notepad or copy over it, then when you bring a new server into the farm you will need to remember to do the same on the new server.
Obviously you could deploy the changed file as a solution and have that done automatically, but I'm just wondering if people are hesitant to make changes to the default installed files?
A: I have done a bit of SharePoint development, and I must tell you that messing with the 12-hive is a ticket to a world of pain if you ever want to move the app.
I'd rather hack up some javascript to hide it, at least that can be bound to the master page, which is much more portable.
And remember, you never know when the next service pack comes around and nukes your changes :)
A: I agree with Lars. Sometimes you will not be able to avoid it, depending on your needs. But, in general the best policy is to avoid modification if at all possible.
I know that some of the other menu items in the current user menu (change login, my settings, etc) can be changed by removing permissions from the user. Under Users and Groups there is an option for permissions. I can't remember the exact setting (develop at work, not at home), but there are reasonable descriptions next to each of the 30+ permissions. Remove it and you start hiding menu options. No modifications to the 12-hive needed.
A: There is a very simple rule: if you want to keep official support from Microsoft, don't change any of the files in the 12 hive that are installed by SharePoint.
I've never encountered a situation where the only solution was to change such a file. For example if you want to change an out-of-the-box user control of SharePoint, you can do so by making use of the DelegateControl, and overriding it in a feature.
More info:
*
*http://msdn.microsoft.com/en-us/library/ms463169.aspx
*http://www.devx.com/enterprise/Article/36628
I know it's tempting to quickly change a file, and I have to admit sometimes I just do that on a DEV box, but don't go there on a production server!
A: Not sure if there is much use pitching in, as everyone else pretty much has it covered, but I would also say don't do it. As tempting as it is, its just impossible to know the full impact of that little change you have made.
From a support perspective you will make it difficult for Microsoft support (patches/hotfixes).
From a maintenance perspective you are also opening yourself up to long term costs.
Go the javascript route.
A: The way to go about it is to use a Sharepoint Solution (WSP) file.
To change the user control, create a new Sharepoint feature with the new functionality.
Include this feature in your solution.
Deploy the solution either using the stsadm command line, or through Central Site Admin.
This will then get automatically deployed to all the servers in your farm, and it avoids overwriting any default SharePoint files.
For more info, check out Sharepoint Nuts and Bolts blog on http://www.sharepointnutsandbolts.com/ which give an introduction to WSP and Sharepoint Features.
A: I've done this many times and I will speak from experience: Never ever touch the onet.xml files within the 12 hive under any circumstance. Any error that you make in there (and, to make the CAML even more complex, the file is largely whitespace-sensitive) will have an impact on every part of SharePoint.
You should also consider that aside from the substantial risk to the installation, you may well be building in dependencies upon your changes that are then over-written in a future patch or service pack.
A: Most of the time, you can accomplish everything you want to using features and solution packages without modifying the files. However, there are a few (rather annoying) rare cases where your only option would be to modify a file on the system. I have used it for two particular cases so far. One was to add the PDF iFilter to the docicon.xml file, and the other was to add a theme to the themes.xml file. In both cases, it seemed to be the only way to achieve the goal. Still, we used a solution package to write those files out to all the servers in the farm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How to represent cross-model information in MVC? I have an application, built using MVC, that produces a view which delivers summary information across a number of models. Further to that, some calculations are performed across the different sets of data.
There's no clear single model (that maps to a table at least) that seems to make sense as the starting point for this, so the various summaries are pulled from the contributing models in the controller, passed into the view and the calculations are performed there.
But that seems, well, dirty. But controllers are supposed to be lightweight, aren't they? And business logic shouldn't be in views, as I have it as present.
So where should this information be assembled? A new model, that doesn't map to a table? A library function/module? Or something else?
(Although I see this as mostly of an architectural/pattern question, I'm working in Rails, FWIW.)
Edit: Good answers all round, and a lot of consensus, which is reassuring. I "accepted" the answer I did to keep the link to Railscasts at the top. I'm behind in my Railscast viewing - something I shall make strenuous attempts to rectify!
A: As Brian said, you can create another model that marshals out the work that needs doing. There is a great Railscast on how to do this type of thing.
HTH
A: Why not create a model that doesn't inherit ActiveRecord::Base and execute the logic there (think the Cart class in Agile...With Rails).
A: Controllers don't have to map to specific models or views. Your model doesn't have to map one-to-one to a database table. That's sort of the idea of the framework. Separation of concerns that can all be tested in isolation.
A: Controllers don't have to be that lightweight.
However if you have some calculations that only rely on the model/s then you probably just need some sort of model wrapper for the models to perform the calculation. You can then place that into the API for the view so the view gets the end result.
A: You don't want the logic to be in the view. However you are free to create a database view. Except, rather than create it on the database side, create it as a new model. This will enable you to perform your calculations and your actual logic there, in one place. The pain of trying to keep your views in sync vs. the one time "pain" of creating the new model... I vote for a new model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why can't I connect to my CAS server with Perl's AuthCAS? I'm attempting to use an existing CAS server to authenticate login for a Perl CGI web script and am using the AuthCAS Perl module (v 1.3.1). I can connect to the CAS server to get the service ticket but when I try to connect to validate the ticket my script returns with the following error from the IO::Socket::SSL module:
500 Can't connect to [CAS Server]:443 (Bad hostname '[CAS Server]')
([CAS Server] substituted for real server name)
Symptoms/Tests:
*
*If I type the generated URL for the authentication into the web browser's location bar it returns just fine with the expected XML snippet. So it is not a bad host name.
*If I generate a script without using the AuthCAS module but using the IO::Socket::SSL module directly to query the CAS server for validation on the generated service ticket the Perl script will run fine from the command line but not in the browser.
*If I add the AuthCAS module into the script in item 2, the script no longer works on the command line and still doesn't work in the browser.
Here is the bare-bones script that produces the error:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use AuthCAS;
use CGI::Carp qw( fatalsToBrowser );
my $id = $ENV{QUERY_STRING};
my $q = new CGI;
my $target = "http://localhost/cgi-bin/testCAS.cgi";
my $cas = new AuthCAS(casUrl => 'https://cas_server/cas');
if ($id eq ""){
my $login_url = $cas->getServerLoginURL($target);
printf "Location: $login_url\n\n";
exit 0;
} else {
print $q->header();
print "CAS TEST<br>\n";
## When coming back from the CAS server a ticket is provided in the QUERY_STRING
print "QUERY_STRING = " . $id . "</br>\n";
## $ST should contain the received Service Ticket
my $ST = $q->param('ticket');
my $user = $cas->validateST($target, $ST); #### This is what fails
printf "Error: %s\n", &AuthCAS::get_errors() unless (defined $user);
}
Any ideas on where the conflict might be?
The error is coming from the line directly above the snippet Cebjyre quoted namely
$ssl_socket = new IO::Socket::SSL(%ssl_options);
namely the socket creation. All of the input parameters are correct. I had edited the module to put in debug statements and print out all the parameters just before that call and they are all fine. Looks like I'm going to have to dive deeper into the IO::Socket::SSL module.
A: As usually happens when I post questions like this, I found the problem. It turns out the Crypt::SSLeay module was not installed, or at least not up to date. Of course the error messages didn't give me any clues. Updating it made all the problems go away, and things are working fine now.
A: Well, from the module source it looks like that IO::Socket error is coming from get_https2
[...]
unless ($ssl_socket) {
$errors = sprintf "error %s unable to connect https://%s:%s/\n",&IO::Socket::SSL::errstr,$host,$port;
return undef;
}
[...]
which is called by callCAS, which is called by validateST.
One option is to temporarily edit the module file to put some debug statements in if you can, but if I had to guess, I'd say the casUrl you are supplying isn't matching up to the _parse_url regex properly - maybe you have three slashes after the https?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: New Project : MySQL or SQL 2005 Express I am starting a new client/server project at work and I want to start using some of the newer technologies I've been reading about, LINQ and Generics being the main ones. Up until now I have been developing these types of applications with MySQL as clients were unwilling to pay the large licence costs for MSSQL.
I have played around a small amount with the express versions but have never actually developed anything with them. The new application will not have more than 5 concurrent connections but will be needed for daily reporting.
*
*Can MSSQL 2005 Express still be downloaded? I can't seem to find it on the Microsoft site. I would be hesitant to use MSSQL 2008 on a project so soon after its release.
*Are the express version adequate for my needs, I'm sure loads of people reading this have used them. Did you encounter any problems?
A: The answer to the question on any project in regards to what platform/technologies to use is: What does everyone know best?
*
*Yes express can still be downloaded.
*Will it fit your requirements? That depends on your requirements, of course. I have deployed MSSQL 2005 Express on several enterprise-level projects which I knew had a fixed database size that would never be exceeded (Express limits each database to 4 GB). Also keep in mind there are other hardware constraints, such as a 1-CPU limit.
Another thing to consider is if you need the Enterprise-level tools that come with a paid edition of SQL Server. If you are moving a lot of flat data around you are stuck writing your own Bulk Copy procs, which rule the house, but it's an extra step, no doubt.
A: Note sure about #2 but you can download SQL Server Express 2005 here.
A: Sql express has more features, and is a lot more powerful, but will only run on Windows boxes. If you ever need to scale, Sql express can be switched easily to a commercial variant.
MySql doesn't support half the features, but does have most of the basic ones you actually need, and will run on windows or *nix boxes. It's also not throttled in the same way as Sql express is.
In my opinion (having used both extensively, but not touched MySql for a few years) Sql express is a far better DB system. If you're building .Net applications the Linq support is a deal clincher.
If you aren't going for pure Sql server support, I wouldn't go for pure MySql support instead. Use a DBFactory design pattern to load your data layer or use simple SQL:92 syntax that's a lowest common denominator.
A: Why not go to Sql server express 2008?
A: I'm mostly going to advocate MS SQL Server because of .NET integration. Linq To Sql is pretty much my favorite way to deal with databases these days: anonymous functions make everything better! My current place of work has also used MSSQL Express for real projects, so you have at least two of us confirming that the restrictions aren't too harsh.
A: I have about 50 web sites running perl/apache/mysql and about 10 running C#/ASP.Net/SQL Server (Lite) and other (large) applications running on SQL Server (Heavy). I never have problems with SQL Server - it just works. I often have problems with MySQL.
My advice would be to go for the SQL Server based option even if you had to pay for it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Do you use distributed version control? I'd like to hear from people who are using distributed version control (aka distributed revision control, decentralized version control) and how they are finding it. What are you using, Mercurial, Darcs, Git, Bazaar? Are you still using it? If you've used client/server rcs in the past, are you finding it better, worse or just different? What could you tell me that would get me to jump on the bandwagon? Or jump off for that matter, I'd be interested to hear from people with negative experiences as well.
I'm currently looking at replacing our current source control system (Subversion) which is the impetus for this question.
I'd be especially interested in anyone who's used it with co-workers in other countries, where your machines may not be on at the same time, and your connection is very slow.
If you're not sure what distributed version control is, here are a couple articles:
Intro to Distributed Version Control
Wikipedia Entry
A: At the place where I work, we decided to move from SVN to Bazaar (after evaluating git and mercurial). Bazaar was easy to start off, with simple commands (not like the 140 commands that git has)
The advantages that we see are the ability to create local branches and work on them without disturbing the main version, the ability to work without network access, and faster diffs.
One command in bzr which I like is the shelve extension. If you start working on two logically different pieces of code in a single file and want to commit only one piece, you can use the shelve extension to literally shelve the other changes for later. In Git you can do the same by playing around in the index (staging area), but bzr has a better UI for it.
Most of the people were reluctant to move over as they have to type in two commands to commit and push (bzr ci + bzr push). Also it was difficult for them to understand the concept of branches and merging (no one uses branches or merges them in svn).
Once you understand that, it will increase the developer's productivity. Till everyone understands that, there will be inconsistent behaviour among everyone.
A: At my workplace we switched to Git from CVS about two months ago (the majority of my experience is with Subversion). While there was a learning curve involved in becoming familiar with the distributed system, I've found Git to be superior in two key areas: flexibility of working environment and merging.
I don't have to be on our VPN, or even have network connectivity at all, to have access to full versioning capabilities. This means I can experiment with ideas or perform large refactorings wherever I happen to be when the urge strikes, without having to remember to check in that huge commit I've built up or worrying about being unable to revert when I make a mess.
Because merges are performed client-side, they are much faster and less error-prone than initiating a server-side merge.
A: My company currently uses Subversion, CVS, Mercurial and git.
When we started five years ago we chose CVS, and we still use that in my division for our main development and release maintenance branch. However, many of our developers use Mercurial individually as a way to have private checkpoints without the pain of CVS branches (and particularly merging them) and we are starting to use Mercurial for some branches that have up to about 5 people. There's a good chance we'll finally ditch CVS in another year. Our use of Mercurial has grown organically; some people still never even touch it, because they are happy with CVS. Everyone who has tried Mercurial has ended up being happy with it, without much of a learning curve.
What works really nicely for us with Mercurial is that our (home brewed) continuous integration servers can monitor developer Mercurial repositories as well as the mainline. So, people commit to their repository, get our continuous integration server to check it, and then publish the changeset. We support lots of platforms so it is not feasible to do a decent level of manual checks. Another win is that merges are often easy, and when they are hard you have the information you need to do a good job on the merge. Once someone gets the merged version to work, they can push their merge changesets and then no one else has to repeat the effort.
The biggest obstacle is that you need to rewire your developers and managers brains so that they get away from the single linear branch model. The best medicine for this is a dose of Linus Torvalds telling you you're stupid and ugly if you use centralised SCM. Good history visualisation tools would help but I'm not yet satisfied with what's available.
Mercurial and CVS both work well for us with developers using a mix of Windows, Linux and Solaris, and I've noticed no problems with timezones. (Really, this isn't too hard; you just use epoch seconds internally, and I'd expect all the major SCM systems get this right).
It was possible, with a fair amount of effort, to import our mainline CVS history into Mercurial. It would have been easier if people had not deliberately introduced corner cases into our mainline CVS history as a way to test history migration tools. This included merging some Mercurial branches into the CVS history, so the project looks like it was using Mercurial from day one.
Our silicon design group chose Subversion. They are mainly eight timezones away from my office, and even over a fairly good dedicated line between our offices Subversion checkouts are painful, but workable. A big advantage of centralised systems is that you can potentially check big binaries into them (e.g. vendor releases) without making all the distributed repositories huge.
We use git for working with Linux kernel. Git would be more suitable for us once a native Windows version is mature, but I think the Mercurial design is so simple and elegant that we'll stick with it.
A: Not using distributed source control myself, but maybe these related questions and answers give you some insights:
*
*Distributed source control options
*Why is git better than Subversion
A: I personally use the Mercurial source control system. I've been using it for a bit more than a year now. It was actually my first experience with a VCS.
I tried Git, but never really pushed into it because I found it was too much for what I needed. Mercurial is really easy to pick up if you're a Subversion user since it shares a lot of commands with it. Plus I find the management of my repositories to be really easy.
I have 2 ways of sharing my code with people:
*
*I share a server with a co-worker and we keep a main repo for our project.
*For some OSS projects I work on, we create patches of our work with Mercurial (hg export) and the maintainer of the project just applies them to the repository (hg import)
Really easy to work with, yet very powerful. But generally, choosing a VCS really depends on your project's needs...
A: I've been using Mercurial both at work and in my own personal projects, and I am really happy with it. The advantages I see are:
*
*Local version control. Sometimes I'm working on something, and I want to keep a version history on it, but I'm not ready to push it to the central repositories. With distributed VCS, I can just commit to my local repo until it's ready, without branching. That way, if other people make changes that I need, I can still get them and integrate them into my code. When I'm ready, I push it out to the servers.
*Fewer merge conflicts. They still happen, but they seem to be less frequent, and are less of a risk, because all the code is checked in to my local repo, so even if I botch the merge, I can always back up and do it again.
*Separate repos as branches. If I have a couple development vectors running at the same time, I can just make several clones of my repo and develop each feature independently. That way, if something gets scrapped or slipped, I don't have to pull pieces out. When they're ready to go, I just merge them together.
*Speed. Mercurial is much faster to work with, mostly because most of your common operations are local.
Of course, like any new system, there was some pain during the transition. You have to think about version control differently than you did when you were using SVN, but overall I think it's very much worth it.
A: Back before we switched off of Sun workstations for embedded systems development we were using Sun's TeamWare solution. TeamWare is a fully distributed solution using SCCS as the local repository file revision system, which it then wraps with a set of tools to handle the merging operations (done through branch renaming) back to the centralized repositories, of which there can be many. In fact, because it is distributed, there really is no master repository per se (except by convention if you want it) and all users have their own copies of the entire source tree and revisions. During "put back" operations, the merge tool, using 3-way diffs, algorithmically sorts out what is what and allows you to combine the changes from different developers that have accumulated over time.
After switching to Windows for our development platform, we ended up switching to AccuRev. While AccuRev, because it depends on a centralized server, is not truly a distributed solution, logically, from a workflow model, it comes very close. Where TeamWare would have had completely separate copies of everything at each client, including all the revisions of all files, under AccuRev this is maintained in the central database and the local client machines only have the flat-file current version of things for editing locally. However these local copies can be versioned through the client connection to the server and tracked completely separately from any other changes (i.e. branches) implicitly created by other developers.
Personally, I think the distributed model implemented by TeamWare or the sort of hybrid model implemented by AccuRev is superior to completely centralized solutions. The main reason for this is that there is no notion of having to check out a file or having a file locked by another user. Also, users don't have to create or define the branches; the tools do this for you implicitly. When there are larger teams or different teams contributing to or maintaining a set of source files, this resolves "tool generated" locking-related collisions and allows the code changes to be coordinated more at the level of the developers, who ultimately have to coordinate changes anyway. In a sense, the distributed model allows for a much finer-grained "lock" rather than the coarse-grained locking instituted by the centralized models.
A: Have used darcs on a big project (GHC) and for lots of small projects. I have a love/hate relationship with darcs.
Pluses: incredibly easy to set up repository. Very easy to move changes around between repositories. Very easy to clone and try out 'branches' in separate repositories. Very easy to make 'commits' in small coherent groups that makes sense. Very easy to rename files and identifiers.
Minuses: no notion of history---you can't recover 'the state of things on August 5'. I've never really figured out how to use darcs to go back to an earlier version.
Deal-breaker: darcs does not scale. I (and many others) have gotten into big trouble with GHC using darcs. I've had it hang with 100% CPU usage for 9 days trying to pull in 3 months' worth of changes. I had a bad experience last summer where I lost two weeks trying to make darcs function and eventually resorted to replaying all my changes by hand into a pristine repository.
Conclusion: darcs is great if you want a simple, lightweight way to keep yourself from shooting yourself in the foot for your hobby projects. But even with some of the performance problems addressed in darcs 2, it is still not for industrial strength stuff. I will not really believe in darcs until the vaunted 'theory of patches' is something a bit more than a few equations and some nice pictures; I want to see a real theory published in a refereed venue. It's past time.
A: I really love Git, especially with GitHub. It's so nice being able to commit and roll back locally. And cherry-picking merges, while not trivial, is not terribly difficult, and far more advanced than anything Svn or CVS can do.
A: My group at work is using Git, and it has been all the difference in the world. We were using SCCS and a steaming pile of csh scripts to manage quite large and complicated projects that shared code between them (attempted to, anyway).
With Git, submodule support makes a lot of this stuff easy, and only a minimum of scripting is necessary. Our release engineering effort has gone way, way down because branches are easy to maintain and track. Being able to cheaply branch and merge really makes it reasonably easy to maintain a single collection of sources across several projects (contracts), whereas before, any disruption to the typical flow of things was very, very expensive. We've also found the scriptability of Git to be a huge plus, because we can customize its behavior through hooks or through scripts that do . git-sh-setup, and it doesn't seem like a pile of kludges like before.
We also sometimes have situations in which we have to maintain our version control across distributed, non-networked sites (in this case, disconnected secure labs), and Git has mechanisms for dealing with that quite smoothly (bundles, the basic clone mechanism, formatted patches, etc).
Some of this is just us stepping out of the early 80s and adopting some modern version control mechanisms, but Git "did it right" in most areas.
I'm not sure of the extent of answer you're looking for, but our experience with Git has been very, very positive.
A: Using Subversion with SourceForge and other servers over a number of different connections with medium sized teams and it's working very well.
A: I am a huge proponent of centralized source control for a lot of reasons, but I did try BitKeeper on a project briefly. Perhaps after years of using a centralized model in one format or another (Perforce, Subversion, CVS) I just found distributed source control difficult to use.
I am of the mindset that our tools should never get in the way of the actual work; they should make work easier. So, after a few head pounding experiences, I bailed. I would advise doing some really hardy tests with your team before rocking the boat because the model is very different than what most devs are probably accustomed to in the SCM world.
A: I've used bazaar for a little while now and love it. Trivial branching and merging back in give great confidence in using branches as they should be used. (I know that central vcs tools should allow this, but the common ones including subversion don't allow this easily).
bzr supports quite a few different workflows, from solo, through working with a centralised repository, to fully distributed. With each branch (for a developer or a feature) able to be merged independently, code reviews can be done on a per-branch basis.
bzr also has a great plugin (bzr-svn) allowing you to work with a subversion repository. You can make a copy of the svn repo (which initially takes a while as it fetches the entire history for your local repo). You can then make branches for different features. If you want to do a quick fix to the trunk while half way through your feature, you can make an extra branch, work in that, and then merge back to trunk, leaving your half done feature untouched and outside of trunk. Wonderful. Working against subversion has been my main use so far.
Note I've only used it on Linux, and mostly from the command line, though it is meant to work well on other platforms, has GUIs such as TortoiseBZR and a lot of work is being done on integration with IDEs and the like.
A: I'm playing around with Mercurial for my home projects. So far, what I like about it is that I can have multiple repositories. If I take my laptop to the cabin, I've still got version control, unlike when I ran CVS at home. Branching is as easy as hg clone and working on the clone.
A:
Using Subversion
Subversion isn't distributed, so that makes me think I need a wikipedia link in case people aren't sure what I'm talking about :)
A: Been using darcs 2.1.0 and it's great for my projects. Easy to use. Love cherry-picking changes.
A: I use Git at work, together with one of my coworkers. The main repository is SVN, though. We often have to switch workstations and Git makes it very easy to just pull changes from a local repository on another machine. When we're working as a team on the same feature, merging our work is effortless.
The git-svn bridge is a little wonky, because when checking into SVN it rewrites all the commits to add its git-svn-id comment. This destroys the nice history of merges between my coworker's repo and mine. I predict that we wouldn't use a central repository at all if every team member were using Git.
You didn't say what OS you develop on, but Git has the disadvantage that you have to use the command line to get all the features. Gitk is a nice GUI for visualizing the merge history, but the merging itself has to be done manually. Git-Gui and the Visual Studio plugins are not that polished yet.
A: We use distributed version control (Plastic SCM) for both multi-site and disconnected scenarios.
1- Multi-site: if you have distant groups, sometimes you can't rely on the internet connection, or it's not fast enough and slows down developers. Then having an independent server which can synchronize back (Plastic replicates branches back and forth) is very useful and speeds things up. It's probably one of the most common scenarios for companies since most of them are still wary of "totally distributed" practices where each developer has his own replicated repository.
2- Disconnected (or truly distributed if you prefer): every developer has his own repository which is replicated back and forth with his peers or the central location. It's very convenient to go to a customer's location or just go home with your laptop, and continue being able to switch branches, checkout and checkin code, look at the history, run annotates and so on, without having to access the remote "central" server. Then whenever you go back to the office you just replicate your changes (normally branches) back with a few clicks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: guid REST URL for ado.net dataservice call? Trying to use a guid as a resource id in a REST URL, but I can't find any resource on how to do it.
My best right now (not working though) is:
http://localhost:49157/PhotogalleryDS.svc/gallery('1d03beb3-6d63-498b-aad7-fab0b1480996')
(I've tried double/single quotes. And also {guid....} with braces. Still no luck.
I only get:
Syntax error '"' at position 0.
Question is: How would I (with regard to specification) write the url for a resource where the id is a guid?
A: I figured it out!
http://localhost:49157/PhotogalleryDS.svc/photo(guid'01231cc7-1e26-4f33-9fdf-fdf7015267dd')
This is the way
A: My best guess based on what you've written is that you're wrapping that URL in double quotes. The position 0 in that error message probably refers to the character directly before http.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What's the fastest way to determine a full URL from a relative URL (given a base URL) I'm currently using the module URI::URL to generate a full URL from a relative URL; however, it isn't running as fast as I'd like it to be. Does anyone know another way to do this that may be faster?
A: Just happened across this article which points out shortcomings in Redhat/Centos/Fedora implementations of Perl which affect URI profoundly.
If you are running one of these Linux flavours, you might want to recompile Perl from original source (not RPM source).
I realized that anyone running perl code with the distribution perl interpreter on Redhat 5.2, Centos 5.2 or Fedora 9 is likely a victim. Yes, even if your code doesn't use the fancy bless/overload idiom, many CPAN modules do! This google search shows 1500+ modules use the bless/overload idiom and they include some really popular ones like URI, JSON. ...
... At this point, I decided to recompile perl from source. The bug was gone. And the difference was appalling. Everything got seriously fast. CPUs were chilling at a loadavg below 0.10 and we were processing data 100x to 1000x faster!
A: The following code should work.
$uri = URI->new_abs( $str, $base_uri )
You should also take a look at the URI page on search.cpan.org.
A: Brendan, I should have clarified that I can't guarantee what the relative path is going to look like. It could be pretty tricky (e.g. has a slash at the front, doesn't have a slash, has "../", etc).
Peter, that's what I'm using now. Or is that faster than using the URI::URL->new($path)->abs?
A: Could depend a bit on how you obtain those 2 strings. Probably the secure, fireproof way to do that is what is in URI::URL or similar libraries, where all alternatives, including malicious ones, would be considered. Maybe slower, but in some environments faster will be the speed of a bullet going to your own foot.
But if you expect something plain and not tricky, you could check whether it starts with /, with chains of ../, or with anything else. In the first case you would prepend the server name to the URL; in the second you would chop path segments off the base URI until reaching one of the other two alternatives; otherwise you would just append it to the base URL.
A: Perhaps I got the wrong end of the stick but wouldn't,
$full_url = $base_url . $relative_url
work? IIRC Perl text processing is pretty quick.
@lennysan Ah sure yes of course. Sorry I can't help, my Perl is pretty rusty.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do you programmatically fill in a form and 'POST' a web page? Using C# and ASP.NET I want to programmatically fill in some values (4 text boxes) on a web page (form) and then 'POST' those values. How do I do this?
Edit: Clarification: There is a service (www.stopforumspam.com) where you can submit ip, username and email address on their 'add' page. I want to be able to create a link/button on my site's page that will fill in those values and submit the info without having to copy/paste them across and click the submit button.
Further clarification: How do automated spam bots fill out forms and click the submit button if they were written in C#?
A: The code will look something like this:
// needs the System.Net, System.IO and System.Text namespaces
WebRequest req = WebRequest.Create("http://mysite/myform.aspx");
string postData = "item1=11111&item2=22222&Item3=33333";
byte[] send = Encoding.Default.GetBytes(postData);
req.Method = "POST";
req.ContentType = "application/x-www-form-urlencoded";
req.ContentLength = send.Length;
Stream sout = req.GetRequestStream();
sout.Write(send, 0, send.Length);
sout.Flush();
sout.Close();
WebResponse res = req.GetResponse();
StreamReader sr = new StreamReader(res.GetResponseStream());
string returnvalue = sr.ReadToEnd();
A: You can use the UploadValues method on WebClient - all it requires is passing a URL and a NameValueCollection. It is the easiest approach that I have found, and the MS documentation has a nice example:
http://msdn.microsoft.com/en-us/library/9w7b4fz7.aspx
Here is a simple version with some error handling:
var webClient = new WebClient();
Debug.Info("PostingForm: " + url);
try
{
byte [] responseArray = webClient.UploadValues(url, nameValueCollection);
return new Response(responseArray, (int) HttpStatusCode.OK);
}
catch (WebException e)
{
var response = (HttpWebResponse)e.Response;
byte[] responseBytes = IOUtil.StreamToBytes(response.GetResponseStream());
return new Response(responseBytes, (int) response.StatusCode);
}
The Response class is a simple wrapper for the response body and status code.
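For completeness, a minimal self-contained call might look like this (the URL and field names are placeholders):
using System.Collections.Specialized;
using System.Net;
using System.Text;

var values = new NameValueCollection
{
    { "item1", "11111" },
    { "item2", "22222" }
};

using (var client = new WebClient())
{
    // UploadValues form-encodes the collection and POSTs it in one step.
    byte[] response = client.UploadValues("http://mysite/myform.aspx", values);
    string body = Encoding.UTF8.GetString(response);
}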
A: View the source of the page and use the WebRequest class to do the posting. No need to drive IE. Just figure out what IE is sending to the server and replicate that. Using a tool like Fiddler will make it even easier.
A: I had a situation where I needed to post free text from an HTML textarea programmatically, and I had issues where I was getting <br /> in the param list I was building.
My solution was to replace the br tags with linebreak characters, with HTML decoding just to be safe.
Regex.Replace( HttpUtility.HtmlDecode( test ), "(<br.*?>)", "\r\n" ,RegexOptions.IgnoreCase);
A: Where you encode the string:
Encoding.Default.GetBytes(postData);
Use Ascii instead for the google apis:
Encoding.ASCII.GetBytes(postData);
this makes your request the same as an equivalent "curl --data "..." [url]" request
A: You can send a POST/GET request in many ways, and different libraries are there to help.
I found it confusing to choose which one to use and to understand the differences among them.
After surfing Stack Overflow, this is the best answer I found; this thread explains it all:
https://stackoverflow.com/a/4015346/1999720
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Limitations of screen readers I'm a web developer, and I want to make the web sites I develop more accessible to those using screen readers. What limitations do screen readers have that I should be most aware of, and what can I do to avoid hitting these limitations.
This question was sparked by reading another question about non-image based captchas. In there, a commenter said that honey pot form fields (form fields hidden with CSS that only a bot would fill in), are a bad idea, because screen readers would still pick them up.
Are screen readers really so primitive that they would read text that isn't even displayed on the screen? Ideally, couldn't you make a screen reader that waited until the page was finished loading, applied all css, and even ran Javascript onload functions before it figured out what was actually displayed, and then read that off to the user? You could probably even identify parts of the page that are menus or table of contents, and give some sort of easy way for those parts to be read exclusively or skipped over. I would think that the programming community could come up with a better solution to this problem.
A:
Are screen readers really so primitive that they would read text that isn't even displayed on the screen?
What you have to remember is that any HTML parser doesn't read the screen - it reads the source markup. What you see on the screen is the browser's attempt to apply CSS to the source code. It's irrelevant.
You could probably even identify parts of the page that are menus or table of contents, and give some sort of easy way for those parts to be read exclusively or skipped over.
You could, if there were a standard for such a thing.
I'm not very hot on the limitations of screen readers, however I've read a lot about them not being ideal. The best thing I can reccommend is to put your source in order - how you'd read it.
There are a set of CSS properties you should also look at for screen readers.
A: Recommended listening: Hanselminutes
It's an interview with a blind programmer.
A: How many forms just have a * or bold text to indicate to a sighted user that a field is required for correct submission? What's the screen reader doing? Saying "star"?
Below is an example of code that is helpful by articulating verbally but not visually.
(note - in the example below the word "required." is spoken but not seen on screen)
In the template:
<label for="Requestor" accesskey="9"><span class="required"> Requestor * </span><span class="hidden">required.</span></label>
In the CSS:
#hidden {
position:absolute;
left:0px;
top:-500px;
width:1px;
height:1px;
overflow:hidden;
}
or
.hidden {
position:absolute;
left:0px;
top:-500px;
width:1px;
height:1px;
overflow:hidden;
}
There can be a whole parallel view behind the "seen" in every X/HTML page.
A: Have a look at ARIA, it's a standard for developing accessible rich-web-client applications.
A: @robertmyers
CSS contains the aural media type specifically to control the "rendering" of things when screen readers are doing their work. So, for your example, you would only set it as visible for the aural media type.
@Ross
I'm quite aware that the screen reader doesn't actually read the screen, but you would think that to work well, it would have to build a model of what a person with sight would see; otherwise, it seems like it would do a really poor job of getting across to the user what's actually on the page. Also, putting things in the order you would read them doesn't really work, as a sighted person would scan the page quickly and read the section they want to read. Do you put the contents first so that the user has to listen to them every time, or do you put them at the end so that they can get to the content first? Also, putting content in order would mean some tricky CSS to get things positioned where you wanted them to be for sighted users.
It seems to me that most web pages contain very similar construction, and that it should be possible to, in many cases, pick out where the repeated headers and side columns are. When viewing many subsequent pages on the same site with the same formatting, it should be easy to figure out which sections are navigation, and which are content. Doing this, the screen reader could completely skip the navigation sections, and move right onto the content, as most sighted users would do.
I realize there are limitations, and that doing these types of things wouldn't be easy. However, I feel like as far as screen readers go, we only did the bare minimum and left it at that.
A: @Kibbee,
What you are describing as "primitive" is in fact a feature of screen readers that can be, and is, used to make sites more accessible. For example, if you have a tabbed interface, implemented with an unordered list and list items, then the sighted user would normally see the selected tab highlighted with a background color that is different (or some other visual treatment). Blind users cannot see this. So adding some additional text into the page and hiding it off screen is the technique used to communicate to the blind user what tab is active.
In the accessibility lingo, this information is known as role, name, value and state.
There are many other scenarios where this technique can be used to add information that is useful for a blind user.
More recently, WAI-ARIA has been added to allow this state, role, name and value information, so you can now implement a limited number of widgets (like tabs) using HTML attributes. However, the more general "off screen" technique is still useful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How do I REALLY reset the Visual Studio window layout? I had a plugin installed in Visual Studio 2008, and it created some extra dockable windows. I have uninstalled it, and I can't get rid of the windows it created - I close them, but they always come back. They're just empty windows now, since the plugin is no longer present, but nothing I've tried gets rid of them. I've tried:
*
*Window -> Reset Window Layout
*Deleting the .suo files in my project directories
*Deleting the Visual Studio 9.0 folder in my Application Settings directory
Any ideas?
A: If you want to reset the window layout. Then
go to "WINDOW" -> "RESET WINDOW LAYOUT"
A: If you have an old backup copy of CurrentSettings.vssettings, you can try restoring it.
I had a completely corrupted Visual Studio layout. When I tried to enter debug, I was told that VS had become unstable. When I restarted, my window layout would then be totally screwed. I tried restoring the VS current user settings in the registry from a backup, but that didn't help. However, restoring CurrentSettings.vssettings seems to have cured it.
There seems to be a bunch of binary stuff in there and I can imagine it gets irretrievably corrupted sometimes.
A: Note: if you have vs2010 and vs2008 and you want to reset the 2008, you will need to specify in command line the whole path. like this:
"C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" /resetsettings
If you don't specify the path (like devenv.exe /resetsettings), it will reset the latest version of Visual studio installed on your computer.
A: Try devenv.exe /resetuserdata. I think it's more aggressive than the Tools > Import and Export options suggested.
Also check Tools > Add In Manager and make sure there aren't any orphans there.
A:
I close them, but they always come back
When you say "they always come back" do you mean "next time you restart Visual Studio" or "immediately"?
One quirk of Visual Studio (at least VS2005) is that settings aren't saved until you exit. That means that if VS crashes at all while you are using it, any layout changes you made will be lost. The way around this is to always gracefully exit when you have set up everything like you want it to be.
Not sure if this will help your particular situation though.
A: I tried most of the suggestions, and none of them worked. I didn't get a chance to try /resetuserdata. Finally I reinstalled the plugin and uninstalled it again, and the windows went away.
A: Have you tried this? In Visual Studio go to Tools > Import and Export Settings > Reset all settings
Be sure you back up your settings before you do this. I made the mistake of trying this to fix an issue and didn't realize it would undo all my appearance settings and toolbars as well. Took a lot of time to get back to the way I like things.
A: How about running the following from command line,
Devenv.exe /ResetSettings
You could also save those settings in to a file, like so,
Devenv.exe /ResetSettings "C:\My Files\MySettings.vssettings"
The /ResetSettings switch restores Visual Studio's default settings and optionally resets them to the specified .vssettings file.
MSDN link
A: I had a similar problem, except that it happened without installing any plugin. I began to get a dialog about source control every time I opened the project, plus tons of windows popping up and floating which I had to close one by one.
Window -> Reset Window Layout fixed it for me without any problems. It does bring back the default settings, which I don't mind at all :)
A: If you've ever backed up your settings (Tools -> Import and Export Settings), you can restore the settings file to get back to a prior state. This is the only thing that I've found to work.
A: If you want to reset the development environment settings of your Visual Studio, you can use the Import and Export Settings wizard. See this for all the steps:
http://www.authorcode.com/forums/topic/how-to-reset-development-environment-settings-of-your-visual-studio/
A: Window -> Reset Window Layout didn't exist for me. For anybody looking in 2022 or later, I finally found the answer! The crucial information, buried in a VSCode update release note, was right at the bottom of this section. Here it is if the link breaks in the future:
If you'd like to reset all views back to the default layout, you can run Views: Reset View Locations from the Command Palette.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "130"
} |
Q: Setting up a large Xcode project I have a large existing C++ project involving:
*
*4 applications
*50+ libraries
*20+ third party libraries
It all builds fine on Windows using VS8 and on Linux using QMake (the project uses Qt a lot). I also build it on OS X using QMake, but I wanted to set up an Xcode project to handle it in an IDE. I'm struggling to set up a proper configuration to easily define dependencies, both on internal libraries and on the third-party ones. I can do property sheets and .pri files in my (disturbed) sleep, but would appreciate some advice on building such large projects in Xcode.
I've been experimenting with Xcode configuration files and #including one from another, but it does not seem to work as I would expect, especially when defining standard locations for header files etc.
Is there some good book describing the process of setting up Xcode (remember it's C++, I'm not wanting to learn ObjC at this time)?
Or maybe a good open source project I could learn from?
Thanks!
A: Step into Xcode may be the book you're looking for. It's got a whole section devoted to using AppleScript to automate configuration includes. I've been going through the book myself on O'Reilly Safari as I've found myself in a situation similar to yours!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Difference between wiring events with and without "new" In C#, what is the difference (if any) between these two lines of code?
tmrMain.Elapsed += new ElapsedEventHandler(tmrMain_Tick);
and
tmrMain.Elapsed += tmrMain_Tick;
Both appear to work exactly the same. Does C# just assume you mean the former when you type the latter?
A: It used to be (.NET 1.x days) that the long form was the only way to do it. In both cases you are newing up a delegate to point to the Program_someEvent method.
A: I did this
static void Hook1()
{
someEvent += new EventHandler( Program_someEvent );
}
static void Hook2()
{
someEvent += Program_someEvent;
}
And then ran ildasm over the code.
The generated MSIL was exactly the same.
So to answer your question, yes they are the same thing.
The compiler is just inferring that you want someEvent += new EventHandler( Program_someEvent );
-- You can see it creating the new EventHandler object in both cases in the MSIL
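For reference, the delegate creation shows up in the disassembly roughly like this for a static handler (output abbreviated here; exact tokens vary by compiler version):
ldnull
ldftn      void Program::Program_someEvent(object, class [mscorlib]System.EventArgs)
newobj     instance void [mscorlib]System.EventHandler::.ctor(object, native int)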
A: I don't think there's any difference. Certainly resharper says the first line has redundant code.
A: A little offtopic :
You could instantiate a delegate (new EventHandler(MethodName)) and (if appropriate) reuse that instance.
A: Wasn't the new XYZEventHandler required until C# 2003, and you were allowed to omit the redundant code in C# 2005?
A: I think the one way to really tell would be to look at the MSIL produced for the code. Tends to be a good acid test.
I have funny concerns that it may somehow mess with GC. Seems odd that there would be all the overhead of declaring the new delegate type if it never needed to be done this way, you know?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Browser scrollbar I have a website that is perfectly center-aligned. The CSS code works fine. The problem doesn't really have to do with CSS. I have headers for each page that perfectly match each other.
However, when the content gets larger, Opera and Firefox show a scrollbar so you can scroll to the content not on the screen. This makes my site jump a few pixels to the left. Thus the headers are not perfectly aligned anymore.
IE always has a scrollbar, so the site never jumps around in IE.
Does anyone know a JavaScript/CSS/HTML solution for this problem?
A: I use
html { overflow-y: scroll; }
To standardize the scrollbar behavior in IE and FF
A: FWIW: I use
html { height: 101%; }
to force scrollbars to always appear in Firefox.
A: Are you aligning with percentage widths or fixed widths? I'm also guessing you're applying a background to the body - I've had this problem myself.
It'll be much easier to help you if you upload the page so we can see the source code however.
A: #middle
{
position: relative;
margin: 0px auto 0px auto;
width: 1000px;
max-width: 1000px;
}
is my centered DIV
A: Well you don't need the position: relative; - it should work fine without it.
I take it that div has to be 1000px wide? It would still be a lot easier to answer this with the actual website.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Postback with Modified Query String from Dropdown in ASP.NET My asp.net page will render different controls based on which report a user has selected e.g. some reports require 5 drop downs, some two checkboxes and 6 dropdowns).
They can select a report using two methods. With SelectedReport=MyReport in the query string, or by selecting it from a dropdown. And it's a common case for them to come to the page with SelectedReport in the query string, and then change the report selected in the drop down.
My question is, is there anyway of making the dropdown modify the query string when it's selected. So I'd want SelectedReport=MyNewReport in the query string and the page to post back.
At the moment it's just doing a normal postback, which leaves the SelectedReport=MyReport in the query string, even if it's not the currently selected report.
Edit: And I also need to preserve ViewState.
I've tried doing Server.Transfer(Request.Path + "?SelectedReport=" + SelectedReport, true) in the event handler for the Dropdown, and this works function wise, unfortunately because it's a Server.Transfer (to preserve ViewState) instead of a Response.Redirect the URL lags behind what's shown.
Maybe I'm asking the impossible or going about it completely the wrong way.
@Craig The QueryString collection is read-only and cannot be modified.
@Jason That would be great, except I'd lose the ViewState wouldn't I? (Sorry I added that after seeing your response).
A: You need to turn off autopostback on the dropdown - then, you need to hook up some javascript code that will take over that role - in the event handler code for the onchange event for the dropdown, you would create a URL based on the currently-selected value from the dropdown and use javascript to then request that page.
EDIT: Here is some quick and dirty code that is indicative of what would do the trick:
<script>
function changeReport(dropDownList) {
var selectedReport = dropDownList.options[dropDownList.selectedIndex];
window.location = ("scratch.htm?SelectedReport=" + selectedReport.value);
}
</script>
<select id="SelectedReport" onchange="changeReport(this)">
<option value="foo">foo</option>
<option value="bar">bar</option>
<option value="baz">baz</option>
</select>
Obviously you would need to do a bit more, but this does work and would give you what it seems you are after. I would recommend using a JavaScript toolkit (I use MochiKit, but it isn't for everyone) to get some of the harder work done - use unobtrusive JavaScript techniques if at all possible (unlike what I use in this example).
@Ray: You use ViewState?! I'm so sorry. :P Why, in this instance, do you need to preserve it, pray tell?
A: If it's an automatic post when the data changes then you should be able to redirect to the new query string with a server side handler of the dropdown's 'onchange' event. If it's a button, handle server side in the click event. I'd post a sample of what I'm talking about but I'm on the way out to pick up the kids.
A: Have you tried to modify the Request.QueryString[] on the SelectedIndexChanged for the DropDown? That should do the trick.
A: You could populate your dropdown based on the querystring on non-postbacks, then always use the value from the dropdown. That way the user's first visit to the page will be based on the querystring and subsequent changes they make to the dropdown will change the selected report.
A: The view state only lasts for multiple requests for the same page. Changing the query string in the URL is requesting a new page, thus clearing the view state.
Is it possible to remove the reliance on the view state by adding more query string parameters? You can then build a new URL and Response.Redirect to it.
Another option is to use the Action property on the form to clear the query string so at least the query string does not contradict what's displayed on the page after the user selects a different report.
Form.Action = Request.Path;
A: You can use the following function to modify the querystring on postback in asp.net using the Webresource.axd script as below.
var url = updateQueryStringParameter(window.location.href,
'Search',
document.getElementById('txtSearch').value);
WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("searchbutton", "",
true, "aa", url, false, true));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Recommendations for Javascript Editor on Windows? Are there any good recommendations anyone can provide for a good Javascript editor on Windows?
I currently use combinations of FireBug and TextPad but would hate to miss out on the party if there are better options out there.
Thanks.
A: (This is a cross-answer post)
Netbeans
I've tried out all of the above and my vote goes for Netbeans, which has been mentioned. However the answer didn't really sell you on the features which you can find here.
It has:
*
*Intellisense including jQuery built in
*Extended (Eclipse-style) documentation for functions
*Function and field outlining
*Code folding
*Refactoring
It makes Visual Studio 2010's Javascript support look very primitive.
A: The Zeus editor has support for JavaScript.
It has the stock standard set of features like code folding and syntax highlighting etc., but more importantly Zeus is fully scriptable and Zeus scripts can be written in JavaScript.
A: In case you're a .Net programmer: VS 2008 has pretty great JS support including intellisense on dynamically added methods/properties and comfortable debugging.
A: The best that I've ever used is Netbeans, although it's kind of heavyweight for some tasks due to being a full-blown multi-language IDE (not just JavaScript). I've also had pretty good experiences with Aptana IDE, though, and I hear that IntelliJ is good if you don't mind paying the price.
A: WebStorm. If you have used any JetBrains products you'll love it. It has autocomplete and all the other JavaScript goodies. Even node.js support is provided. Check it out.
A: If you are using eclipse, then I would recomend JSEclipse
A: I'm still a huge fan of HomeSite, even though Adobe discontinued development in May 2009: http://www.adobe.com/products/homesite/.
A: Both NetBeans and Eclipse have JavaScript editing support. The latest version of NetBeans actually does a really good job. They are both free and you can use them for other languages as well, this way you have a chance to get to know the IDE and the shortcuts as well.
A: Komodo IDE or Komodo Edit, of course.
A: I know jsight already mentioned this, but Aptana Studio really is a great, free editor for JavaScript if you find yourself doing a lot of work with it - it has great support for most of the well-known libraries. If it were not for the fact that I work with C# in addition to JavaScript, I would use Aptana for all of my work.
A: I use NotePad++ and am happy (of course, that is when I am not using Visual Studio).
NotePad++ contains support for an IntelliSense-type feature as well.
A: Editra may be worth a look. The code colouring isn't bad, and I believe it has plugins to enable script execution, although I have not used this myself.
A: GVim is still awesome - not only for JavaScript but for almost all languages.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37"
} |
Q: How can you require a constructor with no parameters for types implementing an interface? Is there a way?
I need all types that implement a specific interface to have a parameterless constructor, can it be done?
I am developing the base code for other developers in my company to use in a specific project.
There's a process which will create instances of types (in different threads) that perform certain tasks, and I need those types to follow a specific contract (ergo, the interface).
The interface will be internal to the assembly
If you have a suggestion for this scenario without interfaces, I'll gladly take it into consideration...
A: You can use type parameter constraint
interface ITest<T> where T: new()
{
//...
}
class Test: ITest<Test>
{
//...
}
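To illustrate how that constraint gets exercised by the creating code, here is a minimal sketch (the Runner name is invented):
class Runner
{
    // T must implement ITest<T> and expose a public parameterless
    // constructor, so "new T()" is verified at compile time.
    public static T Create<T>() where T : ITest<T>, new()
    {
        return new T();
    }
}

// Usage:
Test instance = Runner.Create<Test>();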
A: Juan,
Unfortunately there is no way to get around this in a strongly typed language. You won't be able to ensure at compile time that the classes will be able to be instantiated by your Activator-based code.
(ed: removed an erroneous alternative solution)
The reason is that, unfortunately, it's not possible to use interfaces, abstract classes, or virtual methods in combination with either constructors or static methods. The short reason is that the former contain no explicit type information, and the latter require explicit type information.
Constructors and static methods must have explicit (right there in the code) type information available at the time of the call. This is required because there is no instance of the class involved which can be queried by the runtime to obtain the underlying type, which the runtime needs to determine which actual concrete method to call.
The entire point of an interface, abstract class, or virtual method is to be able to make a function call without explicit type information, and this is enabled by the fact that there is an instance being referenced, which has "hidden" type information not directly available to the calling code. So these two mechanisms are quite simply mutually exclusive. They can't be used together because when you mix them, you end up with no concrete type information at all anywhere, which means the runtime has no idea where to find the function you're asking it to call.
A: Juan Manuel said:
that's one of the reasons I don't understand why it cannot be a part of the contract in the interface
It's an indirect mechanism. The generic allows you to "cheat" and send type information along with the interface. The critical thing to remember here is that the constraint isn't on the interface that you are working with directly. It's not a constraint on the interface itself, but on some other type that will "ride along" on the interface. This is the best explanation I can offer, I'm afraid.
By way of illustration of this fact, I'll point out a hole that I have noticed in aku's code. It's possible to write a class that would compile fine but fail at runtime when you try to instantiate it:
public class Something : ITest<StringBuilder>
{
private Something() { }
}
Something implements ITest<StringBuilder>, but has no public parameterless constructor. It will compile fine, because StringBuilder does implement a parameterless constructor. Again, the constraint is on T, and therefore StringBuilder, rather than on ITest or Something. Since the constraint on T is satisfied, this will compile. But it will fail at runtime.
To prevent some instances of this problem, you need to add another constraint to T, as below:
public interface ITest<T>
where T : ITest<T>, new()
{
}
Note the new constraint: T : ITest<T>. This constraint specifies that what you pass as the type argument of ITest<T> must itself derive from ITest<T>.
Even so this will not prevent all cases of the hole. The code below will compile fine, because A has a parameterless constructor. But since B's parameterless constructor is private, instantiating B with your process will fail at runtime.
public class A : ITest<A>
{
}
public class B : ITest<A>
{
private B() { }
}
A: So you need a thing that can create instances of an unknown type that implements an interface. You've got basically three options: a factory object, a Type object, or a delegate. Here's the givens:
public interface IInterface
{
void DoSomething();
}
public class Foo : IInterface
{
public void DoSomething() { /* whatever */ }
}
Using Type is pretty ugly, but makes sense in some scenarios:
public IInterface CreateUsingType(Type thingThatCreates)
{
ConstructorInfo constructor = thingThatCreates.GetConstructor(Type.EmptyTypes);
return (IInterface)constructor.Invoke(new object[0]);
}
public void Test()
{
IInterface thing = CreateUsingType(typeof(Foo));
}
The biggest problem with it, is that at compile time, you have no guarantee that Foo actually has a default constructor. Also, reflection is a bit slow if this happens to be performance critical code.
The most common solution is to use a factory:
public interface IFactory
{
IInterface Create();
}
public class Factory<T> : IFactory where T : IInterface, new()
{
public IInterface Create() { return new T(); }
}
public IInterface CreateUsingFactory(IFactory factory)
{
return factory.Create();
}
public void Test()
{
IInterface thing = CreateUsingFactory(new Factory<Foo>());
}
In the above, IFactory is what really matters. Factory is just a convenience class for classes that do provide a default constructor. This is the simplest and often best solution.
The third currently-uncommon-but-likely-to-become-more-common solution is using a delegate:
public IInterface CreateUsingDelegate(Func<IInterface> createCallback)
{
return createCallback();
}
public void Test()
{
IInterface thing = CreateUsingDelegate(() => new Foo());
}
The advantage here is that the code is short and simple, can work with any method of construction, and (with closures) lets you easily pass along additional data needed to construct the objects.
A: Not to be too blunt, but you've misunderstood the purpose of interfaces.
An interface means that several people can implement it in their own classes, and then pass instances of those classes to other classes to be used. Creation creates an unnecessarily strong coupling.
It sounds like you really need some kind of registration system, either to have people register instances of usable classes that implement the interface, or of factories that can create said items upon request.
A: Call a RegisterType method with the type, and constrain it using generics. Then, instead of walking assemblies to find ITest implementors, just store them and create from there.
void RegisterType<T>() where T:ITest, new() {
}
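A minimal sketch of what such a registry might look like (the names are illustrative, and it assumes System.Collections.Generic):
static readonly Dictionary<string, Func<ITest>> registry = new Dictionary<string, Func<ITest>>();

static void RegisterType<T>() where T : ITest, new()
{
    // the compiler guarantees T has a public parameterless constructor,
    // so we can capture a factory delegate up front instead of reflecting later
    registry[typeof(T).FullName] = () => new T();
}

static ITest CreateInstance(string typeName)
{
    return registry[typeName]();
}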
A: I don't think so.
You also can't use an abstract class for this.
A: I would like to remind everyone that:
*
*Writing attributes in .NET is easy
*Writing static analysis tools in .NET that ensure conformance with company standards is easy
Writing a tool to grab all concrete classes that implement a certain interface/have an attribute and verify that each has a parameterless constructor takes about 5 minutes of coding effort. You add it to your post-build step and now you have a framework for whatever other static analyses you need to perform.
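For example, the core of such a post-build check might look like this (a rough sketch; the assembly path and ITest are placeholders, and it assumes System.Linq and System.Reflection):
var offenders = Assembly.LoadFrom("MyAssembly.dll").GetTypes()
    .Where(t => typeof(ITest).IsAssignableFrom(t)
                && t.IsClass && !t.IsAbstract
                && t.GetConstructor(Type.EmptyTypes) == null);

foreach (var type in offenders)
    Console.Error.WriteLine("Missing parameterless constructor: " + type.FullName);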
The language, the compiler, the IDE, your brain - they're all tools. Use them!
A: No you can't do that. Maybe for your situation a factory interface would be helpful? Something like:
interface FooFactory {
Foo createInstance();
}
For every implementation of Foo you create an instance of FooFactory that knows how to create it.
A: You do not need a parameterless constructor for the Activator to instantiate your class. You can have a parameterized constructor and pass all the parameters from the Activator. Check out MSDN on this.
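For example, Activator.CreateInstance has an overload that takes constructor arguments (Foo and the arguments here are placeholders):
// no parameterless constructor required on Foo here
IInterface thing = (IInterface)Activator.CreateInstance(typeof(Foo), "some arg", 42);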
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Getting QMake to generate a proper .app I have a large existing C++ project involving:
*
*4 applications
*50+ libraries
*20+ third party libraries
The project uses QMake (part of Trolltech's Qt) to build the production version on Linux, but I've been playing around at building it on MacOS.
I can build it on MacOS using QMake just fine, but I'm having trouble producing the final .app. This involves collecting all the third-party frameworks and dynamic libraries, plus all the project's dynamic libraries, and making sure the application finds them.
I've read online about using install_name_tool but was wondering if there's a process to automate it.
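For reference, the manual step I'm trying to automate looks roughly like this for each dependent library (the paths here are just examples):
install_name_tool -change /usr/local/lib/libfoo.1.dylib @executable_path/../Frameworks/libfoo.1.dylib MyApp.app/Contents/MacOS/MyApp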
(Maybe the answer is to use XCode, see related question, but it would have issues with building uic and moc)
Thanks
A: I'm sure this could be of great help to you:
deployqt
Hope this helps !
A: We have the same problem at Last.fm, I looked at DeployQt and it's not much use if you have third party libraries. In the end I wrote a perl script that generates a Makefile, which you can use to generate a .app and/or .dmg.
I uploaded it here: http://www.methylblue.com/detritus/QMake.dmg/
To use it add this to your application's pro file:
macx*:!macx-xcode:release {
system( QT=\'$$QT\' QMAKE_LIBDIR_QT=\'$$QMAKE_LIBDIR_QT\' $$ROOT_DIR/common/dist/mac/Makefile.dmg.pl $$DESTDIR $$VERSION $$LIBS > Makefile.dmg )
QMAKE_EXTRA_INCLUDES += Makefile.dmg
}
I'm sure it's not all yet portable, but it would be good for someone else to use and see if that is so.
This is basically the first official release of this code, so please send me bug reports, and also, improvements. Thanks.
A: I side-stepped this problem completely by building my Qt app statically on OS X. That might not be practical for you though.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I delete a directory with cc.net / CruiseControl?
Possible Duplicate:
Pre-build task - deleting the working copy in CruiseControl.NET
I would like to delete my working directory during the cruisecontrol build process...I'm sure this is easy, but I have been unable to find an example of it...
If you know how to create a directory, that would be useful as well.
Thanks.
A: One of two ways.
*
*If you're already using an MSBuild file or something similar, add the action to the MSBuild file.
*Instead of directly executing some command, create a batch file that executes that command and then deletes the directory, and have CCnet call that batch file instead.
A: My guess is that you want to delete the working directory before CruiseControl.NET gets the latest code from source control. If this is the case, then the only way to accomplish this is to write a custom source control provider for CruiseControl.NET that first deletes the working directory and then gets the latest code. Have a look at CruiseControl.NET's source code for examples of how to write a source control provider.
If you want to delete the working directory after the latest code is retrieved from source control, then you can use CruiseControl.NET's executable task by running "cmd /c rmdir /s /q directoryname" (note that del removes files but will not remove the directory itself).
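Such an executable task would look something like this in ccnet.config (the directory path is a placeholder):
<tasks>
  <exec>
    <executable>cmd</executable>
    <buildArgs>/c rmdir /s /q "C:\path\to\workingdirectory"</buildArgs>
  </exec>
</tasks>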
A: In the ASP.NET world, for me, the easiest way I do it (which allows me to hit either MSBuild or NAnt depending upon the project) was to roll my own exe that takes an argument, which I pass in with a bat file fired by CC.NET. It's not the safest thing in the world, but if you have total control over your automated build machine, it's not too shabby. Quick and reusable.
Drop in the exe somewhere that does the recursive delete:
using System.IO;

static void Main(string[] args)
{
    // recursively delete every directory passed on the command line
    for (int n = 0; n < args.Length; n++)
    {
        if (Directory.Exists(args[n]))
        {
            Directory.Delete(args[n], true); // true = recursive
        }
    }
}
Drop it somewhere multiple .bat files can pass arguments to it, and just write a custom .bat file for each project. So my task block looks like this:
<tasks>
<msbuild>
<executable>C:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
<workingDirectory>Z:\WorkingDirectory</workingDirectory>
<projectFile>YourSolution.sln</projectFile>
<logger>C:\Program Files\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MsBuild.dll</logger>
</msbuild>
<exec>
<executable>Z:\SomePathToBuildScripts\YourCustomBat.bat</executable>
</exec>
</tasks>
Then the final step is setting up that .bat file to perform the delete/rebuild functions after use. In the bat file just make sure you rebuild ("MD") the directories you deleted if you expect to publish a site back to them. On our dev boxes I found this to be the best way to prevent the beloved Frankenbuild.
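Such a .bat file might be as simple as this (the paths are examples only):
rem wipe the build output and recreate it so the next publish has a target
rmdir /s /q "Z:\WorkingDirectory\Output"
md "Z:\WorkingDirectory\Output"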
A: The way I've done this in the past is to not have CC.Net check out source itself. Instead, there are two <msbuild> elements for the project, the first one calling a build target that runs svn-clean.pl (compiled to .exe), and then updates the source using svn.exe. The second <msbuild> element starts the main build process.
You can easily replace svn-clean with a delete command. For my projects, deleting chaff from a checkout has always been faster than checking out a fresh working copy.
The two msbuild elements are necessary because the main project build file is often updated. This is important because updates to your build file(s) will only be reloaded if you start a new msbuild process.
This setup breaks down when I (very rarely) move or change the dependencies of that clean-and-update build target to the extent that the msbuild process would need to reload for valid instructions to run the clean-and-update target. When this happens, I stop CC.Net before committing, go into the CC.Net server, and do an 'svn update' by hand.
Sidelight: It could well be that CC.Net has a natural clean-before-build operation by now. I've since moved to TeamCity, which is configurable to do this every build or only when the developer chooses (e.g., when you know you've made a change that would not update cleanly--svn moves of directories with build products comes to mind).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to implement a web scraper in PHP? What built-in PHP functions are useful for web scraping? What are some good resources (web or print) for getting up to speed on web scraping with PHP?
A: Scraping generally encompasses 3 steps:
*
*first you GET or POST your request to a specified URL
*next you receive the html that is returned as the response
*finally you parse out of that html the text you'd like to scrape.
To accomplish steps 1 and 2, below is a simple php class which uses Curl to fetch webpages using either GET or POST. After you get the HTML back, you just use Regular Expressions to accomplish step 3 by parsing out the text you'd like to scrape.
For regular expressions, my favorite tutorial site is the following:
Regular Expressions Tutorial
My favorite program for working with regexes is RegexBuddy. I would advise you to try the demo of that product even if you have no intention of buying it. It is an invaluable tool and will even generate code for the regexes you make in your language of choice (including PHP).
Usage:
$curl = new Curl();
$html = $curl->get("http://www.google.com");
// now, do your regex work against $html
PHP Class:
<?php
class Curl
{
public $cookieJar = "";
public function __construct($cookieJarFile = 'cookies.txt') {
$this->cookieJar = $cookieJarFile;
}
function setup()
{
$header = array();
$header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
$header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
$header[] = "Cache-Control: max-age=0";
$header[] = "Connection: keep-alive";
$header[] = "Keep-Alive: 300";
$header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
$header[] = "Accept-Language: en-us,en;q=0.5";
$header[] = "Pragma: "; // browsers keep this blank.
curl_setopt($this->curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7');
curl_setopt($this->curl, CURLOPT_HTTPHEADER, $header);
curl_setopt($this->curl,CURLOPT_COOKIEJAR, $this->cookieJar);
curl_setopt($this->curl,CURLOPT_COOKIEFILE, $this->cookieJar);
curl_setopt($this->curl,CURLOPT_AUTOREFERER, true);
curl_setopt($this->curl,CURLOPT_FOLLOWLOCATION, true);
curl_setopt($this->curl,CURLOPT_RETURNTRANSFER, true);
}
function get($url)
{
$this->curl = curl_init($url);
$this->setup();
return $this->request();
}
function getAll($reg,$str)
{
preg_match_all($reg,$str,$matches);
return $matches[1];
}
function postForm($url, $fields, $referer='')
{
$this->curl = curl_init($url);
$this->setup();
curl_setopt($this->curl, CURLOPT_URL, $url);
curl_setopt($this->curl, CURLOPT_POST, 1);
curl_setopt($this->curl, CURLOPT_REFERER, $referer);
curl_setopt($this->curl, CURLOPT_POSTFIELDS, $fields);
return $this->request();
}
function getInfo($info)
{
$info = ($info == 'lasturl') ? curl_getinfo($this->curl, CURLINFO_EFFECTIVE_URL) : curl_getinfo($this->curl, $info);
return $info;
}
function request()
{
return curl_exec($this->curl);
}
}
?>
A: If you need something that is easy to maintain, rather than fast to execute, it could help to use a scriptable browser, such as SimpleTest's.
A: Scraping can be pretty complex, depending on what you want to do. Have a read of this tutorial series on The Basics Of Writing A Scraper In PHP and see if you can get to grips with it.
You can use similar methods to automate form sign ups, logins, even fake clicking on Ads! The main limitations with using CURL though are that it doesn't support using javascript, so if you are trying to scrape a site that uses AJAX for pagination for example it can become a little tricky...but again there are ways around that!
A: I recommend Goutte, a simple PHP Web Scraper.
Example Usage:-
Create a Goutte Client instance (which extends
Symfony\Component\BrowserKit\Client):
use Goutte\Client;
$client = new Client();
Make requests with the request() method:
$crawler = $client->request('GET', 'http://www.symfony-project.org/');
The request method returns a Crawler object
(Symfony\Component\DomCrawler\Crawler).
Click on links:
$link = $crawler->selectLink('Plugins')->link();
$crawler = $client->click($link);
Submit forms:
$form = $crawler->selectButton('sign in')->form();
$crawler = $client->submit($form, array('signin[username]' => 'fabien', 'signin[password]' => 'xxxxxx'));
Extract data:
$nodes = $crawler->filter('.error_list');
if ($nodes->count())
{
die(sprintf("Authentification error: %s\n", $nodes->text()));
}
printf("Nb tasks: %d\n", $crawler->filter('#nb_tasks')->text());
A: ScraperWiki is a pretty interesting project.
Helps you build scrapers online in Python, Ruby or PHP - I was able to get a simple attempt up in a few minutes.
A: Here is another one: a simple PHP scraper without regex.
A: file_get_contents() can take a remote URL and give you the source. You can then use regular expressions (with the Perl-compatible functions) to grab what you need.
Out of curiosity, what are you trying to scrape?
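A minimal sketch of the file_get_contents() plus regex approach described above (the URL and pattern are just examples):
<?php
$html = file_get_contents('http://www.example.com/');
// grab the page title with a Perl-compatible regular expression
if (preg_match('/<title>(.*?)<\/title>/si', $html, $matches)) {
    echo $matches[1];
}
?>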
A: I'd either use libcurl or Perl's LWP (libwww for perl). Is there a libwww for php?
A: Scraper class from my framework:
<?php
/*
Example:
$site = $this->load->cls('scraper', 'http://www.anysite.com');
$excss = $site->getExternalCSS();
$incss = $site->getInternalCSS();
$ids = $site->getIds();
$classes = $site->getClasses();
$spans = $site->getSpans();
print '<pre>';
print_r($excss);
print_r($incss);
print_r($ids);
print_r($classes);
print_r($spans);
*/
class scraper
{
private $url = '';
public function __construct($url)
{
$this->url = file_get_contents("$url");
}
public function getInternalCSS()
{
$tmp = preg_match_all('/(style=")(.*?)(")/is', $this->url, $patterns);
$result = array();
array_push($result, $patterns[2]);
array_push($result, count($patterns[2]));
return $result;
}
public function getExternalCSS()
{
$tmp = preg_match_all('/(href=")(\w.*\.css)"/i', $this->url, $patterns);
$result = array();
array_push($result, $patterns[2]);
array_push($result, count($patterns[2]));
return $result;
}
public function getIds()
{
$tmp = preg_match_all('/(id="(\w*)")/is', $this->url, $patterns);
$result = array();
array_push($result, $patterns[2]);
array_push($result, count($patterns[2]));
return $result;
}
public function getClasses()
{
$tmp = preg_match_all('/(class="(\w*)")/is', $this->url, $patterns);
$result = array();
array_push($result, $patterns[2]);
array_push($result, count($patterns[2]));
return $result;
}
public function getSpans(){
$tmp = preg_match_all('/(<span>)(.*)(<\/span>)/', $this->url, $patterns);
$result = array();
array_push($result, $patterns[2]);
array_push($result, count($patterns[2]));
return $result;
}
}
?>
A: The curl library allows you to download web pages. You should look into regular expressions for doing the scraping.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "61"
} |
Q: NHibernate vs LINQ to SQL As someone who hasn't used either technology on real-world projects I wonder if anyone knows how these two complement each other and how much their functionalities overlap?
A: Fluent NHibernate can generate your mapping files based on simple conventions. No XML-writing and strongly typed.
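For example, a mapping in Fluent NHibernate looks roughly like this (the entity and property names are invented; ClassMap lives in FluentNHibernate.Mapping):
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
        HasMany(x => x.Orders);
    }
}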
I've recently worked on a project where we needed to change from Linq To SQL to NHibernate for performance reasons. Especially L2S's way of materializing objects seems slower than NHibernate's, and its change management is quite slow too. And it can be hard to turn the change management off for specific scenarios where it is not needed.
If you are going to use your entities disconnected from the DataContext - in WCF scenarios for example - you may have a lot of trouble connecting them to the DataContext again for updating the changes. I have had no problems with that in NHibernate.
The thing I will miss from L2S is mostly the code generation that keeps relations up-to-date on both ends of the entities. But I guess there are some tools for NHibernate to do that out there too...
A: Can you clarify what you mean by "LINQ"?
LINQ isn't an data access technology, it's just a language feature which supports querying as a native construct. It can query any object model which supports specific interfaces (e.g. IQueryable).
Many people refer to LINQ To SQL as LINQ, but that's not at all correct. Microsoft has just released LINQ To Entities with .NET 3.5 SP1. Additionally, NHibernate has a LINQ interface, so you could use LINQ and NHibernate to get at your data.
A: Two points that have been missed so far:
*
*LINQ to SQL does not work with Oracle or any database apart from SQL Server. However, 3rd parties do offer better support for Oracle, e.g. devArt's dotConnect, DbLinq, Mindscape's LightSpeed and ALinq. (I do not have any personal experience with these)
*Linq to NHibernate lets you use Linq with NHibernate, so it may remove a reason not to use it.
Also, the new fluent interface to NHibernate seems to make it less painful to configure NHibernate's mappings. (Removing one of the pain points of NHibernate)
Update
Linq to NHibernate is better in NHibernate v3, which is now in alpha. It looks like NHibernate v3 may ship towards the end of this year.
The Entity Framework as of .NET 4 is also starting to look like a real option.
A: @Kevin: I think the problem with the example you are presenting is that you are using a poor database design. I would have thought you'd create a customer table and an address table and normalize the tables. If you do that you can definitely use Linq To SQL for the scenario you're suggesting. Scott Guthrie has a great series of posts on using Linq To SQL which I would strongly suggest you check out.
I don't think you could say Linq and NHibernate complement each other as that would imply that they could be used together, and whilst this is possible, you're much better off choosing one and sticking to it.
NHibernate allows you to map your database tables to your domain objects in a highly flexible way. It also allows you to use HQL to query the database.
Linq to SQL also allows you to map your domain objects to the database however it use the Linq query syntax to query the database
The main difference here is that the Linq query syntax is checked at compile time by the compiler to ensure your queries are valid.
Some things to be aware of with Linq are that it's only available in .NET 3.x and is only supported in VS2008. NHibernate is available for 2.0 and 3.x as well as VS2005.
Some things to be aware of with NHibernate are that it does not generate your domain objects, nor does it generate the mapping files. You need to do this manually. Linq can do this automatically for you.
A: By LINQ, I'm assuming you mean LINQ to SQL because LINQ, by itself, has no database "goings on" associated with it. It's just a query language that has a boat-load of syntactic sugar to make it look SQL-ish.
In the very basic of basic examples, NHibernate and LINQ to SQL seem to both be solving the same problem. Once you get pass that you soon realize that NHibernate has support for a lot of features that allow you to create truly rich domain models. There is also a LINQ to NHibernate project that allows you to use LINQ to query NHibernate in much the same way as you would use LINQ to SQL.
A: LINQ to SQL forces you to use the table-per-class pattern. The benefits of using this pattern are that it's quick and easy to implement and it takes very little effort to get your domain running based on an existing database structure. For simple applications, this is perfectly acceptable (and oftentimes even preferable), but for more complex applications devs will often suggest using a domain driven design pattern instead (which is what NHibernate facilitates).
The problem with the table-per-class pattern is that your database structure has a direct influence over your domain design. For instance, let's say you have a Customers table with the following columns to hold a customer's primary address information:
*
*StreetAddress
*City
*State
*Zip
Now, let's say you want to add columns for the customer's mailing address as well so you add in the following columns to the Customers table:
*
*MailingStreetAddress
*MailingCity
*MailingState
*MailingZip
Using LINQ to SQL, the Customer object in your domain would now have properties for each of these eight columns. But if you were following a domain driven design pattern, you would probably have created an Address class and had your Customer class hold two Address properties, one for the mailing address and one for their current address.
That's a simple example, but it demonstrates how the table-per-class pattern can lead to a somewhat smelly domain. In the end, it's up to you. Again, for simple apps that just need basic CRUD (create, read, update, delete) functionality, LINQ to SQL is ideal because of simplicity. But personally I like using NHibernate because it facilitates a cleaner domain.
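In code, the domain-driven version of that example would look something like this (a sketch, not generated LINQ to SQL classes):
public class Address
{
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

public class Customer
{
    public Address CurrentAddress { get; set; }
    public Address MailingAddress { get; set; }
}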
Edit: @lomaxx - Yes, the example I used was simplistic and could have been optimized to work well with LINQ to SQL. I wanted to keep it as basic as possible to drive home the point. The point remains though that there are several scenarios where having your database structure determine your domain structure would be a bad idea, or at least lead to suboptimal OO design.
A: First let´s separate two different things:
Database modeling is concerned about the data while object modeling is concerned about entities and relationships.
Linq-to-SQL advantage is to quickly generate classes out of database schema so that they can be used as active record objects (see active record design pattern definition).
NHibernate advantage is to allow flexibility between your object modeling and database modeling. Database can be modeled to best reflect your data taking in consideration performance for instance. While your object modeling will best reflect the elements of the business rule using an approach such as Domain-Driven-Design. (see Kevin Pang comment)
With legacy databases with poor modeling and/or naming conventions then Linq-to-SQL will reflect this unwanted structures and names to your classes. However NHibernate can hide this mess with data mappers.
In greenfield projects where databases have good naming and low complexity, Linq-to-SQL can be good choice.
However you can use Fluent NHibernate with auto-mappings for this same purpose with mapping as convention. In this case you don´t worry about any data mappers with XML or C# and let NHibernate to generate the database schema from your entities based on a convention that you can customize.
On the other hand, the learning curve of Linq-to-SQL is smaller than NHibernate's.
A: Or you could use the Castle ActiveRecords project. I've been using that for a short time to ramp up some new code for a legacy project. It uses NHibernate and works on the active record pattern (surprising given its name I know). I haven't tried, but I assume that once you've used it, if you feel the need to drop to NHibernate support directly, it wouldn't be too much to do so for part or all of your project.
A: As you wrote, "for a person who has not used either of them":
LINQ to SQL is easy to use, so anyone can pick it up easily.
It also supports procedures, which helps most of the time.
Suppose you want to get data from more than one table: write a procedure, drag that procedure onto the designer, and it will create everything for you.
Suppose your procedure name is "CUSTOMER_ORDER_LINEITEM", which fetches records from all three tables; then just write
MyDataContext db = new MyDataContext();
List<CUSTOMER_ORDER_LINEITEMResult> records = db.CUSTOMER_ORDER_LINEITEM(param1, param2 ...).ToList<CUSTOMER_ORDER_LINEITEMResult>();
You can use your records object in a foreach loop as well, which is not supported by NHibernate.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "117"
} |
Q: What is the general rule of thumb for creating an Exception in Java? I have been in both situations:
*
*Creating too many custom Exceptions
*Using the general Exception class too much
In both cases the project started OK but soon became an overhead to maintain (and refactor).
So what is the best practice regarding the creation of your own Exception classes?
A: My rule of thumb is that when the client (the caller) might reasonably want to do something different depending on the type of exception thrown, additional exception types are warranted. More often than not, however, the extra exception types are not needed. For instance, if the caller is writing code like
try {
doIt();
} catch (ExceptionType1 ex1) {
// do something useful
} catch (ExceptionType2 ex2) {
// do the exact same useful thing that was done in the block above
}
then clearly the additional exception types are not needed. All too often I see (or am forced to write) code like this because the code being called was overzealous in its creation of new exception types.
A: If I can't find an exception that has a name describing what type of error was caused then I make my own.
That's my rule-o-thumb.
A: Basically, each job deserves its own exception. When you catch exceptions, you don't distinguish different instances, like you would normally do with objects; therefore you need different subtypes. Overuse of custom exceptions is something I rarely see happening.
One advice would be to create exceptions as needed, and if it becomes apparent that one exception type is a duplicate of another, refactor the code by merging the two. Of course it helps if some thought goes into structuring exceptions from the beginning. But generally, use custom exceptions for all cases that have no 1:1 correspondence to existing, situation-specific exceptions.
On the other hand, NullPointerExceptions and IndexOutofBoundsExceptions might actually often be appropriate. Don't catch these, though (except for logging) as they're a programming error which means that after throwing them, the program is in an undefined state.
A: My own rule of thumb:
I never throw Exception, except in unit tests when what you throw is irrelevant and there's no reason to spend any extra time on it.
I create my own custom exception type for errors occuring in my custom business logic. This exception type is used as much as possible for recasting other exceptions, except in cases where it makes sense for the client to have visibility into what actually occurred.
A: While creating your own exception:
*
*All exceptions must be a child of the Throwable class
*If you want to write a checked exception that is automatically enforced by the Handle or Declare Rule, you need to extend the Exception class
*If you want to write a runtime exception, you need to extend the RuntimeException class.
A: The Java Specialists wrote a post about Exceptions in Java, and in it they list a few "best practices" for creating Exceptions, summarized below:
*
*Don't Write Own Exceptions (there are lots of useful Exceptions that are already part of the Java API)
*Write Useful Exceptions (if you have to write your own Exceptions, make sure they provide useful information about the problem that occurred)
A: Don't do what the developers at my company did. Somebody created an [sic] InvalidArguementException that parallels java.lang.IllegalArgumentException, and we now use it in (literally) hundreds of classes. Both indicate that a method has been passed an illegal or inappropriate argument. Talk about a waste...
Joshua Bloch covers this in Effective Java Programming Language Guide [my bible of first resort on Best Practices] Chapter 8. Exceptions Item 42: Favor the use of standard exceptions. Here's a bit of what he says,
Reusing preexisting exceptions has several benefits. Chief among these, it makes your API easier to learn and use because it matches established conventions with which programmers are already familiar [my emphasis, not Bloch's]. A close second is that programs using your API are easier to read because they aren't cluttered with unfamiliar exceptions. Finally, fewer exception classes mean a smaller memory footprint and less time spent loading classes.
The most commonly reused exception is IllegalArgumentException. This is generally the exception to throw when the caller passes in an argument whose value is inappropriate. For example, this would be the exception to throw if the caller passed a negative number in a parameter representing the number of times some action were to be repeated.
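For example, the negative-repeat-count case mentioned above would be handled like this (a sketch; the method name is invented):
public void repeatAction(int times) {
    if (times < 0) {
        throw new IllegalArgumentException("times must be non-negative: " + times);
    }
    // ... perform the action "times" times
}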
That said, you should never throw Exception itself. Java has a well-chosen, diverse and well-targeted bunch of built-in exceptions that cover most situations AND describe the exception that occurred well enough so that you can remedy the cause.
Be friendly to the programmers who have to maintain your code in the future.
A: Don't eat exceptions, throw them https://stackoverflow.com/a/921583/1097600
Avoid creating your own exceptions. Use the ones below that already exist.
IllegalStateException
UnsupportedOperationException
IllegalArgumentException
NoSuchElementException
NullPointerException
Throw unchecked exceptions.
Example
public void validate(MyObject myObjectInstance) {
if (!myObjectList.contains(myObjectInstance))
throw new NoSuchElementException("object not present in list");
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: How does one implement FxCop / static analysis on an existing code base What are some of the strategies that are used when implementing FxCop / static analysis on existing code bases with existing violations? How can one most effectively reduce the static analysis violations?
A: Rewrite your code in a passing style!
Seriously, an old code base will have hundreds of errors - but that's why we have novice/intern programmers. Correcting FxCop violations is a great way to get an overview of the code base and also learn how to write conforming .NET code.
So just bite the bullet, drink lots of caffeine, and just get through it in a couple days!
A: Make liberal use of the [SuppressMessage] attribute to begin with. Once you get the count to 0 via the attribute, you then put in a rule that new checkins may not introduce FxCop violations.
Visual Studio 2008 has a nice code analysis feature that allows you to ensure that code analysis runs on every build and you can treat warnings as errors. That might slow things down a bit so I recommend setting up a continuous integration server (like CruiseControl.NET) and having it run code analysis on every checkin.
Once you get it under control and aren't introducing new violations with every checkin, start to tackle whole classes of FxCop violations at a time with the goal of removing the SuppressMessageAttributes that you used.
The way to keep track of which ones you really want to keep is to always add a Justification value to the ones you really want to suppress.
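A suppression with a Justification looks like this (the attribute lives in System.Diagnostics.CodeAnalysis; the rule and method shown are just examples):
[SuppressMessage("Microsoft.Naming", "CA1709:IdentifiersShouldBeCasedCorrectly",
    Justification = "Matches the casing of the external COM interface.")]
public void LEGACYMethod() { }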
A: NDepend looks like it could do what you're after, but I'm not sure if it can be integrated into a CruiseControl.Net automated build, and fail the build if the code doesn't meet the requirements (which is what I'd like to happen).
Any other ideas?
A: An alternative to FxCop would be to use the tool NDepend. This tool lets you write code rules over C# LINQ queries (what we call CQLinq). Disclaimer: I am one of the developers of the tool
More than 200 code rules are proposed by default. Customizing existing rules or creating your own rules is straightforward thanks to the well-known C# LINQ syntax.
To keep the number of false positives low, CQLinq offers the unique capability to define the set JustMyCode through special code queries prefixed with notmycode. More explanations about this feature can be found here. Here are for example two notmycode default queries:
*
*Discard generated and designer Methods from JustMyCode
*Discard generated Types from JustMyCode
Also to keep the number of false positives low, with CQLinq you can focus rule results only on code added or refactored since a defined baseline in the past. See the following rule, which detects too-complex methods added or refactored since the baseline:
warnif count > 0
from m in Methods
where m.CyclomaticComplexity > 20 &&
      (m.WasAdded() || m.CodeWasChanged())
select new { m, m.CyclomaticComplexity }
Finally, notice that with NDepend code rules can be verified live in Visual Studio and at build process time, in a generated HTML+javascript report.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Single responsibility principle: granularity of the reason to change When applying the Single Responsibility Principle and looking at a class's reason to change, how do you determine whether that reason to change is too granular, or not granular enough?
A: I don't know that there's a good answer to this one other than "apply your judgement, based on your experience." Failing that, get help, which I guess is what you're doing here ;)
Seriously, though, if you find that you're creating a gazillion classes to do what seems like a simple job, then you're probably being too granular. If your classes all seem colossal, then you're probably being too coarse. Please pardon me if that's a statement of the obvious.
I think this is one of those fuzzy, no-hard-and-fast-rules cases that show us why we need human programmers. Just try something, seeking balance, and refactor if you find you're going too far in one direction or the other. And remember: if it's worth doing, it's worth doing badly.
A: *
*I wouldn't be too worried about granularity initially. I would just go with separation of concerns at a broader level to start with. The basic point is that we should avoid over-engineering here, while still doing just enough. I agree with Lucas that this first step will improve with experience.
*As the requirements change, as I start to get the 'smells', and as my understanding of the problem improves, I would refactor the design by factoring out the separate concerns as they become obvious. Basically, separation of concerns should be evolutionary, as with the overall design.
"language": "en",
"url": "https://stackoverflow.com/questions/27018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Alternating coloring groups of rows in Excel I have an Excel Spreadsheet like this
id | data for id
| more data for id
id | data for id
id | data for id
| more data for id
| even more data for id
id | data for id
| more data for id
id | data for id
id | data for id
| more data for id
Now I want to group the data of one id by alternating the background color of the rows
var color = white
for each row
    if the first cell is not empty and color is white
        set color to green
    else if the first cell is not empty and color is green
        set color to white
    set background of row to color
Can anyone help me with a macro or some VBA code
Thanks
A: I think this does what you are looking for. Flips color when the cell in column A changes value. Runs until there is no value in column B.
Public Sub HighLightRows()
Dim i As Integer
i = 1
Dim c As Integer
c = 3 'red
Do While (Cells(i, 2) <> "")
If (Cells(i, 1) <> "") Then 'check for new ID
If c = 3 Then
c = 4 'green
Else
c = 3 'red
End If
End If
Rows(Trim(Str(i)) + ":" + Trim(Str(i))).Interior.ColorIndex = c
i = i + 1
Loop
End Sub
A: I use this formula to get the input for conditional formatting:
=IF(B2=B1,E1,1-E1) [content of cell E2]
Where column B contains the item that needs to be grouped and E is an auxiliary column. Every time the upper cell (B1 in this case) is the same as the current one (B2), the upper row's content from column E is returned. Otherwise, it will return 1 minus that content (that is, the output will be 0 or 1, depending on the value of the upper cell).
A: Based on Jason Z's answer, which from my tests seems to be wrong (at least on Excel 2010), here's a bit of code that happens to work for me :
Public Sub HighLightRows()
Dim i As Integer
i = 2 'start at 2, cause there's nothing to compare the first row with
Dim c As Integer
c = 2 'Color 1. Check http://dmcritchie.mvps.org/excel/colors.htm for color indexes
Do While (Cells(i, 1) <> "")
If (Cells(i, 1) <> Cells(i - 1, 1)) Then 'check for different value in cell A (index=1)
If c = 2 Then
c = 34 'color 2
Else
c = 2 'color 1
End If
End If
Rows(Trim(Str(i)) + ":" + Trim(Str(i))).Interior.ColorIndex = c
i = i + 1
Loop
End Sub
A: Do you have to use code?
If the table is static, then why not use the auto-formatting capability?
It may also help if you "merge cells" of the same data. So maybe if you merge the cells of the "data, more data, even more data" into one cell, you can more easily deal with the classic "each row is a row" case.
A: I'm borrowing this and tried to modify it for my use. I have order numbers in column A and some orders take multiple rows. I just want to alternate the white and gray per order number. What I have here alternates each row.
Sub ChangeBackgroundColor()
' ChangeBackgroundColor Macro
'
' Keyboard Shortcut: Ctrl+Shift+B
Dim a As Integer
a = 1
Dim c As Integer
c = 15 'gray
Do While (Cells(a, 2) <> "")
If (Cells(a, 1) <> "") Then 'check for new ID
If c = 15 Then
c = 2 'white
Else
c = 15 'gray
End If
End If
Rows(Trim(Str(a)) + ":" + Trim(Str(a))).Interior.ColorIndex = c
a = a + 1
Loop
End Sub
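If the order number is filled in on every row (rather than only on an order's first row), the Cells(a, 1) <> "" test fires on every row, which would explain the per-row striping. A hedged sketch (hypothetical variant, assuming data starts on row 1 or under a header) that compares against the previous row instead, so the colour flips only when the order number changes:
Sub ChangeBackgroundColorByOrder()
    Dim a As Integer
    a = 2 'start at 2 so there is a previous row to compare against
    Dim c As Integer
    c = 15 'gray
    Do While (Cells(a, 2) <> "")
        If (Cells(a, 1) <> Cells(a - 1, 1)) Then 'flip only when the order number changes
            If c = 15 Then
                c = 2 'white
            Else
                c = 15 'gray
            End If
        End If
        Rows(a).Interior.ColorIndex = c
        a = a + 1
    Loop
End Sub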
A: If you select the Conditional Formatting menu option under the Format menu item, you will be given a dialog that lets you construct some logic to apply to that cell.
Your logic might not be the same as your code above, it might look more like:
Cell Value is | equal to | | and | White .... Then choose the color.
You can select the add button and make the condition as large as you need.
A: I have reworked Bartdude's answer, for Light Grey / White based upon a configurable column, using RGB values. A boolean var is flipped when the value changes and this is used to index the colours array via the integer values of True and False. Works for me on 2010. Call the sub with the sheet number.
Public Sub HighLightRows(intSheet As Integer)
Dim intRow As Integer: intRow = 2 ' start at 2, cause there's nothing to compare the first row with
Dim intCol As Integer: intCol = 1 ' define the column with changing values
Dim Colr1 As Boolean: Colr1 = True ' Will flip True/False; adding 2 gives 1 or 2
Dim lngColors(2 + True To 2 + False) As Long ' Indexes : 1 and 2
' True = -1, array index 1. False = 0, array index 2.
lngColors(2 + False) = RGB(235, 235, 235) ' lngColors(2) = light grey
lngColors(2 + True) = RGB(255, 255, 255) ' lngColors(1) = white
Do While (Sheets(intSheet).Cells(intRow, 1) <> "")
'check for different value in intCol, flip the boolean if it's different
If (Sheets(intSheet).Cells(intRow, intCol) <> Sheets(intSheet).Cells(intRow - 1, intCol)) Then Colr1 = Not Colr1
Sheets(intSheet).Rows(intRow).Interior.Color = lngColors(2 + Colr1) ' one colour or the other
' Optional : retain borders (these no longer show through when interior colour is changed) by specifically setting them
With Sheets(intSheet).Rows(intRow).Borders
.LineStyle = xlContinuous
.Weight = xlThin
.Color = RGB(220, 220, 220)
End With
intRow = intRow + 1
Loop
End Sub
Optional bonus : for SQL data, colour any NULL values with the same yellow as used in SSMS
Public Sub HighLightNULLs(intSheet As Integer)
Dim intRow As Integer: intRow = 2 ' start at 2 to avoid the headings
Dim intCol As Integer
Dim lngColor As Long: lngColor = RGB(255, 255, 225) ' pale yellow
For intRow = intRow To Sheets(intSheet).UsedRange.Rows.Count
For intCol = 1 To Sheets(intSheet).UsedRange.Columns.Count
If Sheets(intSheet).Cells(intRow, intCol) = "NULL" Then Sheets(intSheet).Cells(intRow, intCol).Interior.Color = lngColor
Next intCol
Next intRow
End Sub
A: I use this rule in Excel to format alternating rows:
*
*Highlight the rows you wish to apply an alternating style to.
*Press "Conditional Formatting" -> New Rule
*Select "Use a formula to determine which cells to format" (last entry)
*Enter rule in format value: =MOD(ROW(),2)=0
*Press "Format", make required formatting for alternating rows, eg. Fill -> Color.
*Press OK, Press OK.
If you wish to format alternating columns instead, use =MOD(COLUMN(),2)=0
Voila!
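To make the same conditional-formatting approach follow the question's id groups instead of plain row parity, a formula along these lines should work (an assumption here: the ids sit in column A starting at A2, under a header; COUNTA counts how many ids have appeared so far, so the result flips at each new id):
=MOD(COUNTA($A$2:$A2),2)=1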
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Best approaches to versioning Mac "bundle" files So you know a lot of Mac apps use "bundles": It looks like a single file to your application, but it's actually a folder with many files inside.
For a version control system to handle this, it needs to:
*
*check out all the files in a directory, so the app can modify them as necessary
*at checkin,
*
*commit files which have been modified
*add new files which the application has created
*mark as deleted files which are no longer there (since the app deleted them)
*manage this as one atomic change
Any ideas on the best way to handle this with existing version control systems? Are any of the versioning systems more adept in this area?
A: Mercurial in particular versions based on file, not directory structure. Therefore, your working tree, which is a fully-fledged repository, doesn't spit out .svn folders at each level.
It also means that a directory that is replaced, like an Application or other Bundle, will still find its contents with particular file names under revision control. File names are monitored, not inodes or anything fancy like that!
Obviously, if a new file is added to the Bundle, you'll need to explicitly add this to your repository. Similarly, removing a file from a Bundle should be done with an 'hg rm'.
There aren't any decent Mercurial GUIs for OS X yet, but if all you do is add/commit/merge, it isn't that hard to use a command line.
A: For distributed SCM systems like git and mercurial, this shouldn't be a problem, as Matthew mentioned.
If you need to use a centralized SCM like Subversion or CVS, then you can zip up (archive) your bundles before checking them into source control. This can be painful and takes an extra step. There is a good blog post about this at Tapestry Central:
Mac OS X bundles vs. Subversion
This article demonstrates a ruby script that manages the archiving for you.
A: An update from the future:
If I recall, the problem with managing bundles in SVN was all the .svn folders getting cleared each time you made a bundle. This shouldn't be a problem any more, now that SVN stores everything in a single .svn folder at the root.
A: Bringing this thread back to daylight, since the October 2013 iWork (Pages 5.0 etc.) no longer allows storing in 'flat file' (zipped), but only as bundles.
The problem is not the creation of version control hidden folders inside such structures (well, for svn it is), but as Mark says in the question: getting automatic, atomic update of files added or removed (by the application, in this case iWork) so I wouldn't need to do that manually.
Clearly, iWork and Apple are only bothered by iCloud usability. Yet I have a genuine case for storing .pages, .numbers and .keynote in a Mercurial repo. After the update, it blows everything apart. What to do?
Addendum:
Found 'hg addremove' that does the trick for me.
$ hg help addremove
hg addremove [OPTION]... [FILE]...
add all new files, delete all missing files
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Comparing Arrays of Objects in JavaScript I want to compare 2 arrays of objects in JavaScript code. The objects have 8 total properties, but each object will not have a value for each, and the arrays are never going to be any larger than 8 items each, so maybe the brute force method of traversing each and then looking at the values of the 8 properties is the easiest way to do what I want to do, but before implementing, I wanted to see if anyone had a more elegant solution. Any thoughts?
A: As serialization doesn't work generally (only when the order of properties matches: JSON.stringify({a:1,b:2}) !== JSON.stringify({b:2,a:1})) you have to check the count of properties and compare each property as well:
const objectsEqual = (o1, o2) =>
Object.keys(o1).length === Object.keys(o2).length
&& Object.keys(o1).every(p => o1[p] === o2[p]);
const obj1 = { name: 'John', age: 33};
const obj2 = { age: 33, name: 'John' };
const obj3 = { name: 'John', age: 45 };
console.log(objectsEqual(obj1, obj2)); // true
console.log(objectsEqual(obj1, obj3)); // false
If you need a deep comparison, you can call the function recursively:
const obj1 = { name: 'John', age: 33, info: { married: true, hobbies: ['sport', 'art'] } };
const obj2 = { age: 33, name: 'John', info: { hobbies: ['sport', 'art'], married: true } };
const obj3 = { name: 'John', age: 33 };
const objectsEqual = (o1, o2) =>
typeof o1 === 'object' && Object.keys(o1).length > 0
? Object.keys(o1).length === Object.keys(o2).length
&& Object.keys(o1).every(p => objectsEqual(o1[p], o2[p]))
: o1 === o2;
console.log(objectsEqual(obj1, obj2)); // true
console.log(objectsEqual(obj1, obj3)); // false
Then it's easy to use this function to compare objects in arrays:
const arr1 = [obj1, obj1];
const arr2 = [obj1, obj2];
const arr3 = [obj1, obj3];
const arraysEqual = (a1, a2) =>
a1.length === a2.length && a1.every((o, idx) => objectsEqual(o, a2[idx]));
console.log(arraysEqual(arr1, arr2)); // true
console.log(arraysEqual(arr1, arr3)); // false
A: EDIT: You cannot overload operators in current, common browser-based implementations of JavaScript interpreters.
To answer the original question, one way you could do this, and mind you, this is a bit of a hack, simply serialize the two arrays to JSON and then compare the two JSON strings. That would simply tell you if the arrays are different, obviously you could do this to each of the objects within the arrays as well to see which ones were different.
Another option is to use a library which has some nice facilities for comparing objects - I use and recommend MochiKit.
EDIT: The answer kamens gave deserves consideration as well, since a single function to compare two given objects would be much smaller than any library to do what I suggest (although my suggestion would certainly work well enough).
Here is a naïve implementation that may do just enough for you - be aware that there are potential problems with this implementation:
function objectsAreSame(x, y) {
var objectsAreSame = true;
for(var propertyName in x) {
if(x[propertyName] !== y[propertyName]) {
objectsAreSame = false;
break;
}
}
return objectsAreSame;
}
The assumption is that both objects have the same exact list of properties.
Oh, and it is probably obvious that, for better or worse, I belong to the only-one-return-point camp. :)
A: Here is optimized code for the case when the function receives empty arrays (the version above returns false when comparing two distinct empty arrays; this one returns true):
const objectsEqual = (o1, o2) => {
if (o2 === null && o1 !== null) return false;
return o1 !== null && typeof o1 === 'object' && Object.keys(o1).length > 0 ?
Object.keys(o1).length === Object.keys(o2).length &&
Object.keys(o1).every(p => objectsEqual(o1[p], o2[p]))
: (o1 !== null && Array.isArray(o1) && Array.isArray(o2) && !o1.length &&
!o2.length) ? true : o1 === o2;
}
A: Here is my attempt, using Node's assert module + npm package object-hash.
I suppose that you would like to check if two arrays contain the same objects, even if those objects are ordered differently between the two arrays.
var assert = require('assert');
var hash = require('object-hash');
var obj1 = {a: 1, b: 2, c: 333},
obj2 = {b: 2, a: 1, c: 444},
obj3 = {b: "AAA", c: 555},
obj4 = {c: 555, b: "AAA"};
var array1 = [obj1, obj2, obj3, obj4];
var array2 = [obj3, obj2, obj4, obj1]; // [obj3, obj3, obj2, obj1] should work as well
// calling assert.deepEquals(array1, array2) at this point FAILS (throws an AssertionError)
// even if array1 and array2 contain the same objects in different order,
// because array1[0].c !== array2[0].c
// sort objects in arrays by their hashes, so that if the arrays are identical,
// their objects can be compared in the same order, one by one
var array1 = sortArrayOnHash(array1);
var array2 = sortArrayOnHash(array2);
// then, this should output "PASS"
try {
assert.deepEqual(array1, array2);
console.log("PASS");
} catch (e) {
console.log("FAIL");
console.log(e);
}
// You could define as well something like Array.prototype.sortOnHash()...
function sortArrayOnHash(array) {
return array.sort(function(a, b) {
    // the comparator must return a negative/zero/positive number, not a boolean
    return hash(a) < hash(b) ? -1 : (hash(a) > hash(b) ? 1 : 0);
});
}
A: Honestly, with 8 objects max and 8 properties max per object, your best bet is to just traverse each object and make the comparisons directly. It'll be fast and it'll be easy.
If you're going to be using these types of comparisons often, then I agree with Jason about JSON serialization...but otherwise there's no need to slow down your app with a new library or JSON serialization code.
A: My practice implementation with sorting, tested and working.
const obj1 = { name: 'John', age: 33};
const obj2 = { age: 33, name: 'John' };
const obj3 = { name: 'John', age: 45 };
const equalObjs = ( obj1, obj2 ) => {
    if ( Object.keys(obj1).length !== Object.keys(obj2).length ) {
        return false;
    }
    for ( const [key, value] of Object.entries(obj1) ) {
        // Each key of the first object must also exist in the second object
        // and hold the same value
        if ( !Object.keys(obj2).some( ( e ) => e == key ) || obj2[key] !== value ) {
            return false;
        }
    }
    return true;
}
console.info( equalObjs( obj1, obj2 ) );
Compare your arrays
// Sort Arrays
var arr1 = arr1.sort(( a, b ) => {
var fa = Object.keys(a);
var fb = Object.keys(b);
if (fa < fb) {
return -1;
}
if (fa > fb) {
return 1;
}
return 0;
});
var arr2 = arr2.sort(( a, b ) => {
var fa = Object.keys(a);
var fb = Object.keys(b);
if (fa < fb) {
return -1;
}
if (fa > fb) {
return 1;
}
return 0;
});
const equalArrays = ( arr1, arr2 ) => {
// If the arrays are different length we an eliminate immediately
if( arr1.length !== arr2.length ) {
return false;
} else if ( arr1.every(( obj, index ) => equalObjs( obj, arr2[index] ) ) ) {
return true;
} else {
return false;
}
}
console.info( equalArrays( arr1, arr2 ) );
A: I am sharing my compare function implementation as it might be helpful for others:
/*
null AND null // true
undefined AND undefined // true
null AND undefined // false
[] AND [] // true
[1, 2, 'test'] AND ['test', 2, 1] // true
[1, 2, 'test'] AND ['test', 2, 3] // false
[undefined, 2, 'test'] AND ['test', 2, 1] // false
[undefined, 2, 'test'] AND ['test', 2, undefined] // true
[[1, 2], 'test'] AND ['test', [2, 1]] // true
[1, 'test'] AND ['test', [2, 1]] // false
[[2, 1], 'test'] AND ['test', [2, 1]] // true
[[2, 1], 'test'] AND ['test', [2, 3]] // false
[[[3, 4], 2], 'test'] AND ['test', [2, [3, 4]]] // true
[[[3, 4], 2], 'test'] AND ['test', [2, [5, 4]]] // false
[{x: 1, y: 2}, 'test'] AND ['test', {x: 1, y: 2}] // true
1 AND 1 // true
{test: 1} AND ['test', 2, 1] // false
{test: 1} AND {test: 1} // true
{test: 1} AND {test: 2} // false
{test: [1, 2]} AND {test: [1, 2]} // true
{test: [1, 2]} AND {test: [1]} // false
{test: [1, 2], x: 1} AND {test: [1, 2], x: 2} // false
{test: [1, { z: 5 }], x: 1} AND {x: 1, test: [1, { z: 5}]} // true
{test: [1, { z: 5 }], x: 1} AND {x: 1, test: [1, { z: 6}]} // false
*/
function is_equal(x, y) {
const
arr1 = x,
arr2 = y,
is_objects_equal = function (obj_x, obj_y) {
if (!(
typeof obj_x === 'object' &&
Object.keys(obj_x).length > 0
))
return obj_x === obj_y;
return Object.keys(obj_x).length === Object.keys(obj_y).length &&
Object.keys(obj_x).every(p => is_objects_equal(obj_x[p], obj_y[p]));
}
;
if (!( Array.isArray(arr1) && Array.isArray(arr2) ))
return (
arr1 && typeof arr1 === 'object' &&
arr2 && typeof arr2 === 'object'
)
? is_objects_equal(arr1, arr2)
: arr1 === arr2;
if (arr1.length !== arr2.length)
return false;
for (const idx_1 of arr1.keys())
for (const idx_2 of arr2.keys())
if (
(
Array.isArray(arr1[idx_1]) &&
is_equal(arr1[idx_1], arr2[idx_2])
) ||
is_objects_equal(arr1[idx_1], arr2[idx_2])
)
{
arr2.splice(idx_2, 1); // remove the matched element (this mutates the second array argument)
break;
}
return !arr2.length;
}
A: I know this is an old question and the answers provided work fine... but this is a bit shorter and doesn't require any additional libraries (e.g. JSON):
function arraysAreEqual(ary1,ary2){
return (ary1.join('') == ary2.join(''));
}
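A word of caution: because join('') erases the element boundaries, this can report equality for arrays that differ. A few hypothetical checks illustrate the limits:
arraysAreEqual([1, 2, 3], [12, 3]);   // true, although the arrays differ
arraysAreEqual([], [null]);           // true: join() turns null/undefined into ''
arraysAreEqual([{a: 1}], [{b: 2}]);   // true: both objects stringify to [object Object]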
A: I have worked a bit on a simple algorithm to compare contents of two objects and return an intelligible list of differences. Thought I would share. It borrows some ideas from jQuery, namely the map function implementation and the object and array type checking.
It returns a list of "diff objects", which are arrays with the diff info. It's very simple.
Here it is:
// compare contents of two objects and return a list of differences
// returns an array where each element is also an array in the form:
// [accessor, diffType, leftValue, rightValue ]
//
// diffType is one of the following:
// value: when primitive values at that index are different
// undefined: when values in that index exist in one object but don't in
// another; one of the values is always undefined
// null: when a value in that index is null or undefined; values are
// expressed as boolean values, indicating whether they were nulls
// type: when values in that index are of different types; values are
// expressed as types
// length: when arrays in that index are of different length; values are
// the lengths of the arrays
//
function DiffObjects(o1, o2) {
// choose a map() impl.
// you may use $.map from jQuery if you wish
var map = Array.prototype.map?
function(a) { return Array.prototype.map.apply(a, Array.prototype.slice.call(arguments, 1)); } :
function(a, f) {
var ret = new Array(a.length), value;
for ( var i = 0, length = a.length; i < length; i++ )
ret[i] = f(a[i], i);
return ret.concat();
};
// shorthand for push impl.
var push = Array.prototype.push;
// check for null/undefined values
if ((o1 == null) || (o2 == null)) {
if (o1 != o2)
return [["", "null", o1!=null, o2!=null]];
return undefined; // both null
}
// compare types
if ((o1.constructor != o2.constructor) ||
(typeof o1 != typeof o2)) {
return [["", "type", Object.prototype.toString.call(o1), Object.prototype.toString.call(o2) ]]; // different type
}
// compare arrays
if (Object.prototype.toString.call(o1) == "[object Array]") {
if (o1.length != o2.length) {
return [["", "length", o1.length, o2.length]]; // different length
}
var diff =[];
for (var i=0; i<o1.length; i++) {
// per element nested diff
var innerDiff = DiffObjects(o1[i], o2[i]);
if (innerDiff) { // o1[i] != o2[i]
// merge diff array into parent's while including parent object name ([i])
push.apply(diff, map(innerDiff, function(o, j) { o[0]="[" + i + "]" + o[0]; return o; }));
}
}
// if any differences were found, return them
if (diff.length)
return diff;
// return nothing if arrays equal
return undefined;
}
// compare object trees
if (Object.prototype.toString.call(o1) == "[object Object]") {
var diff =[];
// check all props in o1
for (var prop in o1) {
// the double check in o1 is because in V8 objects remember keys set to undefined
if ((typeof o2[prop] == "undefined") && (typeof o1[prop] != "undefined")) {
// prop exists in o1 but not in o2
diff.push(["[" + prop + "]", "undefined", o1[prop], undefined]); // prop exists in o1 but not in o2
}
else {
// per element nested diff
var innerDiff = DiffObjects(o1[prop], o2[prop]);
if (innerDiff) { // o1[prop] != o2[prop]
// merge diff array into parent's while including parent object name ([prop])
push.apply(diff, map(innerDiff, function(o, j) { o[0]="[" + prop + "]" + o[0]; return o; }));
}
}
}
for (var prop in o2) {
// the double check in o2 is because in V8 objects remember keys set to undefined
if ((typeof o1[prop] == "undefined") && (typeof o2[prop] != "undefined")) {
// prop exists in o2 but not in o1
diff.push(["[" + prop + "]", "undefined", undefined, o2[prop]]); // prop exists in o2 but not in o1
}
}
// if any differences were found, return them
if (diff.length)
return diff;
// return nothing if objects equal
return undefined;
}
// if same type and not null or objects or arrays
// perform primitive value comparison
if (o1 != o2)
return [["", "value", o1, o2]];
// return nothing if values are equal
return undefined;
}
A: I tried JSON.stringify() and it worked for me.
let array1 = [1,2,{value:'alpha'}] , array2 = [{value:'alpha'},'music',3,4];
JSON.stringify(array1) // "[1,2,{"value":"alpha"}]"
JSON.stringify(array2) // "[{"value":"alpha"},"music",3,4]"
JSON.stringify(array1) === JSON.stringify(array2); // false
A: The objectsAreSame function mentioned in @JasonBunting's answer works fine for me. However, there's a little problem: If x[propertyName] and y[propertyName] are objects (typeof x[propertyName] == 'object'), you'll need to call the function recursively in order to compare them.
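A minimal sketch of that recursive variant (it keeps the original's assumption that both objects share the same property list; note that typeof null is 'object', so null-valued properties would need extra handling):
function objectsAreSameDeep(x, y) {
    for (var propertyName in x) {
        if (typeof x[propertyName] === 'object' && typeof y[propertyName] === 'object') {
            // recurse into nested objects instead of comparing references
            if (!objectsAreSameDeep(x[propertyName], y[propertyName])) {
                return false;
            }
        } else if (x[propertyName] !== y[propertyName]) {
            return false;
        }
    }
    return true;
}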
A: Please try this one:
function used_to_compare_two_arrays(a, b)
{
// Build an array of the elements of a that are also present in b
var c = a.filter(function(value, index, obj) {
return b.indexOf(value) > -1;
});
// If every element of a was found in b, the lengths will match
if (c.length !== a.length) {
return 0;
} else{
return 1;
}
}
A: Not sure about the performance - I still have to test it on big objects - but this works great for me. The advantage it has over the other solutions is that the objects/arrays do not have to be in the same order.
It practically takes the first object in the first array and scans the second array for every object; if there's a match, it proceeds to the next one.
There is absolutely room for optimization, but it's working :)
Thanks to @ttulka, whose work inspired this; I just reworked it a little bit.
const objectsEqual = (o1, o2) => {
let match = false
if(typeof o1 === 'object' && Object.keys(o1).length > 0) {
match = (Object.keys(o1).length === Object.keys(o2).length && Object.keys(o1).every(p => objectsEqual(o1[p], o2[p])))
}else {
match = (o1 === o2)
}
return match
}
const arraysEqual = (a1, a2) => {
let finalMatch = []
let itemFound = []
if(a1.length === a2.length) {
finalMatch = []
a1.forEach( i1 => {
itemFound = []
a2.forEach( i2 => {
itemFound.push(objectsEqual(i1, i2))
})
finalMatch.push(itemFound.some( i => i === true))
})
}
return finalMatch.every(i => i === true)
}
const ar1 = [
{ id: 1, name: "Johnny", data: { body: "Some text"}},
{ id: 2, name: "Jimmy"}
]
const ar2 = [
{name: "Jimmy", id: 2},
{name: "Johnny", data: { body: "Some text"}, id: 1}
]
console.log("Match:",arraysEqual(ar1, ar2))
jsfiddle: https://jsfiddle.net/x1pubs6q/
or just use lodash :))))
const _ = require('lodash')
const isArrayEqual = (x, y) => {
return _.isEmpty(_.xorWith(x, y, _.isEqual));
};
A: using _.some from lodash: https://lodash.com/docs/4.17.11#some
const array1AndArray2NotEqual =
_.some(array1, (a1, idx) => a1.key1 !== array2[idx].key1
|| a1.key2 !== array2[idx].key2
|| a1.key3 !== array2[idx].key3);
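A generic variant of the same idea (a hypothetical sketch leaning on _.isEqual, so the keys don't have to be listed one by one):
const array1AndArray2NotEqual =
    array1.length !== array2.length ||
    _.some(array1, (a1, idx) => !_.isEqual(a1, array2[idx]));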
A: Here's my solution. It will compare arrays which also contain objects and arrays. Elements can be in any position.
Example:
const array1 = [{a: 1}, {b: 2}, { c: 0, d: { e: 1, f: 2, } }, [1,2,3,54]];
const array2 = [{a: 1}, {b: 2}, { c: 0, d: { e: 1, f: 2, } }, [1,2,3,54]];
const arraysCompare = (a1, a2) => {
if (a1.length !== a2.length) return false;
const objectIteration = (object) => {
const result = [];
const objectReduce = (obj) => {
for (let i in obj) {
if (typeof obj[i] !== 'object') {
result.push(`${i}${obj[i]}`);
} else {
objectReduce(obj[i]);
}
}
};
objectReduce(object);
return result;
};
const reduceArray1 = a1.map(item => {
if (typeof item !== 'object') return item;
return objectIteration(item).join('');
});
const reduceArray2 = a2.map(item => {
if (typeof item !== 'object') return item;
return objectIteration(item).join('');
});
const compare = reduceArray1.map(item => reduceArray2.includes(item));
return compare.reduce((acc, item) => acc + Number(item)) === a1.length;
};
console.log(arraysCompare(array1, array2));
A: This works for me to compare two arrays of objects without taking the order of the items into consideration:
const collection1 = [
{ id: "1", name: "item 1", subtitle: "This is a subtitle", parentId: "1" },
{ id: "2", name: "item 2", parentId: "1" },
{ id: "3", name: "item 3", parentId: "1" },
]
const collection2 = [
{ id: "3", name: "item 3", parentId: "1" },
{ id: "2", name: "item 2", parentId: "1" },
{ id: "1", name: "item 1", subtitle: "This is a subtitle", parentId: "1" },
]
const contains = (arr, obj) => {
let i = arr.length;
while (i--) {
if (JSON.stringify(arr[i]) === JSON.stringify(obj)) {
return true;
}
}
return false;
}
const isEqual = (obj1, obj2) => {
let n = 0
if (obj1.length !== obj2.length) {
return false;
}
for (let i = 0; i < obj1.length; i++) {
if (contains(obj2, obj1[i])) {
n++
}
}
return n === obj1.length
}
console.log(isEqual(collection1,collection2))
If you need to take the order of the items into consideration, use the built-in lodash function isEqual.
A: If you stringify them...
type AB = {
nome: string;
}
const a: AB[] = [{ nome: 'Célio' }];
const b: AB[] = [{ nome: 'Célio' }];
console.log(a === b); // false
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
A: Comparing with JSON is pretty bad. Try this package to compare nested arrays and get the difference:
https://www.npmjs.com/package/deep-object-diff
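A minimal usage sketch, assuming the package's documented diff export (arrays are diffed by index, per its README):
const { diff } = require('deep-object-diff');
const lhs = [{ id: 1, name: 'a' }, { id: 2 }];
const rhs = [{ id: 1, name: 'b' }, { id: 2 }];
console.log(diff(lhs, rhs)); // { '0': { name: 'b' } } -- only the changed paths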
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "111"
} |
Q: Javascript: declaring a variable before the conditional result? My JavaScript is pretty nominal, so when I saw this construction, I was kind of baffled:
var shareProxiesPref = document.getElementById("network.proxy.share_proxy_settings");
shareProxiesPref.disabled = proxyTypePref.value != 1;
Isn't it better to do an if on proxyTypePref.value, and then declare the var inside the result, only if you need it?
(Incidentally, I also found this form very hard to read in comparison to the normal usage. There were a set of two or three of these conditionals, instead of doing a single if with a block of statements in the result.)
UPDATE:
The responses were very helpful and asked for more context. The code fragment is from Firefox 3, so you can see the code here:
http://mxr.mozilla.org/firefox/source/browser/components/preferences/connection.js
Basically, when you look at the Connect preferences window in Firefox, clicking the proxy modes (radio buttons), causes various form elements to enable|disable.
A:
(Incidentally, I also found this form very hard to read in comparison to the normal usage.
Not necessarily, although that was my first thought, too. Code should always emphasize its function, especially if it has side effects. If the writer's intention was to emphasize the assignment to shareProxiesPref.disabled then hey, roll with it. On the other hand, it could have been clearer that the action taking place here is to disable the object, in which case the conditional block would have been better.
A: It depends on the context of this code. If it's running on page load, then it would be better to put this code in an if block.
But, if this is part of a validation function, and the field switches between enabled and disabled throughout the life of the page, then this code sort of makes sense.
It's important to remember that setting disabled to false also alters page state.
A: It's hard to say what's better to do without more context.
If this code being executed every time that proxyTypePref changes, then you're always going to need set shareProxiesPref.disabled.
I would agree than an if statement would be a bit more readable than the current code.
Isn't it better to do an if on proxyTypePref.value, and then declare the var inside the result, only if you need it?
If you're talking strictly about variable declaration, then it doesn't matter whether or not you put it inside an if statement. Any Javascript variable declared inside a function is in scope for the entire function, regardless of where it is declared.
If you're talking about the execution of document.getElementById, then yes, it is much better to not make that call if you don't have to.
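For comparison, a sketch of the if-based form the asker had in mind; note you still need the else branch (or an up-front assignment), because the control has to be re-enabled when the value switches back to 1:
var shareProxiesPref = document.getElementById("network.proxy.share_proxy_settings");
if (proxyTypePref.value != 1) {
    shareProxiesPref.disabled = true;
} else {
    shareProxiesPref.disabled = false;
}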
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I set LANG to ascii? I'm accessing an Ubuntu machine using PuTTY, and using gcc.
The default LANG environment variable on this machine is set to en_NZ.UTF-8, which causes GCC to think PuTTY is capable of displaying UTF-8 text, which it doesn't seem to be.
Maybe it's my font, I don't know - it does this:
foo.c:1: error: expected â=â, â,â, â;â, âasmâ or â__attribute__â at end of input
If I set it with export LANG=en_NZ, then this causes GCC to behave correctly, I get:
foo.c:1: error: expected '=', ',', ';', 'asm' or '__attribute__' at end of input
but this then causes everything else to go wrong. For example
man foo
man: can't set the locale; make sure $LC_* and $LANG are correct
I've trawled Google and I can't for the life of me find out what I have to put in there for it to just use ASCII. en_NZ.ASCII doesn't work, nor do any of the other things I can find.
Thanks
A: LANG=en_NZ is correct. However, you must make locale files for en_NZ.
For Ubuntu, edit /var/lib/locales/supported.d/local and add en_NZ ISO-8859-1 to the file. If your system is another distribution (including Debian), the location will be different. Look at /usr/sbin/locale-gen and see where it stores this info.
Afterwards, run locale-gen to create the en_NZ locale file. Hope this helps!
A: PuTTY can display UTF-8 - I think the setting is under Appearance -> Translation (or something like that; I don't have access to it right now).
A: For Debian 5.0 Lenny:
aptitude install locales
If that's already installed:
dpkg-reconfigure locales
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you deal with the light side and dark side of distributed version control systems? I've had some discussions recently at work about switching from Subversion to a DVCS like bazaar, and I would like to get other people's opinion.
I've managed to crystallize my reluctance to do so into a simple parallel.
Version Control can be used well or badly.
The 'light side' of version control is when you use it to keep track of your changes, are able to go back to older versions when you break stuff, and when you publish your changes so your peers can see your work-in-progress.
The 'dark side' of version control is when you don't use it properly so you don't 'checkpoint' your work by committing regularly, you keep a bunch of changes in your local checkout, and you don't share your changes with others as you make them.
Subversion makes both the light side and the dark side relatively hard. All the basics work, but few people really use branching in Subversion (beyond tagging and releasing) because merging is not straightforward at all. The support for it in Subversion itself is terrible, and there are scripts like svnmerge that make it better, but it's still not very good. So, these days, with good branching and merging support considered more and more like the necessity it is for collaborative development, Subversion doesn't match up.
On the other hand, the 'dark side' is pretty tough to follow too. You only need to be bitten once by not having your local changes committed once in a while to the online repository, and breaking your code with a simple edit you don't even remember making. So you end up making regular commits and people can see the work you're doing.
So, in the end Subversion ends up being a good middle-of-the-road VCS that, while a bit cumbersome for implementing the best practices, still makes it hard to get things very wrong.
In contrast, with a DVCS the cost of either going completely light side or dark side is a lot lower. Branching and merging is a lot simpler with these modern VCS systems. But the distributed aspect makes it easy to work in a set of local branches on your own machine, giving you the granular commits you need to checkpoint your work, possibly without ever publishing your changes so others can see, review, and collaborate. The friction of keeping your changes in your local branches and not publishing them is typically lower than the friction of publishing them in some branch on a publicly available server.
So in a nutshell, here's the question: if I give our developers at work a DVCS, how can I make sure they use it to go to the 'light side', still publish their changes in a central location regularly, and make them understand that their one week local hack they didn't want to share yet might be just the thing some other developer could use to finish a feature while the first one is on holiday?
If both the light side and the dark side of DVCS are so easy to get to, how do I keep them away from the dark side?
A: If there are developers on your team that don't want to share their "one week local hack" then thats the problem, not the source control tool you are using. A better term for the "dark side" you are describing is "the wrong way" of coding for a team. Source control is a tool used to facilitate collaborative work. If your team is not clear about the fact that the goal is to share the work, then the best reason to use source control is not even applicable.
Also, I think you might be a little confused about distributed source control. There is no publishing to a central location. Some branches are more important than others, and there exist many, many branches. Keeping that in mind, I think that distributed source control really works best for popular open source projects. I'm under the perception that centralized source control is still better for development within a company or some other clearly defined entity.
A: Nick,
while I agree that the problem is 'not sharing your work', the main argument is that tools promote a certain work flow and not all tools apply to all problems with equal friction. My concern is that a DVCS makes it easier to not share your work since you don't have the drawbacks of not sharing your work you get with SVN. If the friction of not sharing is lower than the friction of sharing (which it is in a DVCS), a developer, all else being equal, might easily choose the path of least friction.
I don't think I'm confused about distributed source control. I know that there is no 'central location' by default. But most projects using DVCS still have the concept of a 'master' branch at a 'central location' that they release from. For the purpose of my question though, I only make the distinction between 'private' (only accessible by the developer) and 'public' (accessible by all other developers) branches.
A: You make a distinction between 'private' and 'public' branches, but the only real difference between these cases is whether the branch's repository is only available locally or company-wide. A central repository is only one way to have company-wide availability.
Instead, why not say that all repositories must be publicly available throughout the company? For example, you could make all developers run their own local VCS server, and share their branches via zeroconf.
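With Mercurial, for instance, that is a one-liner per developer (default port shown):
hg serve -p 8000    # serves the local repository over HTTP so colleagues can pull from it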
A: I believe svn's merging has been somewhat overhauled in the latest release.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Tool to read and display Java .class versions Do any of you know of a tool that will search for .class files and then display their compiled versions?
I know you can look at them individually in a hex editor but I have a lot of class files to look over (something in my giant application is compiling to Java6 for some reason).
A: If you are on a unix system you could just do a
find /target-folder -name \*.class | xargs file | grep "version 50\.0"
(my version of file says "compiled Java class data, version 50.0" for java6 classes).
A: Yet another java version check
od -t d -j 7 -N 1 ApplicationContextProvider.class | head -1 | awk '{print "Java", $2 - 44}'
A: It is easy enough to read the class file signature and get these values without a 3rd party API. All you need to do is read the first 8 bytes.
ClassFile {
u4 magic;
u2 minor_version;
u2 major_version;
For class file version 51.0 (Java 7), the opening bytes are:
CA FE BA BE 00 00 00 33
...where 0xCAFEBABE are the magic bytes, 0x0000 is the minor version and 0x0033 is the major version.
import java.io.*;
public class Demo {
public static void main(String[] args) throws IOException {
ClassLoader loader = Demo.class.getClassLoader();
try (InputStream in = loader.getResourceAsStream("Demo.class");
DataInputStream data = new DataInputStream(in)) {
if (0xCAFEBABE != data.readInt()) {
throw new IOException("invalid header");
}
int minor = data.readUnsignedShort();
int major = data.readUnsignedShort();
System.out.println(major + "." + minor);
}
}
}
Walking directories (File) and archives (JarFile) looking for class files is trivial.
Oracle's Joe Darcy's blog lists the class version to JDK version mappings up to Java 7 (extended below through Java 9):
Target Major.minor Hex
1.1 45.3 0x2D
1.2 46.0 0x2E
1.3 47.0 0x2F
1.4 48.0 0x30
5 (1.5) 49.0 0x31
6 (1.6) 50.0 0x32
7 (1.7) 51.0 0x33
8 (1.8) 52.0 0x34
9 53.0 0x35
A: In Eclipse, if you don't have sources attached, mind the first line shown after the Attach Source button:
// Compiled from CDestinoLog.java (version 1.5 : 49.0, super bit)
A: Read the 8th byte to decimal:
Unix-like: hexdump -s 7 -n 1 -e '"%d"' Main.class
Windows: busybox.exe hexdump -s 7 -n 1 -e '"%d"' Main.class
Output example:
55
Explain:
*
*-s 7 Offset 7
*-n 1 Limit 1
*-e '"%d"' Print as decimal
Version map:
JDK 1.1 = 45 (0x2D hex)
JDK 1.2 = 46 (0x2E hex)
JDK 1.3 = 47 (0x2F hex)
JDK 1.4 = 48 (0x30 hex)
Java SE 5.0 = 49 (0x31 hex)
Java SE 6.0 = 50 (0x32 hex)
Java SE 7 = 51 (0x33 hex)
Java SE 8 = 52 (0x34 hex)
Java SE 9 = 53 (0x35 hex)
Java SE 10 = 54 (0x36 hex)
Java SE 11 = 55 (0x37 hex)
Java SE 12 = 56 (0x38 hex)
Java SE 13 = 57 (0x39 hex)
Java SE 14 = 58 (0x3A hex)
Java SE 15 = 59 (0x3B hex)
Java SE 16 = 60 (0x3C hex)
Java SE 17 = 61 (0x3D hex)
A: Maybe this helps somebody, too. It looks like there is an easier way to get the JAVA version used to compile/build a .class. This way is useful for an application/class to self-check its JAVA version.
I have gone through JDK library and found this useful constant:
com.sun.deploy.config.BuiltInProperties.CURRENT_VERSION.
I do not know since which JDK version it has been there.
Trying this piece of code with several version constants, I get the results below:
src:
System.out.println("JAVA DEV ver.: " + com.sun.deploy.config.BuiltInProperties.CURRENT_VERSION);
System.out.println("JAVA RUN v. X.Y: " + System.getProperty("java.specification.version") );
System.out.println("JAVA RUN v. W.X.Y.Z: " + com.sun.deploy.config.Config.getJavaVersion() ); //_javaVersionProperty
System.out.println("JAVA RUN full ver.: " + System.getProperty("java.runtime.version") + " (may return unknown)" );
System.out.println("JAVA RUN type: " + com.sun.deploy.config.Config.getJavaRuntimeNameProperty() );
output:
JAVA DEV ver.: 1.8.0_77
JAVA RUN v. X.Y: 1.8
JAVA RUN v. W.X.Y.Z: 1.8.0_91
JAVA RUN full ver.: 1.8.0_91-b14 (may return unknown)
JAVA RUN type: Java(TM) SE Runtime Environment
The constant really is stored in the class bytecode (the original post included a screenshot of Main.class with the constant highlighted).
The constant is used in a class that checks whether the JAVA version is out of date (see how Java checks that it is out of date)...
A: On Unix-like
file /path/to/Thing.class
Will give the file type and version as well. Here is what the output looks like:
compiled Java class data, version 49.0
A: A java-based solution using version magic numbers. Below it is used by the program itself to detect its bytecode version.
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import org.apache.commons.codec.DecoderException;
import org.apache.commons.codec.binary.Hex;
import org.apache.commons.io.IOUtils;
public class Main {
public static void main(String[] args) throws DecoderException, IOException {
Class clazz = Main.class;
Map<String, String> versionMapping = new HashMap<>();
versionMapping.put("002D","1.1");
versionMapping.put("002E","1.2");
versionMapping.put("002F","1.3");
versionMapping.put("0030","1.4");
versionMapping.put("0031","5.0");
versionMapping.put("0032","6.0");
versionMapping.put("0033","7");
versionMapping.put("0034","8");
versionMapping.put("0035","9");
versionMapping.put("0036","10");
versionMapping.put("0037","11");
versionMapping.put("0038","12");
versionMapping.put("0039","13");
versionMapping.put("003A","14");
InputStream stream = clazz.getClassLoader()
.getResourceAsStream(clazz.getName().replace(".", "/") + ".class");
byte[] classBytes = IOUtils.toByteArray(stream);
String versionInHexString = Hex.encodeHexString(
        new byte[]{classBytes[6], classBytes[7]}).toUpperCase(); // encodeHexString emits lowercase, the map keys are uppercase
System.out.println("bytecode version: "+versionMapping.get(versionInHexString));
}
}
A: Use the javap tool that comes with the JDK. The -verbose option will print the version number of the class file.
> javap -verbose MyClass
Compiled from "MyClass.java"
public class MyClass
SourceFile: "MyClass.java"
minor version: 0
major version: 46
...
To only show the version:
WINDOWS> javap -verbose MyClass | find "version"
LINUX > javap -verbose MyClass | grep version
A: The simplest way is to scan a class file using many of the answers here which read the class file magic bytes.
However some code is packaged in jars or other archive formats like WAR and EAR, some of which contain other archives or class files, plus you now have multi-release JAR files - see JEP-238 which use different JDK compilers per JAR.
This program scans classes from a list of files and folders and prints a summary of Java class file versions for each component, including each JAR within WAR/EARs:
public static void main(String[] args) throws IOException {
var files = Arrays.stream(args).map(Path::of).collect(Collectors.toList());
ShowClassVersions v = new ShowClassVersions();
for (var f : files) {
v.scan(f);
}
v.print();
}
Example output from a scan:
Version: 49.0 ~ JDK-5
C:\jars\junit-platform-console-standalone-1.7.1.jar
Version: 50.0 ~ JDK-6
C:\jars\junit-platform-console-standalone-1.7.1.jar
Version: 52.0 ~ JDK-8
C:\java\apache-tomcat-10.0.12\lib\catalina.jar
C:\jars\junit-platform-console-standalone-1.7.1.jar
Version: 53.0 ~ JDK-9
C:\java\apache-tomcat-10.0.12\lib\catalina.jar
C:\jars\junit-platform-console-standalone-1.7.1.jar
The scanner:
public class ShowClassVersions {
private TreeMap<String, ArrayList<String>> vers = new TreeMap<>();
private static final byte[] CLASS_MAGIC = new byte[] { (byte) 0xca, (byte) 0xfe, (byte) 0xba, (byte) 0xbe };
private final byte[] bytes = new byte[8];
private String versionOfClass(InputStream in) throws IOException {
int c = in.readNBytes(bytes, 0, bytes.length);
if (c == bytes.length && Arrays.mismatch(bytes, CLASS_MAGIC) == CLASS_MAGIC.length) {
int minorVersion = (bytes[4] << 8) + (bytes[5] << 0); // minor version is bytes 4-5
int majorVersion = (bytes[6] << 8) + (bytes[7] << 0);
return ""+ majorVersion + "." + minorVersion;
}
return "Unknown";
}
private Matcher classes = Pattern.compile("\\.(class|ear|war|jar)$").matcher("");
// This code scans any path (dir or file):
public void scan(Path f) throws IOException {
try (var stream = Files.find(f, Integer.MAX_VALUE,
(p, a) -> a.isRegularFile() && classes.reset(p.toString()).find())) {
stream.forEach(this::scanFile);
}
}
private void scanFile(Path f) {
String fn = f.getFileName().toString();
try {
if (fn.endsWith(".ear") || fn.endsWith(".war") || fn.endsWith(".jar"))
scanArchive(f);
else if (fn.endsWith(".class"))
store(f.toAbsolutePath().toString(), versionOfClass(f));
} catch (IOException e) {
throw new UncheckedIOException(e);
}
}
private void scanArchive(Path p) throws IOException {
    try (InputStream in = Files.newInputStream(p)) {
        scanArchive(p.toAbsolutePath().toString(), in); // reuse the stream managed by try-with-resources
    }
}
private void scanArchive(String desc, InputStream in) throws IOException {
HashSet<String> versions = new HashSet<>();
ZipInputStream zip = new ZipInputStream(in);
for (ZipEntry entry = null; (entry = zip.getNextEntry()) != null; ) {
String name = entry.getName();
// There could be different compiler versions per class in one jar
if (name.endsWith(".class")) {
versions.add(versionOfClass(zip));
} else if (name.endsWith(".jar") || name.endsWith(".war")) {
scanArchive(desc + " => " + name, zip);
}
}
if (versions.size() > 1)
System.out.println("Warn: "+desc+" contains multiple versions: "+versions);
for (String version : versions)
store(desc, version);
}
private String versionOfClass(Path p) throws IOException {
try (InputStream in = Files.newInputStream(p)) {
return versionOfClass(in);
}
}
private void store(String path, String jdkVer) {
vers.computeIfAbsent(jdkVer, k -> new ArrayList<>()).add(path);
}
// Could add a mapping table for JDK names, this guesses based on (JDK17 = 61.0)
public void print() {
for (var ver : vers.keySet()) {
System.out.println("Version: " + ver + " ~ " +jdkOf(ver));
for (var p : vers.get(ver)) {
System.out.println(" " + p);
}
}
}
private static String jdkOf(String ver) {
try {
return "JDK-"+((int)Float.parseFloat(ver)-44);
}
catch(NumberFormatException nfe)
{
return "JDK-??";
}
}
}
A: Just another reference that can also check if a class was compiled with preview features:
public final class JavaClassVersion {
private final int major;
private final int minor;
private JavaClassVersion(int major, int minor) {
this.major = major;
this.minor = minor;
}
public int major() {
return major;
}
public int minor() {
return minor;
}
public static JavaClassVersion of(Path artifactPath) {
try (InputStream in = Files.newInputStream(artifactPath);
DataInputStream data = new DataInputStream(in)) {
if (0xCAFEBABE != data.readInt()) {
throw new IOException("invalid header");
}
int minor = data.readUnsignedShort();
int major = data.readUnsignedShort();
return new JavaClassVersion(major, minor);
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
}
String getJavaVersion() {
switch (major) {
case 50:
return "1.6";
case 51:
return "1.7";
case 52:
return "1.8";
case 53:
return "9";
case 54:
return "10";
case 55:
return "11";
case 56:
return "12";
case 57:
return "13";
case 58:
return "14";
case 59:
return "15";
case 60:
return "16";
case 61:
return "17";
case 62:
return "18";
case 63:
return "19";
case 64:
return "20";
case 65:
return "21";
default:
return "";
}
}
boolean isPreview() {
return minor == 65535;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "128"
} |
Q: Pointer to Pointer Managed C++ I have an old C library with a function that takes a void**:
oldFunction(void** pStuff);
I'm trying to call this function from managed C++ (m_pStuff is a member of the parent ref class of type void*):
oldFunction( static_cast<sqlite3**>( &m_pStuff ) );
This gives me the following error from Visual Studio:
error C2440: 'static_cast' : cannot convert from 'cli::interior_ptr' to 'void **'
I'm guessing the compiler is converting the void* member pointer to a cli::interior_ptr behind my back.
Any advice on how to do this?
A: EDIT: Fixed answer, see below.
Really you need to know what oldFunction is going to be doing with pStuff. If pStuff is a pointer to some unmanaged data you can try wrapping the definition of m_pStuff with:
#pragma unmanaged
void* m_pStuff
#pragma managed
This will make the pointer unmanaged which can then be passed into unmanaged functions. Of course you will not be able to assign any managed objects to this pointer directly.
Fundamentally unmanaged and managed pointers are not the same and can't be converted without some sort of glue code that copies the underlying data. Basically managed pointers point to the managed heap and since this is garbage collected the actual memory address they point to can change over time. Unmanaged pointers do not change the memory address without you explicitly doing so.
Scratch that, you can't define unmanaged / managed inside a class definition. But this test code seems to work just fine:
// TestSol.cpp : main project file.
#include "stdafx.h"
using namespace System;
#pragma unmanaged
void oldFunction(void** pStuff)
{
return;
}
#pragma managed
ref class Test
{
public:
void* m_test;
};
int main(array<System::String ^> ^args)
{
Console::WriteLine(L"Hello World");
Test^ test = gcnew Test();
void* pStuff = test->m_test;
oldFunction(&pStuff);
test->m_test = pStuff;
return 0;
}
Here I copy the pointer out of the managed object first and then pass that in by to the oldFunction. Then I copy the result (probably updated by oldFunction) back into the managed object. Since the managed object is on the managed heap, the compiler won't let you pass a reference to the pointer contained in that object as it may move when the garbage collector runs.
A: Thanks for the advice. The pointer is to a C-style abstract structure which, if left exposed to the managed code, I think is going to cause further pain due to its lack of defined structure. So what I think I will do is wrap the C library in C++ and then wrap the C++ wrapper with managed C++, which will prevent exposing those C structures to managed code.
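For reference, a minimal sketch of the copy-out/copy-back pattern from the first answer, applied to the member itself (assuming m_pStuff only ever holds the unmanaged handle):
void* pLocal = m_pStuff;   // copy the unmanaged pointer out of the managed object
oldFunction(&pLocal);      // &pLocal is a plain void**, so no interior_ptr is involved
m_pStuff = pLocal;         // copy the (possibly updated) pointer back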
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |