source_id (int64, 1-74.7M) | question (string, 0-40.2k chars) | response (string, 0-111k chars) | metadata (dict)
---|---|---|---|
19,387 | What would you recommend for OS X development of a graphical application like those possible in WPF? My specific background is in Smalltalk & Java, but I currently work mostly in DHTML/.NET (ASP.NET/C#). | Cocoa. Considered by many to be the best application framework ever. The language is Objective-C, a Smalltalk-like language that inspired the creators of Java. Really, there is no reasonable alternative to Cocoa for OS X development, unless you have specific needs like wanting to be cross-platform. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2237/"
]
} |
19,409 | This is kinda oddball, but I was poking around with the GNU assembler today (I want to be able to at least read the syntax), and was trying to get this little contrived example of mine to work. Namely I just want to go from 0 to 100, printing out numbers all the while. So a few minutes later I come up with this:

# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main

_main:
    movl $0, %eax    # The starting point/current value.
    movl $100, %ebx  # The ending point.

_loop:
    # Display the current value.
    pushl %eax
    pushl $string
    call _printf
    addl $8, %esp

    # Check against the ending value.
    cmpl %eax, %ebx
    je _end

    # Increment the current value.
    incl %eax
    jmp _loop

_end:

All I get from this is 3 printed over and over again. Like I said, just a little contrived example, so don't worry too much about it, it's not a life or death problem. (The formatting's a little messed up, but nothing major.) | You can't trust what any called procedure does to any of the registers. Either push the registers onto the stack and pop them back off after calling printf, or have the increment and end point values held in memory and read/written into registers as you need them. I hope the following works. I'm assuming that pushl has an equivalent popl and you can push an extra couple of numbers onto the stack.

# count.s: print the numbers from 0 to 100.
.text
string: .asciz "%d\n"
.globl _main

_main:
    movl $0, %eax    # The starting point/current value.
    movl $100, %ebx  # The ending point.

_loop:
    # Remember your registers.
    pushl %eax
    pushl %ebx

    # Display the current value.
    pushl %eax
    pushl $string
    call _printf
    addl $8, %esp

    # Reinstate registers.
    popl %ebx
    popl %eax

    # Check against the ending value.
    cmpl %eax, %ebx
    je _end

    # Increment the current value.
    incl %eax
    jmp _loop

_end: | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/61/"
]
} |
19,412 | How can I request a random row (or as close to truly random as is possible) in pure SQL? | See this post: SQL to Select a random row from a database table. It goes through methods for doing this in MySQL, PostgreSQL, Microsoft SQL Server, IBM DB2 and Oracle (the following is copied from that link):

Select a random row with MySQL:

SELECT column FROM table
ORDER BY RAND()
LIMIT 1

Select a random row with PostgreSQL:

SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1

Select a random row with Microsoft SQL Server:

SELECT TOP 1 column FROM table
ORDER BY NEWID()

Select a random row with IBM DB2:

SELECT column, RAND() as IDX
FROM table
ORDER BY IDX
FETCH FIRST 1 ROWS ONLY

Select a random record with Oracle:

SELECT column FROM (
    SELECT column FROM table
    ORDER BY dbms_random.value
)
WHERE rownum = 1
| {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/19412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/473/"
]
} |
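These idioms are easy to try out against SQLite, which accepts the same `ORDER BY RANDOM() LIMIT 1` form as PostgreSQL; a minimal sketch from Python's standard library (table and data are invented). Note that the ORDER BY approach sorts the entire table, so it gets slow on large ones:

```python
import sqlite3

# In-memory table with invented data, just to exercise the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",)])

# SQLite accepts the same idiom as PostgreSQL.
row = conn.execute("SELECT name FROM t ORDER BY RANDOM() LIMIT 1").fetchone()
print(row[0])
```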
19,442 | How can I create this file in a directory in windows 2003 SP2: .hgignore I get error: You must type a file name. | That's a "feature" of Windows Explorer. Try to create your files from a command line (or from a batch/program you wrote) and it should work fine. Try this from a dos prompt: echo Hello there! > .hgignore | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/19442",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/479/"
]
} |
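The same trick works from any program that calls the filesystem API directly instead of going through Explorer's name validation; a quick sketch in Python (the directory and ignore patterns are invented):

```python
import os
import tempfile

# A throwaway directory standing in for the repository root.
repo = tempfile.mkdtemp()
path = os.path.join(repo, ".hgignore")

# Explorer rejects extension-only names, but the filesystem itself does not.
with open(path, "w") as f:
    f.write("syntax: glob\n*.pyc\n")

created = os.path.basename(path)
print(created)
```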
19,454 | Following on from my recent question on Large, Complex Objects as a Web Service Result. I have been thinking about how I can ensure all future child classes are serializable to XML. Now, obviously I could implement the IXmlSerializable interface and then chuck a reader/writer to it but I would like to avoid that since it then means I need to instantiate a reader/writer whenever I want to do it, and 99.99% of the time I am going to be working with a string so I may just write my own. However, to serialize to XML, I am simply decorating the class and its members with the Xml??? attributes ( XmlRoot , XmlElement etc.) and then passing it to the XmlSerializer and a StringWriter to get the string. Which is all good. I intend to put the method to return the string into a generic utility method so I don't need to worry about type etc. The thing that concerns me is this: If I do not decorate the class(es) with the required attributes, an error is not thrown until run time. Is there any way to enforce attribute decoration? Can this be done with FxCop? (I have not used FxCop yet) UPDATE: Sorry for the delay in getting this closed off guys, lots to do! Definitely like the idea of using reflection to do it in a test case rather than resorting to FxCop (like to keep everything together).. Fredrik Kalseth's answer was fantastic, thanks for including the code as it probably would have taken me a bit of digging to figure out how to do it myself! +1 to the other guys for similar suggestions :) | I'd write a unit/integration test that verifies that any class matching some given criteria (i.e. subclassing X) is decorated appropriately. If you set up your build to run with tests, you can have the build fail when this test fails. UPDATE: You said, "Looks like I will just have to roll my sleeves up and make sure that the unit tests are collectively maintained" - you don't have to. Just write a general test class that uses reflection to find all classes that need to be asserted. 
Something like this:

[TestClass]
public class When_type_inherits_MyObject
{
    private readonly List<Type> _types = new List<Type>();

    public When_type_inherits_MyObject()
    {
        // let's find all types that inherit from MyObject, directly or indirectly
        foreach(Type type in typeof(MyObject).Assembly.GetTypes())
        {
            if(type.IsClass && typeof(MyObject).IsAssignableFrom(type))
            {
                _types.Add(type);
            }
        }
    }

    [TestMethod]
    public void Properties_have_XmlElement_attribute()
    {
        foreach(Type type in _types)
        {
            foreach(PropertyInfo property in type.GetProperties())
            {
                object[] attribs = property.GetCustomAttributes(typeof(XmlElementAttribute), false);
                Assert.IsTrue(attribs.Length > 0, "Missing XmlElementAttribute on property " + property.Name + " in type " + type.FullName);
            }
        }
    }
} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/832/"
]
} |
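The answer's test is C#, but the shape of the trick, enumerate subclasses via reflection and fail when one lacks the decoration, translates to other runtimes too. A rough Python analogue, with a decorator standing in for the .NET attributes (all names here are invented):

```python
# A decorator standing in for .NET's Xml* attributes.
def xml_serializable(cls):
    cls._xml_tagged = True
    return cls

class MyObject:
    pass

@xml_serializable
class Answer(MyObject):
    pass

@xml_serializable
class Question(Answer):
    pass

def undecorated_subclasses(base):
    """Walk every direct and indirect subclass; report any missing the tag."""
    missing, stack = [], list(base.__subclasses__())
    while stack:
        cls = stack.pop()
        if "_xml_tagged" not in cls.__dict__:  # require *direct* decoration
            missing.append(cls.__name__)
        stack.extend(cls.__subclasses__())
    return missing

print(undecorated_subclasses(MyObject))  # [] - every subclass is decorated
```

Run as a unit test, a non-empty result fails the build, which is exactly the enforcement the question asks for.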
19,466 | How can I check file permissions, without having to run an operating-system-specific command via passthru() or exec()? | Use the fileperms() function:

clearstatcache();
echo substr(sprintf('%o', fileperms('/etc/passwd')), -4);
| {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/19466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
]
} |
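For comparison, the same check without shelling out exists in most languages; in Python it is os.stat (the file below is a temporary one created just for the demo):

```python
import os
import stat
import tempfile

# A throwaway file so the example is self-contained.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)

# stat.S_IMODE strips the file-type bits, leaving the permission bits:
# the same final octal digits the PHP snippet slices out of fileperms().
perms = stat.S_IMODE(os.stat(path).st_mode)
print(oct(perms))
```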
19,589 | Using C# .NET 3.5 and WCF, I'm trying to write out some of the WCF configuration in a client application (the name of the server the client is connecting to). The obvious way is to use ConfigurationManager to load the configuration section and write out the data I need. var serviceModelSection = ConfigurationManager.GetSection("system.serviceModel"); Appears to always return null. var serviceModelSection = ConfigurationManager.GetSection("appSettings"); Works perfectly. The configuration section is present in the App.config but for some reason ConfigurationManager refuses to load the system.ServiceModel section. I want to avoid manually loading the xxx.exe.config file and using XPath but if I have to resort to that I will. Just seems like a bit of a hack. Any suggestions? | The <system.serviceModel> element is for a configuration section group , not a section. You'll need to use System.ServiceModel.Configuration.ServiceModelSectionGroup.GetSectionGroup() to get the whole group. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19589",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1297/"
]
} |
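The raw XML makes the distinction visible: <system.serviceModel> only groups other sections, which is why asking for it as a plain section returns null. A Python sketch over an invented, stripped-down App.config to illustrate:

```python
import xml.etree.ElementTree as ET

# A stripped-down, invented App.config to show the group/section distinction.
config = """
<configuration>
  <appSettings>
    <add key="greeting" value="hello" />
  </appSettings>
  <system.serviceModel>
    <client>
      <endpoint name="MyService" address="http://example.invalid/svc" />
    </client>
  </system.serviceModel>
</configuration>
"""

root = ET.fromstring(config)

# appSettings is a section: its children are the settings themselves.
greeting = root.find("appSettings/add").get("value")

# system.serviceModel is a section *group*: it only contains other
# sections, so you drill into a child section (client) before reaching
# any actual configuration elements. (Matched by tag here because the
# dot in the name confuses ElementTree's path syntax.)
group = next(child for child in root if child.tag == "system.serviceModel")
address = group.find("client/endpoint").get("address")

print(greeting, address)
```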
19,654 | The company I used to work with has two developers working fulltime, and a handful of freelancers. They're in the process of hiring a new lead developer to try to bring order and management to the development. But, currently, one of the developers has seen the light of Django (the company has only developed in PHP to date) while the other developer is concerned that introducing a new language (Python) is a bad idea right now. How should they approach introducing this new technology? Obviously with only one of the developers actually knowing Python, there will be no redundancy when that dev is away or leaves the company. Should they bother to introduce Python, or should they look for PHP-only solutions until such a time when the team actually have more than one Pythonion? Without a team leader, the decisions are having to fall to them. | I recently introduced Python to my company, which does consulting work for the Post Office. I did this by waiting until there was a project for which I would be the only programmer, then getting permission to do this new project in Python. I then did another small project in Python with similarly impressive results. In addition, I used Python for all of my small throwaway assignments ("can you parse the stats in these files into a CSV file organized by date and site?", etc) and had a quick turnaround time on all of them. I also evangelized Python a bit; I went out of my way to NOT be obnoxious about it, but I'd occasionally describe why I liked it so much, talked about the personal projects I use it for in my free time and why it's awesome for me, etc. Eventually we started another project and I convinced everyone to use Python for it. I took care to point everyone to a lot of documentation, including the specific webpages relating to what they were working on, and every time they had a question, I'd explain how to do things properly by explaining the Pythonic approach to things, etc. This has worked really well. 
However, this might be somewhat different than what you're describing. In my case I started with moderately small projects and Python is only being used for new projects. Also, none of my co-workers were really Perl or PHP gurus; they all knew those languages and had been using them for a while, but it didn't take much effort for them to become more productive in Python than they'd been before. So if you're talking about new projects with people who currently use PHP but aren't super-experts and don't love that language, then I think switching to Python is a no-brainer. However, if you're talking about working with a large existing PHP code base with a lot of very experienced PHP programmers who are happy with their current setup, then switching languages is probably not a good idea. You're probably somewhere in between, so you'll have to weigh the tradeoffs; hopefully my answer will help you do that. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19654",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1951/"
]
} |
19,725 | Right, initially ran: c:\regsvr32 Amazing.dll then, (accidentally - I might add) I must have run it again, and (indeed) again when new versions of 'Amazing.dll' were released. Yes - I know now I should've run: c:\regsvr32 /u Amazing.dll beforehand - but hey! I forgot. To cut to the chase, when I add the COM reference in VS, I can see 3 instances of 'Amazing' all pointing to the same location (c:\Amazing.dll), running regsvr32 /u removes one of the references, the second time - does nothing... How do I get rid of these references? Am I looking at a regedit scenario? - If so - what exactly happens if I delete one of the keys??? Cheers | Your object's GUIDs should not be changing. In other words, once you register the COM object, re-registering shouldn't be adding anything additional to the registry. Unless you added additional COM interfaces or objects to the project. In any case, if this is a one time deal (and it sounds like it is), open regedit and delete the unneeded keys manually. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19725",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2266/"
]
} |
19,746 | I'm trying to create a webapplication where I want to be able to plug-in separate assemblies. I'm using MVC preview 4 combined with Unity for dependency injection, which I use to create the controllers from my plugin assemblies. I'm using WebForms (default aspx) as my view engine. If I want to use a view, I'm stuck on the ones that are defined in the core project, because of the dynamic compiling of the ASPX part. I'm looking for a proper way to enclose ASPX files in a different assembly, without having to go through the whole deployment step. Am I missing something obvious? Or should I resort to creating my views programmatically? Update: I changed the accepted answer. Even though Dale's answer is very thorough, I went for the solution with a different virtual path provider. It works like a charm, and takes only about 20 lines in code altogether I think. | Essentially this is the same issue as people had with WebForms and trying to compile their UserControl ASCX files into a DLL. I found this http://www.codeproject.com/KB/aspnet/ASP2UserControlLibrary.aspx that might work for you too. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/909/"
]
} |
19,766 | What would be the best way to have a list of items with a checkbox each in Java Swing? I.e. a JList with items that have some text and a checkbox each? | Create a custom ListCellRenderer and assign it to the JList. This custom ListCellRenderer must return a JCheckBox in its implementation of the getListCellRendererComponent(...) method. But this JCheckBox will not be editable; it is simply painted on the screen, so it is up to you to choose when the JCheckBox should be 'ticked' or not. For example, you could show it ticked when the row is selected (the isSelected parameter), but that way the check status would not be maintained if the selection changes. It is better to show it checked by consulting the data behind the ListModel, but then it is up to you to implement the method that changes the check status of the data and notifies the JList so it is repainted. I will post sample code later if you need it. ListCellRenderer | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1827/"
]
} |
19,772 | Does anyone know of a good Command Prompt replacement? I've tried bash/Cygwin, but that does not really meet my needs at work because it's too heavy. I'd like a function-for-function identical wrapper on cmd.exe, but with highlighting, intellisense, and (critically) a tabbed interface. Powershell is okay, but the interface is still lacking. | Edited : I've been using ConEmu ( http://conemu.github.io/ ) for quite some time now. This one is a wrapper too, since it is not really possible to replace the Windows console without rewriting the whole command interpreter. Below the line is my original answer for an earlier alternative. Not exactly a replacement (actually, it's a prettifying wrapper) but you might try Console ( http://sourceforge.net/projects/console/ ) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1490/"
]
} |
19,787 | Is it possible to look back through the history of a Subversion repository for files of a certain name (even better would be for them to have a wildcard search)? I want to see if a .bat file has been committed to the repository at some point in the past but has since been removed in later updates. Even a dump of the file history at each revision would work, as I could just grep the output. I have looked through the manual but could not see a good way to do this. The logs for each commit are descriptive, so I cannot just look through the log messages to see what modifications were done. I presume Subversion does have a way of retrieving this? | TortoiseSVN can search the logs very easily, and on my system I can enter ".plg" in the search box and find all adds, modifies, and deletes for those files. Without Tortoise, the only way I can think of doing that would be to grep the full logs or parse the logs and do your own searching for 'A' and 'D' indicators on the file you are looking for (use svn log --verbose to get file paths). svn log --verbose | grep .bat | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/19787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277/"
]
} |
19,790 | By default, each row of a GridView maps to each row in a DataTable or DataSet attached to its data source. But what if I want to display these rows in multiple columns? For example, if it has 10 rows, 5 rows each should be displayed in 2 columns side by side. Also, can I do this with the Infragistics grid? Is this possible? | You can use a DataList control instead. It has a RepeatColumns property that lets you define the number of columns you want to display. In .NET Framework 3.5, there is an even better solution, the ListView control. You can find further information about how to use the ListView control here. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1688440/"
]
} |
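The underlying transformation, ten rows into two side-by-side columns of five, is just a reshape; the following Python sketch shows the layout a DataList with RepeatColumns=2 would produce, assuming the default vertical repeat direction (the data is invented):

```python
# Ten invented rows, reshaped into 2 columns of 5.
rows = [f"row{i}" for i in range(10)]
ncols = 2
per_col = len(rows) // ncols
columns = [rows[i * per_col:(i + 1) * per_col] for i in range(ncols)]

# zip pairs the columns up: each tuple is one rendered line of the grid.
grid = list(zip(*columns))
print(grid[0])
print(grid[4])
```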
19,852 | I'm just designing the schema for a database table which will hold details of email attachments - their size in bytes, filename and content-type (i.e. "image/jpg", "audio/mp3", etc). Does anybody know the maximum length that I can expect a content-type to be? | I hope I haven't misread, but it looks like the length is max 127/127, or 255 total. RFC 4288 has a reference in 4.2 (page 6): Type and subtype names MUST conform to the following ABNF:

type-name = reg-name
subtype-name = reg-name
reg-name = 1*127reg-name-chars
reg-name-chars = ALPHA / DIGIT / "!" / "#" / "$" / "&" / "." / "+" / "-" / "^" / "_"

It is not clear to me if the +suffix can add past the 127, but it appears not. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2084/"
]
} |
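The ABNF above translates directly into a validator that enforces the 127-character cap on each of the type and subtype names (so 255 characters total, including the slash); a sketch:

```python
import re

# reg-name = 1*127( ALPHA / DIGIT / "!" / "#" / "$" / "&" / "." / "+" / "-" / "^" / "_" )
REG_NAME = r"[A-Za-z0-9!#$&.+\-^_]{1,127}"
MEDIA_TYPE = re.compile(rf"{REG_NAME}/{REG_NAME}")

def is_valid_media_type(value):
    return MEDIA_TYPE.fullmatch(value) is not None

print(is_valid_media_type("image/jpeg"))           # True
print(is_valid_media_type("application/svg+xml"))  # True: '+' is a reg-name char
print(is_valid_media_type("a" * 128 + "/plain"))   # False: type name too long
```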
19,883 | I've used Trac/Subversion before and really like the integration. My current project is using Mercurial for distributed development and it'd be nice to be able to track issues/bugs and have this be integrated with Mercurial. I realized this could be tricky with the nature of DVCS. | TracMercurial integrates Trac with Mercurial. Assembla provides free Mercurial hosting with Trac integration. The idea is that you have a central repository as your master and upload all the subsidiary changes from local repositories into the main one. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/287/"
]
} |
19,893 | I have two applications written in Java that communicate with each other using XML messages over the network. I'm using a SAX parser at the receiving end to get the data back out of the messages. One of the requirements is to embed binary data in an XML message, but SAX doesn't like this. Does anyone know how to do this? UPDATE: I got this working with the Base64 class from the apache commons codec library , in case anyone else is trying something similar. | You could encode the binary data using base64 and put it into a Base64 element; the below article is a pretty good one on the subject. Handling Binary Data in XML Documents | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/19893",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1288/"
]
} |
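The same round-trip in Python's standard library: encode the bytes, drop the result into an element's text, decode on the other side (the element names here are made up):

```python
import base64
import xml.etree.ElementTree as ET

payload = bytes(range(256))  # arbitrary binary data, including NULs

# Sender: wrap the base64 text in an element.
msg = ET.Element("message")
ET.SubElement(msg, "attachment").text = base64.b64encode(payload).decode("ascii")
wire = ET.tostring(msg)  # safe for any XML parser: the document is pure ASCII

# Receiver: parse and decode to recover the original bytes.
decoded = base64.b64decode(ET.fromstring(wire).find("attachment").text)
print(decoded == payload)
```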
19,933 | I want to copy a file from A to B in C#. How do I do that? | The File.Copy method: MSDN Link | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
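File.Copy takes a source path and a destination path; for comparison, the same task in Python is a single shutil call (the paths below are invented throwaways so the sketch runs on its own):

```python
import os
import shutil
import tempfile

# Invented paths: a throwaway source file and a destination next to it.
src = os.path.join(tempfile.mkdtemp(), "a.txt")
with open(src, "w") as f:
    f.write("contents")

dst = src + ".copy"
shutil.copy(src, dst)  # copies data and permission bits (copy2 keeps mtime too)

with open(dst) as f:
    copied = f.read()
print(copied)
```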
19,963 | If you have two versions of the same report (.rpt) and you want to establish what the exact differences are, what is the best way to go about this? I've seen some commercial tools to do this, but I'm not too interested in forking out cash for something that should be relatively straightforward. Can I hook into the Crystal API and simply list all of the properties of every field or something? Please someone tell me that there's an Open Source project somewhere that does this... @:-) @Kogus, wouldn't diffing the outputs as text hide any formatting differences? @ladoucep, I don't seem to be able to export the report without data. | Can I hook into the Crystal API and simply list all of the properties of every field or something? Please someone tell me that there's an Open Source project somewhere that does this... @:-) There is, in fact, such an API. I wrote a VB6 application to do just what you asked and more. I think I even migrated it to VB.Net. As it was for my own use, I didn't spend much time making it 'polished'. I've been intending to release it, but I haven't had the time... Another approach that I've used in the past is to create an Access application to help manage large, report-development projects. One of its many features includes the ability to extract the tables that are used by the report, and the SQL statements used by its Commands and SQL Expressions. Its intent is to give one a global perspective of which reports use which tables. I probably still have it somewhere... ** edit 1 ** BusinessObjects Enterprise XI (R?) has a feature named 'Meta Manager'. It will periodically examine the contents of the Repository and save the results to a database. It uses the Report-Application Service (RAS) to generate the meta data. It's an additional, 5-figure license, of course. ** edit 2 ** Consider using PowerShell to do the work: PsCrystal. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19963",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030/"
]
} |
19,970 | What is the best way to manage a list of windows (keeping them in order) to be able to promote the next window to the top-level when the current top-level window is closed. This is for a web application, so we're using jQuery Javascript. We'd talked through a few simplistic solutions, such as using an array and just treating [0] index as the top-most window. I'm wondering if there's any potentially more efficient or useful alternative to what we had brainstormed. | | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/19970",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2286/"
]
} |
19,995 | Of course I am aware of Ajax, but the problem with Ajax is that the browser has to poll the server frequently to find whether there is new data. This increases server load. Is there any better method (even using Ajax) other than polling the server frequently? | Yes, what you're looking for is COMET: http://en.wikipedia.org/wiki/Comet_(programming) . Other good Google terms to search for are AJAX-push and reverse-ajax. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/19995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
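The gain over naive polling is that the request is parked on the server until there is actually something to say, so one held connection replaces many empty round-trips. A toy sketch of that hand-off, with a thread and a queue standing in for the HTTP machinery:

```python
import queue
import threading
import time

# One queue stands in for the server-side event source.
events = queue.Queue()

def long_poll(timeout=5.0):
    """One client request, held open until data arrives (or a timeout)."""
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None  # client would immediately reissue the request

def push_later():
    time.sleep(0.2)            # the server has nothing to say for a while...
    events.put("new message")  # ...then pushes, completing the held request

threading.Thread(target=push_later).start()
result = long_poll()  # blocks ~0.2s instead of issuing many empty polls
print(result)
```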
20,003 | I have a large application (~50 modules) using a structure similar to the following:

Application
- Communication modules
  - Color communication module
  - SSN communication module
  - etc. communication module
- Router module
- Service modules
  - Voting service module
    - Web interface submodule for voting
    - Vote collector submodule for voting
    - etc. for voting
  - Quiz service module
  - etc. module

I would like to import the application to Maven and Subversion. After some research I found that two practical approaches exist for this. One is using a tree structure just like the previous one. The drawback of this structure is that you need a ton of tweaking/hacks to get the multi-module reporting to work well with Maven. Another downside is that in Subversion the standard trunk/tags/branches approach adds even more complexity to the repository. The other approach uses a flat structure, where there is only one parent project and all the modules, submodules and parts-of-the-submodules are direct children of the parent project. This approach works well for reporting and is easier in Subversion; however, I feel I lose a bit of the structure this way. Which way would you choose in the long term and why? | We have a largish application (160+ OSGi bundles where each bundle is a Maven module) and the lesson we learned, and continue to learn, is that flat is better. The problem with encoding semantics in your hierarchy is that you lose flexibility. A module that is 100% say "communication" today may be partly "service" tomorrow and then you'll need to be moving things around in your repository and that will break all sorts of scripts, documentation, references, etc. So I would recommend a flat structure and to encode the semantics in another place (say for example an IDE workspace or documentation). I've answered a question about version control layout in some detail with examples at another question; it may be relevant to your situation. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/686/"
]
} |
20,021 | I'm currently looking at the Python framework Django for future db-based web apps as well as for a port of some apps currently written in PHP. One of the nastier issues during my last years was keeping track of database schema changes and deploying these changes to productive systems. I haven't dared asking for being able to undo them too, but of course for testing and debugging that would be a great feature. From other questions here (such as this one or this one ), I can see that I'm not alone and that this is not a trivial problem. Also, I found many inspirations in the answers there. Now, as Django seems to be very powerful, does it have any tools to help with the above? Maybe it's even in their docs and I missed it? | There are at least two third party utilities to handle DB schema migrations, South and Django Evolution . I haven't tried either one, but I have heard some good things about South, though Evolution has been around a little longer. Also, look at SchemaEvolution on the Django wiki. It is just a wiki page about migrating the db. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2077/"
]
} |
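Tools aside, the mechanism all of these rely on is small enough to sketch: record the applied schema versions in the database itself and run only the newer scripts. A minimal illustration with sqlite3 (the table and migration SQL are invented):

```python
import sqlite3

# Invented, ordered migrations; each runs at most once per database.
MIGRATIONS = [
    (1, "CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE author ADD COLUMN email TEXT"),
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:  # apply only what this database has not seen
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
migrate(conn)  # safe: a second run finds nothing new to apply
cols = [c[1] for c in conn.execute("PRAGMA table_info(author)")]
print(cols)
```

Undoing a change, which the question also asks about, needs a matching "down" script per version; the tools mentioned above manage those pairs for you.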
20,034 | Project Darkstar was the topic of the monthly JavaSIG meeting down at the Google offices in NYC last night. For those that don't know (probably everyone), Project Darkstar is a framework for massively multiplayer online games that attempts to take care of all of the "hard stuff." The basic idea is that you write your game server logic in such a way that all operations are broken up into tiny tasks. You pass these tasks to the Project Darkstar framework which handles distributing them to a specific node in the cluster, any concurrency issues, and finally persisting the data. Apparently doing this kind of thing is a much different problem for video games than it is for enterprise applications. Jim Waldo, who gave the lecture, claims that MMO games have a DB read/write ratio of 50/50, whereas enterprise apps are more like 90% read, 10% write. He also claims that most existing MMOs keep everything in memory exclusively, and only dump to a DB every 6 hours or so. This means if a server goes down, you would lose all of the work since the last DB dump. Now, the project itself sounds really cool, but I don't think the industry will accept it. First, you have to write your server code in Java. The client code can be written in anything (Jim claims ActionScript 3 is the most popular, followed by C++), but the server stuff has to be Java. Sounds good to me, but I really get the impression that everyone in the games industry hates Java. Second, unlike other industries where developers prefer to use existing frameworks and libraries, the guys in the games industry seem to like to write everything themselves. Not only that, they like to rewrite everything for every new game they produce. Things are starting to change where developers are using Havok for physics, Unreal Engine 3 as their platform, etc., but for the most part it looks like everything is still proprietary. So, are the guys at Project Darkstar just wasting their time? 
Can a general framework like this really work for complex games with the performance that is required? Even if it does work, are game companies willing to use it? | Edit: This was written before Oracle bought Sun and started a rampage to kill everything that does not make them a billion $ per day. See the comments for an OSS Fork. I still stand by my opinion that stuff like that (MMO Middleware) is realistic, you just need a company that doesn't suck behind it. The market may be dominated by a few large games, but that does not mean that there is not a lot of room for more niche games. Let's face it: If you want to reach 100,000+ players, you'll end up building your own technology stack, at least for the critical core. That's what CCP did for EVE Online ( StacklessIO ), that's what Blizzard did for World of Warcraft (although they do use many third-party libraries), that's what Mythic did for Warhammer Online (although they are based on Gamebryo). However, if you aim to be a small, niche MMO (like the dozens of Free-to-Play/Itemshop MMOs), then getting the network stuff right is just insanely hard, data consistency is even harder and scalability is the biggest b*tch. But game technology is not your only problem - you also need to tackle billing. Credit card only? Have fun selling in Germany then, people there want ELV. That's where you need a reliable billing provider, but you still need to wire in the billing application with your accounts to make sure that accounts are blocked/reactivated when the billing fails. There are some companies already offering "MMO Infrastructure Services" (i.e. Arvato's EEIS ), but the bottom line is: Stuff like Project Darkstar IS realistic, but assuming that you can build a Multi-Billion-MMO entirely on a Third Party Stack is optimistic, possibly idealistic. But then again, entirely inventing all of the technology is even more stupid - use the Third Party stuff that you need (i.e. 
Billing, Font Rendering, Audio Output...), but write the stuff that really makes or breaks your business (i.e. Network stack, User interface etc.) on your own. (Note: Jeff's posting may be a bit flawed , but the overall direction is correct IMHO.) Addendum: Also, the game industry does license and reuse engines a lot. The most prominent game Engines are the Unreal Engine , Source Engine and id Tech , which fuel dozens, if not hundreds of games. But there are some lesser-known (outside of the industry) engines. There is Gamebryo , the Middleware behind games like Civilization 4 and Fallout 3, there was RenderWare that is now only EA-in-House, but used in games like Battlefield 2 or The Sims 3. There is the open source Ogre3d , which was used in some commercial titles . If you're just looking for Sound, there's stuff like FMOD or if you want to do font-rendering, why not give FreeType a spin? What I'm saying is: Third-Party Engines/Middleware do exist, and they ARE being used successfully, and have been for more than a decade (I know for sure that id's Wolfenstein Engine was licensed to other companies, and that was 1992), even by big companies in multi-million-dollar titles. The important thing is the support, because a good engine with no help in case of an issue is pretty much worthless or at least very expensive if the developer has to spend their game-development-time with unnecessary debugging of the Engine. If the Darkstar folks manage to get the support side right and 2 or 3 higher-profile titles out, I do believe it could succeed in opening the MMO market to a lot more smaller developers and indies. | Edit: This was written before Oracle bought Sun and started a rampage to kill everything that does not make them a billion $ per day. See the comments for an OSS Fork. I still stand by my opinion that stuff like that (MMO Middleware) is realistic, you just need a company that doesn't suck behind it. The Market may be dominated by few large games, but that does not mean that there is not a lot of room for more niche games. Lets face it: If you want to reach 100.000+ players, you're ending up building your own technology stack, at least for the critical core. That's what CCP did for EVE Online ( StacklessIO ), that's what Blizzard did for World of Warcraft (although they do use many third-party libraries), that's what Mythic did for Warhammer Online (although they are based on Gamebryo). However, if you aim to be a small, niche MMO (like the dozens of Free-to-Play/Itemshop MMOs), then getting the Network stuff right is just insanely hard, data consistency is even harder and scalability is the biggest b*tch. But game technology is not your only problem - you also need to tackle 
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1471/"
]
} |
20,047 | We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database. I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks , and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:

UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0

The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the posts table. This one always gets killed in the deadlock. Here's an example:

SELECT [t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
       [t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
       [t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
       [t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0

To be perfectly clear, we are not seeing write / write deadlocks, but read / write. We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems! Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write. The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app. 
I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date? Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here . We are flirting with the idea of using SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly. I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time. Ideas? Thoughts? Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? Jeremy, we are sharing one static datacontext in the base Controller for the most part: private DBContext _db;/// <summary>/// Gets the DataContext to be used by a Request's controllers./// </summary>public DBContext DB{ get { if (_db == null) { _db = new DBContext() { SessionName = GetType().Name }; //_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED"); } return _db; }} Do you recommend we create a new context for every Controller, or per Page, or .. more often? | According to MSDN: http://msdn.microsoft.com/en-us/library/ms191242.aspx When either the READ COMMITTED SNAPSHOT or ALLOW SNAPSHOT ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. 
Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb. For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead. There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure. Try setting this option and REMOVE all NOLOCKs from code queries unless it’s really necessary. NOLOCKs or using global methods in the database context handler to combat database transaction isolation levels are Band-Aids to the problem. NOLOCKS will mask fundamental issues with our data layer and possibly lead to selecting unreliable data, where automatic select / update row versioning appears to be the solution. ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20047",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1/"
]
} |
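A minimal T-SQL sketch of the row-versioning fix the answer above recommends: first check whether it is already on, then enable it. The database name is a placeholder, and enabling requires exclusive access to the database.

```sql
-- A value of 1 means read-committed snapshot is already enabled
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- Enable it; WITH ROLLBACK IMMEDIATE disconnects other sessions so the change can apply
ALTER DATABASE [MyDatabase] SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```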
20,059 | What languages and tools do you consider a youngster starting out in programming should use in the modern era? Lots of us started with proprietary Basics and they didn't do all of us long term harm :) but given the experiences you have had since then and your knowledge of the domain now are there better options? There are related queries to this one such as " Best ways to teach a beginner to program? " and " One piece of advice " about starting adults programming both of which I submitted answers to but children might require a different tool. Disclosure: it's bloody hard choosing a 'correct' answer to a question like this so whoever has the best score in a few days will get the 'best answer' mark from me based on the community's choice. | I would suggest LEGO Mindstorm , it provides an intuitive drag and drop interface for programming and because it comes with hardware it provides something tangible for a child to grasp. Also, because it is "LEGO" they might think of it as more of a game than a programming exercise. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/269/"
]
} |
20,061 | I've recently taken up learning some C# and wrote a Yahtzee clone. My next step (now that the game logic is in place and functioning correctly) is to integrate some method of keeping stats across all the games played. My question is this: how should I go about storing this information? My first thought would be to use a database and I have a feeling that's the answer I'll get... if that's the case, can you point me to a good resource for creating and accessing a database from a C# application? Storing in an XML file actually makes more sense to me, but I thought if I suggested that I'd get torn apart ;). I'm used to building web applications and for those, text files are generally frowned upon. So, going with an XML file, what classes should I be looking at that would allow for easy manipulation? | Here is one idea: use Xml Serialization. Design your GameStats data structure and optionally use Xml attributes to influence the schema as you like. I like to use this method for small data sets because it's quick and easy and all I need to do is design and manipulate the data structure.

using (FileStream fs = new FileStream(....))
{
    // Read in stats
    XmlSerializer xs = new XmlSerializer(typeof(GameStats));
    GameStats stats = (GameStats)xs.Deserialize(fs);

    // Manipulate stats here ...

    // Write out game stats
    xs.Serialize(fs, stats);
    fs.Close();
}
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20061",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/271/"
]
} |
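The design-the-structure-then-round-trip idea in the answer above is not C#-specific. Here is a hedged sketch of the same approach using Python's standard library; the GameStats field names are made up for illustration, and values come back as strings rather than typed fields:

```python
import xml.etree.ElementTree as ET


def stats_to_xml(stats: dict) -> str:
    """Serialize a flat stats dict to an XML string."""
    root = ET.Element("GameStats")
    for name, value in stats.items():
        ET.SubElement(root, name).text = str(value)
    return ET.tostring(root, encoding="unicode")


def stats_from_xml(text: str) -> dict:
    """Read the stats back; element text is always a string."""
    root = ET.fromstring(text)
    return {child.tag: child.text for child in root}
```

Round-tripping {"GamesPlayed": 10} yields {"GamesPlayed": "10"}, so a real implementation would convert fields back to their numeric types, much as the typed GameStats class does automatically in the C# version.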
20,063 | What's the easiest, tersest, and most flexible method or library for parsing Python command line arguments? | This answer suggests optparse which is appropriate for older Python versions. For Python 2.7 and above, argparse replaces optparse . See this answer for more information. As other people pointed out, you are better off going with optparse over getopt. getopt is pretty much a one-to-one mapping of the standard getopt(3) C library functions, and not very easy to use. optparse, while being a bit more verbose, is much better structured and simpler to extend later on. Here's a typical line to add an option to your parser: parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam") It pretty much speaks for itself; at processing time, it will accept -q or --query as options, store the argument in an attribute called query and has a default value if you don't specify it. It is also self-documenting in that you declare the help argument (which will be used when run with -h/--help) right there with the option. Usually you parse your arguments with: options, args = parser.parse_args() This will, by default, parse the standard arguments passed to the script (sys.argv[1:]) options.query will then be set to the value you passed to the script. You create a parser simply by doing parser = optparse.OptionParser() These are all the basics you need. Here's a complete Python script that shows this:

import optparse

parser = optparse.OptionParser()
parser.add_option('-q', '--query', action="store", dest="query", help="query string", default="spam")
options, args = parser.parse_args()
print 'Query string:', options.query

5 lines of python that show you the basics. 
Save it in sample.py, and run it once with python sample.py and once with python sample.py --query myquery Beyond that, you will find that optparse is very easy to extend.In one of my projects, I created a Command class which allows you to nest subcommands in a command tree easily. It uses optparse heavily to chain commands together. It's not something I can easily explain in a few lines, but feel free to browse around in my repository for the main class, as well as a class that uses it and the option parser | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/20063",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1335/"
]
} |
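As the note at the top of the answer above says, argparse has superseded optparse on Python 2.7+. A minimal sketch of the same -q/--query option in argparse:

```python
import argparse

parser = argparse.ArgumentParser(description="argparse version of the optparse sample")
parser.add_argument("-q", "--query", default="spam", help="query string")

# parse_args() reads sys.argv[1:] by default; passing a list explicitly
# makes the behaviour easy to demonstrate and test.
options = parser.parse_args(["--query", "myquery"])
print("Query string:", options.query)
```

As with optparse, the default kicks in when the option is omitted, and -h/--help output is generated automatically from the help strings.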
20,084 | Following on from my previous question I have been working on getting my object model to serialize to XML. But I have now run into a problem (quelle surprise!). The problem I have is that I have a collection, which is of an abstract base class type, which is populated by the concrete derived types. I thought it would be fine to just add the XML attributes to all of the classes involved and everything would be peachy. Sadly, that's not the case! So I have done some digging on Google and I now understand why it's not working, in that the XmlSerializer is in fact doing some clever reflection in order to serialize objects to/from XML, and since it's based on the abstract type, it cannot figure out what the hell it's talking to . Fine. I did come across this page on CodeProject, which looks like it may well help a lot (yet to read/consume fully), but I thought I would like to bring this problem to the StackOverflow table too, to see if you have any neat hacks/tricks in order to get this up and running in the quickest/lightest way possible. One thing I should also add is that I DO NOT want to go down the XmlInclude route. There is simply too much coupling with it, and this area of the system is under heavy development, so it would be a real maintenance headache! | Problem Solved! OK, so I finally got there (admittedly with a lot of help from here !). To summarise: Goals: I didn't want to go down the XmlInclude route due to the maintenance headache. Once a solution was found, I wanted it to be quick to implement in other applications. Collections of Abstract types may be used, as well as individual abstract properties. I didn't really want to bother with having to do "special" things in the concrete classes. Identified Issues/Points to Note: XmlSerializer does some pretty cool reflection, but it is very limited when it comes to abstract types (i.e. it will only work with instances of the abstract type itself, not subclasses). 
The Xml attribute decorators define how the XmlSerializer treats the properties its finds. The physical type can also be specified, but this creates a tight coupling between the class and the serializer (not good). We can implement our own XmlSerializer by creating a class that implements IXmlSerializable . The Solution I created a generic class, in which you specify the generic type as the abstract type you will be working with. This gives the class the ability to "translate" between the abstract type and the concrete type since we can hard-code the casting (i.e. we can get more info than the XmlSerializer can). I then implemented the IXmlSerializable interface, this is pretty straight forward, but when serializing we need to ensure we write the type of the concrete class to the XML, so we can cast it back when de-serializing. It is also important to note it must be fully qualified as the assemblies that the two classes are in are likely to differ. There is of course a little type checking and stuff that needs to happen here. Since the XmlSerializer cannot cast, we need to provide the code to do that, so the implicit operator is then overloaded (I never even knew you could do this!). The code for the AbstractXmlSerializer is this: using System;using System.Collections.Generic;using System.Text;using System.Xml.Serialization;namespace Utility.Xml{ public class AbstractXmlSerializer<AbstractType> : IXmlSerializable { // Override the Implicit Conversions Since the XmlSerializer // Casts to/from the required types implicitly. public static implicit operator AbstractType(AbstractXmlSerializer<AbstractType> o) { return o.Data; } public static implicit operator AbstractXmlSerializer<AbstractType>(AbstractType o) { return o == null ? null : new AbstractXmlSerializer<AbstractType>(o); } private AbstractType _data; /// <summary> /// [Concrete] Data to be stored/is stored as XML. 
/// </summary> public AbstractType Data { get { return _data; } set { _data = value; } } /// <summary> /// **DO NOT USE** This is only added to enable XML Serialization. /// </summary> /// <remarks>DO NOT USE THIS CONSTRUCTOR</remarks> public AbstractXmlSerializer() { // Default Ctor (Required for Xml Serialization - DO NOT USE) } /// <summary> /// Initialises the Serializer to work with the given data. /// </summary> /// <param name="data">Concrete Object of the AbstractType Specified.</param> public AbstractXmlSerializer(AbstractType data) { _data = data; } #region IXmlSerializable Members public System.Xml.Schema.XmlSchema GetSchema() { return null; // this is fine as schema is unknown. } public void ReadXml(System.Xml.XmlReader reader) { // Cast the Data back from the Abstract Type. string typeAttrib = reader.GetAttribute("type"); // Ensure the Type was Specified if (typeAttrib == null) throw new ArgumentNullException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because no 'type' attribute was specified in the XML."); Type type = Type.GetType(typeAttrib); // Check the Type is Found. if (type == null) throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because the type specified in the XML was not found."); // Check the Type is a Subclass of the AbstractType. if (!type.IsSubclassOf(typeof(AbstractType))) throw new InvalidCastException("Unable to Read Xml Data for Abstract Type '" + typeof(AbstractType).Name + "' because the Type specified in the XML differs ('" + type.Name + "')."); // Read the Data, Deserializing based on the (now known) concrete type. 
reader.ReadStartElement(); this.Data = (AbstractType)new XmlSerializer(type).Deserialize(reader); reader.ReadEndElement(); } public void WriteXml(System.Xml.XmlWriter writer) { // Write the Type Name to the XML Element as an Attrib and Serialize Type type = _data.GetType(); // BugFix: Assembly must be FQN since Types can/are external to current. writer.WriteAttributeString("type", type.AssemblyQualifiedName); new XmlSerializer(type).Serialize(writer, _data); } #endregion }} So, from there, how do we tell the XmlSerializer to work with our serializer rather than the default? We must pass our type within the Xml attributes type property, for example: [XmlRoot("ClassWithAbstractCollection")]public class ClassWithAbstractCollection{ private List<AbstractType> _list; [XmlArray("ListItems")] [XmlArrayItem("ListItem", Type = typeof(AbstractXmlSerializer<AbstractType>))] public List<AbstractType> List { get { return _list; } set { _list = value; } } private AbstractType _prop; [XmlElement("MyProperty", Type=typeof(AbstractXmlSerializer<AbstractType>))] public AbstractType MyProperty { get { return _prop; } set { _prop = value; } } public ClassWithAbstractCollection() { _list = new List<AbstractType>(); }} Here you can see, we have a collection and a single property being exposed, and all we need to do is add the type named parameter to the Xml declaration, easy! :D NOTE: If you use this code, I would really appreciate a shout-out. It will also help drive more people to the community :) Now, but unsure as to what to do with answers here since they all had their pro's and con's. I'll upmod those that I feel were useful (no offence to those that weren't) and close this off once I have the rep :) Interesting problem and good fun to solve! :) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20084",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/832/"
]
} |
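The key move in the accepted answer above, storing the concrete type's name next to the serialized data so the reader knows which subclass to rebuild, is language-independent. A hedged Python sketch of the same idea; the class names and the registry are illustrative, not part of the original solution:

```python
# Registry mapping type names back to classes, playing the role of
# Type.GetType(typeAttrib) in the C# version.
REGISTRY = {}


def register(cls):
    REGISTRY[cls.__name__] = cls
    return cls


class Shape:
    """The 'abstract' base type."""
    def __init__(self, name):
        self.name = name


@register
class Circle(Shape):
    pass


@register
class Square(Shape):
    pass


def dump(obj):
    # Record the concrete type alongside the payload.
    return {"type": type(obj).__name__, "name": obj.name}


def load(data):
    cls = REGISTRY[data["type"]]  # raises KeyError for unknown types
    if not issubclass(cls, Shape):
        raise TypeError(data["type"] + " is not a Shape")
    return cls(data["name"])
```

The same checks appear in the C# ReadXml: look up the type, verify it really is a subclass of the abstract base, then deserialize into it.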
20,088 | I am maintaining a few web applications. The development and qa environments use invalid/outdated ssl-certificates. Although it is generally a good thing, that Firefox makes me click like a dozen times to accept the certificate, this is pretty annoying. Is there a configuration-parameter to make Firefox (and possibly IE too) accept any ssl-certificate? EDIT: I have accepted the solution, that worked. But thanks to all the people that have advised to use self-signed certificates. I am totally aware, that the accepted solution leaves me with a gaping security hole . Nonetheless I am to lazy to change the certificate for all the applications and all the environments... But I also advice anybody strongly to leave validation enabled! | Go to Tools > Options > Advanced "Tab"(?) > Encryption Tab Click the "Validation" button, and uncheck the checkbox for checking validity Be advised though that this is pretty unsecure as it leaves you wide open to accept any invalid certificate. I'd only do this if using the browser on an Intranet where the validity of the cert isn't a concern to you, or you aren't concerned in general. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20088",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1870/"
]
} |
20,107 | This line in YUI's Reset CSS is causing trouble for me:

address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal;}

It makes my em not italic and my strong not bold. Which is okay. I know how to override that in my own stylesheet.

strong, b { font-weight: bold;}
em, i { font-style: italic;}

The problem comes in when I have text that's both em and strong . <strong>This is bold, <em>and this is italic, but not bold</em></strong> My rule for strong makes it bold, but YUI's rule for em makes it normal again. How do I fix that? | If your strong declaration comes after YUI's, yours should override it. You can force it like this:

strong, b, strong *, b * { font-weight: bold; }
em, i, em *, i * { font-style: italic; }

If you still support IE7 you'll need to add !important .

strong, b, strong *, b * { font-weight: bold !important; }
em, i, em *, i * { font-style: italic !important; }

This works - see for yourself:

/*YUI styles*/
address,caption,cite,code,dfn,em,strong,th,var { font-style: normal; font-weight: normal;}
/*End YUI styles*/
strong, b, strong *, b * { font-weight: bold;}
em, i, em *, i * { font-style: italic;}

<strong>Bold</strong> - <em>Italic</em> - <strong>Bold and <em>Italic</em></strong>
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20107",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/437/"
]
} |
20,146 | I'm looking for something like the tempfile module in Python: A (preferably) secure way to open a file for writing to. This should be easy to delete when I'm done too... It seems .NET does not have the "batteries included" features of the tempfile module, which not only creates the file, but returns the file descriptor (old school, I know...) to it along with the path. At the same time, it makes sure only the creating user can access the file and whatnot ( mkstemp() I think): https://docs.python.org/library/tempfile.html Ah, yes, I can see that. But GetTempFileName does have a drawback: There is a race condition between when the file was created (upon call to GetTempFileName a 0-Byte file gets created) and when I get to open it (after return of GetTempFileName). This might be a security issue, although not for my current application... | I've also had the same requirement before, and I've created a small class to solve it:

public sealed class TemporaryFile : IDisposable
{
    public TemporaryFile() : this(Path.GetTempPath()) { }

    public TemporaryFile(string directory)
    {
        Create(Path.Combine(directory, Path.GetRandomFileName()));
    }

    ~TemporaryFile()
    {
        Delete();
    }

    public void Dispose()
    {
        Delete();
        GC.SuppressFinalize(this);
    }

    public string FilePath { get; private set; }

    private void Create(string path)
    {
        FilePath = path;
        using (File.Create(FilePath)) { }
    }

    private void Delete()
    {
        if (FilePath == null) return;
        File.Delete(FilePath);
        FilePath = null;
    }
}

It creates a temporary file in a folder you specify or in the system temporary folder. It's a disposable class, so at the end of its life (either Dispose or the destructor), it deletes the file. You get the name of the file created (and path) through the FilePath property. You can certainly extend it to also open the file for writing and return its associated FileStream . An example usage:

using (var tempFile = new TemporaryFile())
{
    // use the file through tempFile.FilePath...
}
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20146",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
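For comparison, the Python tempfile pattern the question above refers to. mkstemp() creates and opens the file in one step, which avoids the create-then-open race noted for GetTempFileName; the suffix and file contents here are arbitrary:

```python
import os
import tempfile

# mkstemp() securely creates the file (readable/writable only by the
# creating user on POSIX) and returns an open OS-level file descriptor
# together with the path.
fd, path = tempfile.mkstemp(suffix=".tmp")
try:
    with os.fdopen(fd, "w") as handle:
        handle.write("scratch data")
finally:
    os.remove(path)  # easy to delete when done, as the question asks
```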
20,148 | I'm working on a project which involves a lot of database writes, I'd say ( 70% inserts and 30% reads ). This ratio would also include updates which I consider to be one read and one write. The reads can be dirty (e.g. I don't need 100% accurate information at the time of read). The task in question will be doing over 1 million database transactions an hour. I've read a bunch of stuff on the web about the differences between MyISAM and InnoDB, and MyISAM seems like the obvious choice to me for the particular database/tables that I'll be using for this task. From what I seem to be reading, InnoDB is good if transactions are needed since row level locking is supported. Does anybody have any experience with this type of load (or higher)? Is MyISAM the way to go? | I have briefly discussed this question in a table so you can conclude whether to go with InnoDB or MyISAM . Here is a small overview of which db storage engine you should use in which situation:

                                                 MyISAM   InnoDB
----------------------------------------------------------------
Required full-text search                        Yes      5.6.4
----------------------------------------------------------------
Require transactions                                      Yes
----------------------------------------------------------------
Frequent select queries                          Yes
----------------------------------------------------------------
Frequent insert, update, delete                           Yes
----------------------------------------------------------------
Row locking (multi processing on single table)            Yes
----------------------------------------------------------------
Relational base design                                    Yes

Summary
In almost all circumstances, InnoDB is the best way to go
But, frequent reading, almost no writing, use MyISAM
Full-text search in MySQL <= 5.5, use MyISAM | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/20148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2013/"
]
} |
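For reference, the storage engine is chosen per table in MySQL. A hedged DDL sketch; the table and column names are placeholders:

```sql
-- Write-heavy table: InnoDB for row-level locking and transactions
CREATE TABLE activity_log (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    payload VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

-- Switching an existing table rebuilds it, so expect it to take a while on large tables
ALTER TABLE activity_log ENGINE=MyISAM;
```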
20,156 | Is there an easy way in C# to create Ordinals for a number? For example: 1 returns 1st 2 returns 2nd 3 returns 3rd ...etc Can this be done through String.Format() or are there any functions available to do this? | This page gives you a complete listing of all custom numerical formatting rules: Custom numeric format strings As you can see, there is nothing in there about ordinals, so it can't be done using String.Format . However it's not really that hard to write a function to do it.

public static string AddOrdinal(int num)
{
    if (num <= 0) return num.ToString();

    switch (num % 100)
    {
        case 11:
        case 12:
        case 13:
            return num + "th";
    }

    switch (num % 10)
    {
        case 1:
            return num + "st";
        case 2:
            return num + "nd";
        case 3:
            return num + "rd";
        default:
            return num + "th";
    }
}

Update: Technically Ordinals don't exist for <= 0, so I've updated the code above. Also removed the redundant ToString() methods. Also note, this is not internationalized. I've no idea what ordinals look like in other languages. | This page gives you a complete listing of all custom numerical formatting rules: Custom numeric format strings
"score": 9,
"source": [
"https://Stackoverflow.com/questions/20156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
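The suffix rules in the answer above (special-case 11 to 13, then branch on the last digit) port directly to other languages. A sketch of the same logic in Python for comparison:

```python
def add_ordinal(num):
    """Return the English ordinal string for num, e.g. 1 -> '1st'."""
    if num <= 0:
        return str(num)  # ordinals are not defined for non-positive numbers
    if num % 100 in (11, 12, 13):
        return str(num) + "th"
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(num % 10, "th")
    return str(num) + suffix
```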
20,173 | Does anyone have any experience getting MSTest to copy hibernate.cfg.xml properly to the output directory? All my MSTests fail with a cannot find hibernate.cfg.xml error (I have it set to Copy Always), but my MBUnit tests pass. | You can try adding the DeploymentItemAttribute to one of your tests, or edit your .testrunconfig file and add the file to the Deployment list. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1975/"
]
} |
20,206 | Anyone have any good urls for templates or diagram examples in Visio 2007 to be used in software architecture? | Here is a link to a Visio Stencil and Template for UML 2.0. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20206",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1940/"
]
} |
20,227 | Every method I write to encode a string in Java using 3DES can't be decrypted back to the original string. Does anyone have a simple code snippet that can just encode and then decode the string back to the original string? I know I'm making a very silly mistake somewhere in this code. Here's what I've been working with so far: ** note, I am not returning the BASE64 text from the encrypt method, and I am not base64 un-encoding in the decrypt method because I was trying to see if I was making a mistake in the BASE64 part of the puzzle. public class TripleDESTest { public static void main(String[] args) { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); System.out.println(decodedtext); } public byte[] encrypt(String message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); final String encodedCipherText = new sun.misc.BASE64Encoder().encode(cipherText); return cipherText; } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { 
System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} return null; } public String decrypt(byte[] message) { try { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9".getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); decipher.init(Cipher.DECRYPT_MODE, key, iv); //final byte[] encData = new sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return plainText.toString(); } catch (java.security.InvalidAlgorithmParameterException e) { System.out.println("Invalid Algorithm"); } catch (javax.crypto.NoSuchPaddingException e) { System.out.println("No Such Padding"); } catch (java.security.NoSuchAlgorithmException e) { System.out.println("No Such Algorithm"); } catch (java.security.InvalidKeyException e) { System.out.println("Invalid Key"); } catch (BadPaddingException e) { System.out.println("Invalid Key");} catch (IllegalBlockSizeException e) { System.out.println("Invalid Key");} catch (UnsupportedEncodingException e) { System.out.println("Invalid Key");} catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return null; }} | Your code was fine except for the Base 64 encoding bit (which you mentioned was a test), the reason the output may not have made sense is that you were displaying a raw byte array (doing toString() on a byte array returns its internal Java reference, not the String representation of the contents ). 
Here's a version that's just a teeny bit cleaned up and which prints "kyle boon" as the decoded string: import java.security.MessageDigest;import java.util.Arrays;import javax.crypto.Cipher;import javax.crypto.SecretKey;import javax.crypto.spec.IvParameterSpec;import javax.crypto.spec.SecretKeySpec;public class TripleDESTest { public static void main(String[] args) throws Exception { String text = "kyle boon"; byte[] codedtext = new TripleDESTest().encrypt(text); String decodedtext = new TripleDESTest().decrypt(codedtext); System.out.println(codedtext); // this is a byte array, you'll just see a reference to an array System.out.println(decodedtext); // This correctly shows "kyle boon" } public byte[] encrypt(String message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); cipher.init(Cipher.ENCRYPT_MODE, key, iv); final byte[] plainTextBytes = message.getBytes("utf-8"); final byte[] cipherText = cipher.doFinal(plainTextBytes); // final String encodedCipherText = new sun.misc.BASE64Encoder() // .encode(cipherText); return cipherText; } public String decrypt(byte[] message) throws Exception { final MessageDigest md = MessageDigest.getInstance("md5"); final byte[] digestOfPassword = md.digest("HG58YZ3CR9" .getBytes("utf-8")); final byte[] keyBytes = Arrays.copyOf(digestOfPassword, 24); for (int j = 0, k = 16; j < 8;) { keyBytes[k++] = keyBytes[j++]; } final SecretKey key = new SecretKeySpec(keyBytes, "DESede"); final IvParameterSpec iv = new IvParameterSpec(new byte[8]); final Cipher decipher = Cipher.getInstance("DESede/CBC/PKCS5Padding"); 
decipher.init(Cipher.DECRYPT_MODE, key, iv); // final byte[] encData = new // sun.misc.BASE64Decoder().decodeBuffer(message); final byte[] plainText = decipher.doFinal(message); return new String(plainText, "UTF-8"); }} | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20227",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1486/"
]
} |
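The fix in the answer above (question 20,227) is the final line: building a String from the plaintext bytes with new String(plainText, "UTF-8") rather than calling toString() on the byte array. That round trip, with the same passphrase-derived two-key 3DES key, can be condensed into a short self-contained sketch; the class and method names here are invented for illustration:

```java
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class TripleDesRoundTrip {

    // Derive a 24-byte DESede key from a passphrase, exactly as the answer
    // does: MD5 gives 16 bytes, and the first 8 are repeated as the third key.
    static SecretKey deriveKey(String passphrase) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(passphrase.getBytes("UTF-8"));
        byte[] keyBytes = Arrays.copyOf(digest, 24);
        for (int j = 0, k = 16; j < 8; ) {
            keyBytes[k++] = keyBytes[j++];
        }
        return new SecretKeySpec(keyBytes, "DESede");
    }

    // One helper for both directions; mode is ENCRYPT_MODE or DECRYPT_MODE.
    static byte[] crypt(int mode, SecretKey key, byte[] input) throws Exception {
        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(mode, key, new IvParameterSpec(new byte[8]));
        return cipher.doFinal(input);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = deriveKey("HG58YZ3CR9");
        byte[] cipherText = crypt(Cipher.ENCRYPT_MODE, key, "kyle boon".getBytes("UTF-8"));
        // The crucial step: decode the plaintext BYTES, do not toString() them.
        String decoded = new String(crypt(Cipher.DECRYPT_MODE, key, cipherText), "UTF-8");
        System.out.println(decoded); // prints: kyle boon
    }
}
```

The fixed all-zero IV and MD5-derived key are kept only to mirror the original answer; they are not choices to copy into new code.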
20,245 | I am doing an e-commerce solution in ASP.NET which uses PayPal's Website Payments Standard service. Together with that I use a service they offer ( Payment Data Transfer ) that sends you back order information after a user has completed a payment. The final thing I need to do is to parse the POST request from them and persist the info in it. The HTTP request's content is in this form : SUCCESS first_name=Jane+Doe last_name=Smith payment_status=Completed payer_email=janedoesmith%40hotmail.com payment_gross=3.99 mc_currency=USD custom=For+the+purchase+of+the+rare+book+Green+Eggs+%26+Ham Basically I want to parse this information and do something meaningful, like send it through e-mail or save it in DB. My question is what is the right approach to do parsing raw HTTP data in ASP.NET, not how the parsing itself is done. | Something like this placed in your onload event. if (Request.RequestType == "POST"){ using (StreamReader sr = new StreamReader(Request.InputStream)) { if (sr.ReadLine() == "SUCCESS") { /* Do your parsing here */ } }} Mind you that they might want some special sort of response too (i.e., not your full webpage), so you might do something like this after you're done parsing. Response.Clear();Response.ContentType = "text/plain";Response.Write("Thanks!");Response.End(); Update: this should be done in a Generic Handler (.ashx) file in order to avoid a great deal of overhead from the page model. Check out this article for more information about .ashx files | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1801/"
]
} |
20,263 | "Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services." I find using SQL Server Profiler extremely useful during development, testing and when I am debugging database application problems. Does anybody know if there is an equivalent program for MySql? | Something cool that is in version 5.0.37 of the community server is MySQL's new profiler . This may give you what info you are looking for. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1982/"
]
} |
20,267 | I have a large text template which needs tokenized sections replaced by other text. The tokens look something like this: ##USERNAME##. My first instinct is just to use String.Replace(), but is there a better, more efficient way or is Replace() already optimized for this? | System.Text.RegularExpressions.Regex.Replace() is what you seek - IF your tokens are odd enough that you need a regex to find them. Some kind soul did some performance testing , and between Regex.Replace(), String.Replace(), and StringBuilder.Replace(), String.Replace() actually came out on top. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1436/"
]
} |
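The question above (20,267) is about C#'s String.Replace, but the underlying trade-off is language-neutral: one replace call per token rescans the whole template each time, while a single regex pass visits it once and looks each token up. Here is a sketch of the single-pass approach, written in Java for illustration (in .NET the equivalent is Regex.Replace with a MatchEvaluator); the class name and token map are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenReplace {
    // Tokens look like ##USERNAME##; group 1 captures the bare name.
    private static final Pattern TOKEN = Pattern.compile("##(\\w+)##");

    // One pass over the template: each token is looked up in the map,
    // and tokens with no mapping are left exactly as they appeared.
    static String expand(String template, Map<String, String> values) {
        Matcher m = TOKEN.matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String replacement = values.getOrDefault(m.group(1), m.group());
            m.appendReplacement(sb, Matcher.quoteReplacement(replacement));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> values = new HashMap<>();
        values.put("USERNAME", "Jane");
        System.out.println(expand("Hello ##USERNAME##, order ##ORDERID## shipped.", values));
        // prints: Hello Jane, order ##ORDERID## shipped.
    }
}
```

For a handful of tokens in a small template, chained replace calls remain perfectly fine, which matches the benchmark result the answer cites.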
20,298 | I have something like this: barProgress.BeginAnimation(RangeBase.ValueProperty, new DoubleAnimation( barProgress.Value, dNextProgressValue, new Duration(TimeSpan.FromSeconds(dDuration))); Now, how would you stop that animation (the DoubleAnimation )? The reason I want to do this, is because I would like to start new animations (this seems to work, but it's hard to tell) and eventually stop the last animation... | To stop it, call BeginAnimation again with the second argument set to null . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/20298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
20,346 | What are attributes in .NET, what are they good for, and how do I create my own attributes? | Metadata. Data about your objects/methods/properties. For example I might declare an Attribute called: DisplayOrder so I can easily control in what order properties should appear in the UI. I could then append it to a class and write some GUI components that extract the attributes and order the UI elements appropriately. public class DisplayWrapper{ private UnderlyingClass underlyingObject; public DisplayWrapper(UnderlyingClass u) { underlyingObject = u; } [DisplayOrder(1)] public int SomeInt { get { return underlyingObject.SomeInt; } } [DisplayOrder(2)] public DateTime SomeDate { get { return underlyingObject.SomeDate; } }} Thereby ensuring that SomeInt is always displayed before SomeDate when working with my custom GUI components. However, you'll see them most commonly used outside of the direct coding environment. For example the Windows Designer uses them extensively so it knows how to deal with custom made objects. Using the BrowsableAttribute like so: [Browsable(false)]public SomeCustomType DontShowThisInTheDesigner{ get{/*do something*/}} Tells the designer not to list this in the available properties in the Properties window at design time for example. You could also use them for code-generation, pre-compile operations (such as PostSharp) or run-time operations such as Reflection.Emit. For example, you could write a bit of code for profiling that transparently wrapped every single call your code makes and times it. You could "opt-out" of the timing via an attribute that you place on particular methods. 
public void SomeProfilingMethod(MethodInfo targetMethod, object target, params object[] args){ bool time = true; foreach (Attribute a in targetMethod.GetCustomAttributes(false)) { if (a is NoTimingAttribute) { time = false; break; } } if (time) { Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); targetMethod.Invoke(target, args); stopWatch.Stop(); HandleTimingOutput(targetMethod, stopWatch.Elapsed); } else { targetMethod.Invoke(target, args); }} Declaring them is easy, just make a class that inherits from Attribute. public class DisplayOrderAttribute : Attribute{ private int order; public DisplayOrderAttribute(int order) { this.order = order; } public int Order { get { return order; } }} And remember that when you use the attribute you can omit the suffix "Attribute"; the compiler will add that for you. NOTE: Attributes don't do anything by themselves - there needs to be some other code that uses them. Sometimes that code has been written for you but sometimes you have to write it yourself. For example, the C# compiler cares about some and certain frameworks use some (e.g. NUnit looks for [TestFixture] on a class and [Test] on a test method when loading an assembly). So when creating your own custom attribute be aware that it will not impact the behaviour of your code at all. You'll need to write the other part that checks attributes (via reflection) and act on them. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/20346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1595/"
]
} |
20,363 | I have a database with a table Customers that have some data I have another database in the office that everything is the same, but my table Customers is empty How can I create a sql file in SQL Server 2005 (T-SQL) that takes everything on the table Customers from the first database, creates a, let's say, buildcustomers.sql, I zip that file, copy it across the network, execute it in my SQL Server and voila! my table Customers is full How can I do the same for a whole database? | This functionality is already built in to Sql Server Management Studio 2008. Just download the trial and only install the client tools (which shouldn't expire). Use Management Studio 2008 to connect to your 2005 database (its backwards compatible). Right click your database Choose Tasks > Generate Scripts Press Next, select your database again On the 'Choose Script Options' screen, there is an option called Script Data which will generate SQL insert statements for all your data. (Note: for SQL Server Management Studio 2008 R2, the option is called "Types of data to script" and is the last one in the General section. The choices are "data only", "schema and data", and "schema only") | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20363",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
20,420 | I'm a complete Xcode/Objective-C/Cocoa newbie but I'm learning fast and really starting to enjoy getting to grips with a new language, platform and paradigm. One thing is though, having been using Visual Studio with R# for so long I've kind of been spoiled with the coding tools such as refactorings and completion etc and as far as I can tell Xcode has some fairly limited built in support for this stuff. On that note, does anyone know if any add-ins or whatever are available for the Xcode environment which add coding helpers such as automatically generating implementation skeletons from a class interface definition etc? I suspect there aren't but I suppose it can't help to ask. | You sound as if you're looking for three major things: code templates, refactoring tools, and auto-completion. The good news is that Xcode 3 and later come with superb auto-completion and template support. By default, you have to explicitly request completion by hitting the escape key. (This actually works in all NSTextView s; try it!) If you want to have the completions appear automatically, you can go to Preferences -> Code Sense and set the pop-up to appear automatically after a few seconds. You should find good completions for C and Objective-C code, and pretty good completions for C++. Xcode also has a solid template/skeleton system that you can use. You can see what templates are available by default by going to Edit -> Insert Text Macro. Of course, you don't want to insert text macros with the mouse; that defeats the point. Instead, you have two options: Back in Preferences ,go to Key Bindings , and then, under Menu Key Bindings , assign a specific shortcut to macros you use often. I personally don't bother doing this, but I know plenty of great Mac devs who do Use the CompletionPrefix . By default, nearly all of the templates have a special prefix that, if you type and then hit the escape key, will result in the template being inserted. 
You can use Control-/ to move between the completion fields. You can see a full list of Xcode's default macros and their associated CompletionPrefix es at Crooked Spin . You can also add your own macros, or modify the defaults. To do so, edit the file /Developer/Library/Xcode/Specifications/{C,HTML}.xctxtmacro . The syntax should be self-explanatory, if not terribly friendly. Unfortunately, if you're addicted to R#, you will be disappointed by your refactoring options. Basic refactoring is provided within Xcode through the context menu or by hitting Shift-Apple-J. From there, you can extract and rename methods, promote and demote them through the class hierarchy, and a few other common operations. Unfortunately, neither Xcode nor any third-party utilities offer anything approaching Resharper, so on that front, you're currently out of luck. Thankfully, Apple has already demonstrated versions of Xcode in the works that have vastly improved refactoring capabilities, so hopefully you won't have to wait too long before the situation starts to improve. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20420",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1120/"
]
} |
20,426 | I have a tree encoded in a MySQL database as edges: CREATE TABLE items ( num INT, tot INT, PRIMARY KEY (num) );CREATE TABLE tree ( orig INT, term INT FOREIGN KEY (orig,term) REFERENCES items (num,num) ) For each leaf in the tree, items.tot is set by someone. For interior nodes, items.tot needs to be the sum of it's children. Running the following query repeatedly would generate the desired result. UPDATE items SET tot = ( SELECT SUM(b.tot) FROM tree JOIN items AS b ON tree.term = b.num WHERE tree.orig=items.num) WHERE EXISTS (SELECT * FROM tree WHERE orig=items.num) (note this actually doesn't work but that's beside the point) Assume that the database exists and the invariant are already satisfied. The question is: What is the most practical way to update the DB while maintaining this requirement? Updates may move nodes around or alter the value of tot on leaf nodes. It can be assumed that leaf nodes will stay as leaf nodes, interior nodes will stay as interior nodes and the whole thing will remain as a proper tree. Some thoughts I have had: Full Invalidation, after any update, recompute everything (Um... No) Set a trigger on the items table to update the parent of any row that is updated This would be recursive (updates trigger updates, trigger updates, ...) Doesn't work, MySQL can't update the table that kicked off the trigger Set a trigger to schedule an update of the parent of any row that is updated This would be iterative (get an item from the schedule, processing it schedules more items) What kicks this off? Trust client code to get it right? An advantage is that if the updates are ordered correctly fewer sums need to be computer. But that ordering is a complication in and of it's own. An ideal solution would generalize to other "aggregating invariants" FWIW I know this is "a bit overboard", but I'm doing this for fun (Fun: verb, Finding the impossible by doing it. 
:-) | | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1343/"
]
} |
20,463 | Interfaces allow you to create code which defines the methods of classes that implement it. You cannot however add any code to those methods. Abstract classes allow you to do the same thing, along with adding code to the method. Now if you can achieve the same goal with abstract classes, why do we even need the concept of interfaces? I've been told that it has to do with OO theory from C++ to Java, which is what PHP's OO stuff is based on. Is the concept useful in Java but not in PHP? Is it just a way to keep from having placeholders littered in the abstract class? Am I missing something? | The entire point of interfaces is to give you the flexibility to have your class be forced to implement multiple interfaces, but still not allow multiple inheritance. The issues with inheriting from multiple classes are many and varied and the wikipedia page on it sums them up pretty well. Interfaces are a compromise. Most of the problems with multiple inheritance don't apply to abstract base classes, so most modern languages these days disable multiple inheritance yet call abstract base classes interfaces and allows a class to "implement" as many of those as they want. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/20463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1797/"
]
} |
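The answer above (question 20,463) is all prose, and the question explicitly asks whether the concept carries over to Java, so here is a minimal Java sketch of the distinction it draws: one abstract base class supplies shared state and code, while any number of interfaces add contracts without reintroducing multiple-inheritance ambiguity. All type names are invented for the example:

```java
interface Drawable { String draw(); }
interface Jsonable { String toJson(); }

// An abstract class may hold state and method bodies, so subclasses
// inherit real code... but each class gets at most ONE of these.
abstract class Shape {
    abstract double area();                          // contract for subclasses
    String describe() { return "area=" + area(); }   // shared implementation
}

// Interfaces (before Java 8's default methods) carry no state or bodies,
// so implementing several of them cannot create conflicting inherited code.
class Circle extends Shape implements Drawable, Jsonable {
    private final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
    public String draw() { return "circle(r=" + r + ")"; }
    public String toJson() { return "{\"r\": " + r + "}"; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Circle c = new Circle(2.0);
        // One inheritance chain, two extra contracts:
        System.out.println(c.describe() + " | " + c.draw() + " | " + c.toJson());
    }
}
```

A Circle is usable anywhere a Shape, a Drawable, or a Jsonable is expected, which is exactly the flexibility the answer describes interfaces buying back after multiple class inheritance is disallowed.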
20,529 | I have been using Eclipse as an IDE for a short amount of time (about 3 months of full use) and almost every day I learn about some shortcut or feature that I had absolutely no idea about. For instance, just today I learned that Ctrl + 3 was the shortcut for a Quick Access window. I was wondering what your most useful/favorite Eclipse features are. With the IDE being so big, it would be helpful to learn about the more commonly used parts of the program. | My most commonly used features are ctrl + 1 quick-fix / spell-checker opening files ctrl + shift + t load class file by classname ctrl + shift + r load any file by filename matches are made on the start of the class/filename. start your search pattern with a * to search anywhere within the filename/classname. Formatting ctrl + shift + f Format source file (set up your formatting style in Window | preferences | java | code style | formatter) ctrl + shift + o Organise imports Generated code alt + s , r to generate getters and setters alt + s , v to insert method signatures for overidden methods from superclass or interface Refactorings alt + shift + l Extract text-selection as local variable (really handy in that it determines and inserts the type for you. alt + shift + m Extract text-selection as a method alt + shift + i inline selected method Running and debugging. alt + shift + x is a really handy prefix to run stuff in your current file. alt + shift + x , t run unit tests in current file alt + shift + x , j run main in current file alt + shift + x , r run on server There are more. The options are shown to you in the lower-right popup after hitting alt + shift + x . alt + shift + x can be switched for alt + shift + d in all the above examples to run in the debugger. Validation As of the recent Ganymede release, you can now switch of validation in specified files and folders. I've been waiting for this feature for ages. Go to Project | Properties | Validation click on the ... 
button in the settings column of the validator you want to shut up Add a rule to the exclude group code navigation hold down ctrl to make all variables, methods and classnames hyperlinks to their definitions. alt + left to navigate back to where you clicked ctrl alt + right to go "forwards" again | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2329/"
]
} |
20,533 | I searched for this and found Maudite's question about text editors but they were all for Windows. As you have no doubt guessed, I am trying to find out if there are any text/code editors for the Mac besides what I know of. I'll edit my post to include editors listed. Free Textwrangler Xcode Mac Vim Aquamacs and closer to the original EMacs JEdit Editra Eclipse NetBeans Kod TextMate2 - GPL Brackets Atom.io Commercial Textmate BBEdit SubEthaEdit Coda Sublime Text 2 Smultron WebStorm Peppermint Articles related to the subject Faceoff, which is the best text editor ever? Maceditors.com, mac editors features compared Thank you everybody that has added suggestions. | I haven't used it myself, but another free one that I've heard good things about is Smultron . In my own research on this, I found this interesting article: Faceoff: Which Is The Best Mac Text Editor Ever? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1384652/"
]
} |
20,586 | I'm trying to bind a list of custom objects to a WPF Image like this: <Image> <Image.Source> <BitmapImage UriSource="{Binding Path=ImagePath}" /> </Image.Source></Image> But it doesn't work. This is the error I'm getting: "Property 'UriSource' or property 'StreamSource' must be set." What am I missing? | WPF has built-in converters for certain types. If you bind the Image's Source property to a string or Uri value, under the hood WPF will use an ImageSourceConverter to convert the value to an ImageSource . So <Image Source="{Binding ImageSource}"/> would work if the ImageSource property was a string representation of a valid URI to an image. You can of course roll your own Binding converter: public class ImageConverter : IValueConverter{ public object Convert( object value, Type targetType, object parameter, CultureInfo culture) { return new BitmapImage(new Uri(value.ToString())); } public object ConvertBack( object value, Type targetType, object parameter, CultureInfo culture) { throw new NotSupportedException(); }} and use it like this: <Image Source="{Binding ImageSource, Converter={StaticResource ImageConverter}}"/> | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373/"
]
} |
20,587 | I want to get the results of a stored procedure and place them into a CSV file onto an FTP location. The catch though is that I cannot create a local/temporary file that I can then FTP over. The approach I was taking was to use an SSIS package to create a temporary file and then have a FTP Task within the pack to FTP the file over, but our DBA's do not allow temporary files to be created on any servers. in reply to Yaakov Ellis I think we will need to convince the DBA's to let me use at least a share on a server that they do not operate, or ask them how they would do it. in reply to Kev I like the idea of the CLR integration, but I don't think our DBA's even know what that is lol and they would probably not allow it either. But I will probably be able to do this within a Script Task in an SSIS package that can be scheduled. | This step-by-step example is for others who might stumble upon this question. This example uses Windows Server 2008 R2 server and SSIS 2008 R2 . Even though, the example uses SSIS 2008 R2 , the logic used is applicable to SSIS 2005 as well. Thanks to @Kev for the FTPWebRequest code. Create an SSIS package ( Steps to create an SSIS package ). I have named the package in the format YYYYMMDD_hhmm in the beginning followed by SO stands for Stack Overflow, followed by the SO question id , and finally a description. I am not saying that you should name your package like this. This is for me to easily refer this back later. Note that I also have two Data Sources namely Adventure Works and Practice DB . I will be using Adventure Works data source, which points to AdventureWorks database downloaded from this link . Refer screenshot #1 at the bottom of the answer. In the AdventureWorks database, create a stored procedure named dbo.GetCurrency using the below given script. 
CREATE PROCEDURE [dbo].[GetCurrency]ASBEGIN SET NOCOUNT ON; SELECT TOP 10 CurrencyCode , Name , ModifiedDate FROM Sales.Currency ORDER BY CurrencyCodeENDGO On the package’s Connection Manager section, right-click and select New Connection From Data Source . On the Select Data Source dialog, select Adventure Works and click OK . You should now see the Adventure Works data source under the Connection Managers section. Refer screenshot #2 , #3 and #4 . On the package, create the following variables. Refer screenshot #5 . ColumnDelimiter : This variable is of type String. This will be used to separate the column data when it is written to the file. In this example, we will be using comma (,) and the code is written to handle only displayable characters. For non-displayable characters like tab (\t), you might need to change the code used in this example accordingly. FileName : This variable is of type String. It will contain the name of the file. In this example, I have named the file as Currencies.csv because I am going to export list of currency names. FTPPassword : This variable is of type String. This will contain the password to the FTP website. Ideally, the package should be encrypted to hide sensitive information. FTPRemotePath : This variable is of type String. This will contain the FTP folder path to which the file should be uploaded to. For example if the complete FTP URI is ftp://myFTPSite.com/ssis/samples/uploads , then the RemotePath would be /ssis/samples/uploads. FTPServerName : This variable is of type String. This will contain the FTP site root URI. For example if the complete FTP URI is ftp://myFTPSite.com/ssis/samples/uploads , then the FTPServerName would contain ftp://myFTPSite.com . You can combine FTPRemotePath with this variable and have a single variable. It is up to your preference. FTPUserName :This variable is of type String. This will contain the user name that will be used to connect to the FTP website. 
ListOfCurrencies : This variable is of type Object. This will contain the result set from the stored procedure and it will be looped through in the Script Task. ShowHeader : This variable is of type Boolean. This will contain values true/false. True indicates that the first row in the file will contain Column names and False indicates that the first row will not contain Column names. SQLGetData : This variable is of type String. This will contain the Stored Procedure execution statement. This example uses the value EXEC dbo.GetCurrency On the package’s Control Flow tab, place an Execute SQL Task and name it as Get Data . Double-click on the Execute SQL Task to bring the Execute SQL Task Editor . On the General section of the Execute SQL Task Editor , set the ResultSet to Full result set , the Connection to Adventure Works , the SQLSourceType to Variable and the SourceVariable to User::SQLGetData . On the Result Set section, click Add button. Set the Result Name to 0 , this indicates the index and the Variable to User::ListOfCurrencies . The output of the stored procedure will be saved to this object variable. Click OK . Refer screenshot #6 and #7 . On the package’s Control Flow tab, place a Script Task below the Execute SQL Task and name it as Save to FTP . Double-click on the Script Task to bring the Script Task Editor . On the Script section, click the Edit Script… button. Refer screenshot #8 . This will bring up the Visual Studio Tools for Applications (VSTA) editor. Replace the code within the class ScriptMain in the editor with the code given below. Also, make sure that you add the using statements to the namespaces System.Data.OleDb , System.IO , System.Net , System.Text . Refer screenshot #9 that highlights the code changes. Close the VSTA editor and click Ok to close the Script Task Editor. Script code takes the object variable ListOfCurrencies and stores it into a DataTable with the help of OleDbDataAdapter because we are using OleDb connection. 
The code then loops through each row and if the variable ShowHeader is set to true, the code will include the Column names in the first row written to the file. The result is stored in a stringbuilder variable. After the string builder variable is populated with all the data, the code creates an FTPWebRequest object and connects to the FTP Uri by combining the variables FTPServerName, FTPRemotePath and FileName using the credentials provided in the variables FTPUserName and FTPPassword. Then the full string builder variable contents are written to the file. The method WriteRowData is created to loop through columns and provide the column names or data information based on the parameters passed. using System;using System.Data;using Microsoft.SqlServer.Dts.Runtime;using System.Windows.Forms;using System.Data.OleDb;using System.IO;using System.Net;using System.Text;namespace ST_7033c2fc30234dae8086558a88a897dd.csproj{ [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")] public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase { #region VSTA generated code enum ScriptResults { Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success, Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure }; #endregion public void Main() { Variables varCollection = null; Dts.VariableDispenser.LockForRead("User::ColumnDelimiter"); Dts.VariableDispenser.LockForRead("User::FileName"); Dts.VariableDispenser.LockForRead("User::FTPPassword"); Dts.VariableDispenser.LockForRead("User::FTPRemotePath"); Dts.VariableDispenser.LockForRead("User::FTPServerName"); Dts.VariableDispenser.LockForRead("User::FTPUserName"); Dts.VariableDispenser.LockForRead("User::ListOfCurrencies"); Dts.VariableDispenser.LockForRead("User::ShowHeader"); Dts.VariableDispenser.GetVariables(ref varCollection); OleDbDataAdapter dataAdapter = new OleDbDataAdapter(); DataTable currencies = new DataTable(); 
dataAdapter.Fill(currencies, varCollection["User::ListOfCurrencies"].Value); bool showHeader = Convert.ToBoolean(varCollection["User::ShowHeader"].Value); int rowCounter = 0; string columnDelimiter = varCollection["User::ColumnDelimiter"].Value.ToString(); StringBuilder sb = new StringBuilder(); foreach (DataRow row in currencies.Rows) { rowCounter++; if (rowCounter == 1 && showHeader) { WriteRowData(currencies, row, columnDelimiter, true, ref sb); } WriteRowData(currencies, row, columnDelimiter, false, ref sb); } string ftpUri = string.Concat(varCollection["User::FTPServerName"].Value, varCollection["User::FTPRemotePath"].Value, varCollection["User::FileName"].Value); FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpUri); ftp.Method = WebRequestMethods.Ftp.UploadFile; string ftpUserName = varCollection["User::FTPUserName"].Value.ToString(); string ftpPassword = varCollection["User::FTPPassword"].Value.ToString(); ftp.Credentials = new System.Net.NetworkCredential(ftpUserName, ftpPassword); using (StreamWriter sw = new StreamWriter(ftp.GetRequestStream())) { sw.WriteLine(sb.ToString()); sw.Flush(); } Dts.TaskResult = (int)ScriptResults.Success; } public void WriteRowData(DataTable currencies, DataRow row, string columnDelimiter, bool isHeader, ref StringBuilder sb) { int counter = 0; foreach (DataColumn column in currencies.Columns) { counter++; if (isHeader) { sb.Append(column.ColumnName); } else { sb.Append(row[column].ToString()); } if (counter != currencies.Columns.Count) { sb.Append(columnDelimiter); } } sb.Append(System.Environment.NewLine); } }} Once the tasks have been configured, the package’s Control Flow should look like as shown in screenshot #10 . Screenshot #11 shows the output of the stored procedure execution statement EXEC dbo.GetCurrency. Execute the package. Screenshot #12 shows successful execution of the package. 
Using the FireFTP add-on available in FireFox browser, I logged into the FTP website and verified that the file has been successfully uploaded to the FTP website. Refer screenshot # 13 . Examining the contents by opening the file in Notepad++ shows that it matches with the stored procedure output. Refer screenshot # 14 . Thus, the example demonstrated how to write results from database to an FTP website without having to use temporary/local files. Hope that helps someone. Screenshots: #1 : Solution_Explorer #2 : New_Connection_From_Data_Source #3 : Select_Data_Source #4 : Connection_Managers #5 : Variables #6 : Execute_SQL_Task_Editor_General #7 : Execute_SQL_Task_Editor_Result_Set #8 : Script_Task_Editor #9 : Script_Task_VSTA_Code #10 : Control_Flow_Tab #11 : Query_Results #12 : Package_Execution_Successful #13 : File_In_FTP #14 : File_Contents | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1950/"
]
} |
20,598 | I'm interested in doing comparisons between the date string and the MySQL timestamp. However, I'm not seeing an easy conversion. Am I overlooking something obvious? | Converting from timestamp to format: date('Y-m-d', $timestamp); Converting from formatted to timestamp: mktime(0, 0, 0, $month, $day, $year, $is_dst); See date and mktime for further documentation. When it comes to storing, it's up to you whether to use the MySQL DATE format for storing as a formatted date; an integer for storing as a UNIX timestamp; or MySQL's TIMESTAMP format, which converts a numeric timestamp into a readable format. Check the MySQL Doc for TIMESTAMP info. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
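The PHP round trip above maps directly onto other languages. As a hedged illustration (not part of the original answer), here is the equivalent epoch/formatted-string conversion in Python, using UTC to keep the round trip deterministic:

```python
import calendar
import time

def epoch_to_string(ts):
    """Format a UNIX timestamp as 'YYYY-MM-DD' (UTC), like PHP's date()."""
    return time.strftime("%Y-%m-%d", time.gmtime(ts))

def string_to_epoch(s):
    """Parse 'YYYY-MM-DD' back into a UNIX timestamp (UTC), like mktime()."""
    return calendar.timegm(time.strptime(s, "%Y-%m-%d"))

print(epoch_to_string(0))              # 1970-01-01
print(string_to_epoch("1970-01-02"))   # 86400
```

The same caveat as in PHP applies: pick one representation (formatted string, epoch integer, or the database's native type) and convert at the edges.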
20,611 | The following code should find the appropriate project tag and remove it from the XmlDocument, however when I test it, it says: The node to be removed is not a child of this node. Does anyone know the proper way to do this?

public void DeleteProject (string projectName)
{
    string ccConfigPath = ConfigurationManager.AppSettings["ConfigPath"];
    XmlDocument configDoc = new XmlDocument();
    configDoc.Load(ccConfigPath);
    XmlNodeList projectNodes = configDoc.GetElementsByTagName("project");
    for (int i = 0; i < projectNodes.Count; i++)
    {
        if (projectNodes[i].Attributes["name"] != null)
        {
            if (projectName == projectNodes[i].Attributes["name"].InnerText)
            {
                configDoc.RemoveChild(projectNodes[i]);
                configDoc.Save(ccConfigPath);
            }
        }
    }
}

UPDATE Fixed. I did two things:

XmlNode project = configDoc.SelectSingleNode("//project[@name='" + projectName + "']");

Replaced the For loop with an XPath query, which wasn't for fixing it, just because it was a better approach. The actual fix was:

project.ParentNode.RemoveChild(project);

Thanks Pat and Chuck for this suggestion. | Instead of configDoc.RemoveChild(projectNodes[i]); try projectNodes[i].ParentNode.RemoveChild(projectNodes[i]); | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1965/"
]
} |
20,627 | I've seen this all over the place: Download here! SHA1 = 8e1ed2ce9e7e473d38a9dc7824a384a9ac34d7d0 What does it mean? How does a hash come into play as far as downloads and... What use can I make of it? Is this a legacy item where you used to have to verify some checksum after you downloaded the whole file? | It's a security measure. It allows you to verify that the file you just downloaded is the one that the author posted to the site. Note that using hashes from the same website you're getting the files from is not especially secure. Often a good place to get them from is a mailing list announcement where a PGP-signed email contains the link to the file and the hash. Since this answer has been ranked so highly compared to the others for some reason, I'm editing it to add the other major reason mentioned first by the other authors below, which is to verify the integrity of the file after transferring it over the network. So: Security - verify that the file that you downloaded was the one the author originally published Integrity - verify that the file wasn't damaged during transmission over the network. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20627",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/547/"
]
} |
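The verification step described above is easy to script. A minimal Python sketch of checking downloaded bytes against a published SHA-1 (the digests used here are the well-known test vectors, not from any real download page):

```python
import hashlib

def sha1_of_bytes(data, chunk_size=8192):
    """Hash data the way you would a downloaded file: in chunks."""
    h = hashlib.sha1()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

# Compare against the digest published next to the download link.
published = "da39a3ee5e6b4b0d3255bfef95601890afd80709"  # SHA-1 of empty input
print(sha1_of_bytes(b"") == published)  # True
```

If the computed digest differs from the published one, the file was either corrupted in transit or is not the file the author posted.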
20,658 | How, in general, does one determine if a PC supports hardware virtualization? I use VirtualPC to set up parallel test environments and I'd enjoy a bit of a speed boost. | Download this: http://www.cpuid.com/cpuz.php Also check: http://en.wikipedia.org/wiki/X86_virtualization Edit: Additionally, I know it's for Xen, but the instructions are the same for all VMs that want hardware support. http://wiki.xensource.com/xenwiki/HVM_Compatible_Processors I can't try it from work, but I'm sure it can identify whether you've got the Intel VT or AMD-V instructions. Intel will have a "vmx" instruction and AMD will have a "svm". On Linux you can check /proc/cpuinfo: "egrep '(vmx|svm)' /proc/cpuinfo" | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20658",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1490/"
]
} |
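The egrep check above can be done programmatically as well; a small sketch that scans /proc/cpuinfo-style text for the two flags (the sample string below is invented for illustration):

```python
def hw_virt_flags(cpuinfo_text):
    """Return which hardware-virtualization flags appear in /proc/cpuinfo text."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            words = line.split(":", 1)[1].split()
            found.update(w for w in words if w in ("vmx", "svm"))
    return found

sample = "processor : 0\nflags : fpu vme de pse msr vmx est tm2\n"
print(hw_virt_flags(sample))  # {'vmx'} -> Intel VT capable
```

On a real Linux box you would pass in open("/proc/cpuinfo").read(); note that a BIOS can still disable VT/AMD-V even when the CPU flag is present.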
20,663 | AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it yet here on stackoverflow (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for a long time or won't make it into the mainstream (like OOP did, at least in theory ;))? If you do use AOP then please let us know which tools you use as well. Thanks! | Yes. Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal. One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc. Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method. Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1793/"
]
} |
20,674 | Other than self.class.send :method, args... , of course. I'd like to make a rather complex method available at both the class and instance level without duplicating the code. UPDATE: @Jonathan Branam: that was my assumption, but I wanted to make sure nobody else had found a way around. Visibility in Ruby is very different from that in Java. You're also quite right that private doesn't work on class methods, though this will declare a private class method:

class Foo
  class << self
    private

    def bar
      puts 'bar'
    end
  end
end

Foo.bar
# => NoMethodError: private method 'bar' called for Foo:Class

| Here is a code snippet to go along with the question. Using "private" in a class definition does not apply to class methods. You need to use "private_class_method" as in the following example.

class Foo
  def self.private_bar
    # Complex logic goes here
    puts "hi"
  end
  private_class_method :private_bar

  class << self
    private

    def another_private_bar
      puts "bar"
    end
  end

  public

  def instance_bar
    self.class.private_bar
  end

  def instance_bar2
    self.class.another_private_bar
  end
end

f = Foo.new
f.instance_bar  # NoMethodError: private method `private_bar' called for Foo:Class
f.instance_bar2 # NoMethodError: private method `another_private_bar' called for Foo:Class

I don't see a way to get around this. The documentation says that you cannot specify the receiver of a private method. Also you can only access a private method from the same instance. The class Foo is a different object than a given instance of Foo. Don't take my answer as final. I'm certainly not an expert, but I wanted to provide a code snippet so that others who attempt to answer will have properly private class methods. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1190/"
]
} |
20,675 | What is the difference in ASP/VBScript between Int() and CInt() ? | Int() The Int function returns the integer part of a specified number. CInt() The CInt function converts an expression to type Integer. And the best answer comes from MSDN CInt differs from the Fix and Int functions, which truncate, rather than round, the fractional part of a number. When the fractional part is exactly 0.5, the CInt function always rounds it to the nearest even number. For example, 0.5 rounds to 0, and 1.5 rounds to 2. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
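VBScript is not alone here: round-half-to-even ("banker's rounding") is the default in other languages too. For instance, Python 3's built-in round() follows the same rule as CInt, which makes it a quick way to experiment with the behaviour described above:

```python
# Round-half-to-even ("banker's rounding"), the same rule CInt uses:
for x in (0.5, 1.5, 2.5, 3.5):
    print(x, "->", round(x))
# 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, 3.5 -> 4

# Truncation toward zero, the behaviour of Int()/Fix() for positive numbers:
print(int(2.9))  # 2
```

The rule exists to avoid a systematic upward bias when many .5 values are rounded and summed.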
20,728 | Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it. However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separate from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed. The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password. What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files. I'm satisfied with our current solution but is seems like there would be an industry-wide, accepted approach to this problem. Is there? | I would look at using msbuild . It has built in tasks for handling clickonce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use and I have found it to fit my needs. With a good build process using msbuild, you should be able to accomplish squashing the pains you have felt. Here is detailed post on how ClickOnce manifest generation works with MsBuild. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20728",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1321/"
]
} |
20,731 | I've tried several things already, std::stringstream m;m.empty();m.clear(); both of which don't work. | For all the standard library types the member function empty() is a query, not a command, i.e. it means "are you empty?" not "please throw away your contents". The clear() member function is inherited from ios and is used to clear the error state of the stream, e.g. if a file stream has the error state set to eofbit (end-of-file), then calling clear() will set the error state back to goodbit (no error). For clearing the contents of a stringstream , using: m.str(""); is correct, although using: m.str(std::string()); is technically more efficient, because you avoid invoking the std::string constructor that takes const char* . But any compiler these days should be able to generate the same code in both cases - so I would just go with whatever is more readable. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/20731",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25/"
]
} |
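The same query-vs-command confusion exists in other languages' in-memory streams. For comparison, a short Python sketch of the equivalent reset on io.StringIO, where emptying the buffer is likewise an explicit pair of calls rather than a clear():

```python
import io

m = io.StringIO()
m.write("stale contents")

# Reset the buffer: rewind the cursor, then drop everything after it.
m.seek(0)
m.truncate(0)

m.write("fresh")
print(m.getvalue())  # fresh
```

As with std::stringstream, forgetting the rewind (seek) or the truncate leaves old data behind.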
20,734 | I've seen binary trees and binary searching mentioned in several books I've read lately, but as I'm still at the beginning of my studies in Computer Science, I've yet to take a class that's really dealt with algorithms and data structures in a serious way. I've checked around the typical sources (Wikipedia, Google) and most descriptions of the usefulness and implementation of (in particular) Red-Black trees have come off as dense and difficult to understand. I'm sure for someone with the necessary background, it makes perfect sense, but at the moment it reads like a foreign language almost. So what makes binary trees useful in some of the common tasks you find yourself doing while programming? Beyond that, which trees do you prefer to use (please include a sample implementation) and why? | Red-Black trees are good for creating well-balanced trees. The major problem with binary search trees is that you can make them unbalanced very easily. Imagine your first number is 15, and every number inserted after it is smaller than 15. You'll have a tree that is very heavy on the left side and has nothing on the right side. Red-Black trees solve that by forcing your tree to be balanced whenever you insert or delete. It accomplishes this through a series of rotations between ancestor nodes and child nodes. The algorithm is actually pretty straightforward, although it is a bit long. I'd suggest picking up the CLRS (Cormen, Leiserson, Rivest, and Stein) textbook, "Introduction to Algorithms", and reading up on RB Trees. The implementation is also not really so short, so it's probably not best to include it here. Nevertheless, trees are used extensively for high-performance apps that need access to lots of data. They provide a very efficient way of finding nodes, with a relatively small overhead of insertion/deletion. Again, I'd suggest looking at CLRS to read up on how they're used.
While BSTs may not be used explicitly - one example of the use of trees in general are in almost every single modern RDBMS. Similarly, your file system is almost certainly represented as some sort of tree structure, and files are likewise indexed that way. Trees power Google. Trees power just about every website on the internet. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20734",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2286/"
]
} |
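To make the imbalance problem in the answer concrete, here is a deliberately naive (unbalanced) BST sketch in Python. Feeding it sorted keys degrades it into a linked list, which is exactly the degenerate shape that Red-Black rotations prevent:

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain BST insert: no rebalancing whatsoever."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

balanced = None
for k in (8, 4, 12, 2, 6, 10, 14):   # mixed insertion order
    balanced = insert(balanced, k)

degenerate = None
for k in range(1, 8):                # sorted input: worst case
    degenerate = insert(degenerate, k)

print(height(balanced), height(degenerate))  # 3 7
```

Seven keys fit in a tree of height 3 when the order is friendly, but sorted input produces height 7 - every lookup then costs O(n) instead of O(log n).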
20,762 | Is there any easy/general way to clean an XML based data source prior to using it in an XmlReader so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML? Note: The solution needs to handle XMLdata sources that use characterencodings other than UTF-8, e.g. byspecifying the character encoding atthe XML document declaration. Notmangling the character encoding ofthe source while stripping invalidhexadecimal characters has been amajor sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happens to contains a string that would be a string match for a hexadecimal character. Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but want to be able to consume data sources that have been published which contain invalid hexadecimal characters per the XML specification. In .NET if you have a Stream that represents the XML data source, and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution. | It may not be perfect (emphasis added since people missing this disclaimer), but what I've done in that case is below. You can adjust to use with a stream. 
/// <summary>
/// Removes control characters and other non-UTF-8 characters
/// </summary>
/// <param name="inString">The string to process</param>
/// <returns>A string with no control characters or entities above 0x00FD</returns>
public static string RemoveTroublesomeCharacters(string inString)
{
    if (inString == null) return null;

    StringBuilder newString = new StringBuilder();
    char ch;

    for (int i = 0; i < inString.Length; i++)
    {
        ch = inString[i];
        // remove any characters outside the valid UTF-8 range as well as all control characters
        // except tabs and new lines
        //if ((ch < 0x00FD && ch > 0x001F) || ch == '\t' || ch == '\n' || ch == '\r')
        //if using .NET version prior to 4, use above logic
        if (XmlConvert.IsXmlChar(ch)) //this method is new in .NET 4
        {
            newString.Append(ch);
        }
    }
    return newString.ToString();
}
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2029/"
]
} |
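For readers outside .NET, the same filtering is straightforward anywhere. A Python sketch using the XML 1.0 valid-character ranges (the same test XmlConvert.IsXmlChar performs); the helper names are invented for the example:

```python
def is_xml_char(ch):
    """XML 1.0 'Char' production: code points a document may legally contain."""
    cp = ord(ch)
    return (cp in (0x9, 0xA, 0xD)
            or 0x20 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)

def remove_troublesome_characters(s):
    return "".join(ch for ch in s if is_xml_char(ch))

print(remove_troublesome_characters("ok\x00\x0bstill ok"))  # okstill ok
```

As in the C# version, tabs, newlines, and carriage returns survive while other control characters are dropped.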
20,778 | I have binary data in a file that I can read into a byte array and process with no problem. Now I need to send parts of the data over a network connection as elements in an XML document. My problem is that when I convert the data from an array of bytes to a String and back to an array of bytes, the data is getting corrupted. I've tested this on one machine to isolate the problem to the String conversion, so I now know that it isn't getting corrupted by the XML parser or the network transport. What I've got right now is

byte[] buffer = ...; // read from file
// a few lines that prove I can process the data successfully
String element = new String(buffer);
byte[] newBuffer = element.getBytes();
// a few lines that try to process newBuffer and fail because it is not the same data anymore

Does anyone know how to convert binary to String and back without data loss? Answered: Thanks Sam. I feel like an idiot. I had this answered yesterday because my SAX parser was complaining. For some reason when I ran into this seemingly separate issue, it didn't occur to me that it was a new symptom of the same problem. EDIT: Just for the sake of completeness, I used the Base64 class from the Apache Commons Codec package to solve this problem. | If you encode it in base64, this will turn any data into ASCII-safe text, but base64-encoded data is larger than the original data | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20778",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1288/"
]
} |
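The round trip is easy to demonstrate in any language. A quick Python check that arbitrary bytes survive base64 intact, along with the size overhead the answer mentions:

```python
import base64

payload = bytes(range(256))              # every possible byte value

encoded = base64.b64encode(payload)      # ASCII-safe, suitable for XML transport
decoded = base64.b64decode(encoded)

print(decoded == payload)                # True: nothing was corrupted
print(len(payload), len(encoded))        # 256 344 -> roughly 4/3 the size
```

Contrast this with a text-codec round trip (the bug in the question): bytes that are invalid in the platform encoding get replaced, and the data silently changes.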
20,788 | This question on Cyclomatic Complexity made me think more about static code analysis . Analyzing code complexity and consistency is occasionally useful, and I'd like to start doing it more. What tools do you recommend (per language) for such analysis? Wikipedia has a large list of tools, but which ones have people tried before? Edit: As David points out, this is not a completely unasked question when it comes to C/UNIX based tools . | I have been setting up a Hudson continuous integration (CI) build system for my Objective-C iPhone projects (iOS apps), and have compiled a varied list of tools that can be used to analyze my projects during a build: Clang static analyzer : free, up-to-date stand-alone tool that catches more issues than the version of Clang included with Xcode 4. Active project. -- visit http://clang-analyzer.llvm.org Doxygen : free documentation generation tool that also generates class dependency diagrams. Active project -- visit http://www.doxygen.nl HFCCA (header-free cyclomatic complexity analyzer): free Python script to calculate code complexity, but without header files and pre-processors. Supports output in XML format for Hudson/Jenkins builds. Active project. -- visit http://code.google.com/p/headerfile-free-cyclomatic-complexity-analyzer CLOC (count lines of code): free tool to count files, lines of code, comments, and blank lines. Supports diffing, so you can see the differences between builds. Active project. -- visit http://cloc.sourceforge.net SLOCcount (source lines of code count): a free tool to count lines of code and estimate the costs and time associated with a project. Does not appear to be active. -- visit http://sourceforge.net/projects/sloccount and http://www.dwheeler.com/sloccount AnalysisTool : free code analysis tool that measures code complexity and also generates dependency diagrams. Not active. Does not seem to work with Xcode 4, but I would love to get it working. -- visit http://www.karppinen.fi/analysistool | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20788",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2134/"
]
} |
20,794 | If I call os.stat() on a broken symlink, Python throws an OSError exception. This makes it useful for finding them. However, there are a few other reasons that os.stat() might throw a similar exception. Is there a more precise way of detecting broken symlinks with Python under Linux? | A common Python saying is that it's easier to ask forgiveness than permission. While I'm not a fan of this statement in real life, it does apply in a lot of cases. Usually you want to avoid code that chains two system calls on the same file, because you never know what will happen to the file in between your two calls in your code. A typical mistake is to write something like:

if os.path.exists(path):
    os.unlink(path)

The second call (os.unlink) may fail if something else deleted it after your if test, raise an Exception, and stop the rest of your function from executing. (You might think this doesn't happen in real life, but we just fished another bug like that out of our codebase last week - and it was the kind of bug that left a few programmers scratching their head and claiming 'Heisenbug' for the last few months) So, in your particular case, I would probably do:

import errno
import os

try:
    os.stat(path)
except OSError, e:
    if e.errno == errno.ENOENT:
        print 'path %s does not exist or is a broken symlink' % path
    else:
        raise e

The annoyance here is that stat returns the same error code for a symlink that just isn't there and a broken symlink. So, I guess you have no choice than to break the atomicity, and do something like

if not os.path.exists(os.readlink(path)):
    print 'path %s is a broken symlink' % path

| {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/20794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1892/"
]
} |
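As a more direct alternative to catching OSError, the os.path helpers can classify the link in one expression. A short sketch (POSIX-only, since it creates a symlink; note that islink/exists is itself two checks, so the same race caveat from the answer applies):

```python
import os
import tempfile

def is_broken_symlink(path):
    """True only for a symlink whose target does not resolve."""
    return os.path.islink(path) and not os.path.exists(path)

# Demonstration with a deliberately dangling link:
d = tempfile.mkdtemp()
target = os.path.join(d, "missing-target")
link = os.path.join(d, "dangling")
os.symlink(target, link)

print(is_broken_symlink(link))  # True
print(is_broken_symlink(d))     # False: a real directory
```

os.path.exists() follows symlinks, while os.path.islink() does not - combining them distinguishes a broken link from a file that simply doesn't exist.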
20,797 | I have a byte array in memory, read from a file. I would like to split the byte array at a certain point (index) without having to just create a new byte array and copy each byte at a time, increasing the in memory foot print of the operation. What I would like is something like this:

byte[] largeBytes = [1,2,3,4,5,6,7,8,9];
byte[] smallPortion;
smallPortion = split(largeBytes, 3);

smallPortion would equal 1,2,3,4 largeBytes would equal 5,6,7,8,9 | This is how I would do that:

using System;
using System.Collections;
using System.Collections.Generic;

class ArrayView<T> : IEnumerable<T>
{
    private readonly T[] array;
    private readonly int offset, count;

    public ArrayView(T[] array, int offset, int count)
    {
        this.array = array;
        this.offset = offset;
        this.count = count;
    }

    public int Length
    {
        get { return count; }
    }

    public T this[int index]
    {
        get
        {
            if (index < 0 || index >= this.count)
                throw new IndexOutOfRangeException();
            else
                return this.array[offset + index];
        }
        set
        {
            if (index < 0 || index >= this.count)
                throw new IndexOutOfRangeException();
            else
                this.array[offset + index] = value;
        }
    }

    public IEnumerator<T> GetEnumerator()
    {
        for (int i = offset; i < offset + count; i++)
            yield return array[i];
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        IEnumerator<T> enumerator = this.GetEnumerator();
        while (enumerator.MoveNext())
        {
            yield return enumerator.Current;
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        byte[] arr = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 0 };
        ArrayView<byte> p1 = new ArrayView<byte>(arr, 0, 5);
        ArrayView<byte> p2 = new ArrayView<byte>(arr, 5, 5);

        Console.WriteLine("First array:");
        foreach (byte b in p1)
        {
            Console.Write(b);
        }
        Console.Write("\n");

        Console.WriteLine("Second array:");
        foreach (byte b in p2)
        {
            Console.Write(b);
        }
        Console.ReadKey();
    }
}

| {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20797",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1048/"
]
} |
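The "view over a shared buffer" idea in the answer is not C#-specific; Python ships it as memoryview. A sketch for comparison - slicing a memoryview copies nothing, and writes through a view are visible in the underlying array:

```python
data = bytearray([1, 2, 3, 4, 5, 6, 7, 8, 9])
view = memoryview(data)

small_portion = view[:4]     # no bytes copied, just a window onto data
large_bytes = view[4:]

print(bytes(small_portion))  # b'\x01\x02\x03\x04'
print(bytes(large_bytes))    # b'\x05\x06\x07\x08\t'

small_portion[0] = 99        # writes through to the shared buffer
print(data[0])               # 99
```

As with the ArrayView class, both "halves" alias the same memory, so the split costs O(1) regardless of the array's size.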
20,840 | Why should I or shouldn't I use dirty reads: set transaction isolation level read uncommitted in SQL Server? | From MSDN : When this option is set, it is possible to read uncommitted or dirty data; values in the data can be changed and rows can appear or disappear in the data set before the end of the transaction. Simply put, when you are using this isolation level, and you are performing multiple queries on an active table as part of one transaction, there is no guarantee that the information returned to you within different parts of the transaction will remain the same. You could query the same data twice within one transaction and get different results (this might happen in the case where a different user was updating the same data in the midst of your transaction). This can obviously have severe ramifications for parts of your application that rely on data integrity. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20840",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
20,856 | I've heard of a few ways to implement tagging; using a mapping table between TagID and ItemID (makes sense to me, but does it scale?), adding a fixed number of possible TagID columns to ItemID (seems like a bad idea), Keeping tags in a text column that's comma separated (sounds crazy but could work). I've even heard someone recommend a sparse matrix, but then how do the tag names grow gracefully? Am I missing a best practice for tags? | Three tables (one for storing all items, one for all tags, and one for the relation between the two), properly indexed, with foreign keys set running on a proper database, should work well and scale properly. Table: ItemColumns: ItemID, Title, ContentTable: TagColumns: TagID, TitleTable: ItemTagColumns: ItemID, TagID | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/20856",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/459/"
]
} |
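The three-table design above can be exercised end-to-end against an in-memory database. A sketch using Python's sqlite3 (table and column names follow the answer; the sample rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Item    (ItemID INTEGER PRIMARY KEY, Title TEXT, Content TEXT);
    CREATE TABLE Tag     (TagID  INTEGER PRIMARY KEY, Title TEXT);
    CREATE TABLE ItemTag (ItemID INTEGER REFERENCES Item(ItemID),
                          TagID  INTEGER REFERENCES Tag(TagID));
""")
con.executemany("INSERT INTO Item VALUES (?, ?, ?)",
                [(1, "First post", "..."), (2, "Second post", "...")])
con.executemany("INSERT INTO Tag VALUES (?, ?)",
                [(1, "sql"), (2, "tagging")])
con.executemany("INSERT INTO ItemTag VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 2)])

# All items carrying the 'tagging' tag:
rows = con.execute("""
    SELECT i.Title
    FROM Item i
    JOIN ItemTag it ON it.ItemID = i.ItemID
    JOIN Tag t      ON t.TagID   = it.TagID
    WHERE t.Title = 'tagging'
    ORDER BY i.ItemID
""").fetchall()
print(rows)  # [('First post',), ('Second post',)]
```

Indexing ItemTag on both columns keeps these joins fast as the tables grow, which is why the mapping-table approach scales well.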
20,882 | Some things look strange to me: What is the distinction between 0.0.0.0, 127.0.0.1, and [::]? How should each part of the foreign address be read (part1:part2)? What does a state Time_Wait, Close_Wait mean? etc. Could someone give a quick overview of how to interpret these results? | 0.0.0.0 usually refers to stuff listening on all interfaces.127.0.0.1 = localhost (only your local interface)I'm not sure about [::] TIME_WAIT means both sides have agreed to close and TCPmust now wait a prescribed time before taking the connectiondown. CLOSE_WAIT means the remote system has finished sendingand your system has yet to say it's finished. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20882",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1946/"
]
} |
20,910 | My company develops several types of applications. A lot of our business comes from doing multimedia-type apps, typically done in Flash. However, now that side of the house is starting to migrate towards doing Flex development. Most of our other development is done using .NET. I'm trying to make a push towards doing Silverlight development instead, since it would take better advantage of the .NET developers on staff. I prefer the Silverlight platform over the Flex platform for the simple fact that Silverlight is all .NET code. We have more .NET developers on staff than Flash/Flex developers, and most of our Flash/Flex developers are graphic artists (not real programmers). Only reason they push towards Flex right now is because it seems like the logical step from Flash. I've done development using both, and I honestly believe Silverlight is easier to work with. But I'm trying to convince people who are only Flash developers. So here's my question: If I'm going to go into a meeting to praise Silverlight, why would a company want to go with Silverlight instead of Flex? Other than the obvious "not everyone has Silverlight", what are the pros and cons for each? | I think you should look at Silverlight as a long-term play, just as Microsoft seems to be doing. There's an obvious balance on when to use Silverlight vs. Flash when you're concerned about reach and install base, but here are some reasons Silverlight is a good direction to move in: Second mover advantage - Just as Microsoft built a "better Java" with .NET, they're able to look at how you'd design a RIA plugin from scratch, today. They have the advantage of knowing how people use the web today, something the inventors of Flash could never have accurately guessed. Flash can add features, but they can't realistically chuck the platform and start over. Developer familiarity - While Silverlight is a new model, it's not entirely unfamiliar to developers. 
They'll "get" the way Silverlight works a lot more quickly than they'll understand firing up a new development environment with a new scripting language and new event paradigms. Being rid of the timeline model in Flash - Flash was originally built for keyframe based animations, and while there are ways to abstract this away, it's at the core of how Flash works. Silverlight ditches that for an application-centric model. ScottGu - ScottGu is fired up about Silverlight. Nuff said. Cool new features - While Silverlight still has some catching up to do with Flash on some obvious features (like webcam / mic integration, or 3d / graphics acceleration), there are some slick new technologies built in to Silverlight - Deep Zoom is one example. I'm seeing more "revolutionary" technologies on the Silverlight side, while Flash seems to be in maintenance mode at this point. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/20910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1226/"
]
} |
20,927 | I've got two models: Message and Attachment. Each attachment is attached to a specific message, using a ForeignKey on the Attachment model. Both models have an auto_now DateTimeField called updated. I'm trying to make it so that when any attachment is saved, it also sets the updated field on the associated message to now. Here's my code:

def save(self):
    super(Attachment, self).save()
    self.message.updated = self.updated

Will this work, and if you can explain it to me, why? If not, how would I accomplish this? | You would also need to then save the message. Then it should work. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20927",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1914/"
]
} |
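Stripped of Django, the fix amounts to: persist the child, stamp the parent, persist the parent. A toy framework-free sketch of that pattern (the classes are invented stand-ins, not Django API):

```python
import datetime

class Message:
    def __init__(self):
        self.updated = None
        self.saved = False

    def save(self):
        self.saved = True

class Attachment:
    def __init__(self, message):
        self.message = message
        self.updated = None

    def save(self):
        self.updated = datetime.datetime.utcnow()  # stand-in for auto_now
        self.message.updated = self.updated
        self.message.save()                        # the step the question omitted

msg = Message()
att = Attachment(msg)
att.save()
print(msg.saved, msg.updated == att.updated)  # True True
```

Without the final save() call on the parent, the assignment only changes the in-memory object and is never written back.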
20,952 | I'm trying to unit test a custom ConfigurationSection I've written, and I'd like to load some arbitrary configuration XML into a System.Configuration.Configuration for each test (rather than put the test configuration xml in the Tests.dll.config file). That is, I'd like to do something like this:

Configuration testConfig = new Configuration("<?xml version=\"1.0\"?><configuration>...</configuration>");
MyCustomConfigSection section = testConfig.GetSection("mycustomconfigsection");
Assert.That(section != null);

However, it looks like ConfigurationManager will only give you Configuration instances that are associated with an EXE file or a machine config. Is there a way to load arbitrary XML into a Configuration instance? | There is actually a way I've discovered.... You need to define a new class inheriting from your original configuration section as follows:

public class MyXmlCustomConfigSection : MyCustomConfigSection
{
    public MyXmlCustomConfigSection (string configXml)
    {
        XmlTextReader reader = new XmlTextReader(new StringReader(configXml));
        DeserializeSection(reader);
    }
}

You can then instantiate your ConfigurationSection object as follows:

string configXml = "<?xml version=\"1.0\"?><configuration>...</configuration>";
MyCustomConfigSection config = new MyXmlCustomConfigSection(configXml);

Hope it helps someone :-) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2338/"
]
} |
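The same load-from-a-string idea exists outside .NET as well; for comparison, a hypothetical Python sketch using the standard library's configparser (an analogy, not the .NET API):

```python
# Parse configuration text entirely in memory, with no file on disk,
# which is exactly what the DeserializeSection trick above enables for tests.
import configparser

def section_from_string(text, name):
    """Parse config text in memory and return one named section."""
    cp = configparser.ConfigParser()
    cp.read_string(text)          # string input instead of a config file
    return cp[name]

section = section_from_string("[mycustomconfigsection]\nmode = test\n",
                              "mycustomconfigsection")
```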
20,958 | I'm designing a database table and asking myself this question: How long should the firstname field be? Does anyone have a list of reasonable lengths for the most common fields, such as first name, last name, and email address? | I just queried my database with millions of customers in the USA. The maximum first name length was 46. I go with 50. (Of course, only 500 of those were over 25, and they were all cases where data imports resulted in extra junk winding up in that field.) Last name was similar to first name. Email addresses maxed out at 62 characters. Most of the longer ones were actually lists of email addresses separated by semicolons. Street address maxes out at 95 characters. The long ones were all valid. Max city length was 35. This should be a decent statistical spread for people in the US. If you have localization to consider, the numbers could vary significantly. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/20958",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/437/"
]
} |
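The answer's method, measuring real data before choosing column sizes, can be sketched generically (the sample rows below are made up):

```python
# Compute the maximum observed length per field, the in-code equivalent of
# the SQL MAX(LEN(...)) query the answer describes.
rows = [
    {"first_name": "Ann", "email": "ann@example.com"},
    {"first_name": "Christopher", "email": "chris@example.com"},
]

def max_lengths(rows):
    fields = rows[0].keys()
    return {f: max(len(r[f]) for r in rows) for f in fields}

limits = max_lengths(rows)
```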
20,959 | How can I determine all of the assemblies that my .NET desktop application has loaded? I'd like to put them in the about box so I can query customers over the phone to determine what version of XYZ they have on their PC. It would be nice to see both managed and unmanaged assemblies. I realize the list will get long but I plan to slap an incremental search on it. | using System;using System.Reflection;using System.Windows.Forms;public class MyAppDomain{ public static void Main(string[] args) { AppDomain ad = AppDomain.CurrentDomain; Assembly[] loadedAssemblies = ad.GetAssemblies(); Console.WriteLine("Here are the assemblies loaded in this appdomain\n"); foreach(Assembly a in loadedAssemblies) { Console.WriteLine(a.FullName); } }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20959",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1490/"
]
} |
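For comparison, the equivalent "what has this process loaded?" introspection in Python (an analogy to AppDomain.GetAssemblies, not a .NET snippet):

```python
# List every module currently loaded in this interpreter, the rough
# Python counterpart of enumerating an AppDomain's loaded assemblies.
import sys

def loaded_module_names():
    """Sorted names of all modules loaded in this process."""
    return sorted(sys.modules)

names = loaded_module_names()
```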
20,993 | When creating a web application, and lets say you have a User object denoting a single user, what do you think is the best way to store that the user has logged in? Two ways I've thought about have been: Stored the user database id in a session variable Stored the entire user object in a session variable Any better suggestions, any issues with using the above ways? Perhaps security issues or memory issues, etc, etc. | I recommend storing the id rather than the object. The downside is that you have to hit the database every time you want to get that user's information. However, unless every millisecond counts in your page, the performance shouldn't be an issue. Here are two advantages: If the user's information changes somehow, then you won't be storing out-of-date information in your session. For example, if a user is granted extra privileges by an admin, then those will be immediately available without the user needing to log out and then log back in. If your session information is stored on the hard drive, then you can only store serializable data. So if your User object ever contains anything like a database connection, open socket, file descriptor, etc then this will not be stored properly and may not be cleaned up properly either. In most cases these concerns won't be an issue and either approach would be fine. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/20993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1610/"
]
} |
21,027 | Is it possible to change how Ctrl + Tab and Shift + Ctrl + Tab work in Visual Studio? I have disabled the popup navigator window, because I only want to switch between items in the tab control. My problem is the inconsistency of what switching to the next and previous document do. Every other program that uses a tab control for open document I have seen uses Ctrl + Tab to move from left to right and Shift + Ctrl + Tab to go right to left. Visual Studio breaks this with its jump to the last tab selected. You can never know what document you will end up on, and it is never the same way twice. It is very counterintuitive. Is this a subtle way to encourage everyone to only ever have two document open at once? Let's say I have a few files open. I am working in one, and I need to see what is in the next tab to the right. In every other single application on the face of the Earth, Ctrl + Tab will get me there. But in Visual Studio, I have no idea which of the other tabs it will take me to. If I only ever have two documents open, this works great. As soon as you go to three or more, all bets are off as to what tab Visual Studio has decided to send you to. The problem with this is that I shouldn't have to think about the tool, it should fade into the background, and I should be thinking about the task. The current tab behavior keeps pulling me out of the task and makes me have to pay attention to the tool. | In Visual Studio 2015 (as well as previous versions of VS, but you must install Productivity Power Tools if you're using VS2013 or below), there are two new commands in Visual Studio: Window.NextTab and Window.PreviousTab Just go remap them from Ctrl + Alt + PageUp / Ctrl + Alt + PageDown to Ctrl + Tab / Ctrl + Shift + Tab in: Menu Tools -> Options -> Environment -> Keyboard Note: In earlier versions such as Visual Studio 2010, Window.NextTab and Window.PreviousTab were named Window.NextDocumentWellTab and Window.PreviousDocumentWellTab . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2277/"
]
} |
21,052 | When I'm working with DataBound controls in ASP.NET 2.0 such as a Repeater, I know the fastest way to retrieve a property of a bound object (instead of using Reflection with the Eval() function) is to cast the DataItem object to the type it is and then use that object natively, like the following: <%#((MyType)Container.DataItem).PropertyOfMyType%> The problem is, if this type is in a namespace (which is the case 99.99% of the time) then this single statement becomes a lot longer due to the fact that the ASP page has no concept of class scope, so all of my types need to be fully qualified. <%#((RootNamespace.SubNamespace1.SubNamespace2.SubNamespace3.MyType)Container.DataItem).PropertyOfMyType%> Is there any kind of using directive or some equivalent I could place somewhere in an ASP.NET page so I don't need to use the full namespace every time? | I believe you can add something like: <%@ Import Namespace="RootNamespace.SubNamespace1" %> At the top of the page. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21052",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/392/"
]
} |
21,060 | I'm tired of being in the middle of typing something, having a pop-up with a question appear, and hitting enter before reading it... (it also happens with some windows that are not pop-ups) Do you know if there's some setting I could touch for this not to happen? | | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21060",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
21,078 | What's the most efficient way to concatenate strings? | The StringBuilder.Append() method is much better than using the + operator. But I've found that, when executing 1000 concatenations or less, String.Join() is even more efficient than StringBuilder. StringBuilder sb = new StringBuilder();sb.Append(someString); The only problem with String.Join is that you have to concatenate the strings with a common delimiter. Edit: as @ryanversaw pointed out, you can make the delimiter string.Empty. string key = String.Join("_", new String[] { "Customers_Contacts", customerID, database, SessionID }); | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2358/"
]
} |
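The answer's point, that repeated + concatenation is the slow path and a single join is the fast one, carries over to other languages; an illustrative Python sketch:

```python
# Naive repeated concatenation versus a single join pass, the Python
# analogue of the + operator versus String.Join / StringBuilder.
def concat_plus(parts):
    # builds a fresh string each iteration; O(n^2) copying in the worst case
    # (CPython sometimes optimizes this in place, but don't rely on it)
    out = ""
    for p in parts:
        out += p
    return out

def concat_join(parts):
    # single allocation pass over all the pieces
    return "".join(parts)

parts = ["Customers_Contacts", "42", "crm", "abc123"]
key = "_".join(parts)  # delimiter variant, like String.Join("_", ...)
```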
21,133 | What's the easiest way to profile a PHP script? I'd love tacking something on that shows me a dump of all function calls and how long they took but I'm also OK with putting something around specific functions. I tried experimenting with the microtime function: $then = microtime();myFunc();$now = microtime();echo sprintf("Elapsed: %f", $now-$then); but that sometimes gives me negative results. Plus it's a lot of trouble to sprinkle that all over my code. | You want xdebug I think. Install it on the server, turn it on, pump the output through kcachegrind (for linux) or wincachegrind (for windows) and it'll show you a few pretty charts that detail the exact timings, counts and memory usage (but you'll need another extension for that). It rocks, seriously :D | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21133",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/305/"
]
} |
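A side note on the question's negative results: PHP's microtime() returns a string unless you pass true, so subtracting the raw values misbehaves. The bracket-timing idea itself works fine with a float monotonic clock; a Python sketch of the pattern:

```python
# Bracket a call with a monotonic float clock; elapsed can never be negative.
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

result, elapsed = timed(sum, range(1000))
```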
21,137 | I have nUnit installed. I have VS2008 Team Edition installed. I have ASP.Net MVC Preview 4 (Codeplex) installed. How do I make Visual Studio show me nUnit as a testing framework when creating a new MVC project? At this point I still only have the Microsoft Testing Framework as a choice. Update: I installed nUnit 2.5, but still with no success. From what I've found Googling, it would seem I need to create templates for the test projects in order for them to be displayed in the "Create Unit Test Project". I would have thought that templates would be readily available for nUnit, xUnit, MBUnit, et. al. Also, it looks like I need to create registry entries. Anybody have any additional information? Update: I determined the answer to this through research and it's posted below. | After a bunch of research and experimentation, I've found the answer. For the record, the current release of nUnit 2.5 Alpha does not seem to contain templates for test projects in Visual Studio 2008. I followed the directions here which describe how to create your own project templates and then add appropriate registry entries that allow your templates to appear in the drop-down box in the Create Unit Test Project dialog box of an MVC project. From a high level, what you have to do is: Create a project Export it as a template (which results in a single ZIP archive) Copy it from the local user's template folder to the Visual Studio main template test folder Execute devenv.exe /setup Run regedit and create a few registry entries. So much for the testing framework selection being easy! Although, to be fair, MVC is not even beta yet. After all that, I did get the framework of choice (NUnit) to show up in the drop-down box. However, there was still a bit left to be desired: Although the test project gets properly created, it did not automatically have a project reference to the main MVC project. When using Visual Studio Unit Test as the test project, it automatically does this. 
I tried to open the ZIP file produced and edit the MyTemplate.vssettings file as well as the .csproj project file in order to correct the aforementioned issue as well as tweak the names of things so they'd appear more user friendly. This for some reason does not work. The ZIP file produced can not be updated via WinZip or Win-Rar -- each indicates the archive is corrupt. Each can extract the contents, though. So, I tried updating the extracted files and then recreating the ZIP file. Visual Studio did not like it. So, I should probably read this as well which discusses making project templates for Visual Studio (also referenced in the blog post I linked to above.) I admit to being disappointed though; from all the talk about MVC playing well with other testing frameworks, etc, I thought that it'd be easier to register a 3rd party framework. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/21137",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1875/"
]
} |
21,184 | I've got a System.Collections.Generic.List(Of MyCustomClass) type object. Given integer variables pagesize and pagenumber, how can I query only any single page of MyCustomClass objects? | If you have a LINQ query that contains all the rows you want to display, this code can be used: var pageNum = 3;var pageSize = 20;query = query.Skip((pageNum - 1) * pageSize).Take(pageSize); You can also make an extension method on the object to be able to write query.Page(2,50) to get the first 50 records of page 2. If that is what you want, the information is on the solid code blog. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/21184",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83/"
]
} |
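The Skip/Take arithmetic in the answer above translates directly to slicing; a language-agnostic sketch in Python (the helper name is illustrative):

```python
# One page of a sequence, mirroring Skip((n - 1) * size).Take(size).
def page(items, page_num, page_size):
    """Return one 1-based page of items; out-of-range pages come back empty."""
    start = (page_num - 1) * page_size
    return items[start:start + page_size]

rows = list(range(1, 101))      # 100 fake records
second_page = page(rows, 2, 20)
```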
21,207 | I'm currently trying out db4o (the java version) and I pretty much like what I see. But I cannot help wondering how it performs in a real live (web-)environment. Does anyone have any experiences (good or bad) to share about running db4o? | We run the DB4O .NET version in a large client/server project. Our experience is that you can potentially get much better performance than typical relational databases. However, you really have to tweak your objects to get this kind of performance. For example, if you've got a list containing a lot of objects, DB4O activation of these lists is slow. There are a number of ways to get around this problem, for example, by inverting the relationship. Another pain is activation. When you retrieve or delete an object from DB4O, by default it will activate the whole object tree. For example, loading a Foo will load Foo.Bar.Baz.Bat, etc until there's nothing left to load. While this is nice from a programming standpoint, performance will slow down the more nesting in your objects. To improve performance, you can tell DB4O how many levels deep to activate. This is time-consuming to do if you've got a lot of objects. Another area of pain was text searching. DB4O's text searching is far, far slower than SQL full text indexing. (They'll tell you this outright on their site.) The good news is, it's easy to set up a text searching engine on top of DB4O. On our project, we've hooked up Lucene.NET to index the text fields we want. Some APIs don't seem to work, such as the GetField APIs useful in applying database upgrades. (For example, if you've renamed a property and you want to upgrade your existing objects in the database, you need to use these "reflection" APIs to find objects in the database.) Other APIs, such as the [Index] attribute, don't work in the stable 6.4 version, and you must instead specify indexes using the Configure().Index("someField"), which is not strongly typed. 
We've witnessed performance degrade the larger your database gets. We have a 1GB database right now and things are still fast, but not nearly as fast as when we started with a tiny database. We've found another issue where Db4O.GetByID will close the database if the ID doesn't exist anymore in the database. We've found the Native Query syntax (the most natural, language-integrated syntax for queries) is far, far slower than the less-friendly SODA queries. So instead of typing: // C# syntax for "Find all MyFoos with Bar == 23".// (Note the Java syntax is more verbose using the Predicate class.)IList<MyFoo> results = db4o.Query<MyFoo>(input => input.Bar == 23); Instead of that nice query code, you have to write an ugly SODA query, which is string-based and not strongly-typed. For .NET folks, they've recently introduced a LINQ-to-DB4O provider, which provides for the best syntax yet. However, it's yet to be seen whether performance will be up-to-par with the ugly SODA queries. DB4O support has been decent: we've talked to them on the phone a number of times and have received helpful info. Their user forums are next to worthless, however; almost all questions go unanswered. Their JIRA bug tracker receives a lot of attention, so if you've got a nagging bug, file it on JIRA and it often will get fixed. (We've had 2 bugs that have been fixed, and another one that got patched in a half-assed way.) If all this hasn't scared you off, let me say that we're very happy with DB4O, despite the problems we've encountered. The performance we've got has blown away some O/RM frameworks we tried. I recommend it. update July 2015 Keep in mind, this answer was written back in 2008. While I appreciate the upvotes, the world has changed since then, and this information may not be as reliable as it was when it was written. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1562/"
]
} |
21,274 | I am looking for a text editor to be used in a web page. Where users can format the text and get a WYSIWYG experience. Doesn't need to be too fancy. But has to be easy to use and integrate into the page. Has to generate HTML as output. Support AJAX (one I checked works only with standard form submit) and has to be small in terms of download to the user's browser. | Well it depends what platform you are on if you are looking for server-side functionality as well, but the de facto badass WYSIWYG in my opinion is FCKeditor. I have worked with this personally in numerous environments (both professional and hobby level) and have always been impressed. It's certainly worth a look. I believe it is employed by open source projects such as SubText as well. Perhaps Jon Galloway can add to this if he reads this question. Or Phil if he is currently a user. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/21274",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1363/"
]
} |
21,280 | I seem to be missing something about LINQ. To me, it looks like it's taking some of the elements of SQL that I like the least and moving them into the C# language and using them for other things. I mean, I could see the benefit of using SQL-like statements on things other than databases. But if I wanted to write SQL, well, why not just write SQL and keep it out of C#? What am I missing here? | LINQ is not about SQL. LINQ is about applying functional programming paradigms to objects. LINQ to SQL is an ORM built on top of the LINQ foundation, but LINQ is much more. I don't use LINQ to SQL, yet I use LINQ all the time. Take the task of finding the intersection of two lists: Before LINQ, this task requires writing a nested foreach that iterates the small list once for every item in the big list, O(N*M), and takes about 10 lines of code. var returnList = new List<int>();foreach (int number in list1){ foreach (int number2 in list2) { if (number2 == number) { returnList.Add(number2); } }} Using LINQ, it does the same thing in one line of code: var results = list1.Intersect(list2); You'll notice that doesn't look like LINQ, yet it is. You don't need to use the expression syntax if you don't want to. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/21280",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
]
} |
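The contrast in the answer above, nested O(N*M) loops versus a hash-based one-liner, can be sketched outside .NET too:

```python
# Two ways to intersect lists: the pre-LINQ nested-loop style, and a
# hash-based pass analogous to what a set-intersection one-liner does.
def intersect_nested(list1, list2):
    # O(N*M): scan list2 once for every item of list1
    result = []
    for a in list1:
        for b in list2:
            if a == b and a not in result:
                result.append(a)
    return result

def intersect_hashed(list1, list2):
    # roughly O(N + M): hash one list, stream the other, keep items distinct
    seen, emitted, out = set(list2), set(), []
    for a in list1:
        if a in seen and a not in emitted:
            emitted.add(a)
            out.append(a)
    return out
```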
21,288 | Which C#/.NET Dependency Injection frameworks are worth looking into?And what can you say about their complexity and speed. | edit (not by the author): There is a comprehensive list of IoC frameworks available at https://github.com/quozd/awesome-dotnet/blob/master/README.md#ioc : Castle Windsor - Castle Windsor is best of breed, mature Inversion of Control container available for .NET and Silverlight Unity - Lightweight extensible dependency injection container with support for constructor, property, and method call injection Autofac - An addictive .NET IoC container DryIoc - Simple, fast all fully featured IoC container. Ninject - The ninja of .NET dependency injectors Spring.Net - Spring.NET is an open source application framework that makes building enterprise .NET applications easier Lamar - A fast IoC container heavily optimized for usage within ASP.NET Core and other .NET server side applications. LightInject - A ultra lightweight IoC container Simple Injector - Simple Injector is an easy-to-use Dependency Injection (DI) library for .NET 4+ that supports Silverlight 4+, Windows Phone 8, Windows 8 including Universal apps and Mono. Microsoft.Extensions.DependencyInjection - The default IoC container for ASP.NET Core applications. Scrutor - Assembly scanning extensions for Microsoft.Extensions.DependencyInjection. VS MEF - Managed Extensibility Framework (MEF) implementation used by Visual Studio. TinyIoC - An easy to use, hassle free, Inversion of Control Container for small projects, libraries and beginners alike. Stashbox - A lightweight, fast and portable dependency injection framework for .NET based solutions. Original answer follows. I suppose I might be being a bit picky here but it's important to note that DI (Dependency Injection) is a programming pattern and is facilitated by, but does not require, an IoC (Inversion of Control) framework. IoC frameworks just make DI much easier and they provide a host of other benefits over and above DI. 
That being said, I'm sure that's what you were asking. About IoC Frameworks; I used to use Spring.Net and CastleWindsor a lot, but the real pain in the behind was all that pesky XML config you had to write! They're pretty much all moving this way now, so I have been using StructureMap for the last year or so, and since it has moved to a fluent config using strongly typed generics and a registry, my pain barrier in using IoC has dropped to below zero! I get an absolute kick out of knowing now that my IoC config is checked at compile-time (for the most part) and I have had nothing but joy with StructureMap and its speed. I won't say that the others were slow at runtime, but they were more difficult for me to setup and frustration often won the day. Update I've been using Ninject on my latest project and it has been an absolute pleasure to use. Words fail me a bit here, but (as we say in the UK) this framework is 'the Dogs'. I would highly recommend it for any green fields projects where you want to be up and running quickly. I got all I needed from a fantastic set of Ninject screencasts by Justin Etheredge. I can't see that retro-fitting Ninject into existing code being a problem at all, but then the same could be said of StructureMap in my experience. It'll be a tough choice going forward between those two, but I'd rather have competition than stagnation and there's a decent amount of healthy competition out there. Other IoC screencasts can also be found here on Dimecasts . | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/21288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2011/"
]
} |
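The answer's opening point, that DI is a pattern and a container merely helps, can be shown in a few lines of plain code (the class names are illustrative):

```python
# Constructor injection with no framework at all: the service receives its
# dependency instead of constructing it, so implementations swap freely.
class SmtpMailer:
    def send(self, to, body):
        return f"smtp:{to}:{body}"

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))
        return "fake"

class SignupService:
    def __init__(self, mailer):
        # the dependency is injected, not created inside the class
        self.mailer = mailer
    def register(self, email):
        return self.mailer.send(email, "welcome")

service = SignupService(FakeMailer())  # a container would just automate this
```

An IoC container automates the wiring and lifetime management, but the testability comes from the pattern itself.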
21,294 | How can you reliably and dynamically load a JavaScript file? This can be used to implement a module or component that, when 'initialized', will dynamically load all needed JavaScript library scripts on demand. The client that uses the component isn't required to load all the library script files (and manually insert <script> tags into their web page) that implement this component - just the 'main' component script file. How do mainstream JavaScript libraries accomplish this (Prototype, jQuery, etc)? Do these tools merge multiple JavaScript files into a single redistributable 'build' version of a script file? Or do they do any dynamic loading of ancillary 'library' scripts? An addition to this question: is there a way to handle the event after a dynamically included JavaScript file is loaded? Prototype has document.observe for document-wide events. Example: document.observe("dom:loaded", function() { // initially hide all containers for tab content $$('div.tabcontent').invoke('hide');}); What are the available events for a script element? | You may create a script element dynamically, using Prototype: new Element("script", {src: "myBigCodeLibrary.js", type: "text/javascript"}); The problem here is that we do not know when the external script file is fully loaded. We often want our dependent code on the very next line and like to write something like: if (iNeedSomeMore) { Script.load("myBigCodeLibrary.js"); // includes code for myFancyMethod(); myFancyMethod(); // cool, no need for callbacks!} There is a smart way to inject script dependencies without the need for callbacks. You simply have to pull the script via a synchronous AJAX request and eval the script at the global level. 
If you use Prototype, the Script.load method looks like this: var Script = { _loadedScripts: [], include: function(script) { // include script only once if (this._loadedScripts.include(script)) { return false; } // request file synchronously var code = new Ajax.Request(script, { asynchronous: false, method: "GET", evalJS: false, evalJSON: false }).transport.responseText; // eval code on global level if (Prototype.Browser.IE) { window.execScript(code); } else if (Prototype.Browser.WebKit) { $$("head").first().insert(Object.extend( new Element("script", { type: "text/javascript" }), { text: code } )); } else { window.eval(code); } // remember included script this._loadedScripts.push(script); }}; | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1341/"
]
} |
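The load-once, on-demand pattern in the Prototype code above has a direct analogue in Python's import machinery; a sketch:

```python
# Lazy, idempotent module loading: repeated calls reuse the cached module,
# mirroring the _loadedScripts check in the JavaScript version.
import importlib
import sys

def require(name):
    """Load a module on demand; subsequent calls return the cached instance."""
    if name in sys.modules:
        return sys.modules[name]
    return importlib.import_module(name)

mod = require("json")
```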
21,303 | VC++ makes functions which are implemented within the class declaration inline functions. If I declare a class Foo as follows, then are the CONSTRUCTOR and DESTRUCTOR inline functions? class Foo { char* p;public: Foo() { p = new char[0x00100000]; } ~Foo() { delete [] p; }};{ Foo f; (f);} | Defining the body of the constructor INSIDE the class has the same effect as placing the function OUTSIDE the class with the "inline" keyword. In both cases it's a hint to the compiler. An "inline" function doesn't necessarily mean the function will be inlined. That depends on the complexity of the function and other rules. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1556/"
]
} |
21,437 | What are some effective strategies for preventing the use of my proprietary images? I'm talking about saving them, direct linking to them etc... Presently I have a watermark on the image, but I'd rather not. .NET platform preferred, but if there's a strategy that's on another platform that integrates with my existing application that'd be a bonus. | It's not possible to make it "impossible" to download. When a user visits your site you're sending them the pictures. The user will have a copy of that image in the browser's cache and he'd be able to access it even after he leaves the site (depending on the browser, of course). Your only real option is to watermark them :O | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/21437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1975/"
]
} |
21,449 | What is the difference between the following types of endianness? byte (8b) invariant big and little endianness half-word (16b) invariant big and little endianness word (32b) invariant big and little endianness double-word (64b) invariant big and little endianness Are there other types/variations? | There are two approaches to endian mapping: address invariance and data invariance. Address Invariance In this type of mapping, the address of bytes is always preserved between big and little. This has the side effect of reversing the order of significance (most significant to least significant) of a particular datum (e.g. 2 or 4 byte word) and therefore the interpretation of data. Specifically, in little-endian, the interpretation of data is least-significant to most-significant bytes whilst in big-endian, the interpretation is most-significant to least-significant. In both cases, the set of bytes accessed remains the same. Example Address invariance (also known as byte invariance): the byte address is constant but byte significance is reversed. Addr Memory 7 0 | | (LE) (BE) |----| +0 | aa | lsb msb |----| +1 | bb | : : |----| +2 | cc | : : |----| +3 | dd | msb lsb |----| | |At Addr=0: Little-endian Big-endianRead 1 byte: 0xaa 0xaa (preserved)Read 2 bytes: 0xbbaa 0xaabbRead 4 bytes: 0xddccbbaa 0xaabbccdd Data Invariance In this type of mapping, the relative byte significance is preserved for data of a particular size. There are therefore different types of data invariant endian mappings for different datum sizes. For example, a 32-bit word invariant endian mapping would be used for a datum size of 32. The effect of preserving the value of a particular-sized datum is that the byte addresses of bytes within the datum are reversed between big and little endian mappings. Example 32-bit data invariance (also known as word invariance): The datum is a 32-bit word which always has the value 0xddccbbaa, independent of endianness. 
However, for accesses smaller than a word, the addresses of the bytes are reversed between big and little endian mappings. Addr Memory | +3 +2 +1 +0 | <- LE |-------------------|+0 msb | dd | cc | bb | aa | lsb |-------------------|+4 msb | 99 | 88 | 77 | 66 | lsb |-------------------| BE -> | +0 +1 +2 +3 |At Addr=0: Little-endian Big-endianRead 1 byte: 0xaa 0xddRead 2 bytes: 0xbbaa 0xddccRead 4 bytes: 0xddccbbaa 0xddccbbaa (preserved)Read 8 bytes: 0x99887766ddccbbaa 0x99887766ddccbbaa (preserved) Example 16-bit data invariance (also known as half-word invariance): The datum is a 16-bit half-word which always has the value 0xbbaa, independent of endianness. However, for accesses smaller than a half-word, the addresses of the bytes are reversed between big and little endian mappings. Addr Memory | +1 +0 | <- LE |---------|+0 msb | bb | aa | lsb |---------|+2 msb | dd | cc | lsb |---------|+4 msb | 77 | 66 | lsb |---------|+6 msb | 99 | 88 | lsb |---------| BE -> | +0 +1 |At Addr=0: Little-endian Big-endianRead 1 byte: 0xaa 0xbbRead 2 bytes: 0xbbaa 0xbbaa (preserved)Read 4 bytes: 0xddccbbaa 0xddccbbaa (preserved)Read 8 bytes: 0x99887766ddccbbaa 0x99887766ddccbbaa (preserved) Example 64-bit data invariance (also known as double-word invariance): The datum is a 64-bit double-word which always has the value 0x99887766ddccbbaa, independent of endianness. However, for accesses smaller than a double-word, the addresses of the bytes are reversed between big and little endian mappings. Addr Memory | +7 +6 +5 +4 +3 +2 +1 +0 | <- LE |---------------------------------------|+0 msb | 99 | 88 | 77 | 66 | dd | cc | bb | aa | lsb |---------------------------------------| BE -> | +0 +1 +2 +3 +4 +5 +6 +7 |At Addr=0: Little-endian Big-endianRead 1 byte: 0xaa 0x99Read 2 bytes: 0xbbaa 0x9988Read 4 bytes: 0xddccbbaa 0x99887766Read 8 bytes: 0x99887766ddccbbaa 0x99887766ddccbbaa (preserved) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/21449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2045/"
]
} |
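The byte layouts tabulated above can be checked mechanically; a Python sketch using the struct module:

```python
# Pack the same 32-bit value both ways and inspect the resulting byte order.
import struct

value = 0xDDCCBBAA

little = struct.pack("<I", value)  # least-significant byte at address 0
big = struct.pack(">I", value)     # most-significant byte at address 0

# Address-invariant view: rereading the LE bytes as a BE word flips the
# value, matching the "Read 4 bytes: 0xddccbbaa / 0xaabbccdd" row above.
reinterpreted = struct.unpack(">I", little)[0]
```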
21,454 | How do I go about specifying and using an ENUM in a Django model? | From the Django documentation: MAYBECHOICE = ( ('y', 'Yes'), ('n', 'No'), ('u', 'Unknown'),) And you define a charfield in your model: married = models.CharField(max_length=1, choices=MAYBECHOICE) You can do the same with integer fields if you don't like to have letters in your db. In that case, rewrite your choices: MAYBECHOICE = ( (0, 'Yes'), (1, 'No'), (2, 'Unknown'),) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/21454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2019/"
]
} |
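Outside the ORM, the choices idea reduces to a value-to-label mapping plus validation; a framework-free sketch:

```python
# Plain-Python version of the choices pattern: store the short code,
# map it to a label, and reject anything outside the allowed set.
MAYBECHOICE = (
    ("y", "Yes"),
    ("n", "No"),
    ("u", "Unknown"),
)

LABELS = dict(MAYBECHOICE)

def validate_choice(value):
    """Mimic what the model field enforces at validation time."""
    if value not in LABELS:
        raise ValueError(f"{value!r} is not a valid choice")
    return value

label = LABELS[validate_choice("y")]
```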
21,547 | I've spent a good amount of time coming up with solution to this problem, so in the spirit of this post , I'm posting it here, since I think it might be useful to others. If anyone has a better script, or anything to add, please post it. Edit: Yes guys, I know how to do it in Management Studio - but I needed to be able to do it from within another application. | I've modified the version above to run for all tables and support new SQL 2005 data types. It also retains the primary key names. Works only on SQL 2005 (using cross apply). select 'create table [' + so.name + '] (' + o.list + ')' + CASE WHEN tc.Constraint_Name IS NULL THEN '' ELSE 'ALTER TABLE ' + so.Name + ' ADD CONSTRAINT ' + tc.Constraint_Name + ' PRIMARY KEY ' + ' (' + LEFT(j.List, Len(j.List)-1) + ')' ENDfrom sysobjects socross apply (SELECT ' ['+column_name+'] ' + data_type + case data_type when 'sql_variant' then '' when 'text' then '' when 'ntext' then '' when 'xml' then '' when 'decimal' then '(' + cast(numeric_precision as varchar) + ', ' + cast(numeric_scale as varchar) + ')' else coalesce('('+case when character_maximum_length = -1 then 'MAX' else cast(character_maximum_length as varchar) end +')','') end + ' ' + case when exists ( select id from syscolumns where object_name(id)=so.name and name=column_name and columnproperty(id,name,'IsIdentity') = 1 ) then 'IDENTITY(' + cast(ident_seed(so.name) as varchar) + ',' + cast(ident_incr(so.name) as varchar) + ')' else '' end + ' ' + (case when UPPER(IS_NULLABLE) = 'NO' then 'NOT ' else '' end ) + 'NULL ' + case when information_schema.columns.COLUMN_DEFAULT IS NOT NULL THEN 'DEFAULT '+ information_schema.columns.COLUMN_DEFAULT ELSE '' END + ', ' from information_schema.columns where table_name = so.name order by ordinal_position FOR XML PATH('')) o (list)left join information_schema.table_constraints tcon tc.Table_name = so.NameAND tc.Constraint_Type = 'PRIMARY KEY'cross apply (select '[' + Column_Name + '], ' FROM 
information_schema.key_column_usage kcu WHERE kcu.Constraint_Name = tc.Constraint_Name ORDER BY ORDINAL_POSITION FOR XML PATH('')) j (list)where xtype = 'U'AND name NOT IN ('dtproperties') Update: Added handling of the XML data type Update 2: Fixed cases when 1) there is multiple tables with the same name but with different schemas, 2) there is multiple tables having PK constraint with the same name | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21547",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/369/"
]
} |
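As a much-simplified sketch of the same idea in another language — generating CREATE TABLE DDL from column metadata — the following Python function is purely illustrative. It deliberately omits identity seeds, defaults, and primary-key constraints, all of which the SQL script above handles; in practice the column tuples would come from a query against information_schema.columns.

```python
def create_table_ddl(table, columns):
    """Build a minimal CREATE TABLE statement.

    columns is a list of (name, sql_type, nullable) tuples. This sketch
    ignores identity columns, defaults, and key constraints on purpose.
    """
    col_lines = []
    for name, sql_type, nullable in columns:
        null_sql = "NULL" if nullable else "NOT NULL"
        col_lines.append(f"    [{name}] {sql_type} {null_sql}")
    return f"CREATE TABLE [{table}] (\n" + ",\n".join(col_lines) + "\n)"
```

The table and column names in any call to this helper are made up for the example; nothing here is tied to a real schema.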
21,558 | I want to know what a " virtual base class " is and what it means. Let me show an example: class Foo{public: void DoSomething() { /* ... */ }};class Bar : public virtual Foo{public: void DoSpecific() { /* ... */ }}; | Virtual base classes, used in virtual inheritance, is a way of preventing multiple "instances" of a given class appearing in an inheritance hierarchy when using multiple inheritance. Consider the following scenario: class A { public: void Foo() {} };class B : public A {};class C : public A {};class D : public B, public C {}; The above class hierarchy results in the "dreaded diamond" which looks like this: A / \B C \ / D An instance of D will be made up of B, which includes A, and C which also includes A. So you have two "instances" (for want of a better expression) of A. When you have this scenario, you have the possibility of ambiguity. What happens when you do this: D d;d.Foo(); // is this B's Foo() or C's Foo() ?? Virtual inheritance is there to solve this problem. When you specify virtual when inheriting your classes, you're telling the compiler that you only want a single instance. class A { public: void Foo() {} };class B : public virtual A {};class C : public virtual A {};class D : public B, public C {}; This means that there is only one "instance" of A included in the hierarchy. Hence D d;d.Foo(); // no longer ambiguous This is a mini summary. For more information, have a read of this and this . A good example is also available here . | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/21558",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1556/"
]
} |
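For readers coming from other languages: Python resolves the same diamond by a different mechanism (the C3 method resolution order rather than a virtual keyword), but the observable effect — exactly one shared base in the hierarchy, so no ambiguity — is analogous. A small illustrative sketch:

```python
class A:
    def foo(self):
        return "A.foo"

class B(A):
    pass

class C(A):
    pass

class D(B, C):
    pass

# The C3 linearization orders the hierarchy as D, B, C, A, object: A appears
# exactly once, so calling foo() on a D instance is unambiguous, much as
# virtual inheritance collapses the two A subobjects into one.
d = D()
print(d.foo())  # prints "A.foo"
```

This is an analogy only; C++ virtual inheritance and Python's MRO differ in semantics (object layout, constructor order), so the comparison is limited to the "one shared base" behavior.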
21,564 | I ended up writing a quick little script for this in Python, but I was wondering if there was a utility you could feed text into which would prepend each line with some text -- in my specific case, a timestamp. Ideally, the use would be something like: cat somefile.txt | prepend-timestamp (Before you answer sed, I tried this: cat somefile.txt | sed "s/^/`date`/" But that only evaluates the date command once when sed is executed, so the same timestamp is incorrectly prepended to each line.) | Could try using awk : <command> | awk '{ print strftime("%Y-%m-%d %H:%M:%S"), $0; fflush(); }' You may need to make sure that <command> produces line buffered output, i.e. it flushes its output stream after each line; the timestamp awk adds will be the time that the end of the line appeared on its input pipe. If awk shows errors, then try gawk instead. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/21564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/156/"
]
} |
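The question mentions a quick Python script for the same job; as a hedged sketch, a per-line timestamp filter could look like the following. The strftime format mirrors the awk example, and the `now` parameter is an artificial hook added here only so the sketch is testable with a fixed time.

```python
import time

def prepend_timestamps(lines, now=time.localtime):
    """Yield each input line prefixed with a timestamp.

    Unlike the sed attempt in the question, the timestamp is computed
    per line, at the moment that line is processed.
    """
    for line in lines:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S", now())
        yield f"{stamp} {line}"
```

To use it as a pipe filter, a small `__main__` wrapper would iterate `sys.stdin` and write each yielded line to `sys.stdout`, flushing after each line for the same line-buffering reason the awk answer mentions.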
21,574 | I'm not clear on the differences between the "current" version of Ruby (1.8) and the "new" version (1.9). Is there an "easy" or a "simple" explanation of the differences and why it is so different? | Sam Ruby has a cool slideshow that outline the differences . In the interest of bringing this information inline for easier reference, and in case the link goes dead in the abstract future, here's an overview of Sam's slides. The slideshow is less overwhelming to review, but having it all laid out in a list like this is also helpful. Ruby 1.9 - Major Features Performance Threads/Fibers Encoding/Unicode gems is (mostly) built-in now if statements do not introduce scope in Ruby. What's changed? Single character strings. Ruby 1.9 irb(main):001:0> ?c=> "c" Ruby 1.8.6 irb(main):001:0> ?c=> 99 String index. Ruby 1.9 irb(main):001:0> "cat"[1]=> "a" Ruby 1.8.6 irb(main):001:0> "cat"[1]=> 97 {"a","b"} No Longer Supported Ruby 1.9 irb(main):002:0> {1,2}SyntaxError: (irb):2: syntax error, unexpected ',', expecting tASSOC Ruby 1.8.6 irb(main):001:0> {1,2}=> {1=>2} Action: Convert to {1 => 2} Array.to_s Now Contains Punctuation Ruby 1.9 irb(main):001:0> [1,2,3].to_s=> "[1, 2, 3]" Ruby 1.8.6 irb(main):001:0> [1,2,3].to_s=> "123" Action: Use .join instead Colon No Longer Valid In When Statements Ruby 1.9 irb(main):001:0> case 'a'; when /\w/: puts 'word'; endSyntaxError: (irb):1: syntax error, unexpected ':',expecting keyword_then or ',' or ';' or '\n' Ruby 1.8.6 irb(main):001:0> case 'a'; when /\w/: puts 'word'; endword Action: Use semicolon, then, or newline Block Variables Now Shadow Local Variables Ruby 1.9 irb(main):001:0> i=0; [1,2,3].each {|i|}; i=> 0irb(main):002:0> i=0; for i in [1,2,3]; end; i=> 3 Ruby 1.8.6 irb(main):001:0> i=0; [1,2,3].each {|i|}; i=> 3 Hash.index Deprecated Ruby 1.9 irb(main):001:0> {1=>2}.index(2)(irb):18: warning: Hash#index is deprecated; use Hash#key=> 1irb(main):002:0> {1=>2}.key(2)=> 1 Ruby 1.8.6 irb(main):001:0> {1=>2}.index(2)=> 1 Action: 
Use Hash.key Fixnum.to_sym Now Gone Ruby 1.9 irb(main):001:0> 5.to_symNoMethodError: undefined method 'to_sym' for 5:Fixnum Ruby 1.8.6 irb(main):001:0> 5.to_sym=> nil (Cont'd) Ruby 1.9 # Find an argument value by name or index.def [](index) lookup(index.to_sym)end svn.ruby-lang.org/repos/ruby/trunk/lib/rake.rb Hash Keys Now Unordered Ruby 1.9 irb(main):001:0> {:a=>"a", :c=>"c", :b=>"b"}=> {:a=>"a", :c=>"c", :b=>"b"} Ruby 1.8.6 irb(main):001:0> {:a=>"a", :c=>"c", :b=>"b"}=> {:a=>"a", :b=>"b", :c=>"c"} Order is insertion order Stricter Unicode Regular Expressions Ruby 1.9 irb(main):001:0> /\x80/uSyntaxError: (irb):2: invalid multibyte escape: /\x80/ Ruby 1.8.6 irb(main):001:0> /\x80/u=> /\x80/u tr and Regexp Now Understand Unicode Ruby 1.9 unicode(string).tr(CP1252_DIFFERENCES, UNICODE_EQUIVALENT). gsub(INVALID_XML_CHAR, REPLACEMENT_CHAR). gsub(XML_PREDEFINED) {|c| PREDEFINED[c.ord]} pack and unpack Ruby 1.8.6 def xchr(escape=true) n = XChar::CP1252[self] || self case n when *XChar::VALID XChar::PREDEFINED[n] or (n>128 ? n.chr : (escape ? 
"&##{n};" : [n].pack('U*'))) else Builder::XChar::REPLACEMENT_CHAR endendunpack('U*').map {|n| n.xchr(escape)}.join BasicObject More Brutal Than BlankSlate Ruby 1.9 irb(main):001:0> class C < BasicObject; def f; Math::PI; end; end; C.new.fNameError: uninitialized constant C::Math Ruby 1.8.6 irb(main):001:0> require 'blankslate'=> trueirb(main):002:0> class C < BlankSlate; def f; Math::PI; end; end; C.new.f=> 3.14159265358979 Action: Use ::Math::PI Delegation Changes Ruby 1.9 irb(main):002:0> class C < SimpleDelegator; end=> nilirb(main):003:0> C.new('').class=> String Ruby 1.8.6 irb(main):002:0> class C < SimpleDelegator; end=> nilirb(main):003:0> C.new('').class=> Cirb(main):004:0> Defect 17700 Use of $KCODE Produces Warnings Ruby 1.9 irb(main):004:1> $KCODE = 'UTF8'(irb):4: warning: variable $KCODE is no longer effective; ignored=> "UTF8" Ruby 1.8.6 irb(main):001:0> $KCODE = 'UTF8'=> "UTF8" instance_methods Now an Array of Symbols Ruby 1.9 irb(main):001:0> {}.methods.sort.last=> :zip Ruby 1.8.6 irb(main):001:0> {}.methods.sort.last=> "zip" Action: Replace instance_methods.include? with method_defined? Source File Encoding Basic # coding: utf-8 Emacs # -*- encoding: utf-8 -*- Shebang #!/usr/local/rubybook/bin/ruby# encoding: utf-8 Real Threading Race Conditions Implicit Ordering Assumptions Test Code What's New? Alternate Syntax for Symbol as Hash Keys Ruby 1.9 {a: b}redirect_to action: show Ruby 1.8.6 {:a => b}redirect_to :action => show Block Local Variables Ruby 1.9 [1,2].each {|value; t| t=value*value} Inject Methods Ruby 1.9 [1,2].inject(:+) Ruby 1.8.6 [1,2].inject {|a,b| a+b} to_enum Ruby 1.9 short_enum = [1, 2, 3].to_enumlong_enum = ('a'..'z').to_enumloop do puts "#{short_enum.next} #{long_enum.next}"end No block? Enum! 
Ruby 1.9 e = [1,2,3].each Lambda Shorthand Ruby 1.9 p = -> a,b,c {a+b+c}puts p.(1,2,3)puts p[1,2,3] Ruby 1.8.6 p = lambda {|a,b,c| a+b+c}puts p.call(1,2,3) Complex Numbers Ruby 1.9 Complex(3,4) == 3 + 4.im Decimal Is Still Not The Default Ruby 1.9 irb(main):001:0> 1.2-1.1=> 0.0999999999999999 Regex “Properties” Ruby 1.9 /\p{Space}/ Ruby 1.8.6 /[:space:]/ Splat in Middle Ruby 1.9 def foo(first, *middle, last)(->a, *b, c {p a-c}).(*5.downto(1)) Fibers Ruby 1.9 f = Fiber.new do a,b = 0,1 Fiber.yield a Fiber.yield b loop do a,b = b,a+b Fiber.yield b endend10.times {puts f.resume} Break Values Ruby 1.9 match = while line = gets next if line =~ /^#/ break line if line.find('ruby') end “Nested” Methods Ruby 1.9 def toggle def toggle "subsequent times" end "first time"end HTH! | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21574",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/757/"
]
} |
21,583 | This past summer I was developing a basic ASP.NET/SQL Server CRUD app, and unit testing was one of the requirements. I ran into some trouble when I tried to test against the database. To my understanding, unit tests should be: stateless independent from each other repeatable with the same results i.e. no persisting changes These requirements seem to be at odds with each other when developing for a database. For example, I can't test Insert() without making sure the rows to be inserted aren't there yet, thus I need to call the Delete() first. But, what if they aren't already there? Then I would need to call the Exists() function first. My eventual solution involved very large setup functions (yuck!) and an empty test case which would run first and indicate that the setup ran without problems. This is sacrificing on the independence of the tests while maintaining their statelessness. Another solution I found is to wrap the function calls in a transaction which can be easily rolled back, like Roy Osherove's XtUnit . This work, but it involves another library, another dependency, and it seems a little too heavy of a solution for the problem at hand. So, what has the SO community done when confronted with this situation? tgmdbm said: You typically use your favourite automated unit testing framework to perform integration tests, which is why some people get confused, but they don't follow the same rules. You are allowed to involve the concrete implementation of many of your classes (because they've been unit tested). You are testing how your concrete classes interact with each other and with the database . So if I read this correctly, there is really no way to effectively unit-test a Data Access Layer. Or, would a "unit test" of a Data Access Layer involve testing, say, the SQL/commands generated by the classes, independent of actual interaction with the database? 
| There's no real way to unit test a database other than asserting that the tables exist, contain the expected columns, and have the appropriate constraints. But that's usually not really worth doing. You don't typically unit test the database. You usually involve the database in integration tests. You typically use your favourite automated unit testing framework to perform integration tests, which is why some people get confused, but they don't follow the same rules. You are allowed to involve the concrete implementation of many of your classes (because they've been unit tested). You are testing how your concrete classes interact with each other and with the database. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/21583",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1266/"
]
} |
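As a hedged sketch of the integration-test style the answer describes — here in Python with an in-memory SQLite database, chosen purely for illustration — each test builds a fresh schema in setUp, so tests stay independent and repeatable without the Exists()/Delete() dance or giant setup functions. The table and test names are hypothetical.

```python
import sqlite3
import unittest

class MemberDaoIntegrationTest(unittest.TestCase):
    """Integration-style test: a fresh in-memory database per test.

    Because the schema is rebuilt in setUp, every test starts from a
    known, empty state, which keeps the tests stateless and repeatable.
    """

    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE members (id INTEGER PRIMARY KEY, email TEXT UNIQUE)"
        )

    def tearDown(self):
        self.conn.close()

    def test_insert_new_member(self):
        self.conn.execute("INSERT INTO members (email) VALUES ('a@example.com')")
        count = self.conn.execute("SELECT COUNT(*) FROM members").fetchone()[0]
        self.assertEqual(count, 1)

    def test_duplicate_email_is_rejected(self):
        self.conn.execute("INSERT INTO members (email) VALUES ('a@example.com')")
        with self.assertRaises(sqlite3.IntegrityError):
            self.conn.execute("INSERT INTO members (email) VALUES ('a@example.com')")
```

Against a real server the same shape applies, with the per-test transaction rolled back in tearDown instead of recreating the database.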
21,593 | What is the difference between using angle brackets and quotes in an include directive? #include <filename> #include "filename" | What differs is the locations in which the preprocessor searches for the file to be included. #include <filename> The preprocessor searches in an implementation-defined manner, normally in directories pre-designated by the compiler/IDE. This method is normally used to include header files for the C standard library and other header files associated with the target platform. #include "filename" The preprocessor also searches in an implementation-defined manner, but one that is normally used to include programmer-defined header files and typically includes same directory as the file containing the directive (unless an absolute path is given). For GCC, a more complete description is available in the GCC documentation on search paths . | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/21593",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2399/"
]
} |
21,640 | Is there a simple way in .NET to quickly get the current protocol, host, and port? For example, if I'm on the following URL: http://www.mywebsite.com:80/pages/page1.aspx I need to return: http://www.mywebsite.com:80 I know I can use Request.Url.AbsoluteUri to get the complete URL, and I know I can use Request.Url.Authority to get the host and port, but I'm not sure of the best way to get the protocol without parsing out the URL string. Any suggestions? | The following (C#) code should do the trick Uri uri = new Uri("http://www.mywebsite.com:80/pages/page1.aspx");string requested = uri.Scheme + Uri.SchemeDelimiter + uri.Host + ":" + uri.Port; | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/21640",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2076253/"
]
} |
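The same extraction in Python — offered only for comparison, since the question is C#-specific — can be done with urllib.parse. The default-port table below is an assumption added for URLs that omit an explicit port, which the C# Uri class handles via its Port property.

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}  # assumed defaults for this sketch

def origin_with_port(url):
    """Return scheme://host:port for a URL, filling in a default port."""
    parts = urlsplit(url)
    port = parts.port if parts.port is not None else DEFAULT_PORTS.get(parts.scheme)
    return f"{parts.scheme}://{parts.hostname}:{port}"
```

For example, `origin_with_port("http://www.mywebsite.com:80/pages/page1.aspx")` yields the `http://www.mywebsite.com:80` string the question asks for.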
21,647 | The following C++ code uses an ifstream object to read integers from a text file (which has one number per line) until it hits EOF . Why does it read the integer on the last line twice? How to fix this? Code: #include <iostream>#include <fstream>using namespace std;int main(){ ifstream iFile("input.txt"); // input.txt has integers, one per line while (!iFile.eof()) { int x; iFile >> x; cerr << x << endl; } return 0;} input.txt : 10 20 30 Output : 10 20 30 30 Note : I've skipped all error checking code to keep the code snippet small. The above behaviour is seen on Windows (Visual C++), cygwin (gcc) and Linux (gcc). | Just follow closely the chain of events. Grab 10 Grab 20 Grab 30 Grab EOF Look at the second-to-last iteration. You grabbed 30, then carried on to check for EOF. You haven't reached EOF because the EOF mark hasn't been read yet ("binarically" speaking, its conceptual location is just after the 30 line). Therefore you carry on to the next iteration. x is still 30 from the previous iteration. Now you read from the stream and you get EOF. x remains 30 and the ios::eofbit is raised. You output to stderr x (which is 30, just like in the previous iteration). Next you check for EOF in the loop condition, and this time you're out of the loop. Try this: while (true) { int x; iFile >> x; if( iFile.eof() ) break; cerr << x << endl;} By the way, there is another bug in your code. Did you ever try to run it on an empty file? The behaviour you get is for the exact same reason. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/21647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1630/"
]
} |
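The same "EOF is only known after a read fails" logic can be illustrated in Python. This is a deliberately C++-flavored sketch — idiomatic Python would simply iterate over the file — but it shows the test-after-read pattern the answer recommends:

```python
import io

def read_ints(stream):
    """Read one integer per line until EOF, checking *after* each read."""
    values = []
    while True:
        line = stream.readline()
        if line == "":      # readline() returns "" only at end of file:
            break           # EOF is discovered after a read comes back empty
        values.append(int(line))
    return values

print(read_ints(io.StringIO("10\n20\n30\n")))  # prints [10, 20, 30]
```

Note it also handles the empty-input case the answer warns about: on an empty stream the very first read reports EOF and nothing is emitted.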
21,669 | I didn't get the answer to this anywhere. What is the runtime complexity of a Regex match and substitution? Edit: I work in python. But would like to know in general about most popular languages/tools (java, perl, sed). | From a purely theoretical stance: The implementation I am familiar with would be to build a Deterministic Finite Automaton to recognize the regex. This is done in O(2^m), m being the size of the regex, using a standard algorithm. Once this is built, running a string through it is linear in the length of the string - O(n), n being string length. A replacement on a match found in the string should be constant time. So overall, I suppose O(2^m + n). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/21669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
]
} |
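To make the "linear in the string" claim concrete, here is a toy, assumption-heavy Python matcher that simulates NFA state sets instead of backtracking. It supports only literal characters, `.`, and postfix `*`; per input character it does at most O(m) work over the pattern positions, so a whole match is O(n·m) — in contrast to the exponential worst cases of backtracking engines like Python's `re`.

```python
def nfa_match(pattern, text):
    """Toy regex matcher: literals, '.', and postfix '*' only.

    Tracks the set of pattern positions the input could currently be at,
    so each input character costs O(len(pattern)) rather than backtracking.
    """
    # Tokenize into (char, starred) pairs.
    tokens, i = [], 0
    while i < len(pattern):
        starred = i + 1 < len(pattern) and pattern[i + 1] == "*"
        tokens.append((pattern[i], starred))
        i += 2 if starred else 1

    def closure(states):
        # A starred token may match zero times, so being at its position
        # also implies being at the following position.
        result, stack = set(), list(states)
        while stack:
            s = stack.pop()
            if s in result:
                continue
            result.add(s)
            if s < len(tokens) and tokens[s][1]:
                stack.append(s + 1)
        return result

    states = closure({0})
    for ch in text:
        nxt = set()
        for s in states:
            if s < len(tokens) and tokens[s][0] in (ch, "."):
                nxt.add(s + 1)
                if tokens[s][1]:    # a starred token may match again
                    nxt.add(s)
        states = closure(nxt)
    return len(tokens) in states
```

This is only a sketch of the state-set idea; production engines in Perl, Java, and Python use backtracking implementations with different complexity characteristics, as the other discussion notes.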
21,697 | I'm currently writing an ASP.Net app from the UI down. I'm implementing an MVP architecture because I'm sick of Winforms and wanted something that had a better separation of concerns. So with MVP, the Presenter handles events raised by the View. Here's some code that I have in place to deal with the creation of users: public class CreateMemberPresenter{ private ICreateMemberView view; private IMemberTasks tasks; public CreateMemberPresenter(ICreateMemberView view) : this(view, new StubMemberTasks()) { } public CreateMemberPresenter(ICreateMemberView view, IMemberTasks tasks) { this.view = view; this.tasks = tasks; HookupEventHandlersTo(view); } private void HookupEventHandlersTo(ICreateMemberView view) { view.CreateMember += delegate { CreateMember(); }; } private void CreateMember() { if (!view.IsValid) return; try { int newUserId; tasks.CreateMember(view.NewMember, out newUserId); view.NewUserCode = newUserId; view.Notify(new NotificationDTO() { Type = NotificationType.Success }); } catch(Exception e) { this.LogA().Message(string.Format("Error Creating User: {0}", e.Message)); view.Notify(new NotificationDTO() { Type = NotificationType.Failure, Message = "There was an error creating a new member" }); } }} I have my main form validation done using the built-in .Net Validation Controls, but now I need to verify that the data sufficiently satisfies the criteria for the Service Layer. Let's say the following Service Layer messages can show up: E-mail account already exists (failure) Referring user entered does not exist (failure) Password length exceeds datastore allowed length (failure) Member created successfully (success) Let's also say that more rules will be in the service layer that the UI cannot anticipate. Currently I'm having the service layer throw an exception if things didn't go as planned. Is that a sufficient strategy? Does this code smell to you guys? 
If I wrote a service layer like this would you be annoyed at having to write Presenters that use it in this way? Return codes seem too old school and a bool is just not informative enough. Edit not by OP: merging in follow-up comments that were posted as answers by the OP Cheekysoft, I like the concept of a ServiceLayerException. I already have a global exception module for the exceptions that I don't anticipate. Do you find making all these custom exceptions tedious? I was thinking that catching the base Exception class was a bit smelly but wasn't exactly sure how to progress from there. tgmdbm, I like the clever use of the lambda expression there! Thanks Cheekysoft for the follow-up. So I'm guessing that would be the strategy if you don't mind the user being displayed a separate page (I'm primarily a web developer) if the Exception is not handled. However, if I want to return the error message in the same view where the user submitted the data that caused the error, I would then have to catch the Exception in the Presenter? Here's what the CreateUserView looks like when the Presenter has handled the ServiceLayerException: For this kind of error, it's nice to report it to the same view. Anyways, I think we're going beyond the scope of my original question now. I'll play around with what you've posted and if I need further details I'll post a new question. | That sounds just right to me. Exceptions are preferable as they can be thrown up to the top of the service layer from anywhere inside the service layer, no matter how deeply nested inside the service method implementation it is. This keeps the service code clean as you know the calling presenter will always get notification of the problem. Don't catch Exception However, don't catch Exception in the presenter; I know it's tempting because it keeps the code shorter, but you need to catch specific exceptions to avoid catching the system-level exceptions. 
Plan a Simple Exception Hierarchy If you are going to use exceptions in this way, you should design an exception hierarchy for your own exception classes. At a minimum, create a ServiceLayerException class and throw one of these in your service methods when a problem occurs. Then if you need to throw an exception that should/could be handled differently by the presenter, you can throw a specific subclass of ServiceLayerException: say, AccountAlreadyExistsException. Your presenter then has the option of doing try { // call service etc. // handle success to view} catch (AccountAlreadyExistsException) { // set the message and some other unique data in the view}catch (ServiceLayerException) { // set the message in the view}// system exceptions, and unrecoverable exceptions are allowed to bubble // up the call stack so a general error can be shown to the user, rather // than showing the form again. 
"score": 5,
"source": [
"https://Stackoverflow.com/questions/21697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1894/"
]
} |
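Translated to Python purely for illustration — every name below is hypothetical, standing in for the C# types in the answer — the same "small exception hierarchy; catch the specific subclass first, then the base, and let system errors bubble" pattern looks like this:

```python
class ServiceLayerError(Exception):
    """Base class for failures the presenter is expected to report."""

class AccountAlreadyExistsError(ServiceLayerError):
    pass

def create_member(registry, email):
    """Hypothetical service method; registry is a set of known emails."""
    if email in registry:
        raise AccountAlreadyExistsError(f"{email} is already registered")
    registry.add(email)
    return len(registry)          # stand-in for the new user id

def present_create_member(registry, email):
    """Presenter-side handling: specific exception first, then the base."""
    try:
        return ("success", create_member(registry, email))
    except AccountAlreadyExistsError as err:
        return ("duplicate", str(err))  # e.g. re-show the same view
    except ServiceLayerError as err:
        return ("failure", str(err))    # generic service-layer error
    # System-level exceptions are deliberately not caught here; they
    # bubble up the call stack, as the answer recommends.
```

Because the duplicate-account case is caught before the base class, the presenter can report it back into the same view, which is exactly the follow-up scenario the OP describes.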