106,880
I am trying to use the InternalsVisibleTo assembly attribute to make my internal classes in a .NET class library visible to my unit test project. For some reason, I keep getting an error message that says:

    'MyClassName' is inaccessible due to its protection level

Both assemblies are signed and I have the correct key listed in the attribute declaration. Any ideas?
Are you absolutely sure you have the correct public key specified in the attribute? Note that you need to specify the full public key, not just the public key token. It looks something like:

    [assembly: InternalsVisibleTo("MyFriendAssembly, PublicKey=0024000004800000940000000602000000240000525341310004000001000100F73F4DDC11F0CA6209BC63EFCBBAC3DACB04B612E04FA07F01D919FB5A1579D20283DC12901C8B66A08FB8A9CB6A5E81989007B3AA43CD7442BED6D21F4D33FB590A46420FB75265C889D536A9519674440C3C2FB06C5924360243CACD4B641BE574C31A434CE845323395842FAAF106B234C2C1406E2F553073FF557D2DB6C5")]

It's 320 or so hex digits. The likely reason you need to specify the full public key, rather than just the public key token used in other assembly references, is that with only the token it would be easier for someone to spoof the friend assembly's identity.
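If you need to get the full public key of your signed test assembly, the Strong Name tool that ships with the .NET SDK can print it (a sketch; the assembly name is a placeholder):

    sn -Tp MyFriendAssembly.dll

The output includes both the full public key (the form InternalsVisibleTo wants) and the shorter public key token used in ordinary assembly references.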
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/106880", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14101/" ] }
106,896
I'm used to the Java model where you can have one public class per file. Python doesn't have this restriction, and I'm wondering what's the best practice for organizing classes.
A Python file is called a "module" and it's one way to organize your software so that it makes "sense". Another is a directory, called a "package". A module is a distinct thing that may have one or two dozen closely-related classes. The trick is that a module is something you'll import, and you need that import to be perfectly sensible to people who will read, maintain and extend your software. The rule is this: a module is the unit of reuse. You can't easily reuse a single class. You should be able to reuse a module without any difficulties. Everything in your library (and everything you download and add) is either a module or a package of modules. For example, you're working on something that reads spreadsheets, does some calculations and loads the results into a database. What do you want your main program to look like?

    from ssReader import Reader
    from theCalcs import ACalc, AnotherCalc
    from theDB import Loader

    def main(sourceFileName):
        rdr = Reader(sourceFileName)
        c1 = ACalc(options)
        c2 = AnotherCalc(options)
        ldr = Loader(parameters)
        for myObj in rdr.readAll():
            c1.thisOp(myObj)
            c2.thatOp(myObj)
            ldr.load(myObj)

Think of the import as the way to organize your code in concepts or chunks. Exactly how many classes are in each import doesn't matter. What matters is the overall organization that you're portraying with your import statements.
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/106896", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14139/" ] }
106,907
We put all of our unit tests in their own projects. We find that we have to make certain classes public instead of internal just for the unit tests. Is there any way to avoid having to do this? What are the memory implications of making classes public instead of sealed?
If you're using .NET, the InternalsVisibleTo assembly attribute allows you to create "friend" assemblies. These are specific strongly named assemblies that are allowed to access internal classes and members of the other assembly. Note, this should be used with discretion as it tightly couples the involved assemblies. A common use for InternalsVisibleTo is for unit testing projects. It's probably not a good choice for use in your actual application assemblies, for the reason stated above. Example:

    [assembly: InternalsVisibleTo("NameAssemblyYouWantToPermitAccess")]

    namespace NameOfYourNameSpace
    {
        // ...
    }
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/106907", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4653/" ] }
106,941
I'm a pretty new C# and .NET developer. I recently created an MMC snapin using C# and was gratified by how easy it was to do, especially after hearing a lot of horror stories from some other developers in my organisation about how hard it is to do in C++. I pretty much went through the whole project at some point and changed every instance of the "public" keyword to "internal", except where required by the runtime in order to run the snapin. What is your feeling on this: should you generally make classes and methods public or internal?
I believe in blackboxes where possible. As a programmer, I want a well defined blackbox which I can easily drop into my systems, and have it work. I give it values, call the appropriate methods, and then get my results back out of it. To that end, give me only the functionality that the class needs to expose to work. Consider an elevator. To get it to go to a floor, I push a button. That's the public interface to the black box which activates all the functions needed to get the elevator to the desired floor.
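A rough C# sketch of the elevator idea (all names are made up for illustration): only the "button" is public, and the machinery stays hidden:

    public class Elevator
    {
        private int _currentFloor = 1;      // hidden internal state

        public void GoToFloor(int floor)    // the single public "button"
        {
            MoveTo(floor);
        }

        private void MoveTo(int floor)      // internal machinery stays private
        {
            _currentFloor = floor;
        }
    }

Callers only see GoToFloor; everything else can change freely without breaking them.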
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/106941", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3146/" ] }
106,963
I'm building the world's simplest library application. All I want to be able to do is scan in a book's UPC (barcode) using a typical scanner (which just types the numbers of the barcode into a field) and then use it to look up data about the book... at a minimum, title, author, year published, and either the Dewey Decimal or Library of Congress catalog number. The goal is to print out a tiny sticker ("spine label") with the card catalog number that I can stick on the spine of the book, and then I can sort the books by card catalog number on the shelves in our company library. That way books on similar subjects will tend to be near each other, for example, if you know you're looking for a book about accounting, all you have to do is find SOME book about accounting and you'll see the other half dozen that we have right next to it which makes it convenient to browse the library. There seem to be lots of web APIs to do this, including Amazon and the Library of Congress. But those are all extremely confusing to me. What I really just want is a single higher level function that takes a UPC barcode number and returns some basic data about the book.
There's a very straightforward web based solution over at ISBNDB.com that you may want to look at. Edit: Updated API documentation link, now there's version 2 available as well. Link to prices and tiers here. You can be up and running in just a few minutes (these examples are from API v1):

    register on the site and get a key to use the API
    try a URL like: http://isbndb.com/api/books.xml?access_key={yourkey}&index1=isbn&results=details&value1=9780143038092

The results=details gets additional details including the card catalog number. As an aside, generally the barcode is the ISBN in either ISBN10 or ISBN13 form. You just have to delete the last 5 numbers if you are using a scanner and you pick up 18 numbers. Here's a sample response:

    <ISBNdb server_time="2008-09-21T00:08:57Z">
      <BookList total_results="1" page_size="10" page_number="1" shown_results="1">
        <BookData book_id="the_joy_luck_club_a12" isbn="0143038095">
          <Title>The Joy Luck Club</Title>
          <TitleLong/>
          <AuthorsText>Amy Tan, </AuthorsText>
          <PublisherText publisher_id="penguin_non_classics">Penguin (Non-Classics)</PublisherText>
          <Details dewey_decimal="813.54" physical_description_text="288 pages" language=""
                   edition_info="Paperback; 2006-09-21" dewey_decimal_normalized="813.54" lcc_number=""
                   change_time="2006-12-11T06:26:55Z" price_time="2008-09-20T23:51:33Z"/>
        </BookData>
      </BookList>
    </ISBNdb>
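If you want to wrap that URL pattern in the single high-level function the question asks for, here is a rough Python 2 sketch (the function name is made up, and it targets the v1 API shown above):

    import urllib

    def lookup_isbn(isbn, access_key):
        url = ("http://isbndb.com/api/books.xml?access_key=%s"
               "&index1=isbn&results=details&value1=%s" % (access_key, isbn))
        return urllib.urlopen(url).read()   # raw XML like the sample above

Parsing out Title, AuthorsText and dewey_decimal from the returned XML is then a small job for any XML library.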
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/106963", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4/" ] }
106,965
Is there a way to read a locked file across a network given that you are the machine admin on the remote machine? I haven't been able to read the locked file locally, and attempting it over the network adds another layer of difficulty.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/106965", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2712/" ] }
107,005
I'm trying to find if there is a reliable way (using SQLite) to find the ID of the next row to be inserted, before it gets inserted. I need to use the id for another insert statement, but don't have the option of instantly inserting and getting the next row. Is predicting the next id as simple as getting the last id and adding one? Is that a guarantee? Edit: A little more reasoning... I can't insert immediately because the insert may end up being canceled by the user. User will make some changes, SQL statements will be stored, and from there the user can either save (inserting all the rows at once), or cancel (not changing anything). In the case of a program crash, the desired functionality is that nothing gets changed.
Either scrapping or committing a series of database operations all at once is exactly what transactions are for. Issue BEGIN; before the user starts fiddling and COMMIT; once he/she's done. You're guaranteed that either all the changes are applied (if you commit) or everything is scrapped (if you issue ROLLBACK;, or if the program crashes, the power goes out, etc.). Once you read from the db, you're also guaranteed that the data is good until the end of the transaction, so you can grab MAX(id) or whatever you want without worrying about race conditions. http://www.sqlite.org/lang_transaction.html
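As a sketch of how this plays out (table and column names are hypothetical), SQLite's built-in last_insert_rowid() even removes the need to predict the id inside the transaction:

    BEGIN;
    INSERT INTO parent (name) VALUES ('example');
    -- last_insert_rowid() returns the id of the row just inserted
    INSERT INTO child (parent_id, note)
    VALUES (last_insert_rowid(), 'refers to the new parent row');
    COMMIT;   -- or ROLLBACK; to discard both inserts

If the user cancels, a single ROLLBACK undoes the whole batch, which matches the save/cancel behavior described in the question.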
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14278/" ] }
107,117
I noticed that you can call Queue.Synchronized to get a thread-safe queue object, but the same method isn't available on Queue&lt;T&gt;. Does anyone know why? Seems kind of weird.
Update - in .NET 4, there is now ConcurrentQueue&lt;T&gt; in System.Collections.Concurrent, as documented here: http://msdn.microsoft.com/en-us/library/dd267265.aspx. It's interesting to note that its IsSynchronized property (rightly) returns false. ConcurrentQueue&lt;T&gt; is a complete ground-up rewrite, creating copies of the queue to enumerate, and using advanced lock-free techniques like Interlocked.CompareExchange() and Thread.SpinWait(). The rest of this answer is still relevant insofar as it relates to the demise of the old Synchronized() and SyncRoot members, and why they didn't work very well from an API perspective. As per Zooba's comment, the BCL team decided that too many developers were misunderstanding the purpose of Synchronized (and to a lesser extent, SyncRoot). Brian Grunkemeyer described this on the BCL team blog a couple of years back: http://blogs.msdn.com/bclteam/archive/2005/03/15/396399.aspx. The key issue is getting the correct granularity around locks, where some developers would naively use multiple properties or methods on a "synchronized" collection and believe their code to be thread safe. Brian uses Queue as his example:

    if (queue.Count > 0)
    {
        object obj = null;
        try
        {
            obj = queue.Dequeue();

Developers wouldn't realize that Count could be changed by another thread before Dequeue was invoked. Forcing developers to use an explicit lock statement around the whole operation means preventing this false sense of security. As Brian mentions, the removal of SyncRoot was partly because it had mainly been introduced to support Synchronized, but also because in many cases there is a better choice of lock object - most of the time, either the Queue instance itself, or a private static object lockObjForQueueOperations = new object(); on the class owning the instance of the Queue... This latter approach is usually safest, as it avoids some other common traps: never lock(this), and don't lock(string) or lock(typeof(A)). As they say, threading is hard, and making it seem easy can be dangerous.
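To make the granularity point concrete, here is a sketch of the compound operation done safely, first with an explicit lock and then with the .NET 4 ConcurrentQueue&lt;T&gt; mentioned above (variable names are illustrative; assumes System.Collections.Generic and System.Collections.Concurrent):

    var queue = new Queue<string>();
    var queueLock = new object();

    string head = null;
    lock (queueLock)                // the lock covers the whole check-then-act
    {
        if (queue.Count > 0)
            head = queue.Dequeue();
    }

    var concurrentQueue = new ConcurrentQueue<string>();
    string item;
    if (concurrentQueue.TryDequeue(out item))   // atomic check-and-remove
    {
        // use item
    }

TryDequeue collapses the Count check and the Dequeue into one atomic call, which is exactly the race the old Synchronized wrapper failed to prevent.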
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107117", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14101/" ] }
107,132
As a follow up to " What are indexes and how can I use them to optimise queries in my database? " where I am attempting to learn about indexes, what columns are good index candidates? Specifically for an MS SQL database? After some googling, everything I have read suggests that columns that are generally increasing and unique make a good index (things like MySQL's auto_increment), I understand this, but I am using MS SQL and I am using GUIDs for primary keys, so it seems that indexes would not benefit GUID columns...
Indexes can play an important role in query optimization and in searching results speedily from tables. The most important step is to select which columns are to be indexed. There are two major places where we can consider indexing: columns referenced in the WHERE clause and columns used in JOIN clauses. In short, you should index the columns against which you are required to search particular records. Suppose we have a table named buyers, where a SELECT query uses indexes like below:

    SELECT buyer_id               /* no need to index */
    FROM buyers
    WHERE first_name='Tariq'      /* consider indexing */
    AND last_name='Iqbal'         /* consider indexing */

Since "buyer_id" is referenced only in the SELECT portion, MySQL will not use it to limit the chosen rows. Hence, there is no great need to index it. Below is another example, a little different from the one above:

    SELECT buyers.buyer_id,       /* no need to index */
           country.name           /* no need to index */
    FROM buyers LEFT JOIN country
    ON buyers.country_id=country.country_id   /* consider indexing */
    WHERE first_name='Tariq'      /* consider indexing */
    AND last_name='Iqbal'         /* consider indexing */

According to the above queries, the first_name and last_name columns can be indexed as they are located in the WHERE clause. An additional field, country_id from the country table, can also be considered for indexing because it is in a JOIN clause. So indexing can be considered on every field in a WHERE clause or a JOIN clause. The following list also offers a few tips that you should always keep in mind when you intend to create indexes on your tables:

    Only index those columns that are required in WHERE and ORDER BY clauses. Indexing columns in abundance will result in some disadvantages.
    Try to take advantage of the "index prefix" or "multi-column index" features of MySQL. If you create an index such as INDEX(first_name, last_name), don't create INDEX(first_name). However, "index prefix" or "multi-column index" is not recommended in all search cases.
    Use the NOT NULL attribute for the columns you intend to index, so that NULL values will never be stored.
    Use the --log-long-format option to log queries that aren't using indexes. In this way, you can examine this log file and adjust your queries accordingly.
    The EXPLAIN statement helps you to reveal how MySQL will execute a query. It shows how and in what order tables are joined. This can be very useful for determining how to write optimized queries, and whether columns need to be indexed.

Update (23 Feb '15): Any index (good or bad) increases insert and update time. Depending on your indexes (number and type), lookups get faster or slower; if your search time goes up because of an index, that's a bad index. As in any book, an index page can list where chapters, topics and sub-topics start. Some structure in an index page helps, but an overly detailed index might confuse or scare you. Indexes also consume memory. Index selection should be wise. Keep in mind that not all columns require an index.
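For completeness, creating the indexes suggested above is a one-liner each (a sketch using the answer's example tables):

    CREATE INDEX idx_buyers_name ON buyers (first_name, last_name);
    CREATE INDEX idx_buyers_country_id ON buyers (country_id);

The composite (first_name, last_name) index covers both WHERE conditions at once, which is why a separate index on first_name alone would be redundant.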
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107132", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1638/" ] }
107,165
I'm asking more about what this means to my code. I understand the concepts mathematically, I just have a hard time wrapping my head around what they mean conceptually. For example, if one were to perform an O(1) operation on a data structure, I understand that the number of operations it has to perform won't grow because there are more items. And an O(n) operation would mean that you would perform a set of operations on each element. Could somebody fill in the blanks here? Like what exactly would an O(n^2) operation do? And what the heck does it mean if an operation is O(n log(n))? And does somebody have to smoke crack to write an O(x!)?
One way of thinking about it is this: O(N^2) means for every element, you're doing something with every other element, such as comparing them. Bubble sort is an example of this. O(N log N) means for every element, you're doing something that only needs to look at log N of the elements. This is usually because you know something about the elements that let you make an efficient choice. Most efficient sorts are an example of this, such as merge sort. O(N!) means to do something for all possible permutations of the N elements. Traveling salesman is an example of this, where there are N! ways to visit the nodes, and the brute force solution is to look at the total cost of every possible permutation to find the optimal one.
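As a tiny illustration of the O(N^2) case (a sketch, not from the original answer), here is a function whose nested loops compare every element with every other element:

    # O(N^2): for each element, look at every element after it
    def has_duplicates(items):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    print(has_duplicates([3, 1, 4, 1, 5]))   # True; roughly N*N/2 comparisons in the worst case

Doubling the input roughly quadruples the work, which is the practical signature of O(N^2).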
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/107165", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ] }
107,196
When I do a clean build of my C# project, the produced DLL is different from the previously built one (which I saved separately). No code changes were made, just clean and rebuild. Diff shows some bytes in the DLL have changed -- a few near the beginning and a few near the end, but I can't figure out what they represent. Does anybody have insights on why this is happening and how to prevent it? This is using Visual Studio 2005 / WinForms. Update: Not using automatic version incrementing, or signing the assembly. If it's a timestamp of some sort, how do I prevent VS from writing it? Update: After looking in Ildasm/diff, it seems like the following items are different:

    Two bytes in the PE header at the start of the file.
    The <PrivateImplementationDetails>{guid} section
    A cryptic part of the string table near the end (wonder why, I did not change the strings)
    Parts of the assembly info at the end of the file.

No idea how to eliminate any of these, if at all possible...
My best guess would be that the changed bytes you're seeing are the internally-used metadata columns that are automatically generated at build-time. Some of the Ecma-335 Partition II (CLI Specification Metadata Definition) columns that can change per-build, even if the source code doesn't change at all:

    Module.Mvid: A build-time-generated GUID. Always changes, every build.
    AssemblyRef.HashValue: Could change if you're referencing another assembly that has also been rebuilt since the old build.

If this really, really bothers you, my best tip on finding out exactly what is changing would be to diff the actual metadata tables. The way to get these is to use the ildasm MetaInfo window:

    View > MetaInfo > Raw:Header,Schema,Rows   // important, otherwise you get very basic info from the next step
    View > MetaInfo > Show!
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107196", "https://Stackoverflow.com", "https://Stackoverflow.com/users/838/" ] }
107,243
I was reading Andrew Kennedy's blog post series on units of measurement in F# and it makes a lot of sense in a lot of cases. Are there any other languages that have such a system? Edit: To be more clear, I mean the flexible units of measurement system where you can define your own arbitrarily.
Does TI-89 BASIC count? Enter 54_kg * (_c^2) and it will give you an answer in joules. Other than that, I can't recall any languages that have it built in, but any language with decent OO should make it simple to roll your own. Which means someone else probably already did. Google confirms. For example, here's one in Python. __repr__ could easily be amended to also select the most appropriate derived unit, etc. CPAN has several modules for Perl: Physics::Unit, Data::Dimensions, Class::Measure, Math::Units::PhysicalValue, and a handful of others that will convert but don't really combine values with units.
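To show how small a roll-your-own version can start out, here is a minimal Python sketch (class and unit strings are purely illustrative; it only multiplies, with no conversion or simplification):

    class Quantity(object):
        def __init__(self, value, unit):
            self.value, self.unit = value, unit
        def __mul__(self, other):
            return Quantity(self.value * other.value,
                            "%s*%s" % (self.unit, other.unit))
        def __repr__(self):
            return "%g %s" % (self.value, self.unit)

    print(Quantity(54, "kg") * Quantity(9e16, "m^2/s^2"))   # -> 4.86e+18 kg*m^2/s^2

A real library would normalize and cancel units symbolically rather than concatenating strings, but the operator-overloading core is this simple.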
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107243", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4977/" ] }
107,264
How often should I commit changes to source control? After every small feature, or only for large features? I'm working on a project and have a long-term feature to implement. Currently, I'm committing after every chunk of work, i.e. every sub-feature implemented and bug fixed. I even commit after I've added a new chunk of tests for some feature after discovering a bug. However, I'm concerned about this pattern. In a productive day of work I might make 10 commits. Given that I'm using Subversion, these commits affect the whole repository, so I wonder if it is indeed good practice to make so many?
Anytime I complete a "full thought" of code that compiles and runs, I check in. This usually ends up being anywhere between 15-60 minutes. Sometimes it could be longer, but I always try to check in if I have a lot of code changes that I wouldn't want to rewrite in case of failure. I also usually make sure my code compiles and check in at the end of the work day before I go home. I wouldn't worry about making "too many" commits/check-ins. It really sucks when you have to rewrite something, and it's nice to be able to roll back in small increments just in case.
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/107264", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206/" ] }
107,288
After having read that QuickSilver was no longer supported by Blacktree and has since gone open source, I noticed more and more people switching to/suggesting other app launchers, e.g. Butler and LaunchBar. Is QuickSilver still relevant? Has anyone experienced any instability since it's gone open source?
Quicksilver is still alive and well. There are at least a couple of endeavours to keep it going and up to date, and to restructure and clean up the code base. Check out the code from Google Code. As for launching apps, not even Spotlight comes close to how fast it is in Quicksilver. Of course the real joy of Quicksilver goes past just launching apps: using triggers, scripts and the many plugins. My workflow goes to a new level with Quicksilver. I'd be lost without it. Update: Since posting this I switched to LaunchBar for a while. This was during the time that QuickSilver seemed to be almost close to death. Loved LaunchBar and didn't need to switch back to QuickSilver. Recently though, I have left LaunchBar and have been using Alfred. I would highly recommend it. For me, LaunchBar and Alfred are pretty close. But, aesthetically and operationally, Alfred suits my tastes more than LaunchBar.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107288", "https://Stackoverflow.com", "https://Stackoverflow.com/users/877/" ] }
107,314
We've been using selenium with great success to handle high-level website testing (in addition to extensive python doctests at a module level). However now we're using extjs for a lot of pages and its proving difficult to incorporate Selenium tests for the complex components like grids. Has anyone had success writing automated tests for extjs-based web pages? Lots of googling finds people with similar problems, but few answers. Thanks!
The biggest hurdle in testing ExtJS with Selenium is that ExtJS doesn't render standard HTML elements, and the Selenium IDE will naively (and rightfully) generate commands targeted at elements that just act as decor -- superfluous elements that help ExtJS with the whole desktop look-and-feel. Here are a few tips and tricks that I've gathered while writing automated Selenium tests against an ExtJS app.

General Tips

Locating Elements

When generating Selenium test cases by recording user actions with Selenium IDE on Firefox, Selenium will base the recorded actions on the ids of the HTML elements. However, for most clickable elements, ExtJS uses generated ids like "ext-gen-345" which are likely to change on a subsequent visit to the same page, even if no code changes have been made. After recording user actions for a test, there needs to be a manual effort to go through all such actions that depend on generated ids and to replace them. There are two types of replacements that can be made:

Replacing an Id Locator with a CSS or XPath Locator

CSS locators begin with "css=" and XPath locators begin with "//" (the "xpath=" prefix is optional). CSS locators are less verbose, are easier to read, and should be preferred over XPath locators. However, there can be cases where XPath locators need to be used because a CSS locator simply can't cut it.

Executing JavaScript

Some elements require more than simple mouse/keyboard interactions due to the complex rendering carried out by ExtJS. For example, an Ext.form.ComboBox is not really a <select> element but a text input with a detached drop-down list that's somewhere at the bottom of the document tree. In order to properly simulate a ComboBox selection, it's possible to first simulate a click on the drop-down arrow and then to click on the list that appears. However, locating these elements through CSS or XPath locators can be cumbersome. An alternative is to locate the ComboBox component itself and call methods on it to simulate the selection:

    var combo = Ext.getCmp('genderComboBox');  // returns the ComboBox component
    combo.setValue('female');                  // set the value
    combo.fireEvent('select');                 // because setValue() doesn't trigger the event

In Selenium the runScript command can be used to perform the above operation in a more concise form:

    with (Ext.getCmp('genderComboBox')) { setValue('female'); fireEvent('select'); }

Coping with AJAX and Slow Rendering

Selenium has "*AndWait" flavors of all commands for waiting for page loads when a user action results in page transitions or reloads. However, since AJAX fetches don't involve actual page loads, these commands can't be used for synchronization. The solution is to make use of visual clues like the presence/absence of an AJAX progress indicator or the appearance of rows in a grid, additional components, links etc. For example:

    Command: waitForElementNotPresent
    Target:  css=div:contains('Loading...')

Sometimes an element will appear only after a certain amount of time, depending on how fast ExtJS renders components after a user action results in a view change. Instead of using arbitrary delays with the pause command, the ideal method is to wait until the element of interest comes within our grasp.
For example, to click on an item after waiting for it to appear:

    Command: waitForElementPresent
    Target:  css=span:contains('Do the funky thing')

    Command: click
    Target:  css=span:contains('Do the funky thing')

Relying on arbitrary pauses is not a good idea, since timing differences that result from running the tests in different browsers or on different machines will make the test cases flaky.

Non-clickable Items

Some elements can't be triggered by the click command. That's because the event listener is actually on the container, watching for mouse events on its child elements, which eventually bubble up to the parent. The tab control is one example. To click on a tab, you have to simulate a mouseDown event at the tab label:

    Command: mouseDownAt
    Target:  css=.x-tab-strip-text:contains('Options')
    Value:   0,0

Field Validation

Form fields (Ext.form.* components) that have associated regular expressions or vtypes for validation will trigger validation with a certain delay (see the validationDelay property, which is set to 250 ms by default) after the user enters text, or immediately when the field loses focus -- or blurs (see the validateOnBlur property). In order to trigger field validation after issuing the type Selenium command to enter some text inside a field, you have to do either of the following:

Triggering Delayed Validation

ExtJS fires off the validation delay timer when the field receives keyup events. To trigger this timer, simply issue a dummy keyup event (it doesn't matter which key you use, as ExtJS ignores it), followed by a short pause that is longer than the validationDelay:

    Command: keyUp
    Target:  someTextArea
    Value:   x

    Command: pause
    Target:  500

Triggering Immediate Validation

You can inject a blur event into the field to trigger immediate validation:

    Command: runScript
    Target:  someComponent.nameTextField.fireEvent("blur")

Checking for Validation Results

Following validation, you can check for the presence or absence of an error field:

    Command: verifyElementNotPresent
    Target:  //*[@id="nameTextField"]/../*[@class="x-form-invalid-msg" and not(contains(@style, "display: none"))]

    Command: verifyElementPresent
    Target:  //*[@id="nameTextField"]/../*[@class="x-form-invalid-msg" and not(contains(@style, "display: none"))]

Note that the "display: none" check is necessary because once an error field has been shown and then needs to be hidden, ExtJS will simply hide the error field instead of entirely removing it from the DOM tree.

Element-specific Tips

Clicking an Ext.form.Button

Option 1 - selects the button by its caption:

    Command: click
    Target:  css=button:contains('Save')

Option 2 - selects the button by its id:

    Command: click
    Target:  css=#save-options button

Selecting a Value from an Ext.form.ComboBox

    Command: runScript
    Target:  with (Ext.getCmp('genderComboBox')) { setValue('female'); fireEvent('select'); }

First sets the value and then explicitly fires the select event in case there are observers.
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/107314", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19506/" ] }
107,390
They both seem to be sending data to the server inside the body, so what makes them different?
HTTP PUT: PUT puts a file or resource at a specific URI, and exactly at that URI. If there's already a file or resource at that URI, PUT replaces that file or resource. If there is no file or resource there, PUT creates one. PUT is idempotent, but paradoxically PUT responses are not cacheable. HTTP 1.1 RFC location for PUT

HTTP POST: POST sends data to a specific URI and expects the resource at that URI to handle the request. The web server at this point can determine what to do with the data in the context of the specified resource. The POST method is not idempotent; however, POST responses are cacheable so long as the server sets the appropriate Cache-Control and Expires headers. The official HTTP RFC specifies POST to be:

    Annotation of existing resources;
    Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
    Providing a block of data, such as the result of submitting a form, to a data-handling process;
    Extending a database through an append operation.

HTTP 1.1 RFC location for POST

Difference between POST and PUT: The RFC itself explains the core difference:

    The fundamental difference between the POST and PUT requests is reflected in the different meaning of the Request-URI. The URI in a POST request identifies the resource that will handle the enclosed entity. That resource might be a data-accepting process, a gateway to some other protocol, or a separate entity that accepts annotations. In contrast, the URI in a PUT request identifies the entity enclosed with the request -- the user agent knows what URI is intended and the server MUST NOT attempt to apply the request to some other resource. If the server desires that the request be applied to a different URI, it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its own decision regarding whether or not to redirect the request.

Additionally, and a bit more concisely, RFC 7231 Section 4.3.4 PUT states (emphasis added):

    4.3.4. PUT
    The PUT method requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.

Using the right method, unrelated aside: One benefit of REST ROA vs SOAP is that when using HTTP REST ROA, it encourages the proper usage of the HTTP verbs/methods. So for example you would only use PUT when you want to create a resource at that exact location. And you would never use GET to create or modify a resource.
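To make the Request-URI distinction concrete, here is a hypothetical pair of requests (paths and host are made up for illustration):

    PUT /articles/42 HTTP/1.1     <- client chooses the URI; creates or replaces exactly /articles/42
    Host: example.com

    POST /articles HTTP/1.1       <- server decides what to do; often creates a new subordinate such as /articles/43
    Host: example.com

Repeating the PUT leaves the same single resource in place (idempotent); repeating the POST may well create a second article.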
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/107390", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10708/" ] }
107,405
What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?
edit: This answer works, but nowadays you should just use the requests library as mentioned by other answers below.

Use httplib.

    >>> import httplib
    >>> conn = httplib.HTTPConnection("www.google.com")
    >>> conn.request("HEAD", "/index.html")
    >>> res = conn.getresponse()
    >>> print res.status, res.reason
    200 OK
    >>> print res.getheaders()
    [('content-length', '0'), ('expires', '-1'), ('server', 'gws'), ('cache-control', 'private, max-age=0'), ('date', 'Sat, 20 Sep 2008 06:43:36 GMT'), ('content-type', 'text/html; charset=ISO-8859-1')]

There's also a getheader(name) to get a specific header.
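For reference, the same HEAD request with the requests library mentioned in the edit looks like this (a sketch; the URL is the hypothetical one from the question):

    import requests

    resp = requests.head("http://somedomain/foo/")
    print(resp.status_code)
    print(resp.headers.get("content-type"))   # e.g. "text/html" or "image/jpeg"

requests.head() sends only the HEAD request, so the body is never downloaded, which is exactly what the question asks for.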
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/107405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10708/" ] }
107,464
There have been some questions about whether or not JavaScript is an object-oriented language. Even a statement, "just because a language has objects doesn't make it OO." Is JavaScript an object-oriented language?
IMO (and it is only an opinion) the key characteristic of an object orientated language would be that it supports polymorphism. Pretty much all dynamic languages do that. The next characteristic would be encapsulation, and that is pretty easy to do in Javascript also. However, in the minds of many it is inheritance (specifically implementation inheritance) which would tip the balance as to whether a language qualifies to be called object oriented. Javascript does provide a fairly easy means to inherit implementation via prototyping, but this is at the expense of encapsulation. So if your criteria for object orientation is the classic threesome of polymorphism, encapsulation and inheritance, then Javascript doesn't pass. Edit: The supplementary question is raised: "how does prototypal inheritance sacrifice encapsulation?" Consider this example of a non-prototypal approach:

    function MyClass() {
        var _value = 1;
        this.getValue = function() {
            return _value;
        }
    }

The _value attribute is encapsulated; it cannot be modified directly by external code. We might add a mutator to the class to modify it in a way entirely controlled by code that is part of the class. Now consider a prototypal approach to the same class:

    function MyClass() {
        var _value = 1;
    }
    MyClass.prototype.getValue = function() {
        return _value;
    }

Well, this is broken. Since the function assigned to getValue is no longer in scope with _value, it can't access it. We would need to promote _value to an attribute of this, but that would make it accessible outside the control of code written for the class, hence encapsulation is broken. Despite this, my vote still remains that Javascript is object oriented. Why? Because given an OOD I can implement it in Javascript.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107464", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1538/" ] }
107,566
My last couple of projects have involved websites that sell a product/service and require a 'checkout' process in which users put in their credit card information and such. Obviously we got SSL certificates for the security of it plus giving peace of mind to the customers. I am, however, a little clueless as to the subtleties of it, and most importantly as to which parts of the website should 'use' the certificate. For example, I've been to websites where the moment you hit the homepage you are put in https - mostly banking sites - and then there are websites where you are only put in https when you are finally checking out. Is it overkill to make the entire website run through https if it doesn't deal with something on the level of banking? Should I only make the checkout page https? What is the performance hit on going all out?
I personally go with "SSL from go to woe". If your user never enters a credit card number, sure, no SSL. But there's an inherent possible security leak from cookie replay:

    User visits site and gets assigned a cookie.
    User browses site and adds data to cart (using the cookie).
    User proceeds to payment page (using the cookie).

Right here there is a problem, especially if you have to handle payment negotiation yourself. You have to transmit information from the non-secure domain to the secure domain, and back again, with no guarantees of protection. If you do something dumb like share the same cookie between unsecure and secure, you may find some browsers (rightly) will just drop the cookie completely (Safari) for the sake of security, because if somebody sniffs that cookie in the open, they can forge it and use it in the secure mode too, degrading your wonderful SSL security to 0. And if the card details ever get even temporarily stored in the session, you have a dangerous leak waiting to happen. If you can't be certain that your software is not prone to these weaknesses, I would suggest SSL from the start, so their initial cookie is transmitted in the secure context.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107566", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16417/" ] }
107,591
How do you unit test a large MFC UI application? We have a few large MFC applications that have been in development for many years, we use some standard automated QA tools to run basic scripts to check fundamentals, file open etc. These are run by the QA group post the daily build. But we would like to introduce procedures such that individual developers can build and run tests against dialogs, menus, and other visual elements of the application before submitting code to the daily build. I have heard of such techniques as hidden test buttons on dialogs that only appear in debug builds, are there any standard toolkits for this. Environment is C++/C/FORTRAN, MSVC 2005, Intel FORTRAN 9.1, Windows XP/Vista x86 & x64.
It depends on how the app is structured. If logic and GUI code are separated (MVC), then testing the logic is easy. Take a look at Michael Feathers' "Humble Dialog Box" (PDF). EDIT: If you think about it, you should very carefully refactor if the app is not structured that way. There is no other technique for testing the logic. Scripts which simulate clicks are just scratching the surface. It is actually pretty easy: Assume your control/window/whatever changes the contents of a listbox when the user clicks a button, and you want to make sure the listbox contains the right stuff after the click. Refactor so that there is a separate list with the items for the listbox to show. The items are stored in the list and are not extracted from wherever your data comes from. The code that makes the listbox list things knows only about the new list. Then you create a new controller object which will contain the logic code. The method that handles the button click only calls mycontroller->ButtonWasClicked(). It does not know about the listbox or anything else. MyController::ButtonWasClicked() does what needs to be done for the intended logic, prepares the item list and tells the control to update. For that to work you need to decouple the controller and the control by creating an interface (pure virtual class) for the control. The controller knows only an object of that type, not the control. That's it. The controller contains the logic code and knows the control only via the interface. Now you can write regular unit tests for MyController::ButtonWasClicked() by mocking the control. If you have no idea what I am talking about, read Michael's article. Twice. And again after that. (Note to self: must learn not to blather that much)
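A minimal C++ sketch of that controller/interface split, with a mock standing in for the MFC dialog (all names are illustrative, not from the original answer):

    #include <cassert>
    #include <string>
    #include <vector>

    // The pure virtual interface the controller sees; the real MFC dialog implements it.
    struct IListView {
        virtual void ShowItems(const std::vector<std::string>& items) = 0;
        virtual ~IListView() {}
    };

    // All the logic lives here; it has no idea MFC exists.
    class MyController {
    public:
        explicit MyController(IListView* view) : m_view(view) {}
        void ButtonWasClicked() {
            std::vector<std::string> items;   // prepare the item list (the "logic")
            items.push_back("first");
            items.push_back("second");
            m_view->ShowItems(items);         // tell the control to update
        }
    private:
        IListView* m_view;
    };

    // In a unit test, a trivial mock records what the controller asked it to show.
    struct MockListView : IListView {
        std::vector<std::string> shown;
        void ShowItems(const std::vector<std::string>& items) { shown = items; }
    };

    int main() {
        MockListView mock;
        MyController controller(&mock);
        controller.ButtonWasClicked();
        assert(mock.shown.size() == 2);   // the logic is testable without any UI
        return 0;
    }

The MFC dialog then shrinks to a thin shell that forwards clicks to the controller and renders whatever ShowItems hands it.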
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107591", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2387/" ] }
107,603
I am constantly learning new tools, even old-fashioned ones, because I like to use the right solution for the problem. Nevertheless, I wonder if there is still any reason to learn some of them. awk, for example, is interesting to me, but for simple text processing I can use grep, cut, sed, etc., while for complex tasks I'll go for Python. Now I don't mean that it's not a powerful and handy tool. But since it takes time and energy to learn a new tool, is it worth it?
I think it depends on the environment you find yourself in. If you are a *nix person, then knowing awk is a Good Thing. The only other scripting environment that can be found on virtually every *nix is sh. So while grep, sed, etc. can surely replace awk on a modern mainstream Linux distro, when you move to more exotic systems, knowing a little awk is going to be Real Handy. awk can also be used for more than just text processing. For example, one of my supervisors writes astronomy code in awk - that is how utterly old school and awesome he is. Back in his days, it was the best tool for the job... and now, even though his students like me use Python and whatnot, he sticks to what he knows and works well. In closing, there is a lot of old code kicking around the world; knowing a little awk isn't going to hurt. It will also make you a better *nix person :-)
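For a taste of the kind of one-liners that make awk handy (data.txt is just a placeholder for any whitespace-delimited file):

    awk -F: '{ print $1 }' /etc/passwd                 # first field of each line: the user names
    awk '{ sum += $2 } END { print sum }' data.txt     # total the second column of a file

The field-splitting plus pattern/action model is what grep and cut together still can't quite match in a single command.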
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/107603", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9951/" ] }
107,668
If you have a situation where a TCP connection is potentially too slow and a UDP 'connection' is potentially too unreliable what do you use? There are various standard reliable UDP protocols out there, what experiences do you have with them? Please discuss one protocol per reply and if someone else has already mentioned the one you use then consider voting them up and using a comment to elaborate if required. I'm interested in the various options here, of which TCP is at one end of the scale and UDP is at the other. Various reliable UDP options are available and each brings some elements of TCP to UDP. I know that often TCP is the correct choice but having a list of the alternatives is often useful in helping one come to that conclusion. Things like Enet, RUDP, etc that are built on UDP have various pros and cons, have you used them, what are your experiences? For the avoidance of doubt there is no more information, this is a hypothetical question and one that I hoped would elicit a list of responses that detailed the various options and alternatives available to someone who needs to make a decision.
It's difficult to answer this question without some additional information on the domain of the problem. For example, what volume of data are you using? How often? What is the nature of the data? (e.g. is it unique, one-off data? Or is it a stream of sample data? etc.) What platform are you developing for? (e.g. desktop/server/embedded) To determine what you mean by "too slow", what network medium are you using? But in (very!) general terms I think you're going to have to try really hard to beat TCP for speed, unless you can make some hard assumptions about the data that you're trying to send. For example, if the data that you're trying to send is such that you can tolerate the loss of a single packet (e.g. regularly sampled data where the sampling rate is many times higher than the bandwidth of the signal), then you can probably sacrifice some reliability of transmission by ensuring that you can detect data corruption (e.g. through the use of a good CRC). But if you cannot tolerate the loss of a single packet, then you're going to have to start introducing the types of techniques for reliability that TCP already has. And, without putting in a reasonable amount of work, you may find that you're starting to build those elements into a user-space solution with all of the inherent speed issues to go with it.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107668", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7925/" ] }
107,674
I'd like to build a real quick and dirty administrative backend for a Ruby on Rails application I have been attached to at the last minute. I've looked at activescaffold and streamlined and think they are both very attractive and should be simple to get running, but I don't quite understand how to set up either one as a backend administration page. They seem designed to work like standard Ruby on Rails generators/scaffolds for creating visible front ends with model-view-controller-table name correspondence. How do you create an admin_players interface when players is already in use, and you want to avoid, as much as possible, affecting any of its related files? The show, edit and index of the original resource are not usable for the administrator.
I think namespaces are the solution to the problem you have here:

    map.namespace :admin do |admin|
      admin.resources :customers
    end

This will create routes admin_customers, new_admin_customer, etc. Then inside the app/controllers directory you can have an admin directory. Inside your admin directory, create an admin controller:

    ./script/generate rspec_controller admin/admin

    class Admin::AdminController < ApplicationController
      layout "admin"
      before_filter :login_required
    end

Then create an admin customers controller:

    ./script/generate rspec_controller admin/customers

And make this inherit from your Admin::AdminController:

    class Admin::CustomersController < Admin::AdminController

This will look for views in app/views/admin/customers and will expect a layout in app/views/layouts/admin.html.erb. You can then use whichever plugin or code you like to actually do your administration: streamline, ActiveScaffold, whatever. Personally I like to use resources_controller, as it saves you a lot of time if you use a REST-style architecture, and forcing yourself down that route can save a lot of time elsewhere. Though if you inherited the application, that's a moot point by now.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107674", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6805/" ] }
107,683
Question in the title. And what happens when all 3 of $_GET[foo], $_POST[foo] and $_COOKIE[foo] exist? Which one of them gets included in $_REQUEST?
I'd say never. If I wanted something to be set via the various methods, I'd code for each of them to remind myself that I'd done it that way - otherwise you might end up with things being overwritten without realising. Shouldn't it work like this:

    $_GET = non destructive actions (sorting, recording actions, queries)
    $_POST = destructive actions (deleting, updating)
    $_COOKIE = trivial settings (stylesheet preferences etc.)
    $_SESSION = non trivial settings (username, logged in?, access levels)
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107683", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1897/" ] }
107,693
I'm having trouble with global variables in PHP. I have a $screen var set in one file, which requires another file that calls an initSession() defined in yet another file. The initSession() declares global $screen and then processes $screen further down using the value set in the very first script. How is this possible? To make things more confusing, if you try to set $screen again and then call initSession(), it uses the value first used once again. The following code describes the process. Could someone have a go at explaining this?

    $screen = "list1.inc";    // From model.php
    require "controller.php"; // From model.php
    initSession();            // From controller.php
    global $screen;           // From Include.Session.inc
    echo $screen;             // prints "list1.inc" // From anywhere

    $screen = "delete1.inc";  // From model2.php
    require "controller2.php";
    initSession();
    global $screen;
    echo $screen;             // prints "list1.inc"

Update: If I declare $screen global again just before requiring the second model, $screen is updated properly for the initSession() method. Strange.
Global DOES NOT make the variable global. I know it's tricky :-) Global says that a local variable will be used as if it were a variable with a higher scope. E.g.:

    <?php
    $var = "test"; // this is accessible in all the rest of the code, even included files

    function foo2(){
        global $var;
        echo $var;     // this prints "test"
        $var = 'test2';
    }

    global $var; // this is totally useless, unless this file is included inside a class or function

    function foo(){
        echo $var;     // this prints nothing; you are using a local var
        $var = 'test3';
    }

    foo();
    foo2();

    echo $var; // this will print 'test2'
    ?>

Note that global vars are rarely a good idea. You can code 99.99999% of the time without them, and your code is much easier to maintain if you don't have fuzzy scopes. Avoid global if you can.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107693", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10583/" ] }
107,701
How can I remove those annoying Mac OS X .DS_Store files from a Git repository?
Remove existing .DS_Store files from the repository:

    find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch

Add this line:

    .DS_Store

to the file .gitignore, which can be found at the top level of your repository (or create the file if it isn't there already). You can do this easily with this command in the top directory:

    echo .DS_Store >> .gitignore

Then commit the file to the repo:

    git add .gitignore
    git commit -m '.DS_Store banished!'
{ "score": 13, "source": [ "https://Stackoverflow.com/questions/107701", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1450/" ] }
107,705
Is output buffering enabled by default in Python's interpreter for sys.stdout? If the answer is positive, what are all the ways to disable it? Suggestions so far:

    Use the -u command line switch
    Wrap sys.stdout in an object that flushes after every write
    Set the PYTHONUNBUFFERED env var
    sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

Is there any other way to set some global flag in sys/sys.stdout programmatically during execution?
From Magnus Lycka's answer on a mailing list: You can skip buffering for a whole python process using python -u or by setting the environment variable PYTHONUNBUFFERED. You could also replace sys.stdout with some other stream-like wrapper which does a flush after every call.

    class Unbuffered(object):
        def __init__(self, stream):
            self.stream = stream
        def write(self, data):
            self.stream.write(data)
            self.stream.flush()
        def writelines(self, datas):
            self.stream.writelines(datas)
            self.stream.flush()
        def __getattr__(self, attr):
            return getattr(self.stream, attr)

    import sys
    sys.stdout = Unbuffered(sys.stdout)
    print 'Hello'
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/107705", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8206/" ] }
107,735
After hitting a few StackOverflowExceptions in .NET, I noticed they completely bypass the unhandled exception handlers that .NET offers (Application.ThreadException / AppDomain.UnhandledException). This is very disturbing, since we have critical cleanup code in those exception handlers. Is there any way to overcome this?
There are three kinds of so-called "asynchronous exceptions": the ThreadAbortException, the OutOfMemoryException and the mentioned StackOverflowException. These exceptions are allowed to occur at any instruction in your code. And there's also a way to overcome them: The easiest is the ThreadAbortException: when the current code executes in a finally-block, ThreadAbortExceptions are kind of "moved" to the end of the finally-block. So everything in a finally-block can't be aborted by a ThreadAbortException. To avoid an OutOfMemoryException, you have only one possibility: do not allocate anything on the heap. This means that you're not allowed to create any new reference types. To overcome the StackOverflowException, you need some help from the Framework. This help manifests itself in Constrained Execution Regions. The required stack is allocated before the actual code is executed, which additionally ensures that the code is already JIT-compiled and therefore available for execution. There are three forms to execute code in Constrained Execution Regions (copied from the BCL Team Blog):

    ExecuteCodeWithGuaranteedCleanup, a stack-overflow safe form of a try/finally.
    A try/finally block preceded immediately by a call to RuntimeHelpers.PrepareConstrainedRegions. The try block is not constrained, but all catch, finally, and fault blocks for that try are.
    As a critical finalizer - any subclass of CriticalFinalizerObject has a finalizer that is eagerly prepared before an instance of the object is allocated. A special case is SafeHandle's ReleaseHandle method, a virtual method that is eagerly prepared before the subclass is allocated, and called from SafeHandle's critical finalizer.

You can find more at these blog posts:

    Constrained Execution Regions and other errata [Brian Grunkemeyer] at the BCL Team Blog.
    Joe Duffy's weblog about atomicity and asynchronous exception failures, where he gives a very good overview of asynchronous exceptions and robustness in the .NET Framework.
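For illustration, a minimal sketch of the second form above (the prepared try/finally); the method and comments are mine, following the pattern the answer and the linked posts describe:

    using System.Runtime.CompilerServices;

    static class CleanupDemo
    {
        static void DoWorkWithGuaranteedCleanup()
        {
            RuntimeHelpers.PrepareConstrainedRegions();  // prepare stack/JIT up front
            try
            {
                // not constrained: work that may hit an asynchronous exception
            }
            finally
            {
                // constrained: eagerly prepared, so it has a chance to run even
                // when the try body fails asynchronously; keep it short and
                // avoid allocating here
            }
        }
    }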
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107735", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18910/" ] }
107,823
(Jeopardy-style question, I wish the answer had been online when I had this issue) Using Java 1.4, I have a method that I want to run as a thread some of the time, but not at others. So I declared it as a subclass of Thread, then either called start() or run() depending on what I needed. But I found that my program would leak memory over time. What am I doing wrong?
This is a known bug in Java 1.4: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4533087 It's fixed in Java 1.5, but Sun doesn't intend to fix it in 1.4. The issue is that, at construction time, a Thread is added to a list of references in an internal thread table. It won't get removed from that list until its start() method has completed. As long as that reference is there, it won't get garbage collected. So, never create a thread unless you're definitely going to call its start() method. A Thread object's run() method should not be called directly. A better way to code it is to implement the Runnable interface rather than subclass Thread. When you don't need a thread, call

    myRunnable.run();

When you do need a thread:

    Thread myThread = new Thread(myRunnable);
    myThread.start();
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/107823", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7512/" ] }
107,828
I don't want PHP errors to display in /html, but I want them to display in /html/beta/usercomponent. Everything is set up so that errors do not display at all. How can I get errors to show up just in that one folder (and its subfolders)?
In .htaccess :

php_value error_reporting 2147483647

This number, according to the documentation, should enable 'all' errors irrespective of version. If you want a more granular setting, manually OR the values together, or run php -r 'echo E_ALL | E_STRICT;' to let PHP compute the value for you. You need AllowOverride All in Apache's master configuration to enable .htaccess files. More reading on this can be found here: Php/Error Reporting Flag Php/Error Reporting values Php/Different Ways of Tuning Settings Notice If you are using Php-CGI instead of mod_php, this may not work as advertised, and all you will get is an internal server error; you will be left without much option other than enabling it either site-wide or on a per-script basis with error_reporting( E_ALL | E_STRICT ); or similar constructs before the error occurs. My advice is to disable displaying errors to the user, and utilize heavily PHP's error_log feature:

display_errors = 0
error_reporting = E_ALL | E_STRICT
error_log = /var/log/php

If you have problems with this being too noisy, this is not a sign you need to just turn error reporting off selectively; this is a sign somebody should fix the code. @Roger Yes, you can use it in a <Directory> construct in Apache's configuration too; however, the .htaccess in this case is equivalent, and it makes the setting more portable, especially if you have multiple working checkout copies of the same codebase and you want to distribute this change to all of them. If you have multiple virtual hosts, you'll want the construct in the respective virtual host's definition; otherwise, yes:

<Directory /path/to/wherever/on/filesystem>
    <IfModule mod_php5.c>
        php_value error_reporting 2147483647
    </IfModule>
</Directory>

The additional "IfModule" commands are just a safety net so the above problem with Apache dying if you don't have mod_php won't occur.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107828", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
107,855
How many lines of code can a .java file contain? Does it depend on the JVM being used?
To extend upon Jonas's response , the Java Virtual Machine Specification, Section 4.8 Constraints on Java Virtual Machine Code says that: The Java virtual machine code for a method, instance initialization method (§3.9), or class or interface initialization method (§3.9) is stored in the code array of the Code attribute of a method_info structure of a class file. This section describes the constraints associated with the contents of the Code_attribute structure. Continuing to Section 4.8.1, Static Constraints The static constraints on a class file are those defining the well-formedness of the file. With the exception of the static constraints on the Java virtual machine code of the class file, these constraints have been given in the previous section. The static constraints on the Java virtual machine code in a class file specify how Java virtual machine instructions must be laid out in the code array and what the operands of individual instructions must be. The static constraints on the instructions in the code array are as follows: ... The value of the code_length item must be less than 65536. ... So a method does have a limit of 65535 bytes of bytecode per method. (see note below) For more limitations to the JVM, see Section 4.10 Limitations of the Java Virtual Machine . Note: Although there is apparently a problem with the design of the JVM, where if the instruction at byte 65535 is an instruction that is 1 byte long, it is not protected by exception handler - this is listed in footnote 4 of Section 4.10.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107855", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15181/" ] }
107,888
On Linux/GCC I can use the -rpath flag to change an executable's search path for shared libraries without tampering with environment variables. Can this also be accomplished on Windows? As far as I know, dlls are always searched in the executable's directory and in PATH. My scenario: I would like to put shared libraries into locations according to their properties (32/64bit/Debug/Release) without taking care of unique names. On Linux, this is easily done via rpath, but I haven't found any way of doing this on Windows yet. Thanks for any hints!
Sadly there is no direct analogue to RPATH. There are a number of alternative possibilities, each of them most likely undesirable to you in its own special way. Given that you need a different exe for each build flavor anyway to avoid runtime library clashes, as you might guess the easiest thing to do is to put each exe in the same folder as each set of DLLs. As you also mentioned, the most universal method is to change the PATH variable by using a batch file to bootstrap the exe. You could instead change the current working directory before running the program to the desired DLL folder. You can use the function SetDllDirectory or AddDllDirectory inside your exe. This is probably the closest to an RPATH, but only works on WinXP SP1 or later. If you're willing to alter the file name of each exe flavor, you can use the "App Paths" registry key. Each exe would need a unique filename.
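As a hedged sketch of the SetDllDirectory option (the folder layout is made up; call it before the first delay-loaded or dynamically loaded DLL is touched):

#include <windows.h>

int main(void)
{
    /* Point the loader at the flavor-specific folder, e.g. produced by
       your build system as libs\x86\Release or libs\x64\Release. */
#ifdef _WIN64
    SetDllDirectory(TEXT("libs\\x64\\Release"));
#else
    SetDllDirectory(TEXT("libs\\x86\\Release"));
#endif

    /* LoadLibrary and delay-load resolution now also search that folder. */
    HMODULE mod = LoadLibrary(TEXT("mylib.dll")); /* hypothetical DLL name */
    return mod ? 0 : 1;
}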
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
107,903
On Mac OS X, you can create a zip archive from the Finder by selecting some files and selecting "Compress" from the contextual menu or the File menu. Unfortunately, the resulting file is not identical to the archive created by the zip command (with the default options). This distinction matters to at least one service operated by Apple, which fails to accept archives created with the zip command. Having to create archives manually is preventing me from fully automating my release build process. How can I create a zip archive in the correct format within a shell script? EDIT: Since writing this question long ago, I've figured out that the key difference between ditto and zip is how they handle symbolic links: because the code signature inside an app bundle contains a symlink, it needs to be preserved as a link and not stored as a regular file. ditto does this by default, but zip does not (option -y is required).
Use the ditto command-line tool as follows: ditto -ck --rsrc --sequesterRsrc folder file.zip See the ditto man page for more.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/107903", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10947/" ] }
107,919
Say we have realized the value of TDD too late. The project is already mature, and a good number of customers have started using it. Say the automated testing used is mostly functional/system testing, and there is a good deal of automated GUI testing. Say we have new feature requests and new bug reports (!), so a good deal of development still goes on. Note there would already be plenty of business objects with little or no unit testing. There is too much collaboration/too many relationships between them, which again is tested only through higher-level functional/system testing. There is no integration testing per se. Big databases are in place with plenty of tables, views, etc. Just to instantiate a single business object, a good number of database round trips are already needed. How can we introduce TDD at this stage? Mocking seems to be the way to go, but the amount of mocking we would need here seems like too much. It sounds like an elaborate infrastructure would have to be developed for the mocking system to work with the existing stuff (BOs, databases, etc.). Does that mean TDD is a suitable methodology only when starting from scratch? I am interested to hear about feasible strategies for introducing TDD in an already mature product.
Creating a complex mocking infrastructure will probably just hide the problems in your code. I would recommend that you start with integration tests, with a test database, around the areas of the code base that you plan to change. Once you have enough tests to ensure that you won't break anything if you make a change, you can start to refactor the code to make it more testable. See also Michael Feathers' excellent book Working Effectively with Legacy Code ; it's a must-read for anyone thinking of introducing TDD into a legacy code base.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107919", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15139/" ] }
107,936
Is there a way to add a custom font to a website without using images, Flash or some other graphics? For example, I was working on a wedding website, and I found a lot of nice fonts for that subject. But I can't find the right way to add that font to the server. And how do I include that font with CSS into the HTML? Is this possible to do without graphics?
This could be done via CSS:

<style type="text/css">
@font-face {
    font-family: "My Custom Font";
    src: url(http://www.example.org/mycustomfont.ttf) format("truetype");
}
p.customfont {
    font-family: "My Custom Font", Verdana, Tahoma;
}
</style>
<p class="customfont">Hello world!</p>

It is supported by all of the regular browsers if you use TrueType fonts (TTF), the Web Open Font Format (WOFF) or Embedded OpenType (EOT).
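A hedged sketch of a multi-format declaration for broader browser coverage (the file names and paths are hypothetical):

@font-face {
    font-family: "My Custom Font";
    src: url("mycustomfont.eot");                    /* legacy IE */
    src: url("mycustomfont.woff") format("woff"),    /* most modern browsers */
         url("mycustomfont.ttf") format("truetype"); /* older Safari, Android */
}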
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/107936", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16039/" ] }
107,972
Is it possible under any set of circumstances to be able to accomplish this? My current circumstances are this: public class CustomForm : Form{ public class CustomGUIElement { ... public event MouseEventHandler Click; // etc, and so forth. ... } private List<CustomGUIElement> _elements; ... public void CustomForm_Click(object sender, MouseEventArgs e) { // we might want to call one of the _elements[n].Click in here // but we can't because we aren't in the same class. }} My first thought was to have a function similar to: internal enum GUIElementHandlers { Click, ... }internal void CustomGUIElement::CallHandler(GUIElementHandler h, object[] args) { switch (h) { case Click: this.Click(this, (EventArgs)args[0]); break; ... // etc and so forth }} It's a horribly ugly kludge, but it should work... There must be a more elegant solution though? The .NET library does this all the time with message handlers and calling events in Control's. Does anyone else have any other/better ideas?
You just need to add a public method for invoking the event. Microsoft already does this for some events such as PerformClick for controls that expose a Click event.

public class CustomGUIElement
{
    public void PerformClick()
    {
        OnClick(EventArgs.Empty);
    }

    protected virtual void OnClick(EventArgs e)
    {
        if (Click != null)
            Click(this, e);
    }
}

You would then do the following inside your example event handler...

public void CustomForm_Click(object sender, MouseEventArgs e)
{
    _elements[0].PerformClick();
}
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107972", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15537/" ] }
107,984
In toad, I can see unicode characters that are coming from oracle db. But when I click one of the fields in the data grid into the edit mode, the unicode characters are converted to meaningless symbols, but this is not the big issue. While editing this field, the unicode characters are displayed correctly as I type. But as soon as I press enter and exit edit mode, they are converted to the nearest (most similar) non-unicode character. So I cannot type unicode characters on data grids. Copy & pasting one of the unicode characters also does not work. How can I solve this? Edit: I am using toad 9.0.0.160.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/107984", "https://Stackoverflow.com", "https://Stackoverflow.com/users/31505/" ] }
107,995
The unzip command doesn't have an option for recursively unzipping archives. If I have the following directory structure and archives: /Mother/Loving.zip/Scurvy/Sea Dogs.zip/Scurvy/Cures/Limes.zip And I want to unzip all of the archives into directories with the same name as each archive: /Mother/Loving/1.txt/Mother/Loving.zip/Scurvy/Sea Dogs/2.txt/Scurvy/Sea Dogs.zip/Scurvy/Cures/Limes/3.txt/Scurvy/Cures/Limes.zip What command or commands would I issue? It's important that this doesn't choke on filenames that have spaces in them.
If you want to extract the files to the respective folder you can try this find . -name "*.zip" | while read filename; do unzip -o -d "`dirname "$filename"`" "$filename"; done; A multi-processed version for systems that can handle high I/O: find . -name "*.zip" | xargs -P 5 -I fileName sh -c 'unzip -o -d "$(dirname "fileName")/$(basename -s .zip "fileName")" "fileName"'
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/107995", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10645/" ] }
108,005
first question here. I'm developing a program in C# (.NET 3.5) that displays files in a listview. I'd like to have the "large icon" view display the icon that Windows Explorer uses for that filetype, otherwise I'll have to use some existing code like this:

private int getFileTypeIconIndex(string fileName)
{
    string fileLocation = Application.StartupPath + "\\Quarantine\\" + fileName;
    FileInfo fi = new FileInfo(fileLocation);
    switch (fi.Extension)
    {
        case ".pdf":
            return 1;
        case ".doc": case ".docx": case ".docm": case ".dotx":
        case ".dotm": case ".dot": case ".wpd": case ".wps":
            return 2;
        default:
            return 0;
    }
}

The above code returns an integer that is used to select an icon from an imagelist that I populated with some common icons. It works fine but I'd need to add every extension under the sun! Is there a better way? Thanks!
You might find the use of Icon.ExtractAssociatedIcon a much simpler (an managed) approach than using SHGetFileInfo. But watch out: two files with the same extension may have different icons.
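A hedged sketch of how that could replace the extension switch, caching one icon per extension in the ListView's image list (method, control and variable names are made up; note the caveat above that two files with the same extension can, in principle, have different icons):

using System.Drawing;
using System.IO;
using System.Windows.Forms;

// Load (and cache) the file's associated icon, keyed by extension.
private string GetIconKey(string fileLocation, ImageList largeIcons)
{
    string key = Path.GetExtension(fileLocation);
    if (!largeIcons.Images.ContainsKey(key))
    {
        Icon icon = Icon.ExtractAssociatedIcon(fileLocation);
        if (icon != null)
            largeIcons.Images.Add(key, icon);
    }
    return key; // use as the ListViewItem's ImageKey
}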
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108005", "https://Stackoverflow.com", "https://Stackoverflow.com/users/14333/" ] }
108,037
Could anybody explain in plain words how cloud computing works? I have read the Wikipedia article , but still am not sure that I understand how the cloud actually works.
Aside from being the latest marketing term? Basically, all the resources your program needs are held "somewhere" on the internet. You interact with them via a defined service contract; SOAP, REST, POX or whatever, and what happens after that is up to the service provider. You don't care about how your information is stored or how the service is provided, just that it is. If, for example, you wanted to store files, you may choose to use Amazon's S3 cloud system. You connect to the service and upload your files; you don't know or care where the files are stored, only the location of the entry point to that service. If you have an application then it may also be run in the cloud, assuming it's suitable. Live Mesh, for example, is a virtual machine which you can code against and run your software both locally and within the cloud, so your user simply goes to a URI and finds your program; you don't care where it is beyond it being available somewhere on the cloud.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108037", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19268/" ] }
108,081
Are there any good, cross platform (SBCL and CLISP at the very least) easy to install GUI libraries?
Ltk is quite popular, very portable, and reasonably well documented through the Tk docs. Installation on SBCL is as easy as saying:

(require :asdf-install)
(asdf-install:install :ltk)

There's also Cells-Gtk , which is reported to be quite usable but may have a slightly steeper learning curve because of its reliance on Cells. EDIT: Note that ASDF-INSTALL is integrated this well with SBCL only . Installing libraries from within other Lisp implementations may prove harder. (Personally, I always install my libraries from within SBCL and then use them from all implementations.) Sorry about any confusion this may have caused.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108081", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7780/" ] }
108,094
Is there anyway to disable the rather annoying feature that Visual Studio (2008 in my case) has of copying the line (with text on it) the cursor is on when CTRL - C is pressed and no selection is made? I know of the option to disable copying blank lines. But this is driving me crazy as well. ETA: I'm not looking to customize the keyboard shortcut. ETA-II: I am NOT looking for "Tools->Options->Text Editor->All Languages->Apply cut or copy to blank lines...".
The real problem you probably experience is that you go to paste with CTRL + V , accidentally type CTRL + C , and end up overwriting the stuff that's on your clipboard. You can't disable this as far as I know; however, the workaround is that you can press CTRL + SHIFT + V multiple times to go back up the stack of things you have copied in Visual Studio. Not only does this allow you to recover what you originally copied, but you'll also find CTRL + SHIFT + V very useful in a lot of other situations.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108094", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4192/" ] }
108,104
Once again one of those: "Is there an easier built-in way of doing things instead of my helper method?" So it's easy to get the underlying type from a nullable type, but how do I get the nullable version of a .NET type? So I have

typeof(int)
typeof(DateTime)
System.Type t = something;

and I want

int?
DateTime?

or Nullable<int> (which is the same)

if (t is primitive) then Nullable<T> else just T

Is there a built-in method?
Here is the code I use:

Type GetNullableType(Type type)
{
    // Use Nullable.GetUnderlyingType() to remove the Nullable<T> wrapper if type is already nullable.
    type = Nullable.GetUnderlyingType(type) ?? type; // avoid type becoming null

    if (type.IsValueType)
        return typeof(Nullable<>).MakeGenericType(type);
    else
        return type;
}
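A quick usage sketch, following from the helper above:

Type a = GetNullableType(typeof(int));      // typeof(int?)
Type b = GetNullableType(typeof(DateTime)); // typeof(DateTime?)
Type c = GetNullableType(typeof(string));   // typeof(string) - reference types are left alone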
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/108104", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5790/" ] }
108,116
What problems does MEF (Managed Extensibility Framework) solves that cannot be solved by existing IoC/DI containers?
The principal purpose of MEF is extensibility; to serve as a 'plug-in' framework for when the author of the application and the author of the plug-in ( extension ) are different and have no particular knowledge of each other beyond a published interface ( contract ) library. Another problem space MEF addresses that's different from the usual IoC suspects, and one of MEF's strengths, is [extension] discovery. It has a lot of, well, extensible discovery mechanisms that operate on metadata you can associate with extensions. From the MEF CodePlex site: "MEF allows tagging extensions with additional metadata which facilitates rich querying and filtering" Combined with an ability to delay-load tagged extensions, being able to interrogate extension metadata prior to loading opens the door to a slew of interesting scenarios and substantially enables capabilities such as [plug-in] versioning. MEF also has 'Contract Adapters' which allow extensions to be 'adapted' or 'transformed' ( from type > to type ) with complete control over the details of those transforms. Contract Adapters open up another creative front relative to just what 'discovery' means and entails. Again, MEF's 'intent' is tightly focused on anonymous plug-in extensibility, something that very much differentiates it from other IoC containers. So while MEF can be used for composition, that's merely a small intersection of its capabilities relative to other IoCs, with which I suspect we'll be seeing a lot of incestuous interplay going forward.
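For readers who haven't seen MEF's attributed model, a minimal hedged sketch of the export/import/discovery pattern (the IPlugin contract and HelloPlugin part are made-up names; the attributes and catalog types live in System.ComponentModel.Composition):

using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin { void Run(); }

[Export(typeof(IPlugin))]            // the extension author tags the part
public class HelloPlugin : IPlugin
{
    public void Run() { System.Console.WriteLine("Hello from a plug-in"); }
}

public class Host
{
    [ImportMany]                     // the host discovers all matching exports
    public IPlugin[] Plugins { get; set; }

    public void Compose()
    {
        // Discover parts in the application directory.
        var catalog = new DirectoryCatalog(".");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this);
    }
}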
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/108116", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19268/" ] }
108,149
I know that it's possible to replace the browse button, which is generated in HTML when you use an input tag with type="file" . I'm not sure what the best way is, so if someone has experience with this please contribute.
The best way is to make the file input control almost invisible (by giving it a very low opacity - do not do " visibility: hidden " or " display: none ") and absolutely position something under it - with a lower z-index . This way, the actual control will not be visible, and whatever you put under it will show through. But since the control is positioned above that button, it will still capture the click events (this is why you want to use opacity, not visibility or display - browsers make the element unclickable if you use those to hide it). This article goes in-depth on the technique.
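A rough sketch of that layering (class names are made up; exact sizing will need tuning per design, and very old IE needs filter-based opacity instead):

<div class="upload-wrapper">
    <span class="fake-button">Choose a file…</span>
    <input type="file" class="real-input">
</div>

<style>
.upload-wrapper { position: relative; display: inline-block; }
.fake-button    { position: relative; z-index: 1; }  /* the styled element that shows through */
.real-input     { position: absolute; top: 0; left: 0;
                  width: 100%; height: 100%;
                  z-index: 2; opacity: 0.01; }       /* nearly invisible but still catches clicks */
</style>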
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108149", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16039/" ] }
108,183
I have a small server program that accepts connections on a TCP or local UNIX socket, reads a simple command and (depending on the command) sends a reply. The problem is that the client may have no interest in the answer and sometimes exits early. So writing to that socket will cause a SIGPIPE and make my server crash. What's the best practice to prevent the crash here? Is there a way to check if the other side of the line is still reading? ( select() doesn't seem to work here as it always says the socket is writable). Or should I just catch the SIGPIPE with a handler and ignore it?
You generally want to ignore the SIGPIPE and handle the error directly in your code. This is because signal handlers in C have many restrictions on what they can do. The most portable way to do this is to set the SIGPIPE handler to SIG_IGN . This will prevent any socket or pipe write from causing a SIGPIPE signal. To ignore the SIGPIPE signal, use the following code:

signal(SIGPIPE, SIG_IGN);

If you're using the send() call, another option is to use the MSG_NOSIGNAL option, which will turn the SIGPIPE behavior off on a per call basis. Note that not all operating systems support the MSG_NOSIGNAL flag. Lastly, you may also want to consider the SO_NOSIGPIPE socket flag that can be set with setsockopt() on some operating systems. This will prevent SIGPIPE from being caused by writes just to the sockets it is set on.
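For illustration, a small hedged sketch of the per-call flag (MSG_NOSIGNAL is Linux-specific; socket setup is omitted, and the function name is made up):

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Returns 0 on success, -1 if the peer has gone away or another error occurred. */
int reply(int sock, const char *buf, size_t len)
{
    ssize_t n = send(sock, buf, len, MSG_NOSIGNAL); /* no SIGPIPE, just an error */
    if (n < 0) {
        if (errno == EPIPE)
            fprintf(stderr, "client closed the connection\n");
        return -1;
    }
    return 0;
}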
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/108183", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12523/" ] }
108,193
class Tag(models.Model):
    name = models.CharField(maxlength=100)

class Blog(models.Model):
    name = models.CharField(maxlength=100)
    tags = models.ManyToManyField(Tag)

Simple models just to ask my question. I wonder how I can query blogs using tags in two different ways. Blog entries that are tagged with "tag1" or "tag2":

Blog.objects.filter(tags__in=[1, 2]).distinct()

Blog objects that are tagged with "tag1" and "tag2" : ? Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : ?? Tag and Blog is just used for an example.
You could use Q objects for #1:

# Blogs who have either hockey or django tags.
from django.db.models import Q
Blog.objects.filter(
    Q(tags__name__iexact='hockey') | Q(tags__name__iexact='django')
)

Unions and intersections, I believe, are a bit outside the scope of the Django ORM, but it's possible to do these. The following examples are from a Django application called django-tagging that provides the functionality. Line 346 of models.py : For part two, you're looking for a union of two queries, basically

def get_union_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *any* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)

    if not tag_count:
        return model._default_manager.none()

    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have any of
    # the given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
    }

    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()

For part #3 I believe you're looking for an intersection. See line 307 of models.py

def get_intersection_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *all* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)

    if not tag_count:
        return model._default_manager.none()

    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have all the
    # given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s
    HAVING COUNT(%(model_pk)s) = %(tag_count)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
        'tag_count': tag_count,
    }

    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()
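As a hedged aside: on Django 1.1 and later, part #2 (all of the given tags) can usually be expressed without raw SQL via aggregation; a sketch (part #3 would additionally compare the object's total tag count):

from django.db.models import Count

tag_names = ['tag1', 'tag2']
# Blogs tagged with *all* of the given tags (they may also have others).
blogs = (Blog.objects.filter(tags__name__in=tag_names)
         .annotate(matched=Count('tags'))
         .filter(matched=len(tag_names)))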
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/12785/" ] }
108,207
I want the search box on my web page to display the word "Search" in gray italics. When the box receives focus, it should look just like an empty text box. If there is already text in it, it should display the text normally (black, non-italics). This will help me avoid clutter by removing the label. BTW, this is an on-page Ajax search, so it has no button.
Another option, if you're happy to have this feature only for newer browsers, is to use the support offered by HTML 5's placeholder attribute: <input name="email" placeholder="Email Address"> In the absence of any styles, in Chrome this looks like: You can try demos out here and in HTML5 Placeholder Styling with CSS . Be sure to check the browser compatibility of this feature . Support in Firefox was added in 3.7. Chrome is fine. Internet Explorer only added support in 10. If you target a browser that does not support input placeholders, you can use a jQuery plugin called jQuery HTML5 Placeholder , and then just add the following JavaScript code to enable it. $('input[placeholder], textarea[placeholder]').placeholder();
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/108207", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7668/" ] }
108,211
I want to add a column to an existing legacy database and write a procedure by which I can assign each record a different value. Something like adding a column and autogenerate the data for it. Like, if I add a new column called "ID" (number) I want to then initialize a unique value to each of the records. So, my ID column will have records from say 1 to 1000 . How do I do that?
This will depend on the database but for SQL Server, this could be achieved as follows:

alter table Example
add NewColumn int identity(1,1)
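If the column already exists and you just need to backfill it with sequential values, a hedged alternative on SQL Server 2005+ uses ROW_NUMBER() (the ordering column is illustrative):

WITH numbered AS (
    SELECT ID, ROW_NUMBER() OVER (ORDER BY SomeKeyColumn) AS rn
    FROM Example
)
UPDATE numbered SET ID = rn;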
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/108211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/384/" ] }
108,272
As a novice, I've spent time learning a smattering of C and a fair bit of PHP. I've looked at writing desktop applications for Windows, but there seems to be a fair barrier to entry due to complexity of APIs. Is it worth learning this, or will native applications become less common in the future? The way I see it, the only desktop application I ever use is a web browser and a text editor as well as the obviously the OS itself. Everything I need is online now. Is learning to write non-web applications a useful skill going forwards? If so, what should I learn?
I don't think it is ever a good idea to choose one side and stick with it religiously. I think a good engineer will expose themselves to as much as they can so he can make an informed decision about which is the best tool to complete a task. In other words, don't choose a platform, OS, programming language, etc. and then ignore the others. It is best to be well-rounded in your skill set.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108272", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16511/" ] }
108,318
I need a function like this:

// return true if 'n' is a power of 2, e.g.
// is_power_of_2(16) => true
// is_power_of_2(3)  => false
bool is_power_of_2(int n);

Can anyone suggest how I could write this?
(n & (n - 1)) == 0 is best. However, note that it will incorrectly return true for n=0, so if that is possible, you will want to check for it explicitly. http://www.graphics.stanford.edu/~seander/bithacks.html has a large collection of clever bit-twiddling algorithms, including this one.
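Putting that together with the zero/negative guard, a complete hedged version of the requested function:

#include <stdbool.h>

bool is_power_of_2(int n)
{
    /* A power of two has exactly one bit set; n & (n - 1) clears the
       lowest set bit, so it is zero only for powers of two. The n > 0
       test rules out 0 and negative inputs. */
    return n > 0 && (n & (n - 1)) == 0;
}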
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/108318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19641/" ] }
108,320
What is the purpose of the code behind view file in ASP.NET MVC besides setting of the generic parameter of ViewPage ?
Here's my list of reasons why code-behind can be useful, taken from my own post . I'm sure there are many more.

Databinding legacy ASP.NET controls - if an alternative is not available or a temporary solution is needed.
View logic that requires recursion to create some kind of nested or hierarchical HTML.
View logic that uses temporary variables. I refuse to define local variables in my tag soup! I'd want them as properties on the view class at the very least.
Logic that is specific only to one view or model and does not belong to an HtmlHelper. As a side note, I don't think an HtmlHelper should know about any 'Model' classes. It's fine if it knows about the classes defined inside a model (such as IEnumerable), but I don't think, for instance, you should ever have an HtmlHelper that takes a ProductModel. HtmlHelper methods end up becoming visible from ALL your views when you type Html+dot, and I really want to minimize this list as much as possible.
What if I want to write code that uses HtmlGenericControl and other classes in that namespace to generate my HTML in an object-oriented way (or I have existing code that does that that I want to port)?
What if I'm planning on using a different view engine in future? I might want to keep some of the logic aside from the tag soup to make it easier to reuse later.
What if I want to be able to rename my Model classes and have it automatically refactor my view without having to go to the view.aspx and change the class name?
What if I'm coordinating with an HTML designer who I don't trust not to mess up the 'tag soup', and want to write anything beyond very basic looping in the .aspx.cs file?
If you want to sort the data based upon the view's default sort option. I really don't think the controller should be sorting data for you if you have multiple sorting options accessible only from the view.
You actually want to debug the view logic in code that actually looks like .cs and not HTML.
You want to write code that may be factored out later and reused elsewhere - you're just not sure yet.
You want to prototype what may become a new HtmlHelper, but you haven't yet decided whether it's generic enough to warrant creating an HtmlHelper. (basically same as previous point)
You want to create a helper method to render a partial view, but need to create a model for it by plucking data out of the main page's view and creating a model for the partial control which is based on the current loop iteration.
You believe that programming complex logic IN A SINGLE FUNCTION is an out-of-date and unmaintainable practice.

You did it before RC1 and didn't run into any problems!! Yes! Some views should not need code-behind at all. Yes! It sucks to get a stupid .designer file created in addition to the .cs file. Yes! It's kind of annoying to get those little + signs next to each view. BUT - it's really not that hard to NOT put data access logic in the code-behind. They are most certainly NOT evil .
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108320", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19268/" ] }
108,387
What are some good ways to do this? Is it even possible to do cleanly? Ideally I'd like to use packet headers to decide which server should handle requests. However, if there is an easier/better way let me know.
It's impossible for both servers to listen on the same port at the same IP address: since a single socket can only be opened by a single process, only the first server configured for a certain IP/port combination will successfully bind, and the second one will fail. You will thus need a workaround to achieve what you want. The easiest is probably to run Apache on your primary IP/port combination, and have it route requests meant for IIS (which should be configured for a different IP and/or port) using mod_rewrite . Keep in mind that the alternative IP and port IIS runs on should be reachable to the clients connecting to your server: if you only have a single IP address available, you should take care to pick an IIS port that isn't generally blocked by firewalls (8080 might be a good option, or 443, even though you're running regular HTTP and not SSL) P.S. Also, please note that you do need to modify the IIS default configuration using httpcfg before it will allow other servers to run on port 80 on any IP address on the same server: see Micky McQuade's answer for the procedure to do that...
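A rough sketch of the Apache side (hypothetical path and port; requires mod_rewrite and mod_proxy to be loaded):

RewriteEngine On
# Hand anything under /iis/ to the IIS instance listening on port 8080.
RewriteRule ^/iis/(.*)$ http://127.0.0.1:8080/$1 [P,L]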
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108387", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1946/" ] }
108,396
I am using Fluent NHibernate and having some issues getting a many-to-many relationship set up with one of my classes. It's probably a stupid mistake, but I've been stuck for a little bit trying to get it working. Anyways, I have a couple classes that have many-to-many relationships.

public class Person
{
    public Person()
    {
        GroupsOwned = new List<Groups>();
    }
    public virtual IList<Groups> GroupsOwned { get; set; }
}

public class Groups
{
    public Groups()
    {
        Admins = new List<Person>();
    }
    public virtual IList<Person> Admins { get; set; }
}

With the mapping looking like this Person:

...
HasManyToMany<Groups>(x => x.GroupsOwned)
    .WithTableName("GroupAdministrators")
    .WithParentKeyColumn("PersonID")
    .WithChildKeyColumn("GroupID")
    .Cascade.SaveUpdate();

Groups:

...
HasManyToMany<Person>(x => x.Admins)
    .WithTableName("GroupAdministrators")
    .WithParentKeyColumn("GroupID")
    .WithChildKeyColumn("PersonID")
    .Cascade.SaveUpdate();

When I run my integration test, basically I'm creating a new person and group. Adding the Group to the Person.GroupsOwned. If I get the Person object back from the repository, the GroupsOwned is equal to the initial group; however, when I get the group back, if I check the count on Group.Admins, the count is 0. The join table has the GroupID and the PersonID saved in it. Thanks for any advice you may have.
The fact that it is adding two records to the table looks like you are missing an inverse attribute . Since both the person and the group are being changed, NHibernate is persisting the relation twice (once for each object). The inverse attribute is specifically for avoiding this. I'm not sure about how to add it in mapping in code, but the link shows how to do it in XML.
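In Fluent NHibernate that is done by marking one side of the many-to-many as the inverse; a hedged sketch against the question's mapping (the exact method names vary between Fluent NHibernate versions - newer builds use Table/ParentKeyColumn instead of the With* forms):

// In the Groups mapping: let the Person side own the association,
// so the join row is written only once.
HasManyToMany<Person>(x => x.Admins)
    .WithTableName("GroupAdministrators")
    .WithParentKeyColumn("GroupID")
    .WithChildKeyColumn("PersonID")
    .Inverse();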
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108396", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1385358/" ] }
108,403
Assume a table structure of MyTable(KEY, datafield1, datafield2...) . Often I want to either update an existing record, or insert a new record if it doesn't exist. Essentially:

IF (key exists)
  run update command
ELSE
  run insert command

What's the best performing way to write this?
Don't forget about transactions. Performance is good, but the simple (IF EXISTS..) approach is very dangerous. When multiple threads try to perform Insert-or-update, you can easily get primary key violations. Solutions provided by @Beau Crawford & @Esteban show the general idea but are error-prone. To avoid deadlocks and PK violations you can use something like this:

begin tran
if exists (select * from table with (updlock,serializable) where key = @key)
begin
   update table set ... where key = @key
end
else
begin
   insert into table (key, ...) values (@key, ...)
end
commit tran

or

begin tran
   update table with (serializable) set ... where key = @key
   if @@rowcount = 0
   begin
      insert into table (key, ...) values (@key, ...)
   end
commit tran
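As a hedged aside for SQL Server 2008 and later: MERGE expresses the same upsert in one statement (the column names here are made up; HOLDLOCK is still advisable to avoid the same race):

MERGE MyTable WITH (HOLDLOCK) AS t
USING (SELECT @key AS [key]) AS s
    ON t.[key] = s.[key]
WHEN MATCHED THEN
    UPDATE SET datafield1 = @datafield1
WHEN NOT MATCHED THEN
    INSERT ([key], datafield1) VALUES (s.[key], @datafield1);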
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/108403", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18907/" ] }
108,405
It's really annoying that Visual Studio hides typos in aspx pages (not the code-behind). If the compiler compiled them, I would get a compile error.
Compile the pages at compile time. See Mike Hadlow's post here: http://mikehadlow.blogspot.com/2008/05/compiling-aspx-templates-using.html
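One concrete way to do that, as a hedged sketch for a Web Application Project: invoke the AspNetCompiler MSBuild task from an AfterBuild target in the .csproj (the paths/properties may need adjusting for your project type):

<Target Name="AfterBuild">
  <!-- Precompile the pages so markup typos fail the build. -->
  <AspNetCompiler VirtualPath="/" PhysicalPath="$(ProjectDir)" />
</Target>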
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108405", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13376/" ] }
108,439
I'm looking to get the result of a command as a variable in a Windows batch script (see how to get the result of a command in bash for the bash scripting equivalent). A solution that will work in a .bat file is preferred, but other common windows scripting solutions are also welcome.
The humble for command has accumulated some interesting capabilities over the years:

D:\> FOR /F "delims=" %i IN ('date /t') DO set today=%i
D:\> echo %today%
Sat 20/09/2008

Note that "delims=" overwrites the default space and tab delimiters so that the output of the date command gets gobbled all at once. To capture multi-line output, it can still essentially be a one-liner (using the variable lf as the delimiter in the resulting variable):

REM NB: in a batch file, need to use %%i not %i
setlocal EnableDelayedExpansion
SET lf=-
FOR /F "delims=" %%i IN ('dir \ /b') DO if ("!out!"=="") (set out=%%i) else (set out=!out!%lf%%%i)
ECHO %out%

To capture a piped expression, use ^| :

FOR /F "delims=" %%i IN ('svn info . ^| findstr "Root:"') DO set "URL=%%i"
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/108439", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3535/" ] }
108,485
I have a .z80 memory dump. How do I reverse engineer it? What do I need to know? How can I minimize manual labour?
The most powerful disassembler, IDA , supports z80. Also see the list of disassemblers published on the " Software Development Tools for Z80 Family " page
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108485", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13967/" ] }
108,503
Consider an indexed MySQL table with 7 columns, being constantly queried and written to. What is the advisable number of rows that this table should be allowed to contain before the performance would be improved by splitting the data off into other tables?
Whether or not you would get a performance gain by partitioning the data depends on the data and the queries you will run on it. You can store many millions of rows in a table and, with good indexes and well-designed queries, it will still be super-fast. Only consider partitioning if you are already confident that your indexes and queries are as good as they can be, as it can be more trouble than it's worth.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108503", "https://Stackoverflow.com", "https://Stackoverflow.com/users/192/" ] }
108,518
I am developing a Win32 application and I would like to use an RSA encryption library. Which library would you recommend?
If you're using Win32, why don't you simply use the built-in win32 crypto-API? Here's a little example how it works in practice: http://www.codeproject.com/KB/security/EncryptionCryptoAPI.aspx
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108518", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17675/" ] }
108,558
Can a cookie be shared between two sites on the same top level domain? Say www.example.com and secure.example.com ?We are looking into implementing a cache for non-secure content, and need to segregate secure content to another domain.What parameters does the cookie need? I'm using asp.net
Yes, you can. Use:

Response.Cookies("UID").Domain = ".myserver.com"
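In C# ASP.NET the equivalent would be something like this (cookie name, value and domain are placeholders; the leading dot makes the cookie visible to both www.example.com and secure.example.com):

HttpCookie cookie = new HttpCookie("UID", userId); // userId is a hypothetical value
cookie.Domain = ".example.com";                    // shared across subdomains
Response.Cookies.Add(cookie);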
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108558", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7277/" ] }
108,567
I have a SVN structure like this: /Projects /Project1 /Project2/someFolder /Project3 /Project4 I would like to move all the projects into the /Projects folder, which means I want to move Projects 3 and 4 from /someFolder into the /projects folder. The caveat: I'd like to keep the full history. I assume that every client would have to check out the stuff from the new location again, which is fine, but I still wonder what the simplest approach is to move directories without completely destroying the history? Subversion 1.5 if that matters.
svn help rename

Moving/renaming in Subversion keeps history intact.
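For example, a repository-side move that preserves history (the URLs are illustrative):

svn move http://svn.example.com/repo/someFolder/Project3 \
         http://svn.example.com/repo/Projects/Project3 \
         -m "Move Project3 under /Projects"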
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/108567", "https://Stackoverflow.com", "https://Stackoverflow.com/users/91/" ] }
108,682
Version Control with Subversion recommends the following layout for (single-project) repositories (complemented by this question ): /trunk/tags /rel.1 (approximately) .../branches /rel1fixes What are the relative merits of this arrangement when compared with a (perhaps) more process-oriented one?: /development /current /stable/qa (maybe) .../production /stable /Prod.2 /Prod.1/vendor /Rel.5.1 /Rel.5.2 Please note that I'm thinking of in-house deployment, rather than building a product. Disclaimer: although I'm a Subversion user, I've never had to deploy with it in a real live environment.
The main difference between the recommended layout and your proposed layout is that the recommended layout is somewhat self-documenting as to where to commit things, and how it behaves. For example, in the recommended layout, it's obvious that all new development is committed to trunk, and most branches are made from trunk. Also, it's obvious that you should never commit anything into /tags. Finally, it's safe to assume that branches are truly branches, which may contain changes specific to that particular branch purpose. With the proposed layout, some of these things are less certain. Is /development/stable branched from /current? What's the relation between /development/stable and /production/stable? Which of these directories are tags, and which ones can I actually check stuff into? Certainly this behavior can be documented, but by sticking to the accepted layout that everybody uses, you'll have an easier time getting new hires up to speed on how it works.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108682", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9634/" ] }
108,699
Is there a good object-relational-mapping library for PHP? I know of PDO /ADO, but they seem to only provide abstraction of differences between database vendors not an actual mapping between the domain model and the relational model. I'm looking for a PHP library that functions similarly to the way Hibernate does for Java and NHibernate does for .NET.
Look into Doctrine . Doctrine 1.2 implements Active Record. Doctrine 2+ is a DataMapper ORM. Also, check out Xyster . It's based on the Data Mapper pattern. Also, take a look at DataMapper vs. Active Record .
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/108699", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2327/" ] }
108,717
I'm using VS2005 in a solution with a mix of VB and C# in different projects. I use this solution on several different computers and XML comments with both /// (c#) and ''' (VB) have been fine for months. all of a sudden, on my main development machine, they've stopped working in VB. They still work in C#. They work in other projects, too (in VB). It's just all VB projects within this one solution. Does anyone have any ideas? I can't pinpoint when it stopped working as I haven't modified much of the VB code for weeks/months.
aha! in the 'compile' tab under properties, the 'generate documentation' checkbox was not ticked. looking at SVN it looks like someone checked in the VB projects with this unticked, for some reason. thanks for the help! it's my first time using this site. looks like the guys involved have done a good job. i love the fact you don't have to register.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108717", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
108,728
I am redesigning a command line application and am looking for a way to make its use more intuitive. Are there any conventions for the format of parameters passed into a command line application? Or any other method that people have found useful?
I see a lot of Windows command line specifics, but if your program is intended for Linux, I find the GNU command line standard to be the most intuitive. Basically, it uses double hyphens for the long form of a command (e.g., --help ) and a single hyphen for the short version (e.g., -h ). You can also "stack" the short versions together (e.g., tar -zxvf filename ) and mix 'n match long and short to your heart's content. The GNU site also lists standard option names . The getopt library greatly simplifies parsing these commands. If C's not your bag, Python has a similar library, as does Perl .
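To illustrate, a small hedged sketch using getopt_long for a program accepting -h/--help and -o FILE/--output FILE (GNU/POSIX systems; the program name and options are made up):

#include <getopt.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    static const struct option longopts[] = {
        {"help",   no_argument,       0, 'h'},
        {"output", required_argument, 0, 'o'},
        {0, 0, 0, 0}
    };
    int c;
    /* "ho:" mirrors the long options: -h takes nothing, -o takes a value. */
    while ((c = getopt_long(argc, argv, "ho:", longopts, NULL)) != -1) {
        switch (c) {
        case 'h': printf("usage: prog [-h] [-o FILE]\n"); return 0;
        case 'o': printf("output file: %s\n", optarg);    break;
        default:  return 1; /* getopt_long already printed an error */
        }
    }
    return 0;
}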
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2064/" ] }
108,768
I got a comment to my answer on this thread: Malloc inside a function call appears to be getting freed on return? In short I had code like this:

int *somefunc(void)
{
    int *temp = (int *)malloc(sizeof(int));
    temp[0] = 0;
    return temp;
}

I got this comment: Can I just say, please don't cast the return value of malloc? It is not required and can hide errors. I agree that the cast is not required in C. It is mandatory in C++, so I usually add them just in case I have to port the code to C++ one day. However, I wonder how casts like this can hide errors. Any ideas? Edit: Seems like there are very good and valid arguments on both sides. Thanks for posting, folks.
It seems fitting I post an answer, since I left the comment :P Basically, if you forget to include stdlib.h the compiler will assume malloc returns an int . Without casting, you will get a warning. With casting you won't. So by casting you get nothing, and run the risk of suppressing legitimate warnings. Much is written about this; a quick google search will turn up more detailed explanations. edit It has been argued that

TYPE *p;
p = (TYPE *)malloc(n * sizeof(TYPE));

makes it obvious when you accidentally don't allocate enough memory because, say, you thought p was of one type when it was actually another, and thus we should cast malloc because the advantage of this method overrides the smaller cost of accidentally suppressing compiler warnings. I would like to point out 2 things:

you should write p = malloc(sizeof(*p) * n); to always ensure you malloc the right amount of space
with the above approach, you need to make changes in 3 places if you ever change the type of p : once in the declaration, once in the malloc , and once in the cast.

In short, I still personally believe there is no need for casting the return value of malloc and it is certainly not best practice.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108768", "https://Stackoverflow.com", "https://Stackoverflow.com/users/15955/" ] }
108,782
Do you still use session or entity EJBs in your project? Why?
EJB3 is a vast improvement over previous versions. It's still technically the standard server-side implementation toolset for JavaEE and since it now has none of the previous baggage (thanks to annotations and Java Persistence), is quite usable and being deployed as we speak. As one commenter noted, JBoss SEAM is based upon it. EJB 3 is a viable alternative to Spring, and the two technologies may become more tightly related. this article details that Spring 3.0 will be compatible with EJB Lite (which I'm not sure what that is, exactly) and possibly be part of Java EE 6. EJB is not going anywhere.
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108782", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19281/" ] }
108,813
How can I handle 404 errors without the framework throwing a 500 error code?
http://jason.whitehorn.ws/2008/06/17/Friendly-404-Errors-In-ASPNET-MVC.aspx gives the following explanation: Add a wildcard routing rule as your final rule:

routes.MapRoute("Error", "{*url}", new { controller = "Error", action = "Http404" });

Any request that doesn't match another rule gets routed to the Http404 action of the Error controller, which you also need to configure:

public ActionResult Http404(string url)
{
    Response.StatusCode = 404;
    ViewData["url"] = url;
    return View();
}
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108813", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19710/" ] }
108,819
What is the best way to randomize an array of strings with .NET? My array contains about 500 strings and I'd like to create a new Array with the same strings but in a random order. Please include a C# example in your answer.
If you're on .NET 3.5, you can use the following IEnumerable coolness:

Random rnd = new Random();
string[] MyRandomArray = MyArray.OrderBy(x => rnd.Next()).ToArray();

Edit: and here's the corresponding VB.NET code:

Dim rnd As New System.Random
Dim MyRandomArray = MyArray.OrderBy(Function() rnd.Next()).ToArray()

Second edit, in response to remarks that System.Random "isn't threadsafe" and "only suitable for toy apps" due to returning a time-based sequence: as used in my example, Random() is perfectly thread-safe, unless you're allowing the routine in which you randomize the array to be re-entered, in which case you'll need something like lock (MyRandomArray) anyway in order not to corrupt your data, which will protect rnd as well. Also, it should be well understood that System.Random as a source of entropy isn't very strong. As noted in the MSDN documentation , you should use something derived from System.Security.Cryptography.RandomNumberGenerator if you're doing anything security-related. For example:

using System.Security.Cryptography;

...

RNGCryptoServiceProvider rnd = new RNGCryptoServiceProvider();
string[] MyRandomArray = MyArray.OrderBy(x => GetNextInt32(rnd)).ToArray();

...

static int GetNextInt32(RNGCryptoServiceProvider rnd)
{
    byte[] randomInt = new byte[4];
    rnd.GetBytes(randomInt);
    return BitConverter.ToInt32(randomInt, 0); // use all four bytes, not just the first
}
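If you'd rather avoid the O(n log n) sort-based approach, a hedged Fisher-Yates sketch that shuffles in place in O(n):

static void Shuffle<T>(T[] array, Random rnd)
{
    // Classic Fisher-Yates: swap each position with a random
    // not-yet-visited position (or itself).
    for (int i = array.Length - 1; i > 0; i--)
    {
        int j = rnd.Next(i + 1);
        T tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
}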
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/108819", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16440/" ] }
108,822
I would like to wipe out all data for a specific kind in Google App Engine. What is thebest way to do this?I wrote a delete script (hack), but since there is so much data istimeout's out after a few hundred records.
I am currently deleting the entities by their key, and it seems to be faster.

from google.appengine.ext import db

class bulkdelete(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        try:
            while True:
                q = db.GqlQuery("SELECT __key__ FROM MyModel")
                assert q.count()
                db.delete(q.fetch(200))
                time.sleep(0.5)
        except Exception, e:
            self.response.out.write(repr(e)+'\n')
            pass

from the terminal, I run curl -N http://...
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108822", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8548/" ] }
108,832
Is it possible to clone a virtual machine using VMware Fusion on Mac OS X? I'm trying the 30 day evaluation version but there doesn't appear to be a clone feature. I tried using the Finder to copy a VM's package structure but the copy didn't appear in the Virtual Machine Library.
Just use File->open to open the copy of the VM. It will probably ask you if you want to change the VM's unique ID. If you plan to run both the original and the clone at the same time, and it's not a Windows OS that needs activation, you should say yes.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108832", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1450/" ] }
108,853
I have some code in a javascript file that needs to send queries back to the server. The question is, how do I find the url for the script that I am in, so I can build a proper request url for ajax. I.e., the same script is included on / , /help , /whatever , and so on, while it will always need to request from /data.json . Additionally, the same site is run on different servers, where the / -folder might be placed differently. I have means to resolve the relative url where I include the Javascript (ez-publish template), but not within the javascript file itself. Are there small scripts that will work on all browsers made for this?
For this I like to put <link> elements in the page's <head> , containing the URLs to use for requests. They can be generated by your server-side language so they always point to the right view:

<link id="link-action-1" href="${reverse_url ('action_1')}"/>

becomes

<link id="link-action-1" href="/my/web/root/action-1/"/>

and can be retrieved by Javascript with:

document.getElementById ('link-action-1').href;
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108853", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3355/" ] }
108,866
Is there a version of memset() which sets a value that is larger than 1 byte (char)? For example, let's say we have a memset32() function, so using it we can do the following:

int32_t array[10];
memset32(array, 0xDEADBEEF, sizeof(array));

This will set the value 0xDEADBEEF in all the elements of array. Currently it seems to me this can only be done with a loop. Specifically, I am interested in a 64 bit version of memset(). Know anything like that?
#include <stdint.h>
#include <string.h>

void memset64(void *dest, uint64_t value, uintptr_t size)
{
    uintptr_t i;

    /* Fill the bulk of the buffer eight bytes at a time. */
    for (i = 0; i < (size & ~(uintptr_t)7); i += 8) {
        memcpy((char *)dest + i, &value, 8);
    }

    /* Copy any remaining tail bytes one at a time. */
    for (; i < size; i++) {
        ((char *)dest)[i] = ((char *)&value)[i & 7];
    }
}

(Explanation, as requested in the comments: when you assign to a pointer, the compiler assumes that the pointer is aligned to the type's natural alignment; for uint64_t, that is 8 bytes. memcpy() makes no such assumption. On some hardware unaligned accesses are impossible, so assignment is not a suitable solution unless you know unaligned accesses work on the hardware with small or no penalty, or know that they will never occur, or both. The compiler will replace small memcpy()s and memset()s with more suitable code so it is not as horrible as it looks; but if you do know enough to guarantee assignment will always work and your profiler tells you it is faster, you can replace the memcpy with an assignment. The second for() loop is present in case the amount of memory to be filled is not a multiple of 64 bits. If you know it always will be, you can simply drop that loop.)
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108866", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7748/" ] }
108,892
Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist? If not, it should be pretty easy to write. The easiest way would be to:

1. Run python-autotest myfile1.py myfile2.py etc.py
2. Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3. Run any tests in those files.
4. If a test fails, but in the previous run it passed, generate a growl alert.
5. Same with tests that fail then pass.
6. Wait, and repeat steps 2-5.

The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save, but with slower tests this might take longer than the time between saves, and/or could use a lot of CPU power, etc. The best way to do it would be to actually see what bits of code have changed: if function abc() has changed, only run tests that interact with it. While this would be great, I think it'd be extremely complex to implement.

To summarise:

- Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
- How do you check which functions have changed between two revisions of a script?
- Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback.)
I found autonose to be pretty unreliable but sniffer seems to work very well.

$ pip install sniffer
$ cd myproject

Then instead of running "nosetests", you run:

$ sniffer

Or instead of nosetests --verbose --with-doctest, you run:

$ sniffer -x--verbose -x--with-doctest

As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries: pyinotify, pywin32 or MacFSEvents (all installable via pip etc.)
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/108892", "https://Stackoverflow.com", "https://Stackoverflow.com/users/745/" ] }
108,926
At a company where I worked, my colleagues and I implemented a tailored document distribution system on top of XSL-FO. My task was to get the script to deliver the documents and to configure the CUPS print server and the fax server, so I never had the time to get my hands dirty with XSL-FO. I'm thinking of implementing something along the lines of what was built there, but I'll need some templates to work with while testing. Where can I find some good tutorials on XSL-FO, given that I've already mastered the fop process?
I like to refer people to this 2003 IBM developerWorks article: HTML to Formatting Objects (FO) conversion guide

I don't recommend using the provided .xsl to convert HTML to FO, but use the narrative to understand the different XSL-FO constructs and how they relate to HTML (which we all understand).
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/108926", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8167/" ] }
108,971
We have two versions of a managed C++ assembly, one for x86 and one for x64. This assembly is called by a .NET application compiled for AnyCPU. We are deploying our code via a file-copy install, and would like to continue to do so. Is it possible to use a side-by-side assembly manifest to load an x86 or x64 assembly respectively when an application is dynamically selecting its processor architecture? Or is there another way to get this done in a file-copy deployment (e.g. not using the GAC)?
I created a simple solution that is able to load a platform-specific assembly from an executable compiled as AnyCPU. The technique used can be summarized as follows:

1. Make sure the default .NET assembly-loading mechanism (the "Fusion" engine) can't find either the x86 or x64 version of the platform-specific assembly.
2. Before the main application attempts to load the platform-specific assembly, install a custom assembly resolver in the current AppDomain.
3. Now when the main application needs the platform-specific assembly, the Fusion engine will give up (because of step 1) and call our custom resolver (because of step 2); in the custom resolver we determine the current platform and use a directory-based lookup to load the appropriate DLL.

To demonstrate this technique, I am attaching a short, command-line based tutorial. I tested the resulting binaries on Windows XP x86 and then Vista SP1 x64 (by copying the binaries over, just like your deployment).

Note 1: "csc.exe" is the C# compiler. This tutorial assumes it is in your path (my tests were using "C:\WINDOWS\Microsoft.NET\Framework\v3.5\csc.exe").

Note 2: I recommend you create a temporary folder for the tests and run a command line (or PowerShell) whose current working directory is set to this location, e.g.:

(cmd.exe)
C:
mkdir \TEMP\CrossPlatformTest
cd \TEMP\CrossPlatformTest

Step 1: The platform-specific assembly is represented by a simple C# class library:

// file 'library.cs' in C:\TEMP\CrossPlatformTest
namespace Cross.Platform.Library
{
    public static class Worker
    {
        public static void Run()
        {
            System.Console.WriteLine("Worker is running");
            System.Console.WriteLine("(Enter to continue)");
            System.Console.ReadLine();
        }
    }
}

Step 2: We compile the platform-specific assemblies using simple command-line commands:

(cmd.exe from Note 2)
mkdir platform\x86
csc /out:platform\x86\library.dll /target:library /platform:x86 library.cs
mkdir platform\amd64
csc /out:platform\amd64\library.dll /target:library /platform:x64 library.cs

Step 3: The main program is split into two parts. The "bootstrapper" contains the main entry point for the executable and registers a custom assembly resolver in the current AppDomain:

// file 'bootstrapper.cs' in C:\TEMP\CrossPlatformTest
namespace Cross.Platform.Program
{
    public static class Bootstrapper
    {
        public static void Main()
        {
            System.AppDomain.CurrentDomain.AssemblyResolve += CustomResolve;
            App.Run();
        }

        private static System.Reflection.Assembly CustomResolve(
            object sender,
            System.ResolveEventArgs args)
        {
            if (args.Name.StartsWith("library"))
            {
                string fileName = System.IO.Path.GetFullPath(
                    "platform\\"
                    + System.Environment.GetEnvironmentVariable("PROCESSOR_ARCHITECTURE")
                    + "\\library.dll");
                System.Console.WriteLine(fileName);
                if (System.IO.File.Exists(fileName))
                {
                    return System.Reflection.Assembly.LoadFile(fileName);
                }
            }
            return null;
        }
    }
}

"Program" is the "real" implementation of the application (note that App.Run is invoked at the end of Bootstrapper.Main):

// file 'program.cs' in C:\TEMP\CrossPlatformTest
namespace Cross.Platform.Program
{
    public static class App
    {
        public static void Run()
        {
            Cross.Platform.Library.Worker.Run();
        }
    }
}

Step 4: Compile the main application on the command line:

(cmd.exe from Note 2)
csc /reference:platform\x86\library.dll /out:program.exe program.cs bootstrapper.cs

Step 5: We're now finished.
The structure of the directory we created should be as follows:

(C:\TEMP\CrossPlatformTest, root dir)
    platform (dir)
        amd64 (dir)
            library.dll
        x86 (dir)
            library.dll
    program.exe
    *.cs (source files)

If you now run program.exe on a 32-bit platform, platform\x86\library.dll will be loaded; if you run program.exe on a 64-bit platform, platform\amd64\library.dll will be loaded. Note that I added Console.ReadLine() at the end of the Worker.Run method so that you can use Task Manager/Process Explorer to investigate the loaded DLLs, or you can use Visual Studio/Windows Debugger to attach to the process to see the call stack etc. When program.exe is run, our custom assembly resolver is attached to the current AppDomain. As soon as .NET starts loading the Program class, it sees a dependency on the 'library' assembly, so it tries loading it. However, no such assembly is found (because we've hidden it in the platform/* subdirectories). Luckily, our custom resolver knows our trickery and, based on the current platform, it tries loading the assembly from the appropriate platform/* subdirectory.
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/108971", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6156/" ] }
109,000
My C application uses third-party libraries which do their own memory management. In order to be robust, my application has code to deal with failures of library functions due to a lack of free memory. I would like to test this code, and for this I need to simulate failures due to lack of memory. What tools are recommended for this? My environment is Linux/gcc.
You can use ulimit to limit the amount of resources a user can use, including memory. So you create a test user, limit their memory use to something just enough to launch your program, and watch it die :)

Example:

ulimit -m 64

sets a memory limit of 64 kB.
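One caveat: on many modern Linux kernels the resident-set limit that ulimit -m controls is not actually enforced, so if the limit appears to have no effect, limiting virtual memory instead tends to be more reliable for making malloc() fail. A rough sketch (the 10240 kB figure is an arbitrary assumption; tune it to just above your program's startup needs):

# run in a subshell so the limit doesn't clobber your login shell
(ulimit -v 10240; ./myapp)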
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109000", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11886/" ] }
109,023
8 bits representing the number 7 look like this:

00000111

Three bits are set. What are the algorithms to determine the number of set bits in a 32-bit integer?
This is known as the 'Hamming weight', 'popcount' or 'sideways addition'. Some CPUs have a single built-in instruction to do it and others have parallel instructions which act on bit vectors. Instructions like x86's popcnt (on CPUs where it's supported) will almost certainly be fastest for a single integer. Some other architectures may have a slow instruction implemented with a microcoded loop that tests a bit per cycle (citation needed - hardware popcount is normally fast if it exists at all).

The 'best' algorithm really depends on which CPU you are on and what your usage pattern is. Your compiler may know how to do something that's good for the specific CPU you're compiling for, e.g. C++20 std::popcount(), or C++ std::bitset<32>::count(), as a portable way to access builtin / intrinsic functions (see another answer on this question). But your compiler's choice of fallback for target CPUs that don't have hardware popcnt might not be optimal for your use-case. Or your language (e.g. C) might not expose any portable function that could use a CPU-specific popcount when there is one.

Portable algorithms that don't need (or benefit from) any HW support

A pre-populated table lookup method can be very fast if your CPU has a large cache and you are doing lots of these operations in a tight loop. However it can suffer because of the expense of a 'cache miss', where the CPU has to fetch some of the table from main memory. (Look up each byte separately to keep the table small.) If you want popcount for a contiguous range of numbers, only the low byte is changing for groups of 256 numbers, making this very good.

If you know that your bytes will be mostly 0's or mostly 1's then there are efficient algorithms for these scenarios, e.g. clearing the lowest set bit with a bithack in a loop until the value becomes zero.

I believe a very good general-purpose algorithm is the following, known as 'parallel' or the 'variable-precision SWAR algorithm'. I have expressed this in a C-like pseudo language; you may need to adjust it to work for a particular language (e.g. using uint32_t for C++ and >>> in Java). GCC 10 and clang 10.0 can recognize this pattern / idiom and compile it to a hardware popcnt or equivalent instruction when available, giving you the best of both worlds. (https://godbolt.org/z/qGdh1dvKK)

int numberOfSetBits(uint32_t i)
{
    // Java: use int, and use >>> instead of >>. Or use Integer.bitCount()
    // C or C++: use uint32_t
    i = i - ((i >> 1) & 0x55555555);                 // add pairs of bits
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);  // quads
    i = (i + (i >> 4)) & 0x0F0F0F0F;                 // groups of 8
    return (i * 0x01010101) >> 24;                   // horizontal sum of bytes
}

For JavaScript: coerce to integer with |0 for performance, i.e. change the first line to i = (i|0) - ((i >> 1) & 0x55555555);

This has the best worst-case behaviour of any of the algorithms discussed, so it will efficiently deal with any usage pattern or values you throw at it. (Its performance is not data-dependent on normal CPUs where all integer operations including multiply are constant-time. It doesn't get any faster with "simple" inputs, but it's still pretty decent.)

References:

https://graphics.stanford.edu/~seander/bithacks.html
https://catonmat.net/low-level-bit-hacks for bithack basics, like how subtracting 1 flips contiguous zeros.
https://en.wikipedia.org/wiki/Hamming_weight
http://gurmeet.net/puzzles/fast-bit-counting-routines/
http://aggregate.ee.engr.uky.edu/MAGIC/#Population%20Count%20(Ones%20Count)

How this SWAR bithack works:

i = i - ((i >> 1) & 0x55555555);

The first step is an optimized version of masking to isolate the odd / even bits, shifting to line them up, and adding. This effectively does 16 separate additions in 2-bit accumulators (SWAR = SIMD Within A Register), like (i & 0x55555555) + ((i>>1) & 0x55555555).

The next step takes the odd/even eight of those 16x 2-bit accumulators and adds again, producing 8x 4-bit sums. The i - ... optimization isn't possible this time, so it just masks before / after shifting. Using the same 0x33... constant both times instead of 0xccc... before shifting is a good thing when compiling for ISAs that need to construct 32-bit constants in registers separately.

The final shift-and-add step of (i + (i >> 4)) & 0x0F0F0F0F widens to 4x 8-bit accumulators. It masks after adding instead of before, because the maximum value in any 4-bit accumulator is 4, if all 4 bits of the corresponding input bits were set. 4 + 4 = 8, which still fits in 4 bits, so carry between nibble elements is impossible in i + (i >> 4).

So far this is just fairly normal SIMD using SWAR techniques with a few clever optimizations. Continuing on with the same pattern for 2 more steps can widen to 2x 16-bit then 1x 32-bit counts. But there is a more efficient way on machines with fast hardware multiply:

Once we have few enough "elements", a multiply with a magic constant can sum all the elements into the top element. In this case, byte elements. Multiply is done by left-shifting and adding, so a multiply of x * 0x01010101 results in x + (x<<8) + (x<<16) + (x<<24). Our 8-bit elements are wide enough (and holding small enough counts) that this doesn't produce carry into the top 8 bits.

A 64-bit version of this can do 8x 8-bit elements in a 64-bit integer with a 0x0101010101010101 multiplier, and extract the high byte with >>56. So it doesn't take any extra steps, just wider constants. This is what GCC uses for __builtin_popcountll on x86 systems when the hardware popcnt instruction isn't enabled. If you can use builtins or intrinsics for this, do so to give the compiler a chance to do target-specific optimizations.

With full SIMD for wider vectors (e.g. counting a whole array)

This bitwise-SWAR algorithm could parallelize to be done in multiple vector elements at once, instead of in a single integer register, for a speedup on CPUs with SIMD but no usable popcount instruction (e.g. x86-64 code that has to run on any CPU, not just Nehalem or later). However, the best way to use vector instructions for popcount is usually by using a variable-shuffle to do a table-lookup for 4 bits at a time of each byte in parallel (the 4 bits index a 16-entry table held in a vector register).

On Intel CPUs, the hardware 64-bit popcnt instruction can outperform an SSSE3 PSHUFB bit-parallel implementation by about a factor of 2, but only if your compiler gets it just right. Otherwise SSE can come out significantly ahead. Newer compiler versions are aware of the popcnt false dependency problem on Intel.

https://github.com/WojciechMula/sse-popcount - state-of-the-art x86 SIMD popcount for SSSE3, AVX2, AVX512BW, AVX512VBMI, or AVX512 VPOPCNT, using Harley-Seal across vectors to defer popcount within an element.
(Also ARM NEON.)

Counting 1 bits (population count) on large data using AVX-512 or AVX-2

Related: https://github.com/mklarqvist/positional-popcount - separate counts for each bit-position of multiple 8, 16, 32, or 64-bit integers. (Again, x86 SIMD including AVX-512 is really good at this, with vpternlogd making Harley-Seal very good.)
{ "score": 11, "source": [ "https://Stackoverflow.com/questions/109023", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16881/" ] }
109,086
I am using setInterval(fname, 10000); to call a function every 10 seconds in JavaScript. Is it possible to stop calling it on some event? I want the user to be able to stop the repeated refresh of data.
setInterval() returns an interval ID, which you can pass to clearInterval():

var refreshIntervalId = setInterval(fname, 10000);

/* later */
clearInterval(refreshIntervalId);

See the docs for setInterval() and clearInterval().
{ "score": 12, "source": [ "https://Stackoverflow.com/questions/109086", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1448/" ] }
109,087
Is there a built-in method in Python to get an array of all a class's instance variables? For example, if I have this code:

class hi:
    def __init__(self):
        self.ii = "foo"
        self.kk = "bar"

Is there a way for me to do this:

>>> mystery_method(hi)
["ii", "kk"]

Edit: I originally had asked for class variables erroneously.
Every object has a __dict__ attribute containing all its instance variables and their values. Try this:

>>> hi_obj = hi()
>>> hi_obj.__dict__.keys()

Output:

dict_keys(['ii', 'kk'])
{ "score": 9, "source": [ "https://Stackoverflow.com/questions/109087", "https://Stackoverflow.com", "https://Stackoverflow.com/users/422/" ] }
109,124
Is it possible to run an external process from Perl, capture its stderr, stdout AND the process exit code? I seem to be able to do combinations of these, e.g. use backticks to get stdout, IPC::Open3 to capture outputs, and system() to get exit codes. How do you capture stderr, stdout, and the exit code all at once?
If you reread the documentation for IPC::Open3, you'll see a note that you should call waitpid to reap the child process. Once you do this, the status should be available in $?. The exit value is $? >> 8. See $? in perldoc perlvar.
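Putting the pieces together, a minimal sketch of capturing stdout, stderr and the exit code (the command is a placeholder; real code should also check open3 for errors):

use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;    # stderr needs its own filehandle, or it is merged into stdout
my $pid = open3(my $in, my $out, $err, 'some_command', 'arg1');

my @stdout = <$out>; # capture stdout
my @stderr = <$err>; # capture stderr

waitpid($pid, 0);    # reap the child; this populates $?
my $exit_code = $? >> 8;

Note that reading both streams sequentially like this can deadlock if the child fills one pipe while you drain the other; for large outputs use select or IO::Select.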
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/109124", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19214/" ] }
109,141
We have a large backlog of things we should do in our software, in a lot of different categories, for example:

- New problem areas for our products to solve
- New functionality supporting existing problem areas
- New functionality requested by our existing users
- Usability and "look" enhancements
- Architectural upgrades to the back-end
- Bug fixes

Managing all of these in a sensible fashion is a job that falls to Product Management, but it is tricky for a lot of reasons. Firstly, we have a number of different systems that hold the different things (market requirements documents in files, bugs in a bug database, customer requirements in our help desk system, engineering's wish-list on our intranet, etc). And secondly, many of the items are of wildly different size, scope, complexity and of course value, which means that choosing isn't as simple as just ordering a list by priority.

Because we are now fairly large, have a complex product and lots of customers, the basic solutions (a spreadsheet, a Google doc, a Basecamp to-do list) just aren't sufficient to deal with this. We need a way to group things together in various ways, prioritise them on an ongoing basis, and make it clear what we're doing and what is coming - without it requiring all of someone's time just to manage some tool.

How do you manage this in a way that allows the business to always do what is most valuable to existing customers, helps get new ones, and keeps the software innards sane? Note that this is different from the development side, which I think we have down pretty well. We develop everything in an iterative, agile fashion, and once something has been chosen for design and implementation, we can do that. It's the part where we need to figure out what to do next that's hardest! Have you found a method or a tool that works? If so, please share! (And if you would like to know the answer too, rate up the question so it stays visible :)

Addendum: Of course it's nice to fix all the bugs first, but in a real system that actually is installed on customers' machines, that is not always practical. For example, we may have a bug that only occurs very rarely and that would take a huge amount of time and architectural upheaval to fix - we might leave that for a while. Or we might have a bug where someone thinks something is hard to use, and we think fixing it should wait for a bigger revamp of that area. So, there are lots of reasons why we don't just fix them all straight away, but keep them open so we don't forget. Besides, it is the prioritization of the non-bugs that is the hardest; just imagine we don't have any :)
Managing a large backlog in an aggressive manner is almost always wasteful. By the time you get to the middle of a prioritized pile things have more often than not changed. I'd recommend adopting something like what Corey Ladas calls a priority filter: http://leansoftwareengineering.com/2008/08/19/priority-filter/ Essentially, you have a few buckets of increasing size and decreasing priority. You allow stakeholders to fill them, but force them to ignore the rest of the stories until there are openings in the buckets. Very simple but very effective. Edit: Allan asked what to do if tasks are of different sizes. Basically, a big part of making this work is right-sizing your tasks. We only apply this prioritization to user stories. User stories are typically significantly smaller than "create a community site". I would consider the community site bit an epic or even a project. It would need to be broken down into significantly smaller bits in order to be prioritized. That said, it can still be challenging to make stories similarly sized. Sometimes you just can't, so you communicate that during your planning decisions. With regards to moving wibbles two pixels, many of these things that are easy can be done for "free". You just have to be careful to balance these and only do them if they're really close to free and they're actually somewhat important. We treat bugs similarly. Bugs get one of three categories, Now, Soon or Eventually. We fix Now and Soon bugs as quickly as we can with the only difference being when we publish the fixes. Eventually bugs don't get fix unless devs get bored and have nothing to do or they somehow become higher priority.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109141", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13394/" ] }
109,179
This question is about the difference between ReadWrite and NonStrictReadWrite cache concurrency strategies for NHibernate's second level cache. As I understand it, the difference between these two strategies is relevant when you have a distributed replicated cache - nonstrict won't guarantee that one cache has the exact same value as another cache, while strict read/write should - assuming the cache provider does the appropriate distributed locking. The part I don't understand is how the strict vs nonstrict distinction is relevant when you have a single cache, or a distributed partitioned (non replicated) cache. Can it be relevant? It seems to me that in non replicated scenarios, the timestamps cache will ensure that stale results are not served. If it can be relevant, I would like to see an example.
What you assume is right: in a single-target/thread environment there's little difference. However, if you look at the cache providers there is a bit going on even in a multi-threaded scenario. How an object is re-cached from its modified state is different in the non-strict mode. For example, if your object is much heftier to reload but you'd like it re-cached after an update instead of footing the next user with the bill, then you'll see different performance with strict vs non-strict. Non-strict simply dumps an object from the cache after an update is performed; the price is paid for the fetch on the next access instead of in a post-update event handler. In the strict model, the re-cache is taken care of automatically. A similar thing happens with inserts: non-strict will do nothing, where strict will go behind and load the newly inserted object into the cache.

In non-strict you also have the possibility of a dirty read: since the cache isn't locked at the time of the read, you would not see the result of another thread's change to the item. In strict, the cache key for that item would lock and you would be held up but see the absolute latest result. So, even in a single-target environment, if there is a large amount of concurrent reading/editing of objects then you have a chance of seeing data that isn't really accurate. This of course becomes a problem when a save is performed while an edit screen is loading: the person thinking they're editing the latest version of the object really isn't, and they're in for a nasty surprise when they try to save the edits to the stale data they loaded.
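For reference, the strategy is declared per class or collection in the mapping; a minimal sketch of the two options being compared (the class and table names are made-up examples):

<class name="Order" table="Orders">
  <cache usage="read-write" />           <!-- strict: readers wait on a per-key lock during writes -->
</class>

<class name="Order" table="Orders">
  <cache usage="nonstrict-read-write" /> <!-- non-strict: no read lock, dirty reads possible -->
</class>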
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109179", "https://Stackoverflow.com", "https://Stackoverflow.com/users/48281/" ] }
109,186
We are setting up a new SharePoint for which we don't have a valid SSL certificate yet. I would like to call the Lists web service on it to retrieve some meta data about the setup. However, when I try to do this, I get the exception: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The nested exception contains the error message: The remote certificate is invalid according to the validation procedure. This is correct since we are using a temporary certificate. My question is: how can I tell the .Net web service client ( SoapHttpClientProtocol ) to ignore these errors?
Alternatively you can register a callback delegate which ignores the certificate error:

...
ServicePointManager.ServerCertificateValidationCallback = MyCertHandler;
...

static bool MyCertHandler(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors error)
{
    // Ignore errors
    return true;
}
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/109186", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9540/" ] }
109,188
Does anyone know how I can check to see if a directory is writeable in PHP? The function is_writable doesn't work for folders. Edit: It does work. See the accepted answer.
Yes, it does work for folders. From the PHP manual:

Returns TRUE if the filename exists and is writable. The filename argument may be a directory name, allowing you to check if a directory is writable.
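A quick sketch (the path is a placeholder):

$dir = '/path/to/uploads';
if (is_writable($dir)) {
    echo "$dir is writable";
} else {
    echo "$dir is not writable";
}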
{ "score": 8, "source": [ "https://Stackoverflow.com/questions/109188", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5261/" ] }
109,210
Normally I use imagecreatefromjpeg() and then getimagesize(), but with Firefox 3 I need to go about this differently. So now I'm using imagecreatefromstring(), but how do I retrieve the image dimensions now?
The imagesx() and imagesy() functions seem to work with images made with imagecreatefromstring(), too.
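For example, assuming $data already holds the raw image bytes:

$img = imagecreatefromstring($data);
if ($img !== false) {
    $width  = imagesx($img);
    $height = imagesy($img);
}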
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/18671/" ] }
109,232
What is the best way (performance wise) to paginate results in SQL Server 2000, 2005, 2008, 2012 if you also want to get the total number of results (before paginating)?
Getting the total number of results and paginating are two different operations. For the sake of this example, let's assume that the query you're dealing with is:

SELECT * FROM Orders WHERE OrderDate >= '1980-01-01' ORDER BY OrderDate

In this case, you would determine the total number of results using:

SELECT COUNT(*) FROM Orders WHERE OrderDate >= '1980-01-01'

...which may seem inefficient, but is actually pretty performant, assuming all indexes etc. are properly set up.

Next, to get actual results back in a paged fashion, the following query would be most efficient:

SELECT *
FROM (
    SELECT ROW_NUMBER() OVER (ORDER BY OrderDate) AS RowNum, *
    FROM Orders
    WHERE OrderDate >= '1980-01-01'
) AS RowConstrainedResult
WHERE RowNum >= 1
    AND RowNum < 20
ORDER BY RowNum

This will return rows 1-19 of the original query. The cool thing here, especially for web apps, is that you don't have to keep any state, except the row numbers to be returned.
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/109232", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19331/" ] }
109,280
I'm working with DotNetNuke's scheduler to schedule tasks, and I'm looking to get the physical file path of an email template that I created. The problem is that HttpContext is null because the scheduled task runs on a different thread and there is no HTTP request. How would you go about getting the file's physical path?
System.Web.Hosting.HostingEnvironment.MapPath is what you're looking for. Whenever you're using the Server or HttpContext.Current objects, check first to see if HostingEnvironment has what you need.
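A quick sketch (the template path is a made-up example):

string path = System.Web.Hosting.HostingEnvironment.MapPath("~/DesktopModules/MyModule/Templates/Email.html");

Unlike HttpContext.Current.Server.MapPath, this works even when there is no current request, which is exactly the situation inside a scheduled task.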
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109280", "https://Stackoverflow.com", "https://Stackoverflow.com/users/19490/" ] }
109,284
Is it possible to test the use of a given layout using RSpec with Rails? For example, I'd like a matcher that does the following:

response.should use_layout('my_layout_name')

I found a use_layout matcher when Googling, but it doesn't work, as neither the response nor the controller seems to have the layout property that the matcher was looking for.
David Chelimsky posted a good answer over on the Ruby Forum:

response.should render_template("layouts/some_layout")
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/109284", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6432/" ] }
109,317
Is there any good reason to use C-strings in C++ nowadays? My textbook uses them in examples at some points, and I really feel like it would be easier just to use a std::string.
The only reason I've had to use them is when interfacing with 3rd-party libraries that use C-style strings. There might also be esoteric situations where you would use C-style strings for performance reasons, but more often than not, using methods on C++ strings is probably faster due to inlining and specialization, etc. You can use the c_str() method in many cases when working with those sorts of APIs, but you should be aware that the char * returned is const, and you should not modify the string via that pointer. In those sorts of situations, you can still use a vector<char> instead, and at least get the benefit of easier memory management.
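A small sketch of both techniques (the two C functions are hypothetical stand-ins for a real library):

#include <string>
#include <vector>

extern "C" void legacy_print(const char *s);     // hypothetical read-only C API
extern "C" void legacy_fill(char *buf, int len); // hypothetical C API that writes into a buffer

void demo()
{
    std::string name = "example";
    legacy_print(name.c_str());            // const view; the library must not modify or free it

    std::vector<char> buf(256);
    legacy_fill(&buf[0], (int)buf.size()); // writable buffer, memory managed by the vector
}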
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/109317", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2147/" ] }
109,318
Using .NET, what limitations (if any) are there in using the XmlSerializer? For example, can you serialize Images to XML?
The XmlSerializer has a few drawbacks:

- It must know all the types being serialized. You cannot pass it something by interface that represents a type that the serializer does not know.
- It cannot do circular references.
- It will serialize the same object multiple times if it is referenced multiple times in the object graph.
- It cannot handle private field serialization.

I (stupidly) wrote my own serializer to get around some of these problems. Don't do that; it is a lot of work and you will find subtle bugs in it months down the road. The only thing I gained in writing my own serializer and formatter was a greater appreciation of the minutia involved in object graph serialization.

I found the NetDataContractSerializer when WCF came out. It does all the stuff from above that XmlSerializer doesn't do. It drives the serialization in a similar fashion to the XmlSerializer: one decorates various properties or fields with attributes to inform the serializer what to serialize. I replaced the custom serializer I had written with the NetDataContractSerializer and was very happy with the results. I would highly recommend it.
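A minimal round-trip sketch (the Person type is a made-up example):

using System.IO;
using System.Runtime.Serialization;

[Serializable]
public class Person
{
    public string Name;
    public Person Friend; // circular references are handled
}

var serializer = new NetDataContractSerializer();
using (var stream = File.Create("person.xml"))
{
    serializer.WriteObject(stream, somePerson); // somePerson defined elsewhere
}
using (var stream = File.OpenRead("person.xml"))
{
    var person = (Person)serializer.ReadObject(stream);
}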
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109318", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13227/" ] }
109,325
How do you perform the equivalent of Oracle's DESCRIBE TABLE in PostgreSQL (using the psql command)?
Try this (in the psql command-line tool):

\d+ tablename

See the manual for more info.
{ "score": 13, "source": [ "https://Stackoverflow.com/questions/109325", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2657951/" ] }
109,364
I'm trying to find/make an algorithm to compute the intersection (a new filled object) of two arbitrary filled 2D objects. The objects are defined using either lines or cubic beziers and may have holes or self-intersect. I'm aware of several existing algorithms doing the same with polygons, listed here . However, I'd like to support beziers without subdividing them into polygons, and the output should have roughly the same control points as the input in areas where there are no intersections. This is for an interactive program to do some CSG but the clipping doesn't need to be real-time. I've searched for a while but haven't found good starting points.
I found the following publication to be the best source of information regarding Bezier clipping:

T. W. Sederberg, BYU, Computer Aided Geometric Design Course Notes

Chapter 7, which covers curve intersection, is available online. It outlines 4 different approaches to finding intersections and describes Bezier clipping in detail: https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1000&context=facpub
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109364", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16509/" ] }
109,383
I am relatively new to Java, and often find that I need to sort a Map<Key, Value> on the values. Since the values are not unique, I find myself converting the keySet into an array , and sorting that array through array sort with a custom comparator that sorts on the value associated with the key. Is there an easier way?
Here's a generic-friendly version:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

public class MapUtil {

    public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
        List<Entry<K, V>> list = new ArrayList<>(map.entrySet());
        list.sort(Entry.comparingByValue());

        Map<K, V> result = new LinkedHashMap<>();
        for (Entry<K, V> entry : list) {
            result.put(entry.getKey(), entry.getValue());
        }
        return result;
    }
}
{ "score": 10, "source": [ "https://Stackoverflow.com/questions/109383", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9466/" ] }
109,399
I know there's JScript.NET, but it isn't the same as the JavaScript we know from the web. Does anyone know if there are any JavaScript based platforms/compilers for desktop development? Most specifically Windows desktop development.
Yes, with Adobe AIR . Adobe AIR lets you make desktop applications with Javascript, Flex, or Flash.
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109399", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7831/" ] }
109,432
In which parts of a project is writing unit tests nearly or really impossible? Data access? FTP? If there is an answer to this question, then 100% coverage is a myth, isn't it?
Here I found (via Haacked) something Michael Feathers says that can be an answer. He says a test is not a unit test if:

- It talks to the database
- It communicates across the network
- It touches the file system
- It can't run at the same time as any of your other unit tests
- You have to do special things to your environment (such as editing config files) to run it

Again, in the same article he adds:

Generally, unit tests are supposed to be small, they test a method or the interaction of a couple of methods. When you pull the database, sockets, or file system access into your unit tests, they are not really about those methods any more; they are about the integration of your code with that other software.
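To make the distinction concrete, a small Python sketch (the functions are made up for illustration):

# NOT a unit test by this definition: it touches the file system
def test_loads_config_from_disk():
    config = load_config("/etc/myapp.conf")
    assert config.port == 8080

# A unit test: input is passed in directly, nothing external is touched
def test_parses_port_from_text():
    config = parse_config("port = 8080")
    assert config.port == 8080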
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/109432", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11374/" ] }
109,449
Is there a (cross-platform) way to get a C FILE* handle from a C++ std::fstream ? The reason I ask is because my C++ library accepts fstreams and in one particular function I'd like to use a C library that accepts a FILE*.
The short answer is no.

The reason is that std::fstream is not required to use a FILE* as part of its implementation. So even if you manage to extract a file descriptor from the std::fstream object and manually build a FILE object, you will have other problems, because you will then have two buffered objects writing to the same file descriptor.

The real question is: why do you want to convert the std::fstream object into a FILE*?

Though I don't recommend it, you could try looking up funopen(). Unfortunately, this is not a POSIX API (it's a BSD extension), so its portability is in question. Which is also probably why I can't find anybody that has wrapped a std::stream with an object like this.

FILE *funopen(
    const void *cookie,
    int    (*readfn) (void *, char *, int),
    int    (*writefn)(void *, const char *, int),
    fpos_t (*seekfn) (void *, fpos_t, int),
    int    (*closefn)(void *)
);

This allows you to build a FILE object and specify some functions that will be used to do the actual work. If you write appropriate functions, you can get them to read from the std::fstream object that actually has the file open.
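A rough, untested sketch of what a read-only wrapper could look like on a BSD-style system where funopen() exists:

#include <stdio.h>
#include <istream>

// readfn: pull up to n bytes out of the std::istream behind the cookie
static int stream_read(void *cookie, char *buf, int n)
{
    std::istream *in = static_cast<std::istream *>(cookie);
    in->read(buf, n);
    return static_cast<int>(in->gcount()); // returning 0 signals EOF to stdio
}

FILE *file_from_stream(std::istream &in)
{
    // write/seek/close are left unsupported in this sketch, hence the NULLs
    return funopen(&in, stream_read, NULL, NULL, NULL);
}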
{ "score": 7, "source": [ "https://Stackoverflow.com/questions/109449", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ] }
109,488
I keep hearing that div tags should be used for layout purposes and not table tags. So does that also apply to form layout? I know a form layout is still a layout, but it seems like creating form layouts with divs requires more HTML and CSS. So with that in mind, should form layouts use div tags instead?
Yes, it does apply to form layouts. Keep in mind that there are also tags like FIELDSET and LABEL which exist specifically for adding structure to a form, so it's not really a question of just using DIV. You should be able to mark up a form with pretty minimal HTML, and let CSS do the rest of the work. E.g.:

<fieldset>
    <div>
        <label for="nameTextBox">Name:</label>
        <input id="nameTextBox" type="text" />
    </div>
    ...
</fieldset>
{ "score": 6, "source": [ "https://Stackoverflow.com/questions/109488", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10708/" ] }
109,553
I need to query existing rules, as well as being able to easily add and delete rules. I haven't found any APIs for doing this. Is there something that I'm missing? The closest I've come to a solution is using iptables-save | iptables-xml for querying and manually calling the iptables command itself to add/delete rules. Another solution I've considered is simply regenerating the entire ruleset out of my application's database, flushing the whole chain, and then applying it again. But I want to avoid this, as I don't want to drop any packets - unless there's a way to do this atomically. I'm wondering if there's a better way. An API in C would be great; however, as I'm planning to build this into a stand-alone suid program, libraries that do this in ANY language are fine too.
From the netfilter FAQ : The answer unfortunately is: No. Now you might think 'but what about libiptc?'. As has been pointed out numerous times on the mailinglist(s), libiptc was NEVER meant to be used as a public interface. We don't guarantee a stable interface, and it is planned to remove it in the next incarnation of linux packet filtering. libiptc is way too low-layer to be used reasonably anyway. We are well aware that there is a fundamental lack for such an API, and we are working on improving that situation. Until then, it is recommended to either use system() or open a pipe into stdin of iptables-restore. The latter will give you a way better performance.
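Following that recommendation, a bare-bones sketch of feeding iptables-restore through a pipe from C (the chain, rule and binary path are arbitrary examples, and error handling is minimal). A nice side effect is that iptables-restore swaps in the whole table atomically, which addresses the concern about dropping packets during a flush-and-reload:

#include <stdio.h>

int apply_rules(void)
{
    /* iptables-restore reads a complete ruleset from stdin */
    FILE *p = popen("/sbin/iptables-restore", "w");
    if (p == NULL)
        return -1;

    fputs("*filter\n", p);
    fputs("-A INPUT -s 192.0.2.1 -j DROP\n", p); /* example rule */
    fputs("COMMIT\n", p);

    return pclose(p); /* non-zero exit means the ruleset was rejected */
}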
{ "score": 5, "source": [ "https://Stackoverflow.com/questions/109553", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10495/" ] }