source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
37,743 | I'd like to find the different ways to solve a real-life problem I had: imagine having a contest, or a game, during which the users collect points. You have to build a query to show the list of users with the best "n" scores. I'm making an example to clarify. Let's say that this is the Users table, with the points earned: UserId - Points: 1 - 100, 2 - 75, 3 - 50, 4 - 50, 5 - 50, 6 - 25. If I want the top 3 scores, the result will be: 1 - 100, 2 - 75, 3 - 50, 4 - 50, 5 - 50. This can be realized in a view or a stored procedure, as you want. My target DB is SQL Server. Actually I solved this, but I think there are different ways to obtain the result... faster or more efficient than mine. | Untested, but should work: select * from users where points in (select distinct top 3 points from users order by points desc) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1178/"
]
} |
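An alternative sketch for the same top-n-scores problem, using a ranking function (available in SQL Server 2005 and later); the Users table and Points column are assumed from the question:

```sql
-- DENSE_RANK numbers distinct score values, so ties within the
-- top 3 scores are all kept (users 3, 4 and 5 in the example).
SELECT UserId, Points
FROM (
    SELECT UserId, Points,
           DENSE_RANK() OVER (ORDER BY Points DESC) AS ScoreRank
    FROM Users
) AS Ranked
WHERE ScoreRank <= 3
ORDER BY Points DESC;
```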
37,759 | I have the following code in the web.config file of the default IIS site. <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> Then when I set up and browse to a virtual directory I get this error: Could not load file or assembly 'Charts' or one of its dependencies. The system cannot find the file specified. The virtual directory is inheriting the modules from the default web.config. How do you stop this inheritance? | I've found the answer. Wrap the httpModules section in location tags and set the inheritInChildApplications attribute to false. <location path="." inheritInChildApplications="false"> <system.web> <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> </system.web> </location> Now any virtual directories will not inherit the settings in this location section. @GateKiller This isn't another website, it's a virtual directory, so inheritance does occur. @petrich I've had hit-and-miss results using <remove /> . I have to remember to add it to every virtual directory, which is a pain. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2471/"
]
} |
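For comparison, the `<remove />` approach mentioned at the end would look roughly like this, placed in each child application's own web.config (hence the pain of repeating it per virtual directory):

```xml
<!-- Sketch: undo the inherited module registration in the child app. -->
<system.web>
  <httpModules>
    <remove name="ChartStreamHandler" />
  </httpModules>
</system.web>
```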
37,791 | At the moment my code (PHP) has too many SQL queries in it, e.g.: // not a real example, but you get the idea...$results = $db->GetResults("SELECT * FROM sometable WHERE iUser=$userid");if ($results) { // Do something} I am looking into using stored procedures to reduce this and make things a little more robust, but I have some concerns: I have hundreds of different queries in use around the web site, and many of them are quite similar. How should I manage all these queries when they are removed from their context (the code that uses the results) and placed in a stored procedure on the database? | The best course of action for you will depend on how you are approaching your data access. There are three approaches you can take: use stored procedures; keep the queries in the code (but put all your queries into functions and fix everything to use PDO for parameters, as mentioned earlier); or use an ORM tool. If all you want to do is get the raw SQL out of your PHP code but keep it relatively unchanged, stored procedures would be the way to go. The stored procedures vs raw SQL debate is a bit of a holy war, but K. Scott Allen makes an excellent point - albeit a throwaway one - in an article about versioning databases: Secondly, stored procedures have fallen out of favor in my eyes. I came from the WinDNA school of indoctrination that said stored procedures should be used all the time. Today, I see stored procedures as an API layer for the database. This is good if you need an API layer at the database level, but I see lots of applications incurring the overhead of creating and maintaining an extra API layer they don't need. In those applications stored procedures are more of a burden than a benefit. I tend to lean towards not using stored procedures. I've worked on projects where the DB has an API exposed through stored procedures, but stored procedures can impose some limitations of their own, and those projects have all, to varying degrees, used dynamically generated raw SQL in code to access the DB. Having an API layer on the DB gives better delineation of responsibilities between the DB team and the Dev team, at the expense of some of the flexibility you'd have if the query was kept in the code; however, PHP projects are less likely to have sizable enough teams to benefit from this delineation. Conceptually, you should probably have your database versioned. Practically speaking, however, you're far more likely to have just your code versioned than you are to have your database versioned. You are likely to be changing your queries when you are making changes to your code, but if you are changing the queries in stored procedures stored against the database then you probably won't be checking those in when you check the code in, and you lose many of the benefits of versioning for a significant area of your application. Regardless of whether or not you elect to use stored procedures, though, you should at the very least ensure that each database operation is stored in an independent function rather than being embedded into each of your page's scripts - essentially an API layer for your DB which is maintained and versioned with your code. If you're using stored procedures, this will effectively mean you have two API layers for your DB, one with the code and one with the DB, which you may feel unnecessarily complicates things if your project does not have separate teams. I certainly do.
If the issue is one of code neatness, there are ways to make code with SQL jammed in it more presentable, and the UserManager class shown below is a good way to start - the class only contains queries which relate to the 'user' table, each query has its own method in the class and the queries are indented into the prepare statements and formatted as you would format them in a stored procedure. // UserManager.php:class UserManager{ function getUsers() { $pdo = new PDO(...); $stmt = $pdo->prepare(' SELECT u.userId as id, u.userName, g.groupId, g.groupName FROM user u INNER JOIN group g ON u.groupId = g.groupId ORDER BY u.userName, g.groupName '); // iterate over result and prepare return value } function getUser($id) { // db code here }}// index.php:require_once("UserManager.php");$um = new UserManager;$users = $um->getUsers();foreach ($users as $user) echo $user['name']; However, if your queries are quite similar but you have huge numbers of permutations in your query conditions like complicated paging, sorting, filtering, etc, an Object/Relational mapper tool is probably the way to go, although the process of overhauling your existing code to make use of the tool could be quite complicated. If you decide to investigate ORM tools, you should look at Propel , the ActiveRecord component of Yii , or the king-daddy PHP ORM, Doctrine . Each of these gives you the ability to programmatically build queries to your database with all manner of complicated logic. Doctrine is the most fully featured, allowing you to template your database with things like the Nested Set tree pattern out of the box. In terms of performance, stored procedures are the fastest, but generally not by much over raw sql. ORM tools can have a significant performance impact in a number of ways - inefficient or redundant querying, huge file IO while loading the ORM libraries on each request, dynamic SQL generation on each query... all of these things can have an impact, but the use of an ORM tool can drastically increase the power available to you with a much smaller amount of code than creating your own DB layer with manual queries. Gary Richardson is absolutely right though, if you're going to continue to use SQL in your code you should always be using PDO's prepared statements to handle the parameters regardless of whether you're using a query or a stored procedure. The sanitisation of input is performed for you by PDO. // optional$attrs = array(PDO::ATTR_PERSISTENT => true);// create the PDO object$pdo = new PDO("mysql:host=localhost;dbname=test", "user", "pass", $attrs);// also optional, but it makes PDO raise exceptions instead of // PHP errors which are far more useful for debugging$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);$stmt = $pdo->prepare('INSERT INTO venue(venueName, regionId) VALUES(:venueName, :regionId)');$stmt->bindValue(":venueName", "test");$stmt->bindValue(":regionId", 1);$stmt->execute();$lastInsertId = $pdo->lastInsertId();var_dump($lastInsertId); Caveat: assuming that the ID is 1, the above script will output string(1) "1" . PDO->lastInsertId() returns the ID as a string regardless of whether the actual column is an integer or not. This will probably never be a problem for you as PHP performs casting of strings to integers automatically. 
The following will output bool(true) : // regular equality testvar_dump($lastInsertId == 1); but if you have code that is expecting the value to be an integer, like is_int or PHP's "is really, truly, 100% equal to" operator: var_dump(is_int($lastInsertId));var_dump($lastInsertId === 1); you could run into some issues. Edit: Some good discussion on stored procedures here | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/37791",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4012/"
]
} |
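If the stored-procedure route is taken, calling one through PDO looks much like the prepared-statement example above; this is a minimal sketch for MySQL with an invented procedure name (not from the answer), and note that CALL through PDO has some driver-specific quirks:

```php
<?php
// Hypothetical procedure get_user_results(userId).
$stmt = $pdo->prepare('CALL get_user_results(:userId)');
$stmt->bindValue(':userId', 42, PDO::PARAM_INT);
$stmt->execute();
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```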
37,805 | I have a List<int> and a List<customObject> . The customObject class has an ID property. How can I get a List<customObject> containing only the objects where the ID property is in the List<int> , using LINQ? Edit: I accepted Konrad's answer because it is easier/more intuitive to read. | var result = from o in objList where intList.Contains(o.ID) select o; | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37805",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2257/"
]
} |
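The same query in method syntax, plus a HashSet variant that avoids the linear Contains scan on large lists (type and variable names assumed from the question; requires using System.Linq):

```csharp
// Direct equivalent of the accepted query expression.
var result = objList.Where(o => intList.Contains(o.ID)).ToList();

// For a large intList, hash-based lookup is typically much faster.
var ids = new HashSet<int>(intList);
var resultFast = objList.Where(o => ids.Contains(o.ID)).ToList();
```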
37,809 | How can I go about generating a friendly URL in C#? Currently I simply replace spaces with an underscore, but how would I go about generating URLs like Stack Overflow? For example, how can I convert: How do I generate a Friendly URL in C#? Into how-do-i-generate-a-friendly-url-in-C | There are several things that could be improved in Jeff's solution, though. if (String.IsNullOrEmpty(title)) return ""; IMHO, not the place to test this. If the function gets passed an empty string, something went seriously wrong anyway. Throw an error or don't react at all. // remove any leading or trailing spaces left over… muuuch later:// remove trailing dash, if there is one Twice the work. Considering that each operation creates a whole new string, this is bad, even if performance is not an issue. // replace spaces with single dashtitle = Regex.Replace(title, @"\s+", "-");// if we end up with multiple dashes, collapse to single dash title = Regex.Replace(title, @"\-{2,}", "-"); Again, basically twice the work: First, use regex to replace multiple spaces at once. Then, use regex again to replace multiple dashes at once. Two expressions to parse, two automata to construct in memory, iterate twice over the string, create two strings: All these operations can be collapsed to a single one. Off the top of my head, without any testing whatsoever, this would be an equivalent solution: // make it all lower casetitle = title.ToLower();// remove entitiestitle = Regex.Replace(title, @"&\w+;", "");// remove anything that is not letters, numbers, dash, or spacetitle = Regex.Replace(title, @"[^a-z0-9\-\s]", "");// replace spacestitle = title.Replace(' ', '-');// collapse dashestitle = Regex.Replace(title, @"-{2,}", "-");// trim excessive dashes at the beginningtitle = title.TrimStart(new [] {'-'});// if it's too long, clip itif (title.Length > 80) title = title.Substring(0, 79);// remove trailing dashestitle = title.TrimEnd(new [] {'-'});return title; Notice that this method uses string functions instead of regex functions and char functions instead of string functions whenever possible. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/37809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
37,823 | Can you please point to alternative data storage tools and give good reasons to use them instead of good-old relational databases? In my opinion, most applications rarely use the full power of SQL--it would be interesting to see how to build an SQL-free application. | Plain text files in a filesystem: very simple to create and edit, easy for users to manipulate with simple tools (i.e. text editors, grep etc), efficient storage of binary documents. XML or JSON files on disk: as above, but with a bit more ability to validate the structure. Spreadsheet / CSV file: a very easy model for business users to understand. Subversion (or similar disk-based version control system): very good support for versioning of data. Berkeley DB (basically, a disk-based hashtable): very simple conceptually (just un-typed key/value), quite fast, no administration overhead, supports transactions I believe. Amazon's Simple DB: much like Berkeley DB I believe, but hosted. Google's App Engine Datastore: hosted and highly scalable, per-document key-value storage (i.e. flexible data model). CouchDB: document focus, simple storage of semi-structured / document-based data. Native language collections (stored in memory or serialised on disk): very tight language integration. Custom (hand-written) storage engine: potentially very high performance in required use cases. I can't claim to know anything much about them, but you might also like to look into object database systems . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/37823",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4018/"
]
} |
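As a taste of how small the disk-based key/value option can be, here is a sketch using Python's standard-library dbm module (a Berkeley-DB-style store); the file name is illustrative:

```python
import dbm

# "c" opens the file, creating it if missing; keys and values are bytes.
with dbm.open("scores.db", "c") as db:
    db[b"user:1"] = b"100"
    print(db[b"user:1"])   # b'100'
```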
37,830 | I want to show a chromeless modal window with a close button in the upper right corner.Is this possible? | You'll pretty much have to roll your own Close button, but you can hide the window chrome completely using the WindowStyle attribute, like this: <Window WindowStyle="None"> That will still have a resize border. If you want to make the window non-resizable then add ResizeMode="NoResize" to the declaration. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/37830",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2374/"
]
} |
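A minimal sketch of the rolled-together result (the class name and click handler are illustrative; the handler's code-behind would simply call this.Close()):

```xml
<Window x:Class="Demo.ChromelessWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        WindowStyle="None" ResizeMode="NoResize"
        Width="300" Height="200">
    <Grid>
        <!-- Hand-rolled close button in the upper right corner. -->
        <Button Content="X" Width="24" Height="24"
                HorizontalAlignment="Right" VerticalAlignment="Top"
                Margin="4" Click="CloseButton_Click" />
    </Grid>
</Window>
```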
37,956 | I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me. I've tried to use DirectShow with the SampleGrabber filter (using this sample http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx ), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong. I've pasted a part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected... [...]hr = pGrabber->SetOneShot(TRUE);hr = pGrabber->SetBufferSamples(TRUE);pControl->Run(); // Run the graph.pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.// Find the required buffer size.long cbBuffer = 0;hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);for( int i = 0 ; i < 25 ; ++i ){ pControl->Run(); // Run the graph. pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done. char *pBuffer = new char[cbBuffer]; hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer); AM_MEDIA_TYPE mt; hr = pGrabber->GetConnectedMediaType(&mt); VIDEOINFOHEADER *pVih; pVih = (VIDEOINFOHEADER*)mt.pbFormat; [...]}[...] Is there somebody with video software experience who can advise me about the code, or another, simpler library? Thanks Edit: MSDN links seem not to work ( see the bug ) | Currently these are the most popular video frameworks available on Win32 platforms: Video for Windows: old Windows framework coming from the age of Win95 but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed. DirectShow: standard WinXP framework, it can basically load all formats you can play with Windows Media Player. Rather difficult to use. Ffmpeg : more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC ) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay that comes shipped with it, or by other implementations in open-source software. Anyway I think it's still much easier to use than DS (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at the moment the link is down; hopefully not dead). QuickTime : the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed and also the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually QuickTime-only). Shouldn't be too difficult to implement. Gstreamer : the latest open-source framework. I don't know much about it; I guess it wraps over some of the other systems (but I'm not sure). All of these frameworks have been implemented as backends in OpenCV's Highgui, except for DirectShow. The default framework for Win32 OpenCV uses VFW (and is thus able only to open some AVI files); if you want to use the others you must download the CVS instead of the official release and still do some hacking on the code, and it's anyway not too complete; for example, the FFMPEG backend doesn't allow seeking in the stream. If you want to use QuickTime with OpenCV, this can help you. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/37956",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1578/"
]
} |
37,969 | Can anyone recommend a tool for quickly posting test messages onto a JMS queue? Description : The tool should allow the user to enter some data, perhaps an XMLpayload, and then submit it to a queue. I should be able to test consumer without producer. | This answer doesn't apply to all JMS brokers, but if you happen to be using Apache ActiveMQ , the web-based admin console (by default at http://localhost:8161/admin ) allows you to manually send text messages to topics or queues. It's handy for debugging. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37969",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2454/"
]
} |
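If a scriptable option is preferred over the web console, a bare-bones JMS producer is only a few lines; this sketch assumes the ActiveMQ client library, the default broker URL, and an invented queue name:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TestMessageSender {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
                session.createProducer(session.createQueue("TEST.QUEUE"));
        // Any text payload works; XML shown since the question mentions it.
        producer.send(session.createTextMessage("<order><id>1</id></order>"));
        connection.close();   // closes the session and producer too
    }
}
```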
37,976 | By default IntelliJ IDEA 7.0.4 seems to use 4 spaces for indentation in XML files. The project I'm working on uses 2 spaces as indentation in all it's XML. Is there a way to configure the indentation in IntelliJ's editor? | Sure there is. This is all you need to do: Go to File -> Settings -> Global Code Style -> General Disable the checkbox next to 'Use same settings for all file types' The 'XML' tab should become enabled. Click it and set the 'tab' (and probably 'indent') size to 2. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37976",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1113/"
]
} |
37,991 | I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm-standby. This lead me to think about putting this warm-standby at a remote location, but then I have the question: Is Access connecting to a remote database via ODBC usable? I.e. the remote database is maybe in the same country with ok ping times and I have a 1mbit SDSL line. | onnodb, The PostgreSQL ODBC driver is actively developed and an Access front-end combined with PostgreSQL server, in my opinion makes a great option on a LAN for rapid development. I have been involved in a reasonably big system (100+ PostgreSQL tables, 200+ Access forms, 1000+ Access queries & reports) and it has run excellently for a few years, with ~20 users. Any queries running slow because Access is doing something stupid can generally just be solved by using views , and any really data-intensive code can easily be moved into PostgreSQL functions and then called from Access. The only main ODBC-related issue we have is that there is no way to kill a slow running query from Access, so we do often get users just killing Access and then massive queries are just left executing on the server. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/37991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4045/"
]
} |
38,019 | Coming up with good, precise names for classes is notoriously difficult. Done right, it makes code more self-documenting and provides a vocabulary for reasoning about code at a higher level of abstraction. Classes which implement a particular design pattern might be given a name based on the well-known pattern name (e.g. FooFactory, FooFacade), and classes which directly model domain concepts can take their names from the problem domain, but what about other classes? Is there anything like a programmer's thesaurus that I can turn to when I'm lacking inspiration, and want to avoid using generic class names (like FooHandler, FooProcessor, FooUtils, and FooManager)? | I'll cite some passages from Implementation Patterns by Kent Beck: Simple Superclass Name "[...] The names should be short and punchy. However, to make the names precise sometimes seems to require several words. A way out of this dilemma is picking a strong metaphor for the computation. With a metaphor in mind, even single words bring with them a rich web of associations, connections, and implications. For example, in the HotDraw drawing framework, my first name for an object in a drawing was DrawingObject . Ward Cunningham came along with the typography metaphor: a drawing is like a printed, laid-out page. Graphical items on a page are figures, so the class became Figure . In the context of the metaphor, Figure is simultaneously shorter, richer, and more precise than DrawingObject ." Qualified Subclass Name "The names of subclasses have two jobs. They need to communicate what class they are like and how they are different. [...] Unlike the names at the roots of hierarchies, subclass names aren't used nearly as often in conversation, so they can be expressive at the cost of being concise. [...] Give subclasses that serve as the roots of hierarchies their own simple names. For example, HotDraw has a class Handle which presents figure-editing operations when a figure is selected. It is called, simply, Handle in spite of extending Figure . There is a whole family of handles and they most appropriately have names like StretchyHandle and TransparencyHandle . Because Handle is the root of its own hierarchy, it deserves a simple superclass name more than a qualified subclass name. Another wrinkle in subclass naming is multiple-level hierarchies. [...] Rather than blindly prepend the modifiers to the immediate superclass, think about the name from the reader's perspective. What class does he need to know this class is like? Use that superclass as the basis for the subclass name." Interface Two styles of naming interfaces depend on how you are thinking of the interfaces. Interfaces as classes without implementations should be named as if they were classes ( Simple Superclass Name , Qualified Subclass Name ). One problem with this style of naming is that the good names are used up before you get to naming classes. An interface called File needs an implementation class called something like ActualFile , ConcreteFile , or (yuck!) FileImpl (both a suffix and an abbreviation). In general, communicating whether one is dealing with a concrete or abstract object is important; whether the abstract object is implemented as an interface or a superclass is less important. Deferring the distinction between interfaces and superclasses is well supported by this style of naming, leaving you free to change your mind later if that becomes necessary. Sometimes, naming concrete classes simply is more important to communication than hiding the use of interfaces. In this case, prefix interface names with "I". If the interface is called IFile , the class can be simply called File . For more detailed discussion, buy the book! It's worth it! :) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/38019",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3598/"
]
} |
38,021 | How can I find the origins of conflicting DNS records? | You'll want the SOA (Start of Authority) record for a given domain name, and this is how you accomplish it using the universally available nslookup command line tool: command line> nslookup> set querytype=soa> stackoverflow.comServer: 217.30.180.230Address: 217.30.180.230#53Non-authoritative answer:stackoverflow.com origin = ns51.domaincontrol.com # ("primary name server" on Windows) mail addr = dns.jomax.net # ("responsible mail addr" on Windows) serial = 2008041300 refresh = 28800 retry = 7200 expire = 604800 minimum = 86400Authoritative answers can be found from:stackoverflow.com nameserver = ns52.domaincontrol.com.stackoverflow.com nameserver = ns51.domaincontrol.com. The origin (or primary name server on Windows) line tells you that ns51.domaincontrol is the main name server for stackoverflow.com . At the end of output all authoritative servers, including backup servers for the given domain, are listed. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/38021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/319/"
]
} |
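On systems with the BIND utilities installed, dig returns the same record more tersely:

```sh
dig +short soa stackoverflow.com
```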
38,039 | How do I find the start of the week (both Sunday and Monday) knowing just the current time in C#? Something like: DateTime.Now.StartWeek(Monday); | Use an extension method: public static class DateTimeExtensions{ public static DateTime StartOfWeek(this DateTime dt, DayOfWeek startOfWeek) { int diff = (7 + (dt.DayOfWeek - startOfWeek)) % 7; return dt.AddDays(-1 * diff).Date; }} Which can be used as follows: DateTime dt = DateTime.Now.StartOfWeek(DayOfWeek.Monday);DateTime dt = DateTime.Now.StartOfWeek(DayOfWeek.Sunday); | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/38039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
38,056 | The only nice way I've found is: import sysimport ostry: os.kill(int(sys.argv[1]), 0) print "Running"except: print "Not running" ( Source ) But is this reliable? Does it work with every process and every distribution? | Mark's answer is the way to go, after all, that's why the /proc file system is there. For something a little more copy/pasteable: >>> import os.path >>> os.path.exists("/proc/0") False >>> os.path.exists("/proc/12") True | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/38056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1531/"
]
} |
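A sketch combining both answers for POSIX systems: os.kill with signal 0 probes the PID without touching /proc, and distinguishing the error codes avoids misreading "no permission" as "not running":

```python
import errno
import os

def pid_running(pid):
    """Return True if a process with this PID exists (POSIX only)."""
    try:
        os.kill(pid, 0)      # signal 0: no signal is sent, only error checking
    except OSError as err:
        # EPERM means the process exists but belongs to another user.
        return err.errno == errno.EPERM
    return True
```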
38,068 | Is there any shorthand way of defining and using generic definitions without having to keep repeating a particular generic description, such that if there is a change I don't have to change all definitions/usages throughout the codebase? For example, is something like this possible: Typedef myGenDef = < Object1, Object2 >;HashMap< myGenDef > hm = new HashMap< myGenDef >();for (Entry< myGenDef > ent : hm.entrySet()){...} | There's the pseudo-typedef antipattern ... class StringList extends ArrayList<String> { } Good stuff, drink up! ;-) As the article notes, this technique has some serious issues, primarily that this "typedef" is actually a separate class and thus cannot be used interchangeably with either the type it extends or other similarly defined types. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
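A small sketch of the interchangeability problem the answer warns about:

```java
import java.util.ArrayList;
import java.util.List;

class StringList extends ArrayList<String> { }

public class PseudoTypedefDemo {
    static void frob(StringList words) { }

    public static void main(String[] args) {
        frob(new StringList());         // compiles fine
        List<String> plain = new ArrayList<String>();
        // frob(plain);                 // does NOT compile:
        // a List<String> is not a StringList, so the "typedef"
        // leaks into every API that uses it.
    }
}
```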
38,160 | Lets say I have a loop in Bash: for foo in `some-command`do do-something $foodone do-something is cpu bound and I have a nice shiny 4 core processor. I'd like to be able to run up to 4 do-something 's at once. The naive approach seems to be: for foo in `some-command`do do-something $foo &done This will run all do-something s at once, but there are a couple downsides, mainly that do-something may also have some significant I/O which performing all at once might slow down a bit. The other problem is that this code block returns immediately, so no way to do other work when all the do-something s are finished. How would you write this loop so there are always X do-something s running at once? | Depending on what you want to do xargs also can help (here: converting documents with pdf2ps): cpus=$( ls -d /sys/devices/system/cpu/cpu[[:digit:]]* | wc -w )find . -name \*.pdf | xargs --max-args=1 --max-procs=$cpus pdf2ps From the docs: --max-procs=max-procs-P max-procs Run up to max-procs processes at a time; the default is 1. If max-procs is 0, xargs will run as many processes as possible at a time. Use the -n option with -P; otherwise chances are that only one exec will be done. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/38160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/163/"
]
} |
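A plain job-control sketch for the same concurrency cap, in case xargs is unavailable or the loop body is hard to express as a single command line:

```bash
max_jobs=4
for foo in $(some-command); do
    # Block while the number of running background jobs is at the cap.
    while [ "$(jobs -r | wc -l)" -ge "$max_jobs" ]; do
        sleep 0.1
    done
    do-something "$foo" &
done
wait   # don't return until every remaining job has finished
```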
38,181 | I have a basic understanding of mock and fake objects, but I'm not sure I have a feeling about when/where to use mocking - especially as it would apply to this scenario here . | A unit test should test a single codepath through a single method. When the execution of a method passes outside of that method, into another object, and back again, you have a dependency. When you test that code path with the actual dependency, you are not unit testing; you are integration testing. While that's good and necessary, it isn't unit testing. If your dependency is buggy, your test may be affected in such a way as to return a false positive. For instance, you may pass the dependency an unexpected null, and the dependency may not throw on null as it is documented to do. Your test does not encounter a null argument exception as it should have, and the test passes. Also, you may find it's hard, if not impossible, to reliably get the dependent object to return exactly what you want during a test. That also includes throwing expected exceptions within tests. A mock replaces that dependency. You set expectations on calls to the dependent object, set the exact return values it should give you to perform the test you want, and/or what exceptions to throw so that you can test your exception handling code. In this way you can test the unit in question easily. TL;DR: Mock every dependency your unit test touches. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/38181",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/781/"
]
} |
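The TL;DR in roughly ten lines, using Python's unittest.mock as the mocking tool (all names are invented for illustration):

```python
from unittest.mock import Mock

def checkout_total(order, pricing_service):
    return order.total - pricing_service.discount_for(order.customer)

pricing = Mock()                        # stands in for the real dependency
pricing.discount_for.return_value = 10  # exact, scripted return value

order = Mock(total=100, customer="alice")
assert checkout_total(order, pricing) == 90
pricing.discount_for.assert_called_once_with("alice")  # expectation check
```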
38,198 | celsius = (5.0/9.0) * (fahr-32.0); Is it just a development choice that the C developers decided upon or is there a reason to this? I believe a float is smaller than a double, so it might be to prevent overflows caused by not knowing what decimal format to use. Is that the reason, or am I overlooking something? | celsius = (5.0/9.0) * (fahr-32.0); In this expression, 5.0 , 9.0 , and 32.0 are double s. That's the default type for a floating-point constant - if you wanted them to be float s, then you would use the F suffix: celsius = (5.0F/9.0F) * (fahr-32.0F); Note that if fahr was a double , then the result of this last expression would still be a double : as Vaibhav noted, types are promoted in such a way as to avoid potentially losing precision. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
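A one-line way to see that the suffix really changes the constant's type, on a typical platform where double is 8 bytes and float is 4:

```c
#include <stdio.h>

int main(void) {
    printf("%zu %zu\n", sizeof(5.0), sizeof(5.0F));  /* typically prints: 8 4 */
    return 0;
}
```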
38,238 | I'm teaching myself Python and my most recent lesson was that Python is not Java , and so I've just spent a while turning all my Class methods into functions. I now realise that I don't need to use Class methods for what I would done with static methods in Java, but now I'm not sure when I would use them. All the advice I can find about Python Class methods is along the lines of newbies like me should steer clear of them, and the standard documentation is at its most opaque when discussing them. Does anyone have a good example of using a Class method in Python or at least can someone tell me when Class methods can be sensibly used? | Class methods are for when you need to have methods that aren't specific to any particular instance, but still involve the class in some way. The most interesting thing about them is that they can be overridden by subclasses, something that's simply not possible in Java's static methods or Python's module-level functions. If you have a class MyClass , and a module-level function that operates on MyClass (factory, dependency injection stub, etc), make it a classmethod . Then it'll be available to subclasses. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/38238",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3171/"
]
} |
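A short sketch of the overriding behaviour described above: a factory classmethod that automatically builds whatever subclass it is called on:

```python
class Shape:
    @classmethod
    def from_dict(cls, data):
        return cls(**data)          # cls is the subclass it was called on

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

c = Circle.from_dict({"radius": 2.0})
print(type(c).__name__)             # Circle, not Shape
```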
38,239 | Background Last year, I did an internship in a physics research group at a university. In this group, we mostly used LabVIEW to write programs for controlling our setups, doing data acquisition and analyzing our data. For the first two purposes, that works quite OK, but for data analysis, it's a real pain. On top of that, everyone was mostly self-taught, so code that was written was generally quite a mess (no wonder that every PhD quickly decided to rewrite everything from scratch). Version control was unknown, and impossible to set up because of strict software and network regulations from the IT department. Now, things actually worked out surprisingly OK, but how do people in the natural sciences do their software development? Questions Some concrete questions: What languages/environments have you used for developing scientific software, especially data analysis? What libraries? (for example, what do you use for plotting?) Was there any training for people without any significant background in programming? Did you have anything like version control, and bug tracking? How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (especially physicists are stubborn people!) Summary of answers thus far The answers (or my interpretation of them) thus far: (2008-10-11) Languages/packages that seem to be the most widely used: LabVIEW Python with SciPy , NumPy , PyLab , etc. (See also Brandon's reply for downloads and links) C/C++ MATLAB Version control is used by nearly all respondents; bug tracking and other processes are much less common. The Software Carpentry course is a good way to teach programming and development techniques to scientists. How to improve things? Don't force people to follow strict protocols. Set up an environment yourself, and show the benefits to others. Help them to start working with version control, bug tracking, etc. themselves. Reviewing other people's code can help, but be aware that not everyone may appreciate that. | The course Software Carpentry is aimed specifically at people doing scientific computing and aims to teach the basics and lessons of software engineering, and how best to apply them to projects. It covers topics like version control, debugging, testing, scripting and various other issues. I've listened to about 8 or 9 of the lectures and think it is to be highly recommended. Edit: The MP3s of the lectures are available as well. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1037/"
]
} |
38,288 | Specifically, what commands do I run from the terminal? | Without a home directory sudo useradd myuser With home directory sudo useradd -m myuser Then set the password sudo passwd myuser Then set the shell sudo usermod -s /bin/bash myuser | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/38288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3624/"
]
} |
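The same steps collapse into two commands if the shell is set at creation time:

```bash
sudo useradd -m -s /bin/bash myuser   # -m: create home dir, -s: login shell
sudo passwd myuser                    # prompts for the new password
```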
38,323 | I'm looking to replace a couple of machines in the office with a more powerful multi-processor machine running either VMware or Microsoft's Hyper-V, with a view to hosting a mix of Windows Server 2003, Windows Server 2008 and Linux operating systems. The machines are used mainly for testing ASP.Net or Perl web sites. I don't need advanced features like live migration of running systems, but it would be useful to be able to restore a machine to a known state. Performance is not really a big issue either, unless one is noticeably faster than the other. My question is: should I play safe and go with VMware, or is Hyper-V mature enough to be a candidate? | VMware recently released a free version of ESXi. VMware has a few advantages: 1. VMware virtual machines are portable across different types of hardware. IIRC, Hyper-V uses the drivers from the host OS. 2. VMware virtual machines are portable across different VMware products (although you may need to use their converter tool to go from some hosted virtual machines to ESX or ESXi). 3. The VMware platforms have been in use much longer, and are quite mature products and generally better-known for troubleshooting. With VMware, you could develop and test a virtual machine on your local system using VMware Workstation, Fusion, Server, or Player, and then deploy it to a production server later. With Hyper-V, I believe you would have to build the virtual machine on the target box for best results. If performance isn't really that big of an issue, then VMware Server may be the best option, for it can run most .vmx machines directly and is generally a bit easier to manage; if performance becomes critical, you still have the ESX or ESXi upgrade option that you can use those same virtual machines with. This entry talks about how Virtual Server machines will not run on Hyper-V: http://blogs.technet.com/jhoward/archive/2008/02/28/are-vhds-compatible-between-hyper-v-and-virtual-server-and-virtual-pc.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38323",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3010/"
]
} |
38,336 | Because Linux (and its variants) have a completely different set of fonts than Windows and Mac OS X, is there anyone with any experience of creating cross-platform font families - ideally finding equivalents to the common fonts found on the aforementioned operating systems? If so, what is the best route to take? | Here are some good up-to-date listings of the most-installed fonts for PC, Mac, and Linux: Sans serif font sampler and survey results Serif font sampler and survey results Hope this helps your decision! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38336",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3654/"
]
} |
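In practice, the cross-platform answer on the web is usually a font stack that falls through to each OS's closest equivalent; a common sans-serif sketch (Liberation Sans and DejaVu Sans being the usual Linux stand-ins for Arial/Helvetica):

```css
body {
    font-family: Arial, Helvetica,
                 "Liberation Sans", "DejaVu Sans", /* Linux equivalents */
                 sans-serif;                       /* final fallback */
}
```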
38,345 | I recently "needed" a zip function in Perl 5 (while I was thinking about How do I calculate relative time? ), i.e. a function that takes two lists and "zips" them together to one list, interleaving the elements. (Pseudo)example: @a=(1, 2, 3);@b=('apple', 'orange', 'grape');zip @a, @b; # (1, 'apple', 2, 'orange', 3, 'grape'); Haskell has zip in the Prelude and Perl 6 has a zip operator built in, but how do you do it in an elegant way in Perl 5? | Assuming you have exactly two lists and they are exactly the same length, here is a solution originally by merlyn (Randal Schwartz), who called it perversely perlish: sub zip2 { my $p = @_ / 2; return @_[ map { $_, $_ + $p } 0 .. $p - 1 ];} What happens here is that for a 10-element list, first, we find the pivot point in the middle, in this case 5, and save it in $p . Then we make a list of indices up to that point, in this case 0 1 2 3 4. Next we use map to pair each index with another index that’s at the same distance from the pivot point as the first index is from the start, giving us (in this case) 0 5 1 6 2 7 3 8 4 9. Then we take a slice from @_ using that as the list of indices. This means that if 'a', 'b', 'c', 1, 2, 3 is passed to zip2 , it will return that list rearranged into 'a', 1, 'b', 2, 'c', 3 . This can be written in a single expression along ysth’s lines like so: sub zip2 { @_[map { $_, $_ + @_/2 } 0..(@_/2 - 1)] } Whether you’d want to use either variation depends on whether you can see yourself remembering how they work, but for me, it was a mind expander. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2905/"
]
} |
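For completeness, CPAN already packages this: List::MoreUtils's mesh (zip is an alias for it) interleaves equal-length arrays:

```perl
use List::MoreUtils qw(mesh);   # 'zip' is an alias for 'mesh'

my @a = (1, 2, 3);
my @b = ('apple', 'orange', 'grape');
my @zipped = mesh @a, @b;       # (1, 'apple', 2, 'orange', 3, 'grape')
```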
38,357 | If I get an error code result from a Cocoa function, is there any easy way to figure out what it means (other than by grepping through all the .h files in the framework bundles)? | You should look at the <Framework/FrameworkErrors.h> header for whatever framework the method you're using that's returning an error comes from. For example, an NSError in the Cocoa domain that you get from a method in the Foundation framework will have its code property described in the <Foundation/FoundationErrors.h> header. Similarly with AppKit and <AppKit/AppKitErrors.h> and Core Data and <CoreData/CoreDataErrors.h> . Also, if you print the description of the NSError in the debugger, it should include not only the error domain and code, but also the name of the actual error code constant so you can look it up in the API reference. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38357",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1175/"
]
} |
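A sketch of the debugger-free variant of that last tip: just log the error object and read the domain and code off it (the file path is illustrative):

```objc
NSError *error = nil;
NSString *contents = [NSString stringWithContentsOfFile:@"/tmp/example.txt"
                                               encoding:NSUTF8StringEncoding
                                                  error:&error];
if (contents == nil) {
    // The description usually names the symbolic constant as well.
    NSLog(@"failed: %@ (domain %@, code %ld)",
          error, [error domain], (long)[error code]);
}
```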
38,409 | I would like to convert the following string into an array/nested array: str = "[[this, is],[a, nested],[array]]"newarray = # this is what I need help with!newarray.inspect # => [['this','is'],['a','nested'],['array']] | You'll get what you want with YAML. But there is a little problem with your string: YAML expects a space after each comma. So we need this: str = "[[this, is], [a, nested], [array]]" Code: require 'yaml'str = "[[this, is],[a, nested],[array]]"### transform your string into a valid YAML stringstr.gsub!(/(\,)(\S)/, "\\1 \\2")YAML::load(str)# => [["this", "is"], ["a", "nested"], ["array"]] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4082/"
]
} |
38,421 | I'm using jQuery AJAX to post updates back to my server. I'm concerned about making sure I have put in place appropriate measures so that only my AJAX calls can post data. My stack is PHP on Apache against a MySQL backend. Advice greatly appreciated! | Any request that the AJAX calls in your pages can make can also be made by someone outside of the application. If done right, you will not be able to tell if they were made as part of an AJAX call from your webapp or by hand/other means. There are two scenarios I can think of which you might be talking about when you say you want to make sure that only your AJAX calls can post data: either you don't want a malicious user to be able to post data that interferes with another user's data, or you actually want to restrict the posts to being in the "flow" of a multi-request operation. If you are concerned with the first case (someone posting malicious data to/as another user), the solution is the same whether you are using AJAX or not -- you just have to authenticate the user through whatever means is necessary, usually via a session cookie. If you are concerned with the second case, then you are going to have to do something like issue a unique token at each step of the process and store the expected token on the server side. Then when a request is made, check that there is a corresponding entry on the server side for the action being taken, that the expected tokens match, and that the token has not been used yet. If there is not, you reject the request; if there is, you mark that token as used and process the request. If what you are concerned about is something other than one of these two scenarios, then the answer will depend on more specifics than you have provided. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38421",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2362/"
]
} |
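A minimal sketch of the one-time-token scheme from the second case, in modern PHP (7+); the function and field names are invented:

```php
<?php
session_start();

// When rendering the page that will make the AJAX call:
$_SESSION['ajax_token'] = bin2hex(random_bytes(16));
// ...echo the token into the page so the JS can send it back.

// When handling the AJAX POST:
$sent = $_POST['token'] ?? '';
if (!hash_equals($_SESSION['ajax_token'] ?? '', $sent)) {
    http_response_code(403);
    exit('invalid or reused token');
}
unset($_SESSION['ajax_token']);   // single use: mark the token as consumed
// ...process the request...
```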
38,437 | What is the best way to track changes in a database table? Imagine you have an application in which users (in the context of the application, not DB users ) are able to change data which is stored in some database table. What's the best way to track a history of all changes, so that you can show which user changed which data, how, and at what time? | In general, if your application is structured into layers, have the data access tier call a stored procedure on your database server to write a log of the database changes. In languages that support such a thing, aspect-oriented programming can be a good technique to use for this kind of application. Auditing database table changes is the kind of operation that you'll typically want to log for all operations, so AOP can work very nicely. Bear in mind that logging database changes will create lots of data and will slow the system down. It may be sensible to use a message-queue solution and a separate database to perform the audit log, depending on the size of the application. It's also perfectly feasible to use stored procedures to handle this, although there may be a bit of work involved passing user credentials through to the database itself. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3711/"
]
} |
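A minimal trigger-based sketch in PostgreSQL syntax; the "account" table and its audit twin are assumptions, and the same pattern extends to INSERT/DELETE. Note that current_user records the database user; the application-level user would have to be passed in separately, as the answer points out:

```sql
CREATE TABLE account_audit (
    changed_at timestamptz NOT NULL DEFAULT now(),
    changed_by text        NOT NULL DEFAULT current_user,
    old_row    text,
    new_row    text
);

CREATE FUNCTION audit_account() RETURNS trigger AS $$
BEGIN
    INSERT INTO account_audit (old_row, new_row)
    VALUES (OLD::text, NEW::text);   -- before/after images of the row
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_audit_trg
AFTER UPDATE ON account
FOR EACH ROW EXECUTE PROCEDURE audit_account();
```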
38,502 | Say you want a simple maze on an N by M grid, with one path through, and a good number of dead ends, but that looks "right" (i.e. like someone made it by hand without too many little tiny dead ends and all that). Is there a known way to do this? | From http://www.astrolog.org/labyrnth/algrithm.htm Recursive backtracker: This is somewhat related to the recursive backtracker solving method described below, and requires a stack up to the size of the Maze. When carving, be as greedy as possible, and always carve into an unmade section if one is next to the current cell. Each time you move to a new cell, push the former cell on the stack. If there are no unmade cells next to the current position, pop the stack to the previous position. The Maze is done when you pop everything off the stack. This algorithm results in Mazes with about as high a "river" factor as possible, with fewer but longer dead ends, and usually a very long and twisty solution. It runs quite fast, although Prim's algorithm is a bit faster. Recursive backtracking doesn't work as a wall adder, because doing so tends to result in a solution path that follows the outside edge, where the entire interior of the Maze is attached to the boundary by a single stem. Mazes carved this way produce only about 10% dead ends; the answer originally included an image of an example maze generated by that method. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/38502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4123/"
]
} |
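An iterative sketch of that recursive backtracker on an N-by-M grid (explicit stack instead of recursion, so large mazes don't overflow the call stack):

```python
import random

def carve_maze(n, m):
    """Return the set of carved passages as pairs of (row, col) cells."""
    visited = [[False] * m for _ in range(n)]
    passages = set()
    stack = [(0, 0)]
    visited[0][0] = True
    while stack:
        r, c = stack[-1]
        unmade = [(r + dr, c + dc)
                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < n and 0 <= c + dc < m
                  and not visited[r + dr][c + dc]]
        if unmade:
            nxt = random.choice(unmade)   # greedily carve into an unmade cell
            passages.add(((r, c), nxt))
            visited[nxt[0]][nxt[1]] = True
            stack.append(nxt)
        else:
            stack.pop()                   # dead end: backtrack
    return passages
```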
38,508 | I have a function where I need to do something to a string. I need the function to return a boolean indicating whether or not the operation succeeded, and I also need to return the modified string. In C#, I would use an out parameter for the string, but there is no equivalent in Python. I'm still very new to Python and the only thing I can think of is to return a tuple with the boolean and modified string. Related question: Is it pythonic for a function to return multiple values? | def f(in_str): out_str = in_str.upper() return True, out_str # Creates tuple automaticallysucceeded, b = f("a") # Automatic tuple unpacking | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/38508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3880/"
]
} |
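If the pair grows beyond two items, a named tuple keeps call sites readable while still unpacking like a plain tuple:

```python
from collections import namedtuple

Result = namedtuple("Result", ["succeeded", "value"])

def f(in_str):
    return Result(succeeded=True, value=in_str.upper())

r = f("a")
if r.succeeded:
    print(r.value)      # "A"; r also unpacks: ok, v = r
```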
38,549 | Also, how do LEFT OUTER JOIN , RIGHT OUTER JOIN , and FULL OUTER JOIN fit in? | Assuming you're joining on columns with no duplicates, which is a very common case: An inner join of A and B gives the result of A intersect B, i.e. the inner part of a Venn diagram intersection. An outer join of A and B gives the results of A union B, i.e. the outer parts of a Venn diagram union. Examples Suppose you have two tables, with a single column each, and data as follows: A B- -1 32 43 54 6 Note that (1,2) are unique to A, (3,4) are common, and (5,6) are unique to B. Inner join An inner join using either of the equivalent queries gives the intersection of the two tables, i.e. the two rows they have in common. select * from a INNER JOIN b on a.a = b.b;select a.*, b.* from a,b where a.a = b.b;a | b--+--3 | 34 | 4 Left outer join A left outer join will give all rows in A, plus any common rows in B. select * from a LEFT OUTER JOIN b on a.a = b.b;select a.*, b.* from a,b where a.a = b.b(+);a | b--+-----1 | null2 | null3 | 34 | 4 Right outer join A right outer join will give all rows in B, plus any common rows in A. select * from a RIGHT OUTER JOIN b on a.a = b.b;select a.*, b.* from a,b where a.a(+) = b.b;a | b-----+----3 | 34 | 4null | 5null | 6 Full outer join A full outer join will give you the union of A and B, i.e. all the rows in A and all the rows in B. If something in A doesn't have a corresponding datum in B, then the B portion is null, and vice versa. select * from a FULL OUTER JOIN b on a.a = b.b; a | b-----+----- 1 | null 2 | null 3 | 3 4 | 4null | 6null | 5 | {
"score": 14,
"source": [
"https://Stackoverflow.com/questions/38549",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3836/"
]
} |
38,561 | The printf function takes an argument type, such as %d or %i for a signed int . However, I don't see anything for a long value. | Put an l (lowercased letter L) directly before the specifier. unsigned long n;long m;printf("%lu %ld", n, m); | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/38561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
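Related suffixes, for completeness: long long takes ll, and <inttypes.h> supplies portable specifiers for the fixed-width types:

```c
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    long long big = 1234567890123LL;
    int64_t fixed = 42;
    printf("%lld %" PRId64 "\n", big, fixed);
    return 0;
}
```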
38,571 | I find myself doing this sort of thing all the time. I've been considering writing a macro/function to make this sort of thing easier, but it occurs to me that I'm probably reinventing the wheel. Is there an existing function that will let me accomplish this same sort of thing more succinctly? (defun remove-low-words (word-list) "Return a list with words of insufficient score removed." (let ((result nil)) (dolist (word word-list) (when (good-enough-score-p word) (push word result))) result)) | There are several built-in ways of doing this. One way would be: (remove-if-not 'good-enough-score-p word-list) And another: (loop for word in word-list when (good-enough-score-p word) collect word) And yet another: (mapcan (lambda (word) (when (good-enough-score-p word) (list word))) word-list) Etc... There's also SERIES and Iterate . The Iterate version is identical to the LOOP version, but the SERIES version is interesting: (collect (choose-if 'good-enough-score-p (scan word-list)))) So, yes, you're very likely to reinvent some wheel. :-) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38571",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/56456/"
]
} |
38,601 | How can I use the nifty JavaScript date and time widgets that the default admin uses with my custom view? I have looked through the Django forms documentation , and it briefly mentions django.contrib.admin.widgets, but I don't know how to use it? Here is my template that I want it applied on. <form action="." method="POST"> <table> {% for f in form %} <tr> <td> {{ f.name }}</td> <td>{{ f }}</td> </tr> {% endfor %} </table> <input type="submit" name="submit" value="Add Product"></form> Also, I think it should be noted that I haven't really written a view up myself for this form, I am using a generic view. Here is the entry from the url.py: (r'^admin/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}), And I am relevantly new to the whole Django/MVC/MTV thing, so please go easy... | The growing complexity of this answer over time, and the many hacks required, probably ought to caution you against doing this at all. It's relying on undocumented internal implementation details of the admin, is likely to break again in future versions of Django, and is no easier to implement than just finding another JS calendar widget and using that. That said, here's what you have to do if you're determined to make this work: Define your own ModelForm subclass for your model (best to put it in forms.py in your app), and tell it to use the AdminDateWidget / AdminTimeWidget / AdminSplitDateTime (replace 'mydate' etc with the proper field names from your model): from django import forms from my_app.models import Product from django.contrib.admin import widgets class ProductForm(forms.ModelForm): class Meta: model = Product def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) self.fields['mydate'].widget = widgets.AdminDateWidget() self.fields['mytime'].widget = widgets.AdminTimeWidget() self.fields['mydatetime'].widget = widgets.AdminSplitDateTime() Change your URLconf to pass 'form_class': ProductForm instead of 'model': Product to the generic create_object view (that'll mean from my_app.forms import ProductForm instead of from my_app.models import Product , of course). In the head of your template, include {{ form.media }} to output the links to the Javascript files. And the hacky part: the admin date/time widgets presume that the i18n JS stuff has been loaded, and also require core.js, but don't provide either one automatically. So in your template above {{ form.media }} you'll need: <script type="text/javascript" src="/my_admin/jsi18n/"></script> <script type="text/javascript" src="/media/admin/js/core.js"></script> You may also wish to use the following admin CSS (thanks Alex for mentioning this): <link rel="stylesheet" type="text/css" href="/media/admin/css/forms.css"/> <link rel="stylesheet" type="text/css" href="/media/admin/css/base.css"/> <link rel="stylesheet" type="text/css" href="/media/admin/css/global.css"/> <link rel="stylesheet" type="text/css" href="/media/admin/css/widgets.css"/> This implies that Django's admin media ( ADMIN_MEDIA_PREFIX ) is at /media/admin/ - you can change that for your setup. Ideally you'd use a context processor to pass this values to your template instead of hardcoding it, but that's beyond the scope of this question. This also requires that the URL /my_admin/jsi18n/ be manually wired up to the django.views.i18n.javascript_catalog view (or null_javascript_catalog if you aren't using I18N). 
You have to do this yourself instead of going through the admin application so it's accessible regardless of whether you're logged into the admin (thanks Jeremy for pointing this out). Sample code for your URLconf: (r'^my_admin/jsi18n', 'django.views.i18n.javascript_catalog'), Lastly, if you are using Django 1.2 or later, you need some additional code in your template to help the widgets find their media: {% load adminmedia %} /* At the top of the template. *//* In the head section of the template. */<script type="text/javascript">window.__admin_media_prefix__ = "{% filter escapejs %}{% admin_media_prefix %}{% endfilter %}";</script> Thanks lupefiasco for this addition. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/38601",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2592/"
]
} |
38,635 | What tools are there available for static analysis against C# code? I know about FxCop and StyleCop. Are there others? I've run across NStatic before but it's been in development for what seems like forever - it's looking pretty slick from what little I've seen of it, so it would be nice if it would ever see the light of day. Along these same lines (this is primarily my interest for static analysis), tools for testing code for multithreading issues (deadlocks, race conditions, etc.) also seem a bit scarce. Typemock Racer just popped up so I'll be looking at that. Anything beyond this? Real-life opinions about tools you've used are appreciated. | Code violation detection Tools: FxCop , excellent tool by Microsoft. Check compliance with .NET framework guidelines. Edit October 2010: No longer available as a standalone download. It is now included in the Windows SDK and after installation can be found in Program Files\Microsoft SDKs\Windows\ [v7.1] \Bin\FXCop\FxCopSetup.exe Edit February 2018 : This functionality has now been integrated into Visual Studio 2012 and later as Code Analysis Clocksharp , based on code source analysis (to C# 2.0) Mono.Gendarme , similar to FxCop but with an open source licence (based on Mono.Cecil ) Smokey , similar to FxCop and Gendarme, based on Mono.Cecil . No longer on development, the main developer works with Gendarme team now. Coverity Prevent™ for C# , commercial product PRQA QA·C# , commercial product PVS-Studio , commercial product CAT.NET , visual studio addin that helps identification of security flaws Edit November 2019: Link is dead. CodeIt.Right Spec# Pex SonarQube , FOSS & Commercial options to support writing cleaner and safer code. Quality Metric Tools: NDepend , great visual tool. Useful for code metrics, rules, diff, coupling and dependency studies. Nitriq , free, can easily write your own metrics/constraints, nice visualizations. Edit February 2018: download links now dead. Edit June 17, 2019: Links not dead. RSM Squared , based on code source analysis C# Metrics , using a full parse of C# SourceMonitor , an old tool that occasionally gets updates Code Metrics , a Reflector add-in Vil , old tool that doesn't support .NET 2.0. Edit January 2018: Link now dead Checking Style Tools: StyleCop , Microsoft tool ( run from inside of Visual Studio or integrated into an MSBuild project). Also available as an extension for Visual Studio 2015 and C#6.0 Agent Smith , code style validation plugin for ReSharper Duplication Detection: Simian , based on source code. Works with plenty languages. CloneDR , detects parameterized clones only on language boundaries (also handles many languages other than C#) Clone Detective a Visual Studio plugin (which uses ConQAT internally) Atomiq , based on source code, plenty of languages, cool "wheel" visualization General Refactoring tools ReSharper - Majorly cool C# code analysis and refactoring features | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/38635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3656/"
]
} |
38,645 | I want to combine two structures with differing fields names. For example, starting with: A.field1 = 1;A.field2 = 'a';B.field3 = 2;B.field4 = 'b'; I would like to have: C.field1 = 1;C.field2 = 'a';C.field3 = 2;C.field4 = 'b'; Is there a more efficient way than using "fieldnames" and a for loop? EDIT: Let's assume that in the case of field name conflicts we give preference to A . | Without collisions, you can do M = [fieldnames(A)' fieldnames(B)'; struct2cell(A)' struct2cell(B)'];C=struct(M{:}); And this is reasonably efficient. However, struct errors on duplicate fieldnames, and pre-checking for them using unique kills performance to the point that a loop is better. But here's what it would look like: M = [fieldnames(A)' fieldnames(B)'; struct2cell(A)' struct2cell(B)'];[tmp, rows] = unique(M(1,:), 'last');M=M(:, rows);C=struct(M{:}); You might be able to make a hybrid solution by assuming no conflicts and using a try/catch around the call to struct to gracefully degrade to the conflict handling case. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4135/"
]
} |
38,647 | When using the Entity Framework, does ESQL perform better than Linq to Entities? I'd prefer to use Linq to Entities (mainly because of the strong-type checking), but some of my other team members are citing performance as a reason to use ESQL. I would like to get a full idea of the pros/cons of using either method. | The most obvious differences are: Linq to Entities is strongly typed code including nice query comprehension syntax. The fact that the “from” comes before the “select” allows IntelliSense to help you. Entity SQL uses traditional string based queries with a more familiar SQL like syntax where the SELECT statement comes before the FROM. Because eSQL is string based, dynamic queries may be composed in a traditional way at run time using string manipulation. The less obvious key difference is: Linq to Entities allows you to change the shape or "project" the results of your query into any shape you require with the “select new{... }” syntax. Anonymous types, new to C# 3.0, have made this possible. Projection is not possible using Entity SQL as you must always return an ObjectQuery<T>. In some scenarios it is possible to use ObjectQuery<object> however you must work around the fact that .Select always returns ObjectQuery<DbDataRecord>. See code below... ObjectQuery<DbDataRecord> query = DynamicQuery(context, "Products", "it.ProductName = 'Chai'", "it.ProductName, it.QuantityPerUnit");public static ObjectQuery<DbDataRecord> DynamicQuery(MyContext context, string root, string selection, string projection){ ObjectQuery<object> rootQuery = context.CreateQuery<object>(root); ObjectQuery<object> filteredQuery = rootQuery.Where(selection); ObjectQuery<DbDataRecord> result = filteredQuery.Select(projection); return result;} There are other more subtle differences described by one of the team members in detail here and here . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/708/"
]
} |
38,670 | Ok, so, my visual studio is broken. I say this NOT prematurely, as it was my first response to see where I had messed up in my code. When I add controls to the page I can't reference all of them in the code behind. Some of them I can, it seems that the first few I put on a page work, then it just stops. I first thought it may be the type of control as initially I was trying to reference a repeater inside an update panel. I know I am correctly referencing the code behind in my aspx page. But just in case it was a screw up on my part I started to recreate the page from scratch and this time got a few more controls down before VS stopped recognizing my controls. After creating my page twice and getting stuck I thought maybe it was still the type of controls. I created a new page and just threw some labels on it. No dice, build fails when referencing the control from the code behind. In a possibly unrelated note when I switch to the dreaded "design" mode of the aspx pages VS 2008 errors out and restarts. I have already put a trouble ticket in to Microsoft. I uninstalled all add-ins, I reinstalled visual studio. Anyone that wants to see my code just ask, but I am using the straight WYSIWYG visual studio "new aspx page" nothing fancy. I doubt anyone has run into this, but have you? Has anyone had success trouble shooting these things with Microsoft? Any way to expedite this ticket without paying??? I have been talking to a rep from Microsoft for days with no luck yet and I am dead in the water. Jon Limjap: I edited the title to both make it clear and descriptive and make sure that nobody sees it as offensive. "Foo-barred" doesn't exactly constitute a proper question title, although your question is clearly a valid one. | Try clearing your local VS cache: find your project's folder and delete it. The folder is created by VS for reasons I honestly don't understand, but I've had several occasions where clearing it and doing a rebuild fixes things... Hope this is all that you need as well. Look here: %Temp%\VWDWebCache and possibly here: %LocalAppData%\Microsoft\WebsiteCache | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4140/"
]
} |
38,691 | Hey so what I want to do is snag the content for the first paragraph. The string $blog_post contains a lot of paragraphs in the following format: <p>Paragraph 1</p><p>Paragraph 2</p><p>Paragraph 3</p> The problem I'm running into is that I am writing a regex to grab everything between the first <p> tag and the first closing </p> tag. However, it is grabbing the first <p> tag and the last closing </p> tag which results in me grabbing everything. Here is my current code: if (preg_match("/[\\s]*<p>[\\s]*(?<firstparagraph>[\\s\\S]+)[\\s]*<\\/p>[\\s\\S]*/",$blog_post,$blog_paragraph)) echo "<p>" . $blog_paragraph["firstparagraph"] . "</p>";else echo $blog_post; | Well, sysrqb will let you match anything in the first paragraph assuming there's no other html in the paragraph. You might want something more like this <p>.*?</p> Placing the ? after your * makes it non-greedy, meaning it will only match as little text as necessary before matching the </p> . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/428190/"
]
} |
38,746 | Over at Can you modify text files when committing to subversion? Grant suggested that I block commits instead. However, I don't know how to check whether a file ends with a newline. How can you detect that the file ends with a newline? | @Konrad: tail does not return an empty line. I made a file that has some text that doesn't end in newline and a file that does. Here is the output from tail: $ cat test_no_newline.txtthis file doesn't end in newline$ $ cat test_with_newline.txtthis file ends in newline$ Though I found that tail has a get-last-byte option. So I modified your script to: #!/bin/shc=`tail -c 1 $1`if [ "$c" != "" ]; then echo "no newline"fi | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/486/"
]
} |
38,784 | I use Delphi for many years, and although I have now moved on to Visual Studio I still fondly remember numbered bookmarks ( CTRL + K + 1 to set bookmark 1, CTRL + Q + 1 to goto bookmark 1). Is there a Visual Studio equivalent? I'm find the dumb bookmarks in VS a chore after Delphi. I want to bookmark then return to a specific place in the file. | DPack can give you numbered bookmarks in VisualStudio. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4149/"
]
} |
38,801 | What are the ways that you use to model and retrieve hierarchical info in a database? | The definitive pieces on this subject have been written by Joe Celko, and he has worked a number of them into a book called Joe Celko's Trees and Hierarchies in SQL for Smarties. He favours a technique called directed graphs. An introduction to his work on this subject can be found here | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2027/"
]
} |
38,820 | Which class design is better and why? public class User{ public String UserName; public String Password; public String FirstName; public String LastName;}public class Employee : User{ public String EmployeeId; public String EmployeeCode; public String DepartmentId;}public class Member : User{ public String MemberId; public String JoinDate; public String ExpiryDate;} OR public class User{ public String UserId; public String UserName; public String Password; public String FirstName; public String LastName;}public class Employee{ public User UserInfo; public String EmployeeId; public String EmployeeCode; public String DepartmentId;}public class Member{ public User UserInfo; public String MemberId; public String JoinDate; public String ExpiryDate;} | The question is simply answered by recognising that inheritance models an "IS-A" relationship, while membership models a "HAS-A" relationship. An employee IS A user An employee HAS A userinfo Which one is correct? This is your answer. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/38820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191/"
]
} |
38,838 | I am using the basic-auth twitter API ( no longer available ) to integrate twitter with my blog's commenting system. The problem with this and many other web APIs out there is that they require the user's username and password to do anything useful. I don't want to deal with the hassle and cost of installing a SSL certificate, but I also don't want passwords passed over the wire in clear text. I guess my general question is: How can I send sensitive data over an insecure channel? This is my current solution and I'd like to know if there are any holes in it: Generate a random key on the server (I'm using php). Save the key in a session and also output the key in a javascript variable. On form submit, use Triple DES in javascript with the key to encrypt the password. On the server, decrypt the password using the key from the session and then destroy the session. The end result is that only the encrypted password is sent over the wire and the key is only used once and never sent with the password. Problem solved? | Generate a random key on the server (I'm using php). Save the key in a session and also output the key in a javascript variable. On form submit, use Triple DES in javascript with the key to encrypt the password. This avoids sending the password in the clear over the wire, but it requires you to send the key in the clear over the wire, which would allow anyone eavesdropping to decode the password. It's been said before and I'll say it again: don't try to make up your own cryptographic protocols! There are established protocols out there for this kind of thing that have been created, peer reviewed, beat on, hacked on, poked and prodded by professionals, use them! No one person is going to be able to come up with something better than the entire cryptographic and security community working together. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38838",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/744/"
]
} |
38,846 | What are the fundamentals to accomplish data encryption with exactly two keys (which could be password-based), but needing only one (either one) of the two keys to decrypt the data? For example, data is encrypted with a user's password and his company's password, and then he or his company can decrypt the data. Neither of them knows the other password. Only one copy of the encrypted data is stored. I don't mean public/private key. Probably via symmetric key cryptography and maybe it involves something like XORing the keys together to use them for encrypting. Update: I would also like to find a solution that does not involve storing the keys at all. | The way this is customarily done is to generate a single symmetric key to encrypt the data. Then you encrypt the symmetric key with each recipient's key or password so that they can decrypt it on their own. S/MIME (actually the Cryptographic Message Syntax on which S/MIME is based) uses this technique. This way, you only have to store one copy of the encrypted message, but multiple copies of its key. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3153/"
]
} |
38,870 | I have the following tables in my database that have a many-to-many relationship, which is expressed by a connecting table that has foreign keys to the primary keys of each of the main tables: Widget: WidgetID (PK), Title, Price User: UserID (PK), FirstName, LastName Assume that each User-Widget combination is unique. I can see two options for how to structure the connecting table that defines the data relationship: UserWidgets1: UserWidgetID (PK), WidgetID (FK), UserID (FK) UserWidgets2: WidgetID (PK, FK), UserID (PK, FK) Option 1 has a single column for the Primary Key. However, this seems unnecessary since the only data being stored in the table is the relationship between the two primary tables, and this relationship itself can form a unique key. Thus leading to option 2, which has a two-column primary key, but loses the one-column unique identifier that option 1 has. I could also optionally add a two-column unique index (WidgetID, UserID) to the first table. Is there any real difference between the two performance-wise, or any reason to prefer one approach over the other for structuring the UserWidgets many-to-many table? | You only have one primary key in either case. The second one is what's called a compound key. There's no good reason for introducing a new column. In practice, you will have to keep a unique index on all candidate keys. Adding a new column buys you nothing but maintenance overhead. Go with option 2. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/51/"
]
} |
38,875 | My website was recently attacked by what seemed to me an innocent piece of code: <?php if ( isset( $_GET['page'] ) ) { include( $_GET['page'] . ".php" ); } else { include("home.php"); }?> There were no SQL calls, so I wasn't afraid for SQL Injection. But, apparently, SQL isn't the only kind of injection. This website has an explanation and a few examples of avoiding code injection: http://www.theserverpages.com/articles/webmasters/php/security/Code_Injection_Vulnerabilities_Explained.html How would you protect this code from code injection? | Use a whitelist and make sure the page is in the whitelist: $whitelist = array('home', 'page'); if (in_array($_GET['page'], $whitelist)) { include($_GET['page'].'.php'); } else { include('home.php'); } | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2644/"
]
} |
38,907 | I'm developing an Eclipse plug-in, based on a bunch of core Eclipse plug-ins like SWT, JDT, GEF and others. I need my plug-in to be compatible with Eclipse 3.3, since many potential customers are still using it. However, personally I like the new features in Eclipse 3.4 and would like to use it for my development. This means I need PDE to reference 3.3 code and, when debug, execute a 3.3 instance. Any tips on how this can be achieved? Thanks. | You can change the 'Target platform' setting to point to the location of an existing set of eclipse 3.3 plugins. This will compile your code against the 3.3 plugins, making sure that they stay compatible no matter which version of eclipse you are using to develop the application. The setting is under Window->Preferences->Plug-in development->Target Platform | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38907",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2823/"
]
} |
38,940 | If I've got a table containing Field1 and Field2 can I generate a new field in the select statement? For example, a normal query would be: SELECT Field1, Field2 FROM Table And I want to also create Field3 and have that returned in the resultset... something along the lines of this would be ideal: SELECT Field1, Field2, Field3 = 'Value' FROM Table Is this possible at all? | SELECT Field1, Field2, 'Value' Field3 FROM Table or for clarity SELECT Field1, Field2, 'Value' AS Field3 FROM Table | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/38940",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/393028/"
]
} |
38,960 | I would like to test a string containing a path to a file for existence of that file (something like the -e test in Perl or the os.path.exists() in Python) in C#. | Use: File.Exists(path) MSDN: http://msdn.microsoft.com/en-us/library/system.io.file.exists.aspx Edit: In System.IO | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/38960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
38,987 | I want to merge two dictionaries into a new dictionary. x = {'a': 1, 'b': 2}y = {'b': 3, 'c': 4}z = merge(x, y)>>> z{'a': 1, 'b': 3, 'c': 4} Whenever a key k is present in both dictionaries, only the value y[k] should be kept. | How can I merge two Python dictionaries in a single expression? For dictionaries x and y , their shallowly-merged dictionary z takes values from y , replacing those from x . In Python 3.9.0 or greater (released 17 October 2020, PEP-584 , discussed here ): z = x | y In Python 3.5 or greater: z = {**x, **y} In Python 2, (or 3.4 or lower) write a function: def merge_two_dicts(x, y): z = x.copy() # start with keys and values of x z.update(y) # modifies z with keys and values of y return z and now: z = merge_two_dicts(x, y) Explanation Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries: x = {'a': 1, 'b': 2}y = {'b': 3, 'c': 4} The desired result is to get a new dictionary ( z ) with the values merged, and the second dictionary's values overwriting those from the first. >>> z{'a': 1, 'b': 3, 'c': 4} A new syntax for this, proposed in PEP 448 and available as of Python 3.5 , is z = {**x, **y} And it is indeed a single expression. Note that we can merge in with literal notation as well: z = {**x, 'foo': 1, 'bar': 2, **y} and now: >>> z{'a': 1, 'b': 3, 'foo': 1, 'bar': 2, 'c': 4} It is now showing as implemented in the release schedule for 3.5, PEP 478 , and it has now made its way into the What's New in Python 3.5 document. However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process: z = x.copy()z.update(y) # which returns None since it mutates z In both approaches, y will come second and its values will replace x 's values, thus b will point to 3 in our final result. Not yet on Python 3.5, but want a single expression If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a single expression , the most performant while the correct approach is to put it in a function: def merge_two_dicts(x, y): """Given two dictionaries, merge them into a new dict as a shallow copy.""" z = x.copy() z.update(y) return z and then you have a single expression: z = merge_two_dicts(x, y) You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number: def merge_dicts(*dict_args): """ Given any number of dictionaries, shallow copy and merge into a new dict, precedence goes to key-value pairs in latter dictionaries. """ result = {} for dictionary in dict_args: result.update(dictionary) return result This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries a to g : z = merge_dicts(a, b, c, d, e, f, g) and key-value pairs in g will take precedence over dictionaries a to f , and so on. Critiques of Other Answers Don't use what you see in the formerly accepted answer: z = dict(x.items() + y.items()) In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. 
In Python 3, this will fail because you're adding two dict_items objects together, not two lists - >>> c = dict(a.items() + b.items())Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items' and you would have to explicitly create them as lists, e.g. z = dict(list(x.items()) + list(y.items())) . This is a waste of resources and computation power. Similarly, taking the union of items() in Python 3 ( viewitems() in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this: >>> c = dict(a.items() | b.items()) This example demonstrates what happens when values are unhashable: >>> x = {'a': []}>>> y = {'b': []}>>> dict(x.items() | y.items())Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: unhashable type: 'list' Here's an example where y should have precedence, but instead the value from x is retained due to the arbitrary order of sets: >>> x = {'a': 2}>>> y = {'a': 1}>>> dict(x.items() | y.items()){'a': 2} Another hack you should not use: z = dict(x, **y) This uses the dict constructor and is very fast and memory-efficient (even slightly more so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic. Here's an example of the usage being remediated in django . Dictionaries are intended to take hashable keys (e.g. frozenset s or tuples), but this method fails in Python 3 when keys are not strings. >>> c = dict(a, **b)Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: keyword arguments must be strings From the mailing list , Guido van Rossum, the creator of the language, wrote: I am fine withdeclaring dict({}, **{1:3}) illegal, since after all it is abuse ofthe ** mechanism. and Apparently dict(x, **y) is going around as "cool hack" for "callx.update(y) and return x". Personally, I find it more despicable thancool. It is my understanding (as well as the understanding of the creator of the language ) that the intended usage for dict(**y) is for creating dictionaries for readability purposes, e.g.: dict(a=1, b=10, c=11) instead of {'a': 1, 'b': 10, 'c': 11} Response to comments Despite what Guido says, dict(x, **y) is in line with the dict specification, which btw. works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a short-coming of dict. Nor is using the ** operator in this place an abuse of the mechanism, in fact, ** was designed precisely to pass dictionaries as keywords. Again, it doesn't work for 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. dict broke this consistency in Python 2: >>> foo(**{('a', 'b'): None})Traceback (most recent call last): File "<stdin>", line 1, in <module>TypeError: foo() keywords must be strings>>> dict(**{('a', 'b'): None}){('a', 'b'): None} This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). Thus it was fixed in Python 3, as this usage could be a breaking change. 
I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints. More comments: dict(x.items() + y.items()) is still the most readable solution for Python 2. Readability counts. My response: merge_two_dicts(x, y) actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated. {**x, **y} does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging. Yes. I must refer you back to the question, which is asking for a shallow merge of two dictionaries, with the first's values being overwritten by the second's - in a single expression. Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them: from copy import deepcopydef dict_of_dicts_merge(x, y): z = {} overlapping_keys = x.keys() & y.keys() for key in overlapping_keys: z[key] = dict_of_dicts_merge(x[key], y[key]) for key in x.keys() - overlapping_keys: z[key] = deepcopy(x[key]) for key in y.keys() - overlapping_keys: z[key] = deepcopy(y[key]) return z Usage: >>> x = {'a':{1:{}}, 'b': {2:{}}}>>> y = {'b':{10:{}}, 'c': {11:{}}}>>> dict_of_dicts_merge(x, y){'b': {2: {}, 10: {}}, 'a': {1: {}}, 'c': {11: {}}} Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at my answer to the canonical question on a "Dictionaries of dictionaries merge" . Less Performant But Correct Ad-hocs These approaches are less performant, but they will provide correct behavior.They will be much less performant than copy and update or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they do respect the order of precedence (latter dictionaries have precedence) You can also chain the dictionaries manually inside a dict comprehension : {k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7 or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced): dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2 itertools.chain will chain the iterators over the key-value pairs in the correct order: from itertools import chainz = dict(chain(x.items(), y.items())) # iteritems in Python 2 Performance Analysis I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.) 
from timeit import repeatfrom itertools import chainx = dict.fromkeys('abcdefg')y = dict.fromkeys('efghijk')def merge_two_dicts(x, y): z = x.copy() z.update(y) return zmin(repeat(lambda: {**x, **y}))min(repeat(lambda: merge_two_dicts(x, y)))min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))min(repeat(lambda: dict(chain(x.items(), y.items()))))min(repeat(lambda: dict(item for d in (x, y) for item in d.items()))) In Python 3.8.1, NixOS: >>> min(repeat(lambda: {**x, **y}))1.0804965235292912>>> min(repeat(lambda: merge_two_dicts(x, y)))1.636518670246005>>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))3.1779992282390594>>> min(repeat(lambda: dict(chain(x.items(), y.items()))))2.740647904574871>>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))4.266070580109954 $ uname -aLinux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux Resources on Dictionaries My explanation of Python's dictionary implementation , updated for 3.6. Answer on how to add new keys to a dictionary Mapping two lists into a dictionary The official Python docs on dictionaries The Dictionary Even Mightier - talk by Brandon Rhodes at Pycon 2017 Modern Python Dictionaries, A Confluence of Great Ideas - talk by Raymond Hettinger at Pycon 2017 | {
"score": 14,
"source": [
"https://Stackoverflow.com/questions/38987",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3207/"
]
} |
38,998 | I'm an Information Architect and JavaScript developer by trade nowadays, but recently I've been getting back into back-end coding again. And, whilst trying to get an HTML prototype integrated and working with our C#-based CMS, I've come to blows with our programmers over the HTML ID attributes being arbitrarily rewritten by .NET for form elements. I can understand the code-behind reasoning for .NET changing IDs, but the fact you can no longer use IDs when trying to develop e.g. jQuery enhanced interfaces is causing some friction. What can I do to work around this? I've tried using the class attribute instead, but that's really crappy, not what it's meant for and doesn't get around that problem of .NET effectively changing rendered source on the fly. It also means that CSS is less useful now and less efficient to create and maintain. Any tips or advice greatly appreciated--anything for a few less sleepless nights... | The short answer is no, with webforms the id can always be rewritten depending on the nesting of the element. You can get access to the id through the ClientID property, so you could set the ids into variables in a script at the end of the page/control then put them into jQuery. Something like this: <asp:button id="ImAButton" runat="server">Click Me</asp:button><script type="text/javascript">var buttonId = "<%=ImAButton.ClientID%>";$("#"+buttonId).bind('click', function() { alert('hi'); });</script> It's a hack I know, but it will work. (I should note for the uninitiated, that's the jQuery $ selector being used there) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/38998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3212/"
]
} |
39,003 | If I have interface IFoo, and have several classes that implement it, what is the best/most elegant/cleverest way to test all those classes against the interface? I'd like to reduce test code duplication, but still 'stay true' to the principles of Unit testing. What would you consider best practice? I'm using NUnit, but I suppose examples from any Unit testing framework would be valid | If you have classes that implement any one interface then they all need to implement the methods in that interface. In order to test these classes you need to create a unit test class for each of the classes. Let's go with a smarter route instead; if your goal is to avoid code and test code duplication you might want to create an abstract class instead that handles the recurring code. E.g. you have the following interface: public interface IFoo { void CommonCode(); void SpecificCode();} You might want to create an abstract class: public abstract class AbstractFoo : IFoo { public void CommonCode() { SpecificCode(); } public abstract void SpecificCode();} Testing that is easy; implement the abstract class in the test class either as an inner class: [TestFixture]public class TestClass { private class TestFoo : AbstractFoo { public bool hasCalledSpecificCode = false; public override void SpecificCode() { hasCalledSpecificCode = true; } } [Test] public void testCommonCallsSpecificCode() { TestFoo fooFighter = new TestFoo(); fooFighter.CommonCode(); Assert.That(fooFighter.hasCalledSpecificCode, Is.True); }} ...or let the test class extend the abstract class itself if that fits your fancy. [TestFixture]public class TestClass : AbstractFoo { bool hasCalledSpecificCode; public override void SpecificCode() { hasCalledSpecificCode = true; } [Test] public void testCommonCallsSpecificCode() { AbstractFoo fooFighter = this; hasCalledSpecificCode = false; fooFighter.CommonCode(); Assert.That(fooFighter.hasCalledSpecificCode, Is.True); } } Having an abstract class take care of common code that an interface implies gives a much cleaner code design. I hope this makes sense to you. As a side note, this is a common design pattern called the Template Method pattern . In the above example, the template method is the CommonCode method and SpecificCode is called a stub or a hook. The idea is that anyone can extend behavior without the need to know the behind the scenes stuff. A lot of frameworks rely on this behavioral pattern, e.g. ASP.NET where you have to implement the hooks in a page or a user control, such as the generated Page_Load method which is called by the Load event; the template method calls the hooks behind the scenes. There are a lot more examples of this. Basically anything that you have to implement that is using the words "load", "init", or "render" is called by a template method. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3024/"
]
} |
39,065 | I am working on localization for a asp.net application that consists of several projects. For this, there are some strings that are used in several of these projects. Naturally, I would prefer to have only one copy of the resource file in each project. Since the resource files don't have an namespace (at least as far as I can tell), they can't be accessed like regular classes. Is there any way to reference resx files in another project, within the same solution? | You can just create a class library project, add a resource file there, and then refer to that assembly for common resources. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1090/"
]
} |
39,066 | Whenever I use my MacBook away from my desk and later plug it into an external display (as primary), I get into the state of having windows deposited in both the notebook monitor and the external one. To move all windows to a single screen, my current solution is to "Turn on mirroring" in the display preferences and then turn it off again. This is rather tedious, though. Does anyone know of a better way? I'm afraid the script posted by @erlando does absolutely nothing for me, running Mac OS X 10.5.4. (I.e., with windows on both screens, running the script moves not a single one of them, and it does not return any errors.) I guess I'll just have to stick with using the "mirror/unmirror" method mentioned above. @Denton: I'm afraid those links provide scripts for getting windows which are orphaned from any screen back onto the display. I 'just' want to move all windows from a secondary display onto the primary display. | Cmd+F1 appears to be a Mirror Displays shortcut in Snow Leopard. Don't know about Lion, etc, though. Just tap it twice and see what happens (-: For the people who prefer to set up their function keys to act in the old-fashioned way (not as brightness/sound controls etc.), it will be Cmd+Fn+F1 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39066",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4161/"
]
} |
39,086 | I want to loop over the contents of a text file and do a search and replace on some lines and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it. What is the best way to do this, within the following code? f = open(file)for line in f: if line.contains('foo'): newline = line.replace('foo', 'bar') # how to write this newline back to the file | I guess something like this should do it. It basically writes the content to a new file and replaces the old file with the new file: from tempfile import mkstempfrom shutil import move, copymodefrom os import fdopen, removedef replace(file_path, pattern, subst): #Create temp file fh, abs_path = mkstemp() with fdopen(fh,'w') as new_file: with open(file_path) as old_file: for line in old_file: new_file.write(line.replace(pattern, subst)) #Copy the file permissions from the old file to the new file copymode(file_path, abs_path) #Remove original file remove(file_path) #Move new file move(abs_path, file_path) | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4166/"
]
} |
39,104 | I've written a Python package that includes a bsddb database of pre-computed values for one of the more time-consuming computations. For simplicity, my setup script installs the database file in the same directory as the code which accesses the database (on Unix, something like /usr/lib/python2.5/site-packages/mypackage/). How do I store the final location of the database file so my code can access it? Right now, I'm using a hack based on the __file__ variable in the module which accesses the database: dbname = os.path.join(os.path.dirname(__file__), "database.dat") It works, but it seems... hackish. Is there a better way to do this? I'd like to have the setup script just grab the final installation location from the distutils module and stuff it into a "dbconfig.py" file that gets installed alongside the code that accesses the database. | Try using pkg_resources, which is part of setuptools (and available on all of the pythons I have access to right now): >>> import pkg_resources>>> pkg_resources.resource_filename(__name__, "foo.config")'foo.config'>>> pkg_resources.resource_filename('tempfile', "foo.config")'/usr/lib/python2.4/foo.config' There's more discussion about using pkg_resources to get resources on the eggs page and the pkg_resources page. Also note, where possible it's probably advisable to use pkg_resources.resource_stream or pkg_resources.resource_string because if the package is part of an egg, resource_filename will copy the file to a temporary directory. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4198/"
]
} |
39,112 | I know in certain circumstances, such as long running processes, it is important to lock ASP.NET cache in order to avoid subsequent requests by another user for that resource from executing the long process again instead of hitting the cache. What is the best way in c# to implement cache locking in ASP.NET? | Here's the basic pattern: Check the cache for the value, return it if it's available If the value is not in the cache, then implement a lock Inside the lock, check the cache again, you might have been blocked Perform the value look up and cache it Release the lock In code, it looks like this: private static object ThisLock = new object();public string GetFoo(){ // try to pull from cache here lock (ThisLock) { // cache was empty before we got the lock, check again inside the lock // cache is still empty, so retrieve the value here // store the value in the cache here } // return the cached value here} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/39112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2471/"
]
} |
39,116 | I'm working on a module for a CMS. This module is distributed as a class library DLL. I have several utility libraries I'd like to use in this module. Is there anyway I can link these libraries statically so I won't have to distribute several DLL's (thereby distributing my utility libraries separately)? I would like to have only one DLL. | You can merge your many DLLs with ILMERGE: http://research.microsoft.com/~mbarnett/ILMerge.aspx Haven't tried it myself. Hope it helps. Download here: http://www.microsoft.com/downloads/details.aspx?familyid=22914587-B4AD-4EAE-87CF-B14AE6A939B0&displaylang=en Brief Description (from download-page) ILMerge is a utility for merging multiple .NET assemblies into a single .NET assembly. It works on executables and DLLs alike and comes with several options for controlling the processing and format of the output. See the accompanying documentation for details. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4192/"
]
} |
39,187 | I prefer dark backgrounds for coding, and I've downloaded a jar file containing an IntelliJ IDEA color theme that has a dark background. How do I tell IntelliJ about it? | Go to File->Import Settings... and select the jar settings file Update as of IntelliJ 2020 : Go to File -> Manage IDE Settings -> Import Settings... | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/39187",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4203/"
]
} |
39,222 | I'm working on porting a Visual C++ application to GCC (should build on MingW and Linux). The existing code uses __try { ... } __except(1) { ... } blocks in a few places so that almost nothing (short of maybe out of memory type errors?) would make the program exit without doing some minimal logging. What are the options for doing something similar with GCC? Edit: Thanks for the pointer to /EH options in Visual Studio, what I need now is some examples on how to handle signals on Linux. I've found this message from 2002. What other signals besides SIGFPE and SIGSEVG should I watch out for? (Mostly care about ones that might be raised from me doing something wrong) Bounty Information :I want my application to be able to self-log as many error conditions as possible before it exits. What signals might I get and which would generally be impossible to log an error message after? (Out of memory, what else?) How can I handle exceptions and (most importantly) signals in a portable way that the code at least works the same on Linux and MingW. #ifdef is OK. The reason I don't just have a wrapper process that logs the failure is that for performance reasons I save writing some data to disk till the last minute, so if something goes wrong I want to make all possible attempts to write the data out before exiting. | try { xxx } catch(...) { xxx } would be more portable but might not catch as much. It depends on compiler settings and environments. Using the default VC++ settings, asynchronous (SEH) errors are not delivered to the C++ EH infrastructure; to catch them you need to use SEH handlers (__try/__except) instead. VC++ allows you to route SEH errors through C++ error-handling, which allows a catch(...) to trap SEH errors; this includes memory errors such as null pointer dereferences. Details . On Linux, however, many of the errors that Windows uses SEH for are indicated through signals. These are not ever caught by try/catch; to handle them you need a signal handler. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/163/"
]
} |
39,276 | Is there any free or commercial component written in .NET (no COM interop) that will work with most twain scanners? | TwainDotNet I've just wrapped up the code from Thomas Scheidegger's article ( CodeProject: .NET TWAIN image scanning ) into a Google code project: http://code.google.com/p/twaindotnet/ I've cleaned up the API a bit and added WPF support, so check it out. :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4171/"
]
} |
39,281 | We have a requirement in project to store all the revisions(Change History) for the entities in the database. Currently we have 2 designed proposals for this: e.g. for "Employee" Entity Design 1: -- Holds Employee Entity"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"-- Holds the Employee Revisions in Xml. The RevisionXML will contain-- all data of that particular EmployeeId"EmployeeHistories (EmployeeId, DateModified, RevisionXML)" Design 2: -- Holds Employee Entity"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"-- In this approach we have basically duplicated all the fields on Employees -- in the EmployeeHistories and storing the revision data."EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName, LastName, DepartmentId, .., ..)" Is there any other way of doing this thing? The problem with the "Design 1" is that we have to parse XML each time when you need to access data. This will slow the process and also add some limitations like we cannot add joins on the revisions data fields. And the problem with the "Design 2" is that we have to duplicate each and every field on all entities (We have around 70-80 entities for which we want to maintain revisions). | I think the key question to ask here is 'Who / What is going to be using the history'? If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past... Create a table called 'AuditTrail' or something that has the following fields... [ID] [int] IDENTITY(1,1) NOT NULL,[UserID] [int] NULL,[EventDate] [datetime] NOT NULL,[TableName] [varchar](50) NOT NULL,[RecordID] [varchar](20) NOT NULL,[FieldName] [varchar](50) NULL,[OldValue] [varchar](5000) NULL,[NewValue] [varchar](5000) NULL You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table. You can then add a trigger to every table to catch any insert / update that happens and creates an entry in this table for each field that's changed. Because the table is also being supplied with the 'LastUpdateByUserID' for each update / insert, you can access this value in the trigger and use it when adding to the audit table. We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields. I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions. Just a thought! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39281",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191/"
]
} |
39,288 | I have heard umpteen times that we 'should not mix business logic with other code' or statements like that. I think every single code I write (processing steps I mean) consists of logic that is related to the business requirements.. Can anyone tell me what exactly consists of business logic? How can it be distinguished from other code? Is there some simple test to determine what is business logic and what is not? | Simply define what you are doing in plain English. When you are saying things businesswise, like "make those suffer", "steal that money", "destroy this portion of earth" you are talking about business layer. To make it clear, things that get you excited go here. When you are saying "show this here", "do not show that", "make it more beautiful" you are talking about the presentation layer. These are the things that get your designers excited. When you are saying things like "save this", "get this from database", "update", "delete", etc. you are talking about the data layer. These are the things that tell you what to keep forever at all costs. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/39288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
39,331 | What techniques can be applied effectively to improve the performance of SQL queries? Are there any general rules that apply? | Use primary keys Avoid select * Be as specific as you can when building your conditional statements De-normalisation can often be more efficient Table variables and temporary tables (where available) will often be better than using a large source table Partitioned views Employ indices and constraints | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39331",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
39,365 | Typically I develop my websites on trunk, then merge changes to a testing branch where they are put on a 'beta' website, and then finally they are merged onto a live branch and put onto the live website. With a Facebook application things are a bit tricky. As you can't view a Facebook application through a normal web browser (it has to go through the Facebook servers) you can't easily give each developer their own version of the website to work with and test. I have not come across anything about the best way to develop and test a Facebook application while continuing to have a stable live website that users can use. My question is this, what is the best practice for organising the development and testing of a Facebook application? | The way I and my partner did it was we each made our own private Facebook applications that pointed to our IP address where we worked on it. Since we worked in the same place, we each picked a different port, and had our router forward that port to our local IP address. It was kinda slow to refresh a page, but it worked very nicely. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2990/"
]
} |
39,391 | If I create an HTTP java.net.URL and then call openConnection() on it, does it necessarily imply that an HTTP post is going to happen? I know that openStream() implies a GET. If so, how do you perform one of the other HTTP verbs without having to work with the raw socket layer? | If you retrieve the URLConnection object using openConnection() it doesn't actually start communicating with the server. That doesn't happen until you get the stream from the URLConnection . When you first get the connection you can add/change headers and other connection properties before actually opening it. URLConnection's life cycle is a bit odd. It doesn't send the headers to the server until you've gotten one of the streams. If you just get the input stream then I believe it does a GET, sends the headers, then lets you read the output. If you get the output stream then I believe it sends it as a POST, as it assumes you'll be writing data to it (You may need to call setDoOutput(true) for the output stream to work). As soon as you get the input stream the output stream is closed and it waits for the response from the server. For example, this should do a POST: URL myURL = new URL("http://example.com/my/path");URLConnection conn = myURL.openConnection();conn.setDoOutput(true);conn.setDoInput(true);OutputStream os = conn.getOutputStream();os.write("Hi there!".getBytes());os.close();InputStream is = conn.getInputStream();// read stuff here While this would do a GET: URL myURL = new URL("http://example.com/my/path");URLConnection conn = myURL.openConnection();conn.setDoOutput(false);conn.setDoInput(true);InputStream is = conn.getInputStream();// read stuff here URLConnection will also do other weird things. If the server specifies a content length then URLConnection will keep the underlying input stream open until it receives that much data, even if you explicitly close it . This caused a lot of problems for us as it made shutting our client down cleanly a bit hard, as the URLConnection would keep the network connection open. This problem probably exists even if you just use getStream(), though. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4223/"
]
} |
39,395 | How can I calculate the value of PI using C#? I was thinking it would be through a recursive function, if so, what would it look like and are there any math equations to back it up? I'm not too fussy about performance, mainly how to go about it from a learning point of view. | If you want recursion: PI = 2 * (1 + 1/3 * (1 + 2/5 * (1 + 3/7 * (...)))) This would become, after some rewriting: PI = 2 * F(1); with F(i): double F (int i) { return 1 + i / (2.0 * i + 1) * F(i + 1);} Isaac Newton (you may have heard of him before ;) ) came up with this trick. Note that I left out the end condition, to keep it simple. In real life, you kind of need one. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/39395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
39,399 | I have a struts-based webapp, and I would like the default "welcome" page to be an action. The only solutions I have found to this seem to be variations on making the welcome page a JSP that contains a redirect to the action. For example, in web.xml : <welcome-file-list> <welcome-file>index.jsp</welcome-file></welcome-file-list> and in index.jsp : <% response.sendRedirect("/myproject/MyAction.action");%> Surely there's a better way! | Personally, I'd keep the same setup you have now, but change the redirect for a forward. That avoids sending a header back to the client and having them make another request. So, in particular, I'd replace the <% response.sendRedirect("/myproject/MyAction.action");%> in index.jsp with <jsp:forward page="/MyAction.action" /> The other effect of this change is that the user won't see the URL in the address bar change from " http://server/myproject " to " http://server/myproject/index.jsp ", as the forward happens internally on the server. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3913/"
]
} |
39,419 | In Visual C++ a DWORD is just an unsigned long that is machine, platform, and SDK dependent. However, since DWORD is a double word (that is 2 * 16), is a DWORD still 32-bit on 64-bit architectures? | Actually, on 32-bit computers a word is 32-bit, but the DWORD type is a leftover from the good old days of 16-bit. In order to make it easier to port programs to the newer system, Microsoft has decided all the old types will not change size. You can find the official list here: http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx All the platform-dependent types that changed with the transition from 32-bit to 64-bit end with _PTR (DWORD_PTR will be 32-bit on 32-bit Windows and 64-bit on 64-bit Windows). | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/39419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44972/"
]
} |
39,447 | I have a class property exposing an internal IList<> through System.Collections.ObjectModel.ReadOnlyCollection<> How can I pass a part of this ReadOnlyCollection<> without copying elements into a new array (I need a live view, and the target device is short on memory)? I'm targetting Compact Framework 2.0. | Try a method that returns an enumeration using yield: IEnumerable<T> FilterCollection<T>( ReadOnlyCollection<T> input ) { foreach ( T item in input ) if ( /* criterion is met */ ) yield return item;} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39447",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3205/"
]
} |
39,473 | I'm currently implementing a raytracer. Since raytracing is extremely computation heavy and since I am going to be looking into CUDA programming anyway, I was wondering if anyone has any experience with combining the two. I can't really tell if the computational models match and I would like to know what to expect. I get the impression that it's not exactly a match made in heaven, but a decent speed increasy would be better than nothing. | One thing to be very wary of in CUDA is that divergent control flow in your kernel code absolutely KILLS performance, due to the structure of the underlying GPU hardware. GPUs typically have massively data-parallel workloads with highly-coherent control flow (i.e. you have a couple million pixels, each of which (or at least large swaths of which) will be operated on by the exact same shader program, even taking the same direction through all the branches. This enables them to make some hardware optimizations, like only having a single instruction cache, fetch unit, and decode logic for each group of 32 threads. In the ideal case, which is common in graphics, they can broadcast the same instruction to all 32 sets of execution units in the same cycle (this is known as SIMD, or Single-Instruction Multiple-Data). They can emulate MIMD (Multiple-Instruction) and SPMD (Single-Program), but when threads within a Streaming Multiprocessor (SM) diverge (take different code paths out of a branch), the issue logic actually switches between each code path on a cycle-by-cycle basis. You can imagine that, in the worst case, where all threads are on separate paths, your hardware utilization just went down by a factor of 32, effectively killing any benefit you would've had by running on a GPU over a CPU, particularly considering the overhead associated with marshalling the dataset from the CPU, over PCIe, to the GPU. That said, ray-tracing, while data-parallel in some sense, has widely-diverging control flow for even modestly-complex scenes. Even if you manage to map a bunch of tightly-spaced rays that you cast out right next to each other onto the same SM, the data and instruction locality you have for the initial bounce won't hold for very long. For instance, imagine all 32 highly-coherent rays bouncing off a sphere. They will all go in fairly different directions after this bounce, and will probably hit objects made out of different materials, with different lighting conditions, and so forth. Every material and set of lighting, occlusion, etc. conditions has its own instruction stream associated with it (to compute refraction, reflection, absorption, etc.), and so it becomes quite difficult to run the same instruction stream on even a significant fraction of the threads in an SM. This problem, with the current state of the art in ray-tracing code, reduces your GPU utilization by a factor of 16-32, which may make performance unacceptable for your application, especially if it's real-time (e.g. a game). It still might be superior to a CPU for e.g. a render farm. There is an emerging class of MIMD or SPMD accelerators being looked at now in the research community. I would look at these as logical platforms for software, real-time raytracing. If you're interested in the algorithms involved and mapping them to code, check out POVRay. Also look into photon mapping, it's an interesting technique that even goes one step closer to representing physical reality than raytracing. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39473",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4055/"
]
} |
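To make the warp-divergence point in the answer above concrete, here is a minimal CUDA sketch (the kernel, material IDs, and constants are all invented for illustration): every distinct branch taken inside a 32-thread warp is executed serially, so a per-ray material switch like this can cost the sum of all its cases rather than the cost of one.

```cuda
#include <cuda_runtime.h>

// Hypothetical per-ray shading kernel: each thread picks a code path
// based on the material its ray hit. Threads in the same warp that
// select different cases are serialized by the hardware.
__global__ void shade(const int* material, float* radiance, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (material[i]) {                      // divergence point
        case 0:  radiance[i] = 0.1f;  break;    // diffuse
        case 1:  radiance[i] *= 0.9f; break;    // mirror
        default: radiance[i] = 0.0f;  break;    // absorbed
    }
}

int main() {
    const int n = 1 << 20;
    int* mat; float* rad;
    cudaMalloc(&mat, n * sizeof(int));
    cudaMalloc(&rad, n * sizeof(float));
    cudaMemset(mat, 0, n * sizeof(int));        // all rays hit material 0
    cudaMemset(rad, 0, n * sizeof(float));
    shade<<<(n + 255) / 256, 256>>>(mat, rad, n);
    cudaDeviceSynchronize();
    cudaFree(mat); cudaFree(rad);
    return 0;
}
```

Sorting or binning rays by material before the launch is one common way to keep warps coherent.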
39,474 | Does anyone know how to get IntelliSense to work reliably when working in C/C++ projects? It seems to work for about 1 in 10 files. Visual Studio 2005 seems to be a lot better than 2008. Edit: Whilst not necessarily a solution, the work-around provided here: How to get IntelliSense to reliably work in Visual Studio 2008 is probably the best bet if I want a decent IntelliSense system. | I've also noticed that IntelliSense sometimes gets 'lost' on some big projects. Why? No idea. This is why we have bought Visual Assist (from Tomato software) and disabled IntelliSense by deleting the DLL feacp.dll in the Visual Studio subdirectory (C:\Program Files\Microsoft Visual Studio 8\VC\vcpackages). This is not a solution, just a workaround. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/986/"
]
} |
39,476 | In the How Can I Expose Only a Fragment of IList<> question one of the answers had the following code snippet: IEnumerable<object> FilteredList(){ foreach(object item in FullList) { if(IsItemInPartialList(item)) yield return item; }} What does the yield keyword do there? I've seen it referenced in a couple places, and one other question, but I haven't quite figured out what it actually does. I'm used to thinking of yield in the sense of one thread yielding to another, but that doesn't seem relevant here. | The yield contextual keyword actually does quite a lot here. The function returns an object that implements the IEnumerable<object> interface. If a calling function starts foreach ing over this object, the function is called again until it "yields". This is syntactic sugar introduced in C# 2.0 . In earlier versions you had to create your own IEnumerable and IEnumerator objects to do stuff like this. The easiest way understand code like this is to type-in an example, set some breakpoints and see what happens. Try stepping through this example: public void Consumer(){ foreach(int i in Integers()) { Console.WriteLine(i.ToString()); }}public IEnumerable<int> Integers(){ yield return 1; yield return 2; yield return 4; yield return 8; yield return 16; yield return 16777216;} When you step through the example, you'll find the first call to Integers() returns 1 . The second call returns 2 and the line yield return 1 is not executed again. Here is a real-life example: public IEnumerable<T> Read<T>(string sql, Func<IDataReader, T> make, params object[] parms){ using (var connection = CreateConnection()) { using (var command = CreateCommand(CommandType.Text, sql, connection, parms)) { command.CommandTimeout = dataBaseSettings.ReadCommandTimeout; using (var reader = command.ExecuteReader()) { while (reader.Read()) { yield return make(reader); } } } }} | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/39476",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1409/"
]
} |
39,533 | Is there a way to identify, from within a VM, that your code is running inside a VM? I guess there are more or less easy ways to identify specific VM systems, especially if the VM has the provider's extensions installed (such as for VirtualBox or VMWare). But is there a general way to identify that you are not running directly on the CPU? | A lot of the research on this is dedicated to detecting so-called "blue pill" attacks, that is, a malicious hypervisor that is actively attempting to evade detection. The classic trick to detect a VM is to populate the ITLB, run an instruction that must be virtualized (which necessarily clears out such processor state when it gives control to the hypervisor), then run some more code to detect if the ITLB is still populated. The first paper on it is located here , and a rather colorful explanation from a researcher's blog and alternative Wayback Machine link to the blog article (images broken) . Bottom line from discussions on this is that there is always a way to detect a malicious hypervisor, and it's much simpler to detect one that isn't trying to hide. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39533",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/107/"
]
} |
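A complementary, non-adversarial check for the simpler cases: most mainstream hypervisors set the "hypervisor present" bit (bit 31 of ECX in CPUID leaf 1). A minimal sketch for GCC/Clang on x86, assuming <cpuid.h> is available; a deliberately stealthy hypervisor can simply clear this bit, which is exactly why the ITLB timing tricks in the answer exist.

```c
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 1 not supported");
        return 1;
    }
    /* Bit 31 of ECX is defined as "hypervisor present": bare metal
       leaves it clear; KVM, VMware, Hyper-V, VirtualBox etc. set it. */
    if (ecx & (1u << 31))
        puts("running under a hypervisor");
    else
        puts("no hypervisor detected (or it is hiding)");
    return 0;
}
```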
39,561 | Trying to get my CSS / C# functions to look like this: body { color:#222; } (opening brace kept on the same line) instead of having the brace dropped onto its own line whenever I auto-format the code. | C# In the Tools Menu click Options Click Show all settings (checkbox at the bottom left) Text Editor C# Formatting New lines And there check when you want new lines with brackets Css: almost the same, but fewer options In the Tools Menu click Options Click Show all settings (checkbox at the bottom left) Text Editor CSS Format Then select the formatting you want (in your case the second radio button) For Visual Studio 2015: Tools → Options In the sidebar, go to Text Editor → C# → Formatting → New Lines and uncheck every checkbox in the section "New line options for braces" For Mac OS users: Preferences → Source Code → Code Formatting → choose whatever you want to change (like C# source code) → C# Format → Edit → New Lines | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/26/"
]
} |
39,567 | In Ruby, given an array in one of the following forms... [apple, 1, banana, 2][[apple, 1], [banana, 2]] ...what is the best way to convert this into a hash in the form of... {apple => 1, banana => 2} | NOTE : For a concise and efficient solution, please see Marc-André Lafortune's answer below. This answer was originally offered as an alternative to approaches using flatten, which were the most highly upvoted at the time of writing. I should have clarified that I didn't intend to present this example as a best practice or an efficient approach. Original answer follows. Warning! Solutions using flatten will not preserve Array keys or values! Building on @John Topley's popular answer, let's try: a3 = [ ['apple', 1], ['banana', 2], [['orange','seedless'], 3] ]h3 = Hash[*a3.flatten] This throws an error: ArgumentError: odd number of arguments for Hash from (irb):10:in `[]' from (irb):10 The constructor was expecting an Array of even length (e.g. ['k1','v1,'k2','v2']). What's worse is that a different Array which flattened to an even length would just silently give us a Hash with incorrect values. If you want to use Array keys or values, you can use map : h3 = Hash[a3.map {|key, value| [key, value]}]puts "h3: #{h3.inspect}" This preserves the Array key: h3: {["orange", "seedless"]=>3, "apple"=>1, "banana"=>2} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/39567",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4142/"
]
} |
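For reference, both input shapes from the question convert in one line on a modern Ruby (a sketch; Array#to_h needs Ruby 2.1 or later):

```ruby
flat  = ['apple', 1, 'banana', 2]
pairs = [['apple', 1], ['banana', 2]]

flat.each_slice(2).to_h  # => {"apple"=>1, "banana"=>2}
pairs.to_h               # => {"apple"=>1, "banana"=>2}
Hash[pairs]              # same result, works on older Rubies too
```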
39,576 | I'm looking for a good way to perform multi-row inserts into an Oracle 9 database. The following works in MySQL but doesn't seem to be supported in Oracle. INSERT INTO TMP_DIM_EXCH_RT (EXCH_WH_KEY, EXCH_NAT_KEY, EXCH_DATE, EXCH_RATE, FROM_CURCY_CD, TO_CURCY_CD, EXCH_EFF_DATE, EXCH_EFF_END_DATE, EXCH_LAST_UPDATED_DATE) VALUES (1, 1, '28-AUG-2008', 109.49, 'USD', 'JPY', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'), (2, 1, '28-AUG-2008', .54, 'USD', 'GBP', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'), (3, 1, '28-AUG-2008', 1.05, 'USD', 'CAD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'), (4, 1, '28-AUG-2008', .68, 'USD', 'EUR', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'), (5, 1, '28-AUG-2008', 1.16, 'USD', 'AUD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'), (6, 1, '28-AUG-2008', 7.81, 'USD', 'HKD', '28-AUG-2008', '28-AUG-2008', '28-AUG-2008'); | This works in Oracle: insert into pager (PAG_ID,PAG_PARENT,PAG_NAME,PAG_ACTIVE) select 8000,0,'Multi 8000',1 from dualunion all select 8001,0,'Multi 8001',1 from dual The thing to remember here is to use the from dual statement. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3734/"
]
} |
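Oracle has also had a dedicated multi-row insert since 9i, INSERT ALL, which avoids the chain of UNION ALL selects; a sketch against the same pager table used in the answer:

```sql
INSERT ALL
  INTO pager (PAG_ID, PAG_PARENT, PAG_NAME, PAG_ACTIVE) VALUES (8000, 0, 'Multi 8000', 1)
  INTO pager (PAG_ID, PAG_PARENT, PAG_NAME, PAG_ACTIVE) VALUES (8001, 0, 'Multi 8001', 1)
SELECT * FROM dual;
```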
39,615 | I have a set of base filenames, for each name 'f' there are exactly two files, 'f.in' and 'f.out'. I want to write a batch file (in Windows XP) which goes through all the filenames, for each one it should: Display the base name 'f' Perform an action on 'f.in' Perform another action on 'f.out' I don't have any way to list the set of base filenames, other than to search for *.in (or *.out) for example. | Assuming you have two programs that process the two files, process_in.exe and process_out.exe: for %%f in (*.in) do ( echo %%~nf process_in "%%~nf.in" process_out "%%~nf.out") %%~nf is a substitution modifier, that expands %f to a file name only.See other modifiers in https://technet.microsoft.com/en-us/library/bb490909.aspx (midway down the page) or just in the next answer. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3974/"
]
} |
39,648 | Is there a good way to debug errors in the Visual Studio Designer? In our project we have tons of UserControls and many complex forms. For the complex ones, the Designer often throws various exceptions which doesn't help much, and I was wondering if there's some nice way to figure out what has gone wrong. The language is C#, and we're using Visual Studio 2005. | I've been able to debug some control designer issues by running a second instance of VS, then from your first VS instance do a "Debug -> Attach to Process" and pick "devenv". The first VS instance is where you'll set your breakpoints. Use the second instance to load up the designer to cause the "designer" code to run. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4037/"
]
} |
39,651 | In a previous Git question , Daniel Benamy was talking about a workflow in Git: I was working on master and committed some stuff and then decided I wanted to put that work on hold. I backed up a few commits and then branched from before I started my crap work. He wanted to restore his working state to a previous point in time without losing his current changes. All of the answers revolved around, in various ways, something like git branch -m master crap_workgit branch -m previous_master master How does this compare to git stash ? I'm a bit confused trying to see what the different use case here when it seems like everything git stash does is already handled by branching… @ Jordi Bunster : Thanks, that clears things up. I guess I'd kind of consider "stashing" to be like a lightweight, nameless, branch. So anything stash can do, branch can as well but with more words. Nice! | 'stash' takes the uncommitted, " dirty " stuff on your working copy, and stashes it away, leaving you with a clean working copy. It doesn't really branch at all. You can then apply the stash on top of any other branch. Or, as of Git 1.6, you can do: git stash branch <branchname> [<stash>] to apply the stash on top of a new branch, all in one command. So, stash works great if you have not committed to the " wrong " branch yet. If you've already committed, then the workflow you describe in your question is a better alternative. And by the way, you're right: Git is very flexible, and with that flexibility comes overlapping functionality. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/39651",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4161/"
]
} |
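For completeness, the two workflows from the discussion as concrete commands (a sketch; the branch names are made up):

```sh
# stash route: park the dirty working copy, come back later
git stash save "wip: crap work"   # newer Git spells this `git stash push -m`
git checkout -b hotfix master
# ... fix things, commit ...
git checkout master
git stash pop                     # or: git stash branch crap_work

# branch route: same idea, but the parked work gets a real named commit
git checkout -b crap_work
git commit -am "wip"
git checkout master
```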
39,663 | I'm reading some MPEG Transport Stream protocol over UDP and it has some funky bitfields in it (length 13 for example). I'm using the "struct" library to do the broad unpacking, but is there a simple way to say "Grab the next 13 bits" rather than have to hand-tweak the bit manipulation? I'd like something like the way C does bit fields (without having to revert to C). Suggestions? | The bitstring module is designed to address just this problem. It will let you read, modify and construct data using bits as the basic building blocks. The latest versions are for Python 2.6 or later (including Python 3) but version 1.0 supported Python 2.4 and 2.5 as well. A relevant example for you might be this, which strips out all the null packets from a transport stream (and quite possibly uses your 13 bit field?): from bitstring import Bits, BitStream # Opening from a file means that it won't be all read into memorys = Bits(filename='test.ts')outfile = open('test_nonull.ts', 'wb')# Cut the stream into 188 byte packetsfor packet in s.cut(188*8): # Take a 13 bit slice and interpret as an unsigned integer PID = packet[11:24].uint # Write out the packet if the PID doesn't indicate a 'null' packet if PID != 8191: # The 'bytes' property converts back to a string. outfile.write(packet.bytes) Here's another example including reading from bitstreams: # You can create from hex, binary, integers, strings, floats, files...# This has a hex code followed by two 12 bit integerss = BitStream('0x000001b3, uint:12=352, uint:12=288')# Append some other bitss += '0b11001, 0xff, int:5=-3'# read back as 32 bits of hex, then two 12 bit unsigned integersstart_code, width, height = s.readlist('hex:32, 2*uint:12')# Skip some bits then peek at next bit values.pos += 4if s.peek(1): flags = s.read(9) You can use standard slice notation to slice, delete, reverse, overwrite, etc. at the bit level, and there are bit level find, replace, split etc. functions. Different endiannesses are also supported. # Replace every '1' bit by 3 bitss.replace('0b1', '0b001')# Find all occurrences of a bit sequencebitposlist = list(s.findall('0b01000'))# Reverse bits in places.reverse() The full documentation is here . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39663",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2587612/"
]
} |
39,691 | What are some good resources to learn best practices for Javascript? I'm mainly concerned about when something should be an object vs. when it should just be tracked in the DOM. Also I would like to better learn how to organize my code so it's easy to unit test. | Seconding Javascript: The Good Parts and Resig's book Secrets of the Javascript Ninja . Here are some tips for Javascript: Don't pollute the global namespace (put all functions into objects/closures) Take a look at YUI , it's a huge codebase with only 2 global objects: YAHOO and YAHOO_config Use the Module pattern for singletons ( http://yuiblog.com/blog/2007/06/12/module-pattern/ ) Make your JS as reusable as possible (jQuery plugins, YUI modules, basic JS objects.) Don't write tons of global functions. Don't forget to var your variables Use JSlint : http://www.jslint.com/ If you need to save state, it's probably best to use objects instead of the DOM. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4114/"
]
} |
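A minimal sketch of the module pattern recommended above: one global name, everything else private inside a closure (all names are illustrative):

```javascript
var MYAPP = (function () {
    var count = 0;                    // private state, not on window

    function log(msg) {               // private helper
        if (window.console) console.log(msg);
    }

    return {                          // the only public surface
        increment: function () { count += 1; log('count=' + count); },
        value:     function () { return count; }
    };
}());

MYAPP.increment();
MYAPP.value(); // 1
```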
39,739 | So, I've read that it is not a good idea to install VS2008 on my test server machine as it changes the run time environment too much. I've never attempted remote debugging with Visual Studio before, so what is the "best" way to get line by line remote debugging of server side web app code. I'd like to be able to set a breakpoint, attach, and start stepping line by line to verify code flow and, you know, debug and stuff :). I'm sure most of the answers will pertain to ASP.NET code, and I'm interested in that, but my current code base is actually Classic ASP and ISAPI Extensions, so I care about that a little more. Also, my test server is running in VMWare, I've noticed in the latest VMWare install it mentioning something about debugging support, but I'm unfamiliar with what that means...anyone using it, what does it do for you? | First, this is MUCH easier if both the server and your workstation are on the same domain (the server needs access to connect to your machine). In your C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\Remote Debugger\x86 (or x64, or ia64) directory are the files you need to copy to your server. There are different versions between Visual Studio versions, so make sure they match on the client and server side. On the server, fire up msvsmon. It will say something like "Msvsmon started a new server named xxx@yyyy". This is the name you'll use in Visual Studio to connect to this server. You can go into Tools > Options to set the server name and to set the authentication mode (hopefully Windows Authentication) - BTW No Authentication doesn't work for managed code. On the client side, open up Visual Studio and load the solution you're going to debug. Then go to Debug > Attach to Process. In the "Qualifier" field enter the name of the server as you saw it appear earlier. Click on the Select button and select the type of code you want to debug, then hit OK. Hopefully you'll see a list of the processes on the server that you can attach to (you should also see on the server that the debugging monitor just said you connected). Find the process to attach to (start up the app if necessary). If it's an ASP.NET website, you'd select w3wp.exe, then hit Attach. Set your breakpoints and hopefully you're now remotely debugging the code. AFAIK - the VMWare option lets you start up code inside of a VM but debug it from your workstation. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/282/"
]
} |
39,742 | Browsing through the git documentation, I can't see anything analogous to SVN's commit hooks or the "propset" features that can, say, update a version number or copyright notice within a file whenever it is committed to the repository. Are git users expected to write external scripts for this sort of functionality (which doesn't seem out of the question) or have I just missed something obvious? Edit : Just to be clear, I'm more interested in, e.g., svn propset svn:keywords "Author Date Id Revision" expl3.dtx where a string like this: $Id: expl3.dtx 780 2008-08-30 12:32:34Z morten $ is kept up-to-date with the relevant info whenever a commit occurs. | Quoting from the Git FAQ : Does git have keyword expansion? Not recommended. Keyword expansion causes all sorts of strange problems andisn't really useful anyway, especially within the context of an SCM. Outsidegit you may perform keyword expansion using a script. The Linux kernel exportscript does this to set the EXTRA_VERSION variable in the Makefile. See gitattributes(5) if you really want to do this. If your translation is notreversible (eg SCCS keyword expansion) this may be problematic. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39742",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4161/"
]
} |
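The closest built-in Git gets to the $Id$ example in the question is the ident attribute mentioned in gitattributes(5); it expands a literal $Id$ to the blob's SHA-1 on checkout, nothing richer (no author, date, or revision). A sketch:

```
# .gitattributes
*.dtx ident
```

After checkout, $Id$ inside expl3.dtx becomes $Id: <40-hex blob hash> $; anything fancier has to be done with a custom clean/smudge filter.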
39,746 | I installed TortoiseHg (Mercurial) in my Vista 64-bit and the context menu is not showing up when I right click a file or folder. Is there any workaround for this problem? | Update: TortoiseHg 0.8 (released 2009-07-01) now includes both 32 and 64 bit shell extensions in the installer, and also works with Windows 7. The workaround described below is no longer necessary. A workaround to getting the context menus in Windows Explorer is buried in the TortoiseHg development mailing list archives. One of the posts provides this very handy tip on how to run 32-bit Explorer on 64-bit Windows: TortoiseHG context menus will show up if you run 32-bit windows explorer; create a shortcut with this (or use Start > Run): %Systemroot%\SysWOW64\explorer.exe /separate (Source: http://www.mail-archive.com/[email protected]/msg01055.html ) It works fairly well and is minimally invasive, but unfortunately this doesn't seem to make the icon overlays appear. I don't know of any workaround for that, but file status can still be viewed through TortoiseHg menu commands at least. All other TortoiseHg functionality seems intact. The icon overlays are now working with TortoiseHg 0.6 in 32-bit explorer! Not sure if this is a new fix or if I had some misconfiguration in 0.5; regardless this means TortoiseHg is fully functional in 64-bit Windows. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4264/"
]
} |
39,753 | Is it possible to cache database connections when using PHP like you would in a J2EE container? If so, how? | There is no connection pooling in PHP. mysql_pconnect and connection pooling are two different things. There are many problems connected with mysql_pconnect; you should read the manual first and use it carefully, but it is not connection pooling. Connection pooling is a technique where the application server manages the connections. When the application needs a connection it asks the application server for it, and the application server returns one of the pooled connections if there is one free. We can do connection scaling in PHP; for that, please go through the following link: http://www.oracle.com/technetwork/articles/dsl/white-php-part1-355135.html So no connection pooling in PHP. As Julio said, Apache releases all resources when the current request ends. You can use mysql_pconnect, but you are limited with that function and you must be very careful. Another choice is to use the singleton pattern, but none of this is pooling. This is a good article: https://blogs.oracle.com/opal/highly-scalable-connection-pooling-in-php Also read this one: http://www.apache2.es/2.2.2/mod/mod_dbd.html | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4263/"
]
} |
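A minimal sketch of the per-request singleton mentioned above (the class and credentials are invented, and the era-appropriate mysql_* API is long deprecated): it only prevents opening duplicate connections within one request, so it is still not pooling.

```php
<?php
class Db
{
    private static $conn = null;

    public static function get()
    {
        if (self::$conn === null) {
            // one connection per PHP request, shared by all callers
            self::$conn = mysql_connect('localhost', 'user', 'pass');
        }
        return self::$conn;
    }
}

$link = Db::get(); // every later Db::get() returns the same resource
```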
39,758 | From a web developer's point of view, what changes are expected in the development arena when Google Chrome is released? Are the developments powerful enough to make another revolution in the web? Will the way we see web programming change? Or is it just another web browser? | I think this is just another web browser. The biggest impact I expect is improved JavaScript performance, plus gains on the usability front. The first will benefit developers, especially when using Google Gears. I think the users will benefit the most from an enhanced user experience, the safety features, and ease of use. I can only hope other browser vendors (MS) will follow Mozilla and Google in creating a faster JavaScript implementation, since this is the only thing that can truly impact web development. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39758",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
39,771 | Is a GUID unique 100% of the time? Will it stay unique over multiple threads? | While each generated GUID is not guaranteed to be unique, the total number of unique keys (2^128, or about 3.4×10^38) is so large that the probability of the same number being generated twice is very small. For example, consider the observable universe, which contains about 5×10^22 stars; every star could then have 6.8×10^15 universally unique GUIDs. From Wikipedia . These are some good articles on how a GUID is made (for .NET) and how you could get the same GUID in the right situation. https://ericlippert.com/2012/04/24/guid-guide-part-one/ https://ericlippert.com/2012/04/30/guid-guide-part-two/ https://ericlippert.com/2012/05/07/guid-guide-part-three/ | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/39771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2469/"
]
} |
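To put a number on "very small": for version-4 GUIDs (122 random bits) the standard birthday approximation p ≈ n²/2^(bits+1) can be checked in a few lines of Python (a sketch of the approximation, not an exact probability):

```python
def collision_probability(n, random_bits=122):
    """Birthday-bound approximation for n random v4 GUIDs."""
    return n * n / 2 ** (random_bits + 1)

# a billion GUIDs per second, non-stop for a year:
n = 10**9 * 60 * 60 * 24 * 365
print(collision_probability(n))  # ~9.4e-05, about one in ten thousand
```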
39,824 | I'm debugging a production application that has a rash of empty catch blocks sigh : try {*SOME CODE*}catch{} Is there a way of seeing what the exception is when the debugger hits the catch in the IDE? | In VS, if you look in the Locals area of your IDE while inside the catch block, you will have something to the effect of $EXCEPTION which will have all of the information for the exception that was just caught. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39824",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4271/"
]
} |
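If you can touch the code at all, the cheapest fix is to give each catch an exception variable; recent Visual Studio versions also expose the caught exception as the $exception pseudo-variable in Locals even without one. A sketch (DoWork is a placeholder for the wrapped code):

```csharp
try
{
    DoWork();
}
catch (Exception ex)   // was: catch {}
{
    // Breakpoint here: inspect ex, or add it to the Watch window.
    System.Diagnostics.Debug.WriteLine(ex);
    throw;             // rethrow so the failure is no longer swallowed
}
```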
39,843 | I have decided that all my WPF pages need to register a routed event. Rather than include public static readonly RoutedEvent MyEvent= EventManager.RegisterRoutedEvent("MyEvent", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof(BasePage)); on every page, I decided to create a base page (named BasePage). I put the above line of code in my base page and then changed a few of my other pages to derive from BasePage. I can't get past this error: Error 12 'CTS.iDocV7.BasePage' cannot be the root of a XAML file because it was defined using XAML. Line 1 Position 22. C:\Work\iDoc7\CTS.iDocV7\UI\Quality\QualityControlQueuePage.xaml 1 22 CTS.iDocV7 Does anyone know how to best create a base page when I can put events, properties, methods, etc that I want to be able to use from any wpf page? | Here's how I've done this in my current project. First I've defined a class (as @Daren Thomas said - just a plain old C# class, no associated XAML file), like this (and yes, this is a real class - best not to ask): public class PigFinderPage : Page{ /* add custom events and properties here */} Then I create a new Page and change its XAML declaration to this: <my:PigFinderPage x:Class="Qaf.PigFM.WindowsClient.PenSearchPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:my="clr-namespace:Qaf.PigFM.WindowsClient" /> So I declare it as a PigFinderPage in the "my" namespace. Any page-wide resources you need have to be declared using a similar syntax: <my:PigFinderPage.Resources> <!-- your resources go here --></my:PigFinderPage.Resources> Lastly, switch to the code-behind for this new page, and change its class declaration so that it derives from your custom class rather than directly from Page, like this: public partial class EarmarkSearchPage : PigFinderPage Remember to keep it as a partial class. That's working a treat for me - I can define a bunch of custom properties and events back in "PigFinderPage" and use them in all the descendants. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3047/"
]
} |
39,855 | Is it possible to embed a PowerPoint presentation (.ppt) into a webpage (.xhtml)? This will be used on a local intranet where there is a mix of Internet Explorer 6 and Internet Explorer 7 only, so no need to consider other browsers. I've given up... I guess Flash is the way forward. | Google Docs can serve up PowerPoint (and PDF) documents in it's document viewer. You don't have to sign up for Google Docs, just upload it to your website, and call it from your page: <iframe src="//docs.google.com/gview?url=https://www.yourwebsite.com/powerpoint.ppt&embedded=true" style="width:600px; height:500px;" frameborder="0"></iframe> | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/39855",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383/"
]
} |
39,867 | I have a script that has a part that looks like this: for file in `ls *.tar.gz`; do echo encrypting $file gpg --passphrase-file /home/$USER/.gnupg/backup-passphrase \ --simple-sk-checksum -c $file done For some reason if I run this script manually, it works perfectly fine and all files are encrypted. If I run it as a cron job, echo $file works fine (I see "encrypting <file>" in the log), but the file doesn't get encrypted and gpg silently fails with no stdout/stderr output. Any clues? | It turns out that the answer was easier than I expected. The --batch parameter was missing; without it, gpg tries to read from /dev/tty, which doesn't exist for cron jobs. To debug this I used the --exit-on-status-write-error parameter; I was pointed in that direction by the exit status of 2, reported by echoing $?, as Cd-Man suggested. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39867",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3105/"
]
} |
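For reference, a cron-safe rewrite of the script from the question (a sketch: it assumes the same passphrase file, adds --batch, and avoids parsing ls output; GnuPG 2.x additionally needs --pinentry-mode loopback for --passphrase-file to take effect):

```sh
#!/bin/sh
for file in *.tar.gz; do
    echo "encrypting $file"
    gpg --batch --no-tty \
        --passphrase-file "/home/$USER/.gnupg/backup-passphrase" \
        --simple-sk-checksum -c "$file"
done
```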
39,879 | Is it a deliberate design decision or a problem with our current day browsers which will be rectified in the coming versions? | JavaScript does not support multi-threading because the JavaScript interpreter in the browser is a single thread (AFAIK). Even Google Chrome will not let a single web page’s JavaScript run concurrently because this would cause massive concurrency issues in existing web pages. All Chrome does is separate multiple components (different tabs, plug-ins, etcetera) into separate processes, but I can’t imagine a single page having more than one JavaScript thread. You can however use, as was suggested, setTimeout to allow some sort of scheduling and “fake” concurrency. This causes the browser to regain control of the rendering thread, and start the JavaScript code supplied to setTimeout after the given number of milliseconds. This is very useful if you want to allow the viewport (what you see) to refresh while performing operations on it. Just looping through e.g. coordinates and updating an element accordingly will just let you see the start and end positions, and nothing in between. We use an abstraction library in JavaScript that allows us to create processes and threads which are all managed by the same JavaScript interpreter. This allows us to run actions in the following manner: Process A, Thread 1 Process A, Thread 2 Process B, Thread 1 Process A, Thread 3 Process A, Thread 4 Process B, Thread 2 Pause Process A Process B, Thread 3 Process B, Thread 4 Process B, Thread 5 Start Process A Process A, Thread 5 This allows some form of scheduling and fakes parallelism, starting and stopping of threads, etcetera, but it will not be true multi-threading. I don’t think it will ever be implemented in the language itself, since true multi-threading is only useful if the browser can run a single page multi-threaded (or even more than one core), and the difficulties there are way larger than the extra possibilities. For the future of JavaScript, check this out: https://developer.mozilla.org/presentations/xtech2006/javascript/ | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39879",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/184/"
]
} |
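A sketch of the setTimeout "fake concurrency" described above: processing a large array in slices so the browser can repaint between chunks (function names are illustrative):

```javascript
function processInChunks(items, handleItem, chunkSize) {
    var i = 0;
    function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            handleItem(items[i]);
        }
        if (i < items.length) {
            setTimeout(nextChunk, 0); // hand control back to the UI, then resume
        }
    }
    nextChunk();
}

processInChunks([1, 2, 3, 4, 5, 6], function (n) { /* e.g. update the DOM */ }, 2);
```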
39,892 | The IE Developer Toolbar is a plugin that can dock or separate from the browser. I understand its much more difficult to do this in IE than in Firefox. How does one create an IE plugin? What languages are available for this task? How can I make a Hello World plugin? | Here are a few resources that might help you in your quest to create browser helper objects (BHO). http://petesearch.com/wiki/ (archived) http://www.hackszine.com/blog/archive/2007/06/howto_port_firefox_extensions.html http://msdn.microsoft.com/en-us/library/ms182554(VS.80).aspx http://www.codeplex.com/TeamTestPlugins | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4239/"
]
} |
39,912 | I was looking at the API documentation for stl vector, and noticed there was no method on the vector class that allowed the removal of an element with a certain value. This seems like a common operation, and it seems odd that there's no built in way to do this. | std::remove does not actually erase elements from the container: it moves the elements to be removed to the end of the container, and returns the new end iterator which can be passed to container_type::erase to do the actual removal of the extra elements that are now at the end of the container: std::vector<int> vec;// .. put in some values ..int int_to_remove = n;vec.erase(std::remove(vec.begin(), vec.end(), int_to_remove), vec.end()); | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39912",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1546/"
]
} |
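Worth noting for readers on later standards: C++20 finally wrapped the erase-remove idiom shown above in free functions, so the one-liner no longer needs std::remove at all:

```cpp
#include <vector>

int main() {
    std::vector<int> vec{1, 2, 3, 2, 4};
    std::erase(vec, 2);                              // drops every 2 (C++20)
    std::erase_if(vec, [](int n) { return n > 3; }); // predicate form
    return 0;
}
```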
39,916 | Is there a programmatic way to build htpasswd files, without depending on OS-specific functions (i.e. exec() , passthru() )? | .htpasswd files are just text files with a specific format depending on the hash function specified. If you are using MD5 they look like this: foo:$apr1$y1cXxW5l$3vapv2yyCXaYz8zGoXj241 That's the login, a colon, the string $apr1$, the salt, and an MD5-based hash (1000 iterations) encoded with a base64 variant. If you select SHA1 they look like this: foo:{SHA}BW6v589SIg3i3zaEW47RcMZ+I+M= That's the login, a colon, the string {SHA} and the SHA1 hash encoded with base64. If your language has an implementation of either MD5 or SHA1 and base64 you can just create the file like this: <?php$login = 'foo';$pass = 'pass';$hash = base64_encode(sha1($pass, true));$contents = $login . ':{SHA}' . $hash;file_put_contents('.htpasswd', $contents);?> Here's more information on the format: http://httpd.apache.org/docs/2.2/misc/password_encryptions.html | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/39916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
]
} |
39,946 | I'm trying to boil down the concepts of coupling and cohesion to a concise definition. Can someone give me a short and understandable explanation (shorter than the definitions on Wikipedia here and here )? How do they interact? Thanks. Anybody have a good, short example? | Coupling Loose: You and the guy at the convenience store. You communicate through a well-defined protocol to achieve your respective goals - you pay money, he lets you walk out with the bag of Cheetos. Either one of you can be replaced without disrupting the system. Tight: You and your wife. Cohesion Low: The convenience store. You go there for everything from gas to milk to ATM banking. Products and services have little in common, and the convenience of having them all in one place may not be enough to offset the resulting increase in cost and decrease in quality. High: The cheese store. They sell cheese. Nothing else. Can't beat 'em when it comes to cheese though. | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/39946",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1772/"
]
} |
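Since the question explicitly asks for a short example, here is a minimal sketch in code (all classes invented) of the same store analogy:

```python
class CheeseStore:
    """High cohesion: one responsibility, done well."""
    def sell_cheese(self, kind):
        return "wedge of " + kind

class ConvenienceStore:
    """Low cohesion: unrelated responsibilities lumped together."""
    def sell_cheese(self, kind): return "shrink-wrapped " + kind
    def pump_gas(self): ...
    def do_banking(self): ...

class Shopper:
    """Loose coupling: depends only on a sell_cheese() method,
    not on any concrete store class, so stores are swappable."""
    def __init__(self, store):
        self.store = store

    def lunch(self):
        return self.store.sell_cheese("gruyere")

print(Shopper(CheeseStore()).lunch())       # wedge of gruyere
print(Shopper(ConvenienceStore()).lunch())  # shrink-wrapped gruyere
```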
39,960 | In Python one can get a dictionary of all local and global variables in the current scope with the built-in functions locals() and globals() . Is there some equivalent way of doing this in Javascript? For instance, I would like to do something like the following: var foo = function(){ alert('foo'); };var bar = function(){ alert('bar'); };var s = 'foo';locals()[s](); // alerts 'foo' Is this at all possible, or should I just be using a local object for the lookup? | locals() - No. globals() - Yes. window is a reference to the global scope, like globals() in python. globals()["foo"] is the same as: window["foo"] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/39960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/208/"
]
} |
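Applying that to the question's own example (browser-only sketch; inside a function there is no locals() equivalent, so a plain lookup object is the usual substitute):

```javascript
var foo = function () { alert('foo'); };
var bar = function () { alert('bar'); };

var s = 'foo';
window[s](); // alerts 'foo': top-level var declarations land on window

// inside a function, keep an explicit lookup table instead:
function run(name) {
    var table = {
        foo: function () { alert('foo'); },
        bar: function () { alert('bar'); }
    };
    table[name]();
}
```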