source_id | question | response | metadata
---|---|---|---|
172,854 | I have a Boost unit test case which causes the object under test to throw an exception (that's the test, to cause an exception). How do I specify in the test to expect that particular exception? I can specify that the test should have a certain number of failures by using BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES but that seems rather unspecific. I want to be able to say at a specific point in the test that an exception should be thrown and that it should not be counted as a failure. | Doesn't this work? BOOST_CHECK_THROW (expression, an_exception_type); That should cause the test to pass if the expression throws the given exception type or fail otherwise. If you need a different severity than 'CHECK', you could also use BOOST_WARN_THROW() or BOOST_REQUIRE_THROW() instead. See the documentation. (A compilable sketch follows this entry.) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/172854",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4086/"
]
} |
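A minimal compilable sketch of the BOOST_CHECK_THROW approach from the answer above; the Widget class and its failure mode are invented purely for illustration.

```cpp
#define BOOST_TEST_MODULE widget_tests
#include <boost/test/included/unit_test.hpp>
#include <stdexcept>

// Hypothetical object under test: it throws on invalid input.
struct Widget {
    void load(int id) {
        if (id < 0) throw std::invalid_argument("negative id");
    }
};

BOOST_AUTO_TEST_CASE(load_rejects_negative_ids)
{
    Widget w;
    // Passes only if the expression throws std::invalid_argument;
    // any other outcome is reported as a failure at this exact point.
    BOOST_CHECK_THROW(w.load(-1), std::invalid_argument);

    // Stricter / weaker severities, as mentioned in the answer:
    // BOOST_REQUIRE_THROW(w.load(-1), std::invalid_argument);
    // BOOST_WARN_THROW(w.load(-1), std::invalid_argument);
}
```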
172,863 | Does anyone have experience with LLVM , llvm-gcc , or Clang ? The whole idea behind llvm seems very intriguing to me and I'm interested in seeing how it performs. I just don't want to dump a whole lot of time into trying the tools out if the tools are not ready for production. If you have experience with the tools, what do you think of them? What major limitations have you encountered? What are the greatest benefits? Many thanks! | I've had an initial play around with LLVM and working through this tutorial left me very very excited about it's potential; the idea that I can use it to build a JIT into an app with relative ease has me stoked. I haven't gone deep enough to be able to offer any kind of useful opinion on it's limitations, stability, performance and suchlike. I understand that it's good on all counts but that's purely hearsay. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/172863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6386/"
]
} |
172,875 | Basically I'm running some performance tests and don't want the external network to be the drag factor. I'm looking into ways of disabling network LAN. What is an effective way of doing it programmatically? I'm interested in c#. If anyone has a code snippet that can drive the point home that would be cool. | Found this thread while searching for the same thing, so, here is the answer :) The best method I tested in C# uses WMI. http://www.codeproject.com/KB/cs/EverythingInWmi02.aspx Win32_NetworkAdapter on msdn C# Snippet (System.Management must be referenced in the solution, and in using declarations):
SelectQuery wmiQuery = new SelectQuery("SELECT * FROM Win32_NetworkAdapter WHERE NetConnectionId != NULL");
ManagementObjectSearcher searchProcedure = new ManagementObjectSearcher(wmiQuery);
foreach (ManagementObject item in searchProcedure.Get())
{
    if (((string)item["NetConnectionId"]) == "Local Network Connection")
    {
        item.InvokeMethod("Disable", null);
    }
} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/172875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
172,888 | This will hopefully be an easy one. I have an F# project (latest F# CTP) with two files (Program.fs, Stack.fs). In Stack.fs I have a simple namespace and type definition: namespace Col followed by type Stack = ... Now I try to include the namespace in Program.fs by declaring open Col . This doesn't work and gives me the error "The namespace or module Col is not defined." Yet it's defined within the same project. I've got to be missing something obvious. | What order are the files in the .fsproj file? Stack.fs needs to come before Program.fs for Program.fs to be able to 'see' it. See also the start of http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!444.entry and the end of http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!347.entry | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/172888",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23283/"
]
} |
172,895 | Is there any easy way to retrieve table creation DDL from Microsoft Access (2007) or do I have to code it myself using VBA to read the table structure? I have about 30 tables that we are porting to Oracle and it would make life easier if we could create the tables from the Access definitions. | Thanks for the other suggestions. While I was waiting I wrote some VBA code to do it. It's not perfect, but did the job for me. Option Compare DatabasePublic Function TableCreateDDL(TableDef As TableDef) As String Dim fldDef As Field Dim FieldIndex As Integer Dim fldName As String, fldDataInfo As String Dim DDL As String Dim TableName As String TableName = TableDef.Name TableName = Replace(TableName, " ", "_") DDL = "create table " & TableName & "(" & vbCrLf With TableDef For FieldIndex = 0 To .Fields.Count - 1 Set fldDef = .Fields(FieldIndex) With fldDef fldName = .Name fldName = Replace(fldName, " ", "_") Select Case .Type Case dbBoolean fldDataInfo = "nvarchar2" Case dbByte fldDataInfo = "number" Case dbInteger fldDataInfo = "number" Case dbLong fldDataInfo = "number" Case dbCurrency fldDataInfo = "number" Case dbSingle fldDataInfo = "number" Case dbDouble fldDataInfo = "number" Case dbDate fldDataInfo = "date" Case dbText fldDataInfo = "nvarchar2(" & Format$(.Size) & ")" Case dbLongBinary fldDataInfo = "****" Case dbMemo fldDataInfo = "****" Case dbGUID fldDataInfo = "nvarchar2(16)" End Select End With If FieldIndex > 0 Then DDL = DDL & ", " & vbCrLf End If DDL = DDL & " " & fldName & " " & fldDataInfo Next FieldIndex End With DDL = DDL & ");" TableCreateDDL = DDLEnd FunctionSub ExportAllTableCreateDDL() Dim lTbl As Long Dim dBase As Database Dim Handle As Integer Set dBase = CurrentDb Handle = FreeFile Open "c:\export\TableCreateDDL.txt" For Output Access Write As #Handle For lTbl = 0 To dBase.TableDefs.Count - 1 'If the table name is a temporary or system table then ignore it If Left(dBase.TableDefs(lTbl).Name, 1) = "~" Or _ Left(dBase.TableDefs(lTbl).Name, 4) = "MSYS" Then '~ indicates a temporary table 'MSYS indicates a system level table Else Print #Handle, TableCreateDDL(dBase.TableDefs(lTbl)) End If Next lTbl Close Handle Set dBase = NothingEnd Sub I never claimed to be VB programmer. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/172895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/24355/"
]
} |
172,918 | I have a container div that holds two internal divs; both should take 100% width and 100% height within the container. I set both internal divs to 100% height. That works fine in Firefox, however in IE the divs do not stretch to 100% height but only the height of the text inside them. The following is a simplified version of my style sheet. #container{ height: auto; width: 100%;}#container #mainContentsWrapper{ float: left; height: 100%; width: 70%; margin: 0; padding: 0;}#container #sidebarWrapper{ float: right; height: 100%; width: 29.7%; margin: 0; padding: 0;} Is there something I am doing wrong? Or any Firefox/IE quirks I am missing out? | I think "works fine in Firefox" is in the Quirks mode rendering only.In the Standard mode rendering, that might not work fine in Firefox too. percentage depends on "containing block", instead of viewport. CSS Specification says The percentage is calculated with respect to the height of the generated box's containing block. If the height of the containing block is not specified explicitly (i.e., it depends on content height), and this element is not absolutely positioned, the value computes to 'auto'. so #container { height: auto; }#container #mainContentsWrapper { height: n%; }#container #sidebarWrapper { height: n%; } means #container { height: auto; }#container #mainContentsWrapper { height: auto; }#container #sidebarWrapper { height: auto; } To stretch to 100% height of viewport, you need to specify the height of the containing block (in this case, it's #container).Moreover, you also need to specify the height to body and html, because initial Containing Block is "UA-dependent". All you need is... html, body { height:100%; }#container { height:100%; } | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/172918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/131/"
]
} |
172,925 | I've been working with databases for the last few years and I'd like to think that I've gotten fairly competent with using them. However I was reading recently about Joel's Law of Leaky Abstractions and I realised that even though I can write a query to get pretty much anything I want out of a database, I have no idea how the database actually interprets the query. Does anyone know of any good articles or books that explain how databases work internally? Some specific things I'm interested in are: What does a database actually do to find out what matches a select statement? How does a database interpret a join differently to a query with several "where key1 = key2" statements? How does the database store all its memory? How are indexes stored? | What does a database actually do to find out what matches a select statement? To be blunt, it's a matter of brute force. Simply, it reads through each candidate record in the database and matches the expression to the fields. So, if you have "select * from table where name = 'fred'", it literally runs through each record, grabs the "name" field, and compares it to 'fred'. Now, if the "table.name" field is indexed, then the database will (likely, but not necessarily) use the index first to locate the candidate records to apply the actual filter to. This reduces the number of candidate records to apply the expression to, otherwise it will just do what we call a "table scan", i.e. read every row. But fundamentally, however it locates the candidate records is separate from how it applies the actual filter expression, and, obviously, there are some clever optimizations that can be done. How does a database interpret a join differently to a query with several "where key1 = key2" statements? Well, a join is used to make a new "pseudo table", upon which the filter is applied. So, you have the filter criteria and the join criteria. The join criteria is used to build this "pseudo table" and then the filter is applied against that. Now, when interpreting the join, it's again the same issue as the filter -- brute force comparisons and index reads to build the subset for the "pseudo table". How does the database store all its memory? One of the keys to good database is how it manages its I/O buffers. But it basically matches RAM blocks to disk blocks. With the modern virtual memory managers, a simpler database can almost rely on the VM as its memory buffer manager. The high end DB'S do all this themselves. How are indexes stored? B+Trees typically, you should look it up. It's a straight forward technique that has been around for years. It's benefit is shared with most any balanced tree: consistent access to the nodes, plus all the leaf nodes are linked so you can easily traverse from node to node in key order. So, with an index, the rows can be considered "sorted" for specific fields in the database, and the database can leverage that information to it benefit for optimizations. This is distinct from, say, using a hash table for an index, which only lets you get to a specific record quickly. In a B-Tree you can quickly get not just to a specific record, but to a point within a sorted list. The actual mechanics of storing and indexing rows in the database are really pretty straight forward and well understood. The game is managing buffers, and converting SQL in to efficient query paths to leverage these basic storage idioms. Then, there's the whole multi-users, locking, logging, and transactions complexity on top of the storage idiom. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/172925",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20925/"
]
} |
172,934 | I need to retrieve a set of Widgets from my data access layer, grouped by widget.Manufacturer, to display in a set of nested ASP.NET ListViews. The problem is that (as far as I can tell) the nested ListView approach requires me to shape the data before using it, and I can't figure out the best approach to take. The best I've been able to come up with so far is to put a LINQ query in my data access layer like so: var result = from widget in GetAllWidgets(int widgetTypeID) group widget by widget.Manufacturer into groupedWidgets let widgets = from widgetGroup in groupedWidgets select widgetGroup select new { Manufacturer = groupedWidgets.Key, Widgets = widgets }; Of course, anonymous types can't be passed around, so that doesn't work. Defining a custom class to enclose data seems like the wrong way to go. Is there some way I can perform the grouping on the ASP.NET side of things? I'm using ObjectDataSources to access the DAL. Updated : OK, I'm not creating an anonymous type anymore, and instead my DAL passes an IEnumerable<IGrouping<Manufacturer, Widget>> to the ASP.NET page, but how can I use this in my ListViews? I need to render the following HTML (or something pretty much like it) <ul> <li>Foo Corp. <ol> <li>Baz</li> <li>Quux</li> </ol> </li> <li>Bar Corp. <ol> <li>Thinger</li> <li>Whatsit</li> </ol> </li></ul> Originally, I had a ListView within a ListView like so: <asp:ListView ID="ManufacturerListView"> <LayoutTemplate> <ul> <asp:Placeholder ID="itemPlaceholder" runat="server" /> </ul> </LayoutTemplate> <ItemTemplate> <li><asp:Label Text='<%# Eval("Manufacturer.Name") %>' /> <li> <asp:ListView ID="WidgetsListView" runat="server" DataSource='<%# Eval("Widgets") %>'> <LayoutTemplate> <ol> <asp:PlaceHolder runat="server" ID="itemPlaceholder" /> </ol> </LayoutTemplate> <ItemTemplate> <li><asp:Label Text='<%# Eval("Name") %>'></li> </ItemTemplate> </asp:ListView> </li> </ItemTemplate></asp:ListView> Note how the DataSource property of WidgetsListView is itself databound. How can I duplicate this functionality without reshaping the data? This is getting kind of complicated, sorry if I should have just made a separate question instead. | Ok, I'm going to contradict my prior statement. Since eval wants some kind of property name in the nested control, we should probably shape that data. public class CustomGroup<TKey, TValue>{ public TKey Key {get;set;} public IEnumerable<TValue> Values {get;set;}} // and use it thusly... IEnumerable<CustomGroup<Manufacturer, Widget>> result = GetAllWidgets(widgetTypeId) .GroupBy(w => w.Manufacturer) .Select(g => new CustomGroup<Manufacturer, Widget>(){Key = g.Key, Values = g}; /// and even later... <asp:ListView ID="ManufacturerListView"><LayoutTemplate> <ul> <asp:Placeholder ID="itemPlaceholder" runat="server" /> </ul></LayoutTemplate><ItemTemplate> <li><asp:Label Text='<%# Eval("Key.Name") %>' /> <li> <asp:ListView ID="WidgetsListView" runat="server" DataSource='<%# Eval("Values") %>'> <LayoutTemplate> <ol> <asp:PlaceHolder runat="server" ID="itemPlaceholder" /> </ol> </LayoutTemplate> <ItemTemplate> <li><asp:Label Text='<%# Eval("Name") %>'></li> </ItemTemplate> </asp:ListView> </li></ItemTemplate></asp:ListView> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/172934",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4160/"
]
} |
172,935 | After understanding (quote), I'm curious as to how one might cause the statement to execute. My first thought was (defvar x '(+ 2 21))`(,@x) but that just evaluates to (+ 2 21) , or the contents of x . How would one run code that was placed in a list? | (eval '(+ 2 21)) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/172935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1256/"
]
} |
172,957 | I just set up my new homepage at http://ritter.vg . I'm using jQuery, but very minimally. It loads all the pages using AJAX - I have it set up to allow bookmarking by detecting the hash in the URL. //general functions function getUrl(u) { return u + '.html'; } function loadURL(u) { $.get(getUrl(u), function(r){ $('#main').html(r); } ); } //allows bookmarking var hash = new String(document.location).indexOf("#"); if(hash > 0) { page = new String(document.location).substring(hash + 1); if(page.length > 1) loadURL(page); else loadURL('news'); } else loadURL('news'); But I can't get the back and forward buttons to work. Is there a way to detect when the back button has been pressed (or detect when the hash changes) without using a setInterval loop? When I tried those with .2 and 1 second timeouts, it pegged my CPU. | Use the jQuery hashchange event plugin instead. Regarding your full ajax navigation, try to have SEO friendly ajax . Otherwise your pages shown nothing in browsers with JavaScript limitations. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/172957",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8435/"
]
} |
173,005 | I want to display the TIME field from my mysql table on my website, but rather than showing 21:00:00 etc I want to show 8:00 PM. I need a function/code to do this or even any pointers in the right direction. Will mark the first reply with some code as the correct reply. | Check this out: http://dev.mysql.com/doc/refman/5.0/en/date-and-time-functions.html I'd imagine you'd want date_format(). Example: DATE_FORMAT($date, "%r") | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173005",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23019/"
]
} |
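A few concrete calls illustrating the DATE_FORMAT() suggestion in the entry above; the table and column names in the last statement are invented.

```sql
-- %r gives the full hh:mm:ss AM/PM form:
SELECT DATE_FORMAT('2008-10-06 21:00:00', '%r');         -- 09:00:00 PM

-- A custom format gives the shorter style asked for in the question:
SELECT DATE_FORMAT('2008-10-06 21:00:00', '%l:%i %p');   -- 9:00 PM

-- For a pure TIME column, TIME_FORMAT() accepts the same time specifiers:
SELECT TIME_FORMAT(start_time, '%l:%i %p') AS pretty_time FROM events;
```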
173,009 | All too often I want a WPF slider that behaves like the System.Windows.Forms.TrackBar of old. That is, I want a slider that goes from X to Y but only allows the user to move it in discrete integer positions. How does one do this in WPF since the Value property on the Slider is double? | The simple answer is that you take advantage of the IsSnapToTickEnabled and TickFrequency properties. That is, turn snapping to ticks on and set the tick frequency to 1. Or, in other words ... take advantage of ticks ... but you don't necessarily have to show the ticks that you are snapping to. Check out the following piece of xaml: <Slider Orientation="Vertical" Height="200" Minimum="0" Maximum="10" Value="0" IsSnapToTickEnabled="True" TickFrequency="1"/> | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/173009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/22294/"
]
} |
173,041 | Which of these queries is the faster? NOT EXISTS: SELECT ProductID, ProductName FROM Northwind..Products pWHERE NOT EXISTS ( SELECT 1 FROM Northwind..[Order Details] od WHERE p.ProductId = od.ProductId) Or NOT IN: SELECT ProductID, ProductName FROM Northwind..Products pWHERE p.ProductID NOT IN ( SELECT ProductID FROM Northwind..[Order Details]) The query execution plan says they both do the same thing. If that is the case, which is the recommended form? This is based on the NorthWind database. [Edit] Just found this helpful article: http://weblogs.sqlteam.com/mladenp/archive/2007/05/18/60210.aspx I think I'll stick with NOT EXISTS. | I always default to NOT EXISTS . The execution plans may be the same at the moment but if either column is altered in the future to allow NULL s the NOT IN version will need to do more work (even if no NULL s are actually present in the data) and the semantics of NOT IN if NULL s are present are unlikely to be the ones you want anyway. When neither Products.ProductID or [Order Details].ProductID allow NULL s the NOT IN will be treated identically to the following query. SELECT ProductID, ProductNameFROM Products pWHERE NOT EXISTS (SELECT * FROM [Order Details] od WHERE p.ProductId = od.ProductId) The exact plan may vary but for my example data I get the following. A reasonably common misconception seems to be that correlated sub queries are always "bad" compared to joins. They certainly can be when they force a nested loops plan (sub query evaluated row by row) but this plan includes an anti semi join logical operator. Anti semi joins are not restricted to nested loops but can use hash or merge (as in this example) joins too. /*Not valid syntax but better reflects the plan*/ SELECT p.ProductID, p.ProductNameFROM Products p LEFT ANTI SEMI JOIN [Order Details] od ON p.ProductId = od.ProductId If [Order Details].ProductID is NULL -able the query then becomes SELECT ProductID, ProductNameFROM Products pWHERE NOT EXISTS (SELECT * FROM [Order Details] od WHERE p.ProductId = od.ProductId) AND NOT EXISTS (SELECT * FROM [Order Details] WHERE ProductId IS NULL) The reason for this is that the correct semantics if [Order Details] contains any NULL ProductId s is to return no results. See the extra anti semi join and row count spool to verify this that is added to the plan. If Products.ProductID is also changed to become NULL -able the query then becomes SELECT ProductID, ProductNameFROM Products pWHERE NOT EXISTS (SELECT * FROM [Order Details] od WHERE p.ProductId = od.ProductId) AND NOT EXISTS (SELECT * FROM [Order Details] WHERE ProductId IS NULL) AND NOT EXISTS (SELECT * FROM (SELECT TOP 1 * FROM [Order Details]) S WHERE p.ProductID IS NULL) The reason for that one is because a NULL Products.ProductId should not be returned in the results except if the NOT IN sub query were to return no results at all (i.e. the [Order Details] table is empty). In which case it should. In the plan for my sample data this is implemented by adding another anti semi join as below. The effect of this is shown in the blog post already linked by Buckley . In the example there the number of logical reads increase from around 400 to 500,000. Additionally the fact that a single NULL can reduce the row count to zero makes cardinality estimation very difficult. 
If SQL Server assumes that this will happen but in fact there were no NULL rows in the data the rest of the execution plan may be catastrophically worse, if this is just part of a larger query, with inappropriate nested loops causing repeated execution of an expensive sub tree for example . This is not the only possible execution plan for a NOT IN on a NULL -able column however. This article shows another one for a query against the AdventureWorks2008 database. For the NOT IN on a NOT NULL column or the NOT EXISTS against either a nullable or non nullable column it gives the following plan. When the column changes to NULL -able the NOT IN plan now looks like It adds an extra inner join operator to the plan. This apparatus is explained here . It is all there to convert the previous single correlated index seek on Sales.SalesOrderDetail.ProductID = <correlated_product_id> to two seeks per outer row. The additional one is on WHERE Sales.SalesOrderDetail.ProductID IS NULL . As this is under an anti semi join if that one returns any rows the second seek will not occur. However if Sales.SalesOrderDetail does not contain any NULL ProductID s it will double the number of seek operations required. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/173041",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9825/"
]
} |
173,046 | I need to create a custom volume slider for a WMP object. The current slider is complicated to modify and use; is there a simple way to generate a slider on an HTML page that can have its value passed to a JavaScript function? | jQuery UI Slider ( API docs ); see the sketch after this entry. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173046",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/115/"
]
} |
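A minimal sketch of wiring the suggested jQuery UI slider to a volume handler; the element id and the setVolume function are invented names.

```javascript
$('#volume').slider({
    min: 0,
    max: 100,
    value: 50,
    // ui.value is the slider's current position; pass it to your own code,
    // which can then set the WMP object's volume.
    slide: function (event, ui) {
        setVolume(ui.value);
    }
});
```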
173,056 | I've been trying to solve this, and have been getting stuck, so I thought I'd ask. Imagine two ActionBeans, A and B. A.jsp has this section in it: ...<jsp:include page="/B.action"> <jsp:param name="ponies" value="on"/></jsp:include><jsp:include page="/B.action"> <jsp:param name="ponies" value="off"/></jsp:include>... Take it as read that the B ActionBean does some terribly interesting stuff depending on whether the "ponies" parameter is set to either on or off. The parameter string "ponies=on" is visible when you debug into the request, but it's not what's getting bound into the B ActionBean. Instead what's getting bound are the parameters to the original A.action. Is there some way of getting the behaviour I want, or have I missed something fundamental? | jQuery UI Slider ( API docs ) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173056",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/22419/"
]
} |
173,070 | I heard that you could right-shift a number by .5 instead of using Math.floor(). I decided to check its limits to make sure that it was a suitable replacement, so I checked the following values and got the following results in Google Chrome: 2.5 >> .5 == 2;2.9999 >> .5 == 2;2.999999999999999 >> .5 == 2; // 15 9s2.9999999999999999 >> .5 == 3; // 16 9s After some fiddling, I found out that the highest possible value of two which, when right-shifted by .5, would yield 2 is 2.9999999999999997779553950749686919152736663818359374999999¯ (with the 9 repeating) in Chrome and Firefox. The number is 2.9999999999999997779¯ in IE. My question is: what is the significance of the number .0000000000000007779553950749686919152736663818359374? It's a very strange number and it really piqued my curiosity. I've been trying to find an answer or at least some kind of pattern, but I think my problem lies in the fact that I really don't understand the bitwise operation. I understand the idea in principle, but shifting a bit sequence by .5 doesn't make any sense at all to me. Any help is appreciated. For the record, the weird digit sequence changes with 2^x. The highest possible values of the following numbers that still truncate properly: for 0: 0.9999999999999999444888487687421729788184165954589843749¯for 1: 1.9999999999999999888977697537484345957636833190917968749¯for 2-3: x+.99999999999999977795539507496869191527366638183593749¯for 4-7: x+.9999999999999995559107901499373838305473327636718749¯for 8-15: x+.999999999999999111821580299874767661094665527343749¯...and so forth | Actually, you're simply ending up doing a floor() on the first operand, without any floating point operations going on. Since the left shift and right shift bitwise operations only make sense with integer operands, the JavaScript engine is converting the two operands to integers first: 2.999999 >> 0.5 Becomes: Math.floor(2.999999) >> Math.floor(0.5) Which in turn is: 2 >> 0 Shifting by 0 bits means "don't do a shift" and therefore you end up with the first operand, simply truncated to an integer. The SpiderMonkey source code has: switch (op) { case JSOP_LSH: case JSOP_RSH: if (!js_DoubleToECMAInt32(cx, d, &i)) // Same as Math.floor() return JS_FALSE; if (!js_DoubleToECMAInt32(cx, d2, &j)) // Same as Math.floor() return JS_FALSE; j &= 31; d = (op == JSOP_LSH) ? i << j : i >> j; break; Your seeing a "rounding up" with certain numbers is due to the fact the JavaScript engine can't handle decimal digits beyond a certain precision and therefore your number ends up getting rounded up to the next integer. Try this in your browser: alert(2.999999999999999); You'll get 2.999999999999999. Now try adding one more 9: alert(2.9999999999999999); You'll get a 3. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173070",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25357/"
]
} |
173,080 | What are some of the new features that can be used in .NET 2.0 that are specific to C# 3.0/3.5 after upgrading to Visual Studio 2008? Also, what are some of the features that aren't available? Available Lambdas Extension methods (by declaring an empty System.Runtime.CompilerServices.ExtensionAttribute) Automatic properties Object initializers Collection Initializers LINQ to Objects (by implementing IEnumerable extension methods, see LinqBridge ) Not Available Expression trees WPF/Silverlight Libraries | You can use any new C# 3.0 feature that is handled by the compiler by emitting 2.0-compatible IL and doesn't reference any of the new 3.5 assemblies: Lambdas (used as Func<..> , not Expression<Func<..>> ) Extension methods (by declaring an empty System.Runtime.CompilerServices.ExtensionAttribute ) Automatic properties Object Initializers Collection Initializers LINQ to Objects (by implementing IEnumerable<T> extension methods, see LinqBridge ) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173080",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18194/"
]
} |
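A small sketch of the ExtensionAttribute trick named in both lists of the entry above; the attribute is declared once in the project, and StringExtensions is an invented example.

```csharp
using System;

// Declaring the attribute yourself lets the C# 3.0 compiler emit extension
// methods while the project still targets the .NET 2.0 runtime.
namespace System.Runtime.CompilerServices
{
    [AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
    public sealed class ExtensionAttribute : Attribute { }
}

public static class StringExtensions
{
    // Callable as "abc".Reverse2() from any C# 3.0 project built against .NET 2.0.
    public static string Reverse2(this string value)
    {
        char[] chars = value.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}
```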
173,159 | I know what ViewData is and use it all the time, but in ASP.NET Preview 5 they introduced something new called TempData. I normally strongly type my ViewData, instead of using the dictionary of objects approach. So, when should I use TempData instead of ViewData? Are there any best practices for this? | In one sentence: TempData is like ViewData with one difference: it only holds data between two successive requests, after which it is destroyed. You can use TempData to pass error messages or something similar. Although outdated, this article has a good description of the TempData lifecycle. As Ben Scheirman said here : TempData is a session-backed temporary storage dictionary that is available for one single request. It’s great to pass messages between controllers. (See the sketch after this entry.) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/173159",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4481/"
]
} |
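An illustrative controller pair showing the difference described above; the controller, action and key names are made up.

```csharp
using System.Web.Mvc;

public class WidgetController : Controller
{
    public ActionResult Save()
    {
        // ... persist whatever was posted ...

        // TempData survives exactly one follow-up request, so the message is
        // still available after the redirect below; ViewData would be gone.
        TempData["Message"] = "Saved successfully.";
        return RedirectToAction("Index");
    }

    public ActionResult Index()
    {
        ViewData["Message"] = TempData["Message"]; // hand it to the view
        return View();
    }
}
```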
173,202 | Imagine you don't have the problem of feature creep, you have a motivated and stable team, clear defined problems to solve, AND you know the domain/language/tools related to your project. How do you stick to a schedule and accomplish that 1.0 milestone? What is your approach to an iterative shipping ? I'd like recommendations specially for a small team, where there are few or almost none communication problems. | Focus on features not implementation tasks. Work in iterations (like weekly or biweekly). Release working features to your staging environment in order of priority. Unit test your code as you go, so you're not slowed down by a buglist that increases geometrically as you approach the release date. Be prepared to cut scope from the less important features. Stuff always takes longer than you think it will. Make sure you sketch out the UI in advance (if there is a UI), and show it to potential users. Test, test, and test some more. This seems counter-intuitive, but it saves more time than takes. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/861/"
]
} |
173,207 | I am exploring ASP.NET MVC and I wanted to add jQuery to make the site interactive. I used StringTemplate, ported to .Net, as my template engine to generate html and to send JSON. However, when I view the page, I could not see it. After debugging, I've realized that the $ is used by the StringTemplate to access property, etc and jQuery uses it too to manipulate the DOM. Gee, I've looked on other template engines and most of them uses the dollar sign :(. Any alternative template engine for ASP.Net MVC? I wanted to retain jQuery because MSFT announced that it will used in the Visual Studio (2008?) Thanks in Advance :) Update Please go to the answer in ASP.NET MVC View Engine Comparison question for a comprehensive list of Template engine for ASP.NET MVC, and their pros and cons Update 2 At the end I'll just put the JavaScript code, including JQuery, in a separate script file, hence I wouldn't worry about the $ mingling in the template file. Update 3 Changed the Title to reflect what I need to resolve. After all "The Best X in Y" is very subjective question. | You can of course move your js logic into a .js file. But if you want it inline with your StringTemplate views, you can escape it using the \$ construct. In addition, you can simply use the jQuery("selector"), instead of $("selector") construct if you want to avoid the escaping syntax. Here's a good article on using StringTemplate as a View Engine in MVC . There's also an accompanying OpenSource engine, along with some samples . Also, as mentioned above, you can modify your Type Lexer. (make it an alternate character to the $). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173207",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/24755/"
]
} |
173,246 | I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format. The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live. What are my options? Anyone had experience with this? | The Office 2007 file formats are open and well documented . Roughly speaking, all of the new file formats ending in "x" are zip compressed XML documents. For example: To open a Word 2007 XML file Create a temporary folder in which to store the file and its parts. Save a Word 2007 document, containing text, pictures, and other elements, as a .docx file. Add a .zip extension to the end of the file name. Double-click the file. It will open in the ZIP application. You can see the parts that comprise the file. Extract the parts to the folder that you created previously. The other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats. If you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173246",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18406/"
]
} |
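A rough PHP sketch of the unzip-and-read approach described in the entry above (the file name is arbitrary and error handling is omitted).

```php
<?php
// A .docx file is a ZIP container; the main body text lives in the part
// named word/document.xml.
$zip = new ZipArchive();
if ($zip->open('uploaded.docx') === true) {
    $xml = $zip->getFromName('word/document.xml');
    $zip->close();

    // Crude plain-text extraction: drop the WordprocessingML markup.
    echo strip_tags($xml);
}
```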
173,247 | I've got an iPhone app with icon file Icon.png. This icon shows up properly when the app is on the phone itself, but it doesn't show up in the applications pane in iTunes. What do I need to do to get it to show up properly? | The cleanest way to do this is described in the official Apple documentation, in a section called Publishing Applications for Testing . Below is the exact instructions given to you on that page: The iTunes artwork your testers see should be your application’s icon. This artwork must be a 512 x 512 JPEG or PNG file named iTunesArtwork . Note that the file must not have an extension . After generating the file of your application’s icon, follow these steps to add it to your application: Open your project in Xcode. In the Groups & Files list, select the Resources group. Choose Project > Add to Project, navigate to your iTunesArtwork file, and click Add. In the dialog that appears, select the ”Copy items” option and click Add. Note that the PNG or JPEG file is just 'iTunesArtwork', with no suffix. If you try to copy the file into the application bundle after you have built it, it will break the app signing, and you will get a verification error when trying to sync it to your device. Ensure that the artwork file is included in the "Copy Bundle Resources" folder, within your project's target in XCode (step 4, above). | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1512/"
]
} |
173,278 | The docs say that calling sys.exit() raises a SystemExit exception which can be caught in outer levels. I have a situation in which I want to definitively and unquestionably exit from inside a test case, however the unittest module catches SystemExit and prevents the exit. This is normally great, but the specific situation I am trying to handle is one where our test framework has detected that it is configured to point to a non-test database. In this case I want to exit and prevent any further tests from being run. Of course since unittest traps the SystemExit and continues happily on its way, it is thwarting me. The only option I have thought of so far is using ctypes or something similar to call exit(3) directly but this seems like a pretty fugly hack for something that should be really simple. | You can call os._exit() to directly exit, without throwing an exception: import os; os._exit(1) This bypasses all of the Python shutdown logic, such as the atexit module, and will not run through the exception handling logic that you're trying to avoid in this situation. The argument is the exit code that will be returned by the process. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/173278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2168/"
]
} |
173,290 | I have a list string tag. I am trying to initialize a dictionary with the key as the tag string and values as the array index. for i, ithTag in enumerate(tag): tagDict.update(ithTag=i) The above returns me {'ithTag': 608} 608 is the 608th index My problem is that while the i is being interpreted as a variable, Python is treating the "ithTag" as a string instead of a variable. I'm confused, it is kind of hard to google these kind of specific questions. I hope I worded the title of this question correctly, Thanks! | You actually want to do this: for i, tag in enumerate(tag): tagDict[tag] = i The .update() method is used for updating a dictionary using another dictionary, not for changing a single key/value pair. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17085/"
]
} |
173,295 | How do you select a datagridview row on a right-click? | Make it behave similarly to the left mouse button? e.g. private void dataGridView_CellMouseDown(object sender, DataGridViewCellMouseEventArgs e){ if (e.Button == MouseButtons.Right) { dataGridView.CurrentCell = dataGridView[e.ColumnIndex, e.RowIndex]; }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173295",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25078/"
]
} |
173,307 | I'm not the best programmer so I found it was much easier to write a program as several separate executables, which occasionally call each other. But now I need an easy way to actually run them without writing detailed instructions like Run file one, wait until its completed and no longer in process manager before running file two, file three can be executed 15 seconds after file two has been created. Then Add a key to your registry. Etc. I figure there must be a good software out there where I can just drop all my exes in, tell it when to run them, and output one file for my clients to run. Any ideas? | Inno Setup is easy to use, free, open source and scriptable if you need it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173307",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
173,329 | I have this query in sql server 2000: select pwdencrypt('AAAA') which outputs an encrypted string of 'AAAA': 0x0100CF465B7B12625EF019E157120D58DD46569AC7BF4118455D12625EF019E157120D58DD46569AC7BF4118455D How can I convert (decrypt) the output from its origin (which is 'AAAA')? | I believe pwdencrypt is using a hash so you cannot really reverse the hashed string - the algorithm is designed so it's impossible. If you are verifying the password that a user entered the usual technique is to hash it and then compare it to the hashed version in the database. This is how you could verify a usered entered table SELECT password_field FROM mytable WHERE password_field=pwdencrypt(userEnteredValue) Replace userEnteredValue with (big surprise) the value that the user entered :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173329",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21963/"
]
} |
173,332 | I know there are HTML entities for 1/2, 1/4, and 3/4, but are there others? Like 1/3 or 1/8? Is there a good way to encode arbitrary fractions? | how about 15 ⁄ 16 ? (<sup>15</sup>⁄<sub>16</sub>) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173332",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7598/"
]
} |
173,366 | V8's documentation explains how to create a Javascript object that wraps a C++ object . The Javascript object holds on to a pointer to a C++ object instance. My question is, let's say you create the C++ object on the heap, how can you get a notification when the Javascript object is collected by the gc, so you can free the heap allocated C++ object? | The trick is to create a Persistent handle (second bullet point from the linked-to API reference: " Persistent handles are not held on a stack and are deleted only when you specifically remove them. ... Use a persistent handle when you need to keep a reference to an object for more than one function call, or when handle lifetimes do not correspond to C++ scopes."), and call MakeWeak() on it, passing a callback function that will do the necessary cleanup ("A persistent handle can be made weak, using Persistent::MakeWeak , to trigger a callback from the garbage collector when the only references to an object are from weak persistent handles." -- that is, when all "regular" handles have gone out of scope and when the garbage collector is about to delete the object). The Persistent::MakeWeak method signature is: void MakeWeak(void* parameters, WeakReferenceCallback callback); Where WeakReferenceCallback is defined as a pointer-to-function taking two parameters: typedef void (*WeakReferenceCallback)(Persistent<Object> object, void* parameter); These are found in the v8.h header file distributed with V8 as the public API. You would want the function you pass to MakeWeak to clean up the Persistent<Object> object parameter that will get passed to it when it's called as a callback. The void* parameter parameter can be ignored (or the void* parameter can point to a C++ structure that holds the objects that need cleaning up): void CleanupV8Point(Persistent<Object> object, void*){ // do whatever cleanup on object that you're looking for object.destroyCppObjects();}Parameter<ObjectTemplate> my_obj(ObjectTemplate::New());// when the Javascript part of my_obj is about to be collected// we'll have V8 call CleanupV8Point(my_obj)my_obj.MakeWeak(NULL, &CleanupV8Point); | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1892/"
]
} |
173,400 | PHP treats all arrays as associative, so there aren't any built in functions. Can anyone recommend a fairly efficient way to check if an array "is a list" (contains only numeric keys starting from 0)? Basically, I want to be able to differentiate between this: $sequentialArray = [ 'apple', 'orange', 'tomato', 'carrot']; and this: $assocArray = [ 'fruit1' => 'apple', 'fruit2' => 'orange', 'veg1' => 'tomato', 'veg2' => 'carrot']; | You have asked two questions that are not quite equivalent: Firstly, how to determine whether an array has only numeric keys Secondly, how to determine whether an array has sequential numeric keys, starting from 0 Consider which of these behaviours you actually need. (It may be that either will do for your purposes.) The first question (simply checking that all keys are numeric) is answered well by Captain kurO . For the second question (checking whether the array is zero-indexed and sequential), you can use the following function: function isAssoc(array $arr){ if (array() === $arr) return false; return array_keys($arr) !== range(0, count($arr) - 1);}var_dump(isAssoc(['a', 'b', 'c'])); // falsevar_dump(isAssoc(["0" => 'a', "1" => 'b', "2" => 'c'])); // falsevar_dump(isAssoc(["1" => 'a', "0" => 'b', "2" => 'c'])); // truevar_dump(isAssoc(["a" => 'a', "b" => 'b', "c" => 'c'])); // true | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/173400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5291/"
]
} |
173,409 | I'm trying to find a way to get the execution time of a section of code in C. I've already tried both time() and clock() from time.h, but it seems that time() returns seconds and clock() seems to give me milliseconds (or centiseconds?). I would like something more precise though. Is there a way I can grab the time with at least microsecond precision? This only needs to be able to compile on Linux. | You referred to clock() and time() - were you looking for gettimeofday()? That will fill in a struct timeval, which contains seconds and microseconds. Of course the actual resolution is up to the hardware. (A short sketch follows this entry.) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25295/"
]
} |
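A small C sketch of timing a section with gettimeofday(), as the answer suggests.

```c
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... section of code being measured ... */
    gettimeofday(&end, NULL);

    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld microseconds\n", elapsed_us);
    return 0;
}
```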
173,504 | Sorry for my ignorance here, but when I hear the word webserver, I immediately imagine Apache, although I know people use Microsoft's IIS too. However since I've been hanging out here at Stackoverflow I've noticed lots of people use Glassfish. Which made me wonder, why would I want to use Glassfish (in the sense that I'm interested, but I don't really understand why it might make my life easier). From what I read it's Sun's open-source derivate of Apache's Tomcat, thus I imagine it's a good (or great) quality product. But since I don't know its strengths and weaknesses, I don't know when it would be wise to choose Glassfish over another server. Could anyone elaborate ? | GlassFish is an Application Server which can also be used as a Web Server (Http Server). A web Server means: Handling HTTP requests (usually from browsers). A Servlet Container (e.g. Tomcat) means: It can handle servlets & JSP. An Application Server (e.g. GlassFish) means: It can manage Java EE applications (usually both servlet/JSP and EJBs). You should use GlassFish for Java EE enterprise applications. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15124/"
]
} |
173,618 | In MSVC, DebugBreak() or __debugbreak cause a debugger to break. On x86 it is equivalent to writing "_asm int 3", on x64 it is something different. When compiling with gcc (or any other standard compiler) I want to break into the debugger, too. Is there a platform-independent function or intrinsic? I saw the XCode question about that, but it doesn't seem portable enough. Side note: I mainly want to implement ASSERT with that, and I understand I can use assert() for that, but I also want to write DEBUG_BREAK or something into the code. | A method that is portable to most POSIX systems is: raise(SIGTRAP); (see the sketch after this entry) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23740/"
]
} |
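A sketch of how raise(SIGTRAP) could back the DEBUG_BREAK/ASSERT the questioner wants to write; the macro names are invented.

```c
#include <signal.h>
#include <stdio.h>

/* Break into the debugger on POSIX systems, per the answer above. */
#define DEBUG_BREAK() raise(SIGTRAP)

/* A hand-rolled assert that drops into the debugger instead of aborting. */
#define MY_ASSERT(cond)                                            \
    do {                                                           \
        if (!(cond)) {                                             \
            fprintf(stderr, "assertion failed: %s (%s:%d)\n",      \
                    #cond, __FILE__, __LINE__);                    \
            DEBUG_BREAK();                                         \
        }                                                          \
    } while (0)
```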
173,641 | I'm trying to use TestDriven.Net not only to test my code, but to call a function on my code whose purpose is to print out the internal state of the code to the Debug window. Here's a very simplified example of what I'm trying to do.. <TestFixture()> _Public Class UnitTest <Test()> _ Public Sub TestDebug() Dim oClass1 As New Class1 Assert.AreEqual(True, oClass1.IsTrue) Debug.WriteLine("About to call .PrintDebug()") oClass1.PrintToDebug() End SubEnd ClassPublic Class Class1 Private _IsTrue As Boolean = True Public ReadOnly Property IsTrue() As Boolean Get Return _IsTrue End Get End Property Public Sub PrintToDebug() Debug.WriteLine("Internal state of Class1: " & _IsTrue) End SubEnd Class I'm trying to test the Public interface of Class1, and somehow view the output from the Class1.PrintToDebug() function. I've looked through the TestDriven.Net quickstart , which shows examples of using the Debug.WriteLine in a unit test, but strangely this doesn't work for me either - i.e. the only Output in my 'Test' window is: ------ Test started: Assembly: ClassLibrary1.dll ------1 passed, 0 failed, 0 skipped, took 1.19 seconds. I've tried looking in the other windows (Debug and Build), the Debug window has the 'Program Output' and 'Exception Messages' options enabled. I've looked for options or preferences and can't find any! Thanks for your help! Edit: I'm using VB.Net 2.0, TestDriven.Net 2.14.2190 and NUnit 2.4.8.0 | I found that while Debug.Writeline() doesn't work with unit tests, Console.WriteLine() does. The reason is that when you run tests, the debugger process isn't invoked, and Debug.WriteLine() is ignored. However, if you use "Test with Debugger", I think (haven't tried) Debug.WriteLine() will work. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173641",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5662/"
]
} |
173,652 | I have a WPF Window which has a among other controls hosts a Frame. In that frame I display different pages. Is there way to make a dialog modal to only a page? When I'm showing the dialog it should not be possible to click on any control on the page but it should be possible to click on a control on the same window that is not on the page. | If I am correct in interpreting your message, you want something that works similar to what Billy Hollis demonstrates in his StaffLynx application . I recently built a similar control and it turns out that this sort of idea is relatively simple to implement in WPF. I created a custom control called DialogPresenter. In the control template for the custom control, I added markup similar to the following: <ControlTemplate TargetType="{x:Type local=DialogPresenter}"> <Grid> <ContentControl> <ContentPresenter /> </ContentControl> <!-- The Rectangle is what simulates the modality --> <Rectangle x:Name="Overlay" Visibility="Collapsed" Opacity="0.4" Fill="LightGrey" /> <Grid x:Name="Dialog" Visibility="Collapsed"> <!-- The template for the dialog goes here (borders and such...) --> <ContentPresenter x:Name="PART_DialogView" /> </Grid> </Grid> <ControlTemplate.Triggers> <!-- Triggers to change the visibility of the PART_DialogView and Overlay --> </ControlTemplate.Triggers></ControlTemplate> I also added a Show(Control view) method, which finds the the 'PART_DialogView', and adds the passed in view to the Content property. This then allows me to use the DialogPresenter as follows: <controls:DialogPresenter x:Name="DialogPresenter"> <!-- Normal parent view content here --> <TextBlock>Hello World</TextBlock> <Button>Click Me!</Button></controls:DialogPresenter> To the buttons event handler (or bound command), I simply call the Show() method of the DialogPresenter . You can also easily add ScaleTransform markup to the DialogPresenter template to get scaling effects shown in the video. This solution has neat and tidy custom control code, and a very simple interface for your UI programming team. Hope this helps! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/143/"
]
} |
173,670 | Being primarily a C++ developer the absence of RAII (Resource Acquisition Is Initialization) in Java and .NET has always bothered me. The fact that the onus of cleaning up is moved from the class writer to its consumer (by means of try finally or .NET's using construct ) seems to be markedly inferior. I see why in Java there is no support for RAII since all objects are located on the heap and the garbage collector inherently doesn't support deterministic destruction, but in .NET with the introduction of value-types ( struct ) we have the (seemingly) perfect candidate for RAII. A value type that's created on the stack has a well defined scope and C++ destructor semantics can be used. However the CLR does not permit a value-type to have a destructor. My random searches found one argument that if a value-type is boxed it falls under the jurisdiction of the garbage collector and therefore its destruction becomes non-deterministic. I feel that this argument isn't strong enough, the benefits of RAII are big enough to say that a value-type with a destructor cannot be boxed (or used as a class member). To cut a long story short my question is : are there any other reasons value types can not be used in order to introduce RAII to .NET? (or do you think my argument about RAII's obvious advantages are flawed?) Edit: I must have not phrased the question clearly since the first four answers have missed the point. I know about Finalize and its non-deterministic characteristics, I know about the using construct and I feel these two options are inferior to RAII. using is one more thing the consumer of a class must remember (how many people forgot to put a StreamReader in a using block?). My question is a philosophical one about the language design, why is it the way it is and can it be improved? For instance with a generic deterministically destructible value-type I can make the using and lock keywords redundant (achievable by library classes): public struct Disposer<T> where T : IDisposable { T val; public Disposer(T t) { val = t; } public T Value { get { return val; } } ~Disposer() // Currently illegal { if (val != default(T)) val.Dispose(); } } I can't help but end with a apropos quotation which I once saw but can't currently find its origin. You can take my deterministic destruction when my cold dead hand goes out of scope. -- Anon | A better title would be "Why is there no RAII in C#/VB". C++/CLI (The evolution of the abortion that was Managed C++) has RAII in the exact same sense as C++. It's all just syntax sugar for the same finalisation pattern that the rest of the CLI languages use (Destructors in managed objects for C++/CLI are effectively finalisers), but it is there. You might like http://blogs.msdn.com/hsutter/archive/2004/07/31/203137.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173670",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848/"
]
} |
173,687 | I haven't found an answer elsewhere and this doesn't appear to have been asked yet on SO. When creating an event binding in wxPython, is it possible to pass additional arguments to the event? For example, this is the normal way: b = wx.Button(self, 10, "Default Button", (20, 20)) self.Bind(wx.EVT_BUTTON, self.OnClick, b)def OnClick(self, event): self.log.write("Click! (%d)\n" % event.GetId()) But is it possible to have another argument passed to the method? Such that the method can tell if more than one widget is calling it but still return the same value? It would greatly reduce copy & pasting the same code but with different callers. | You can always use a lambda or another function to wrap up your method and pass another argument, not WX specific. b = wx.Button(self, 10, "Default Button", (20, 20)) self.Bind(wx.EVT_BUTTON, lambda event: self.OnClick(event, 'somevalue'), b)def OnClick(self, event, somearg): self.log.write("Click! (%d)\n" % event.GetId()) If you're out to reduce the amount of code to type, you might also try a little automatism like: class foo(whateverwxobject): def better_bind(self, type, instance, handler, *args, **kwargs): self.Bind(type, lambda event: handler(event, *args, **kwargs), instance) def __init__(self): self.better_bind(wx.EVT_BUTTON, b, self.OnClick, 'somevalue') | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18676/"
]
} |
173,717 | What is the easiest way to copy all the values from a column in a table to another column in the same table? | With a single statement (if the columns have the same datatype): UPDATE <tablename> SET <destination column name> = <source column name> (see the concrete example after this entry) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173717",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25438/"
]
} |
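A concrete instance of the statement above; the table and column names are invented.

```sql
-- Copies every value from "email" into "backup_email" in the same table.
UPDATE customers
SET backup_email = email;
```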
173,726 | I'm doing some research into databases and I'm looking at some limitations of relational DBs. I'm getting that joins of large tables is very expensive, but I'm not completely sure why. What does the DBMS need to do to execute a join operation, where is the bottleneck? How can denormalization help to overcome this expense? How do other optimization techniques (indexing, for example) help? Personal experiences are welcome! If you're going to post links to resources, please avoid Wikipedia. I know where to find that already. In relation to this, I'm wondering about the denormalized approaches used by cloud service databases like BigTable and SimpleDB. See this question . | Denormalising to improve performance? It sounds convincing, but it doesn't hold water. Chris Date, who in company with Dr Ted Codd was the original proponent of the relational data model, ran out of patience with misinformed arguments against normalisation and systematically demolished them using scientific method: he got large databases and tested these assertions. I think he wrote it up in Relational Database Writings 1988-1991 but this book was later rolled into edition six of Introduction to Database Systems , which is the definitive text on database theory and design, in its eighth edition as I write and likely to remain in print for decades to come. Chris Date was an expert in this field when most of us were still running around barefoot. He found that: Some of them hold for special cases All of them fail to pay off for general use All of them are significantly worse for other special cases It all comes back to mitigating the size of the working set. Joins involving properly selected keys with correctly set up indexes are cheap, not expensive, because they allow significant pruning of the result before the rows are materialised. Materialising the result involves bulk disk reads which are the most expensive aspect of the exercise by an order of magnitude. Performing a join, by contrast, logically requires retrieval of only the keys . In practice, not even the key values are fetched: the key hash values are used for join comparisons, mitigating the cost of multi-column joins and radically reducing the cost of joins involving string comparisons. Not only will vastly more fit in cache, there's a lot less disk reading to do. Moreover, a good optimiser will choose the most restrictive condition and apply it before it performs a join, very effectively leveraging the high selectivity of joins on indexes with high cardinality. Admittedly this type of optimisation can also be applied to denormalised databases, but the sort of people who want to denormalise a schema typically don't think about cardinality when (if) they set up indexes. It is important to understand that table scans (examination of every row in a table in the course of producing a join) are rare in practice. A query optimiser will choose a table scan only when one or more of the following holds. There are fewer than 200 rows in the relation (in this case a scan will be cheaper) There are no suitable indexes on the join columns (if it's meaningful to join on these columns then why aren't they indexed? fix it) A type coercion is required before the columns can be compared (WTF?! fix it or go home) SEE END NOTES FOR ADO.NET ISSUE One of the arguments of the comparison is an expression (no index) Performing an operation is more expensive than not performing it. 
However, performing the wrong operation, being forced into pointless disk I/O and then discarding the dross prior to performing the join you really need, is much more expensive. Even when the "wrong" operation is precomputed and indexes have been sensibly applied, there remains significant penalty. Denormalising to precompute a join - notwithstanding the update anomalies entailed - is a commitment to a particular join. If you need a different join, that commitment is going to cost you big . If anyone wants to remind me that it's a changing world, I think you'll find that bigger datasets on gruntier hardware just exaggerates the spread of Date's findings. For all of you who work on billing systems or junk mail generators (shame on you) and are indignantly setting hand to keyboard to tell me that you know for a fact that denormalisation is faster, sorry but you're living in one of the special cases - specifically, the case where you process all of the data, in-order. It's not a general case, and you are justified in your strategy. You are not justified in falsely generalising it. See the end of the notes section for more information on appropriate use of denormalisation in data warehousing scenarios. I'd also like to respond to Joins are just cartesian products with some lipgloss What a load of bollocks. Restrictions are applied as early as possible, most restrictive first. You've read the theory, but you haven't understood it. Joins are treated as "cartesian products to which predicates apply" only by the query optimiser. This is a symbolic representation (a normalisation, in fact) to facilitate symbolic decomposition so the optimiser can produce all the equivalent transformations and rank them by cost and selectivity so that it can select the best query plan. The only way you will ever get the optimiser to produce a cartesian product is to fail to supply a predicate: SELECT * FROM A,B Notes David Aldridge provides some important additional information. There is indeed a variety of other strategies besides indexes and table scans, and a modern optimiser will cost them all before producing an execution plan. A practical piece of advice: if it can be used as a foreign key then index it, so that an index strategy is available to the optimiser. I used to be smarter than the MSSQL optimiser. That changed two versions ago. Now it generally teaches me . It is, in a very real sense, an expert system, codifying all the wisdom of many very clever people in a domain sufficiently closed that a rule-based system is effective. "Bollocks" may have been tactless. I am asked to be less haughty and reminded that math doesn't lie. This is true, but not all of the implications of mathematical models should necessarily be taken literally. Square roots of negative numbers are very handy if you carefully avoid examining their absurdity (pun there) and make damn sure you cancel them all out before you try to interpret your equation. The reason that I responded so savagely was that the statement as worded says that Joins are cartesian products... This may not be what was meant but it is what was written, and it's categorically untrue. A cartesian product is a relation. A join is a function. More specifically, a join is a relation-valued function. With an empty predicate it will produce a cartesian product, and checking that it does so is one correctness check for a database query engine, but nobody writes unconstrained joins in practice because they have no practical value outside a classroom. 
I called this out because I don't want readers falling into the ancient trap of confusing the model with the thing modelled. A model is an approximation, deliberately simplified for convenient manipulation. The cut-off for selection of a table-scan join strategy may vary between database engines. It is affected by a number of implementation decisions such as tree-node fill-factor, key-value size and subtleties of algorithm, but broadly speaking high-performance indexing has an execution time of k log n + c . The c term is a fixed overhead mostly made of setup time, and the shape of the curve means you don't get a payoff (compared to a linear search) until n is in the hundreds. Sometimes denormalisation is a good idea Denormalisation is a commitment to a particular join strategy. As mentioned earlier, this interferes with other join strategies. But if you have buckets of disk space, predictable patterns of access, and a tendency to process much or all of it, then precomputing a join can be very worthwhile. You can also figure out the access paths your operation typically uses and precompute all the joins for those access paths. This is the premise behind data warehouses, or at least it is when they're built by people who know why they're doing what they're doing, and not just for the sake of buzzword compliance. A properly designed data warehouse is produced periodically by a bulk transformation out of a normalised transaction processing system. This separation of the operations and reporting databases has the very desirable effect of eliminating the clash between OLTP and OLAP (online transaction processing ie data entry, and online analytical processing ie reporting). An important point here is that apart from the periodic updates, the data warehouse is read only . This renders moot the question of update anomalies. Don't make the mistake of denormalising your OLTP database (the database on which data entry happens). It might be faster for billing runs but if you do that you will get update anomalies. Ever tried to get Reader's Digest to stop sending you stuff? Disk space is cheap these days, so knock yourself out. But denormalising is only part of the story for data warehouses. Much bigger performance gains are derived from precomputed rolled-up values: monthly totals, that sort of thing. It's always about reducing the working set. ADO.NET problem with type mismatches Suppose you have a SQL Server table containing an indexed column of type varchar, and you use AddWithValue to pass a parameter constraining a query on this column. C# strings are Unicode, so the inferred parameter type will be NVARCHAR, which doesn't match VARCHAR. VARCHAR to NVARCHAR is a widening conversion so it happens implicitly - but say goodbye to indexing, and good luck working out why. "Count the disk hits" (Rick James) If everything is cached in RAM, JOINs are rather cheap. That is, normalization does not have much performance penalty . If a "normalized" schema causes JOINs to hit the disk a lot, but the equivalent "denormalized" schema would not have to hit the disk, then denormalization wins a performance competition. Comment from original author: Modern database engines are very good at organising access sequencing to minimise cache misses during join operations. The above, while true, might be misconstrued as implying that joins are necessarily problematically expensive on large data. This would lead to poor decision-making on the part of inexperienced developers. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/173726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5409/"
]
} |
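A minimal C# sketch of the parameter-typing fix implied by the ADO.NET note in the answer above. The table, column and method names are made up; the point is only that declaring the parameter as VarChar (matching the column) avoids the implicit VARCHAR-to-NVARCHAR widening that defeats the index, whereas AddWithValue would infer NVarChar from the .NET string.

using System.Data;
using System.Data.SqlClient;

static class CustomerQueries
{
    // Assumes a table Customers with an indexed VARCHAR(30) column Surname.
    public static DataTable FindBySurname(SqlConnection conn, string surname)
    {
        using (var cmd = new SqlCommand(
            "SELECT Id, Surname FROM Customers WHERE Surname = @surname", conn))
        {
            // Explicitly typed as VARCHAR so the index can still be used.
            // cmd.Parameters.AddWithValue("@surname", surname) would send NVARCHAR instead.
            cmd.Parameters.Add("@surname", SqlDbType.VarChar, 30).Value = surname;

            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);
            return table;
        }
    }
}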
173,757 | We run a medium-size site that gets a few hundred thousand pageviews a day. Up until last weekend we ran with a load usually below 0.2 on a virtual machine. The OS is Ubuntu. When deploying the latest version of our application, we also did an apt-get dist-upgrade before deploying. After we had deployed we noticed that the load on the CPU had spiked dramatically (sometimes reaching 10 and ceasing to respond to page requests). We tried dumping a full minute of Xdebug profiling data from PHP, but looking through it revealed only a few somewhat slow parts, but nothing to explain the huge jump. We are now pretty sure that nothing in the new version of our website is triggering the problem, but we have no way to be sure. We have rolled back a lot of the changes, but the problem still persists. When we look at processes, we see that single Apache processes use quite a bit of CPU over a longer period of time than strictly necessary. However, when using strace on the affected process, we never see anything but accept(3, and it hangs for a while before receiving a new connection, so we can't actually see what is causing the problem. The stack is PHP 5, Apache 2 (prefork), MySQL 5.1. Most things run through Memcached. We've tried APC and eAccelerator. So, what should be our next step? Are there any profiling methods we overlooked/don't know about? | The answer ended up being not Apache-related. As mentioned, we were on a virtual machine. Our user sessions are pretty big (think 500kB per active user), so we had a lot of disk IO. The disk was nearly full, meaning that Ubuntu spent a lot of time moving things around (or so we think). There was no easy way to extend the disk (because it was not set up properly for VMWare). This completely killed performance, and Apache and MySQL would occasionally use 100% CPU (for a very short time), and the system would be so slow to update the CPU usage meters that it seemed to be stuck there. We ended up setting up a new VM (which also gave us the opportunity to thoroughly document everything on the server). On the new VM we allocated plenty of disk space, and moved sessions into memory (using memcached). Our load dropped to 0.2 on off-peak use and around 1 near peak use (on a 2-CPU VM). Moving the sessions into memcached took a lot of disk IO away (we were constantly using about 2MB/s of disk IO, which is very bad). Conclusion: sometimes you just have to start over... :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1606/"
]
} |
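A rough sketch of the "sessions into memcached" change described in the answer above, assuming the PECL memcache extension is installed; the host, port and lifetime values are placeholders. With the newer memcached extension the handler name is "memcached" and the save path omits the tcp:// prefix.

; php.ini (or the equivalent ini_set() calls at runtime)
session.save_handler = memcache
session.save_path    = "tcp://127.0.0.1:11211"

; keep sessions from living forever in the cache
session.gc_maxlifetime = 3600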
173,786 | I have a UIScrollView that shows vertical data, but where the horizontal component is no wider than the screen of the iPhone. The problem is that the user is still able to drag horizontally, and basically expose blank sections of the UI. I have tried setting: scrollView.alwaysBounceHorizontal = NO;scrollView.directionalLockEnabled = YES; Which helps a little, but still doesn't stop the user from being able to drag horizontally. Surely there is a way to fix this easily? | That's strange, because whenever I create a scroll view with frame and content size within the bounds of the screen on either dimension, the scroll view does not scroll (or bounce) in that direction. // Should scroll vertically but not horizontallyUIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];scrollView.contentSize = CGSizeMake(320, 1000); Are you sure the frame fits completely within the screen and contentSize's width is not greater than the scroll view's width? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173786",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6044/"
]
} |
173,814 | How can ALTER be used to drop a column in a MySQL table if that column exists? I know I can use ALTER TABLE my_table DROP COLUMN my_column , but that will throw an error if my_column does not exist. Is there alternative syntax for dropping the column conditionally? I'm using MySQL version 4.0.18. | For MySQL, there is none: MySQL Feature Request . Allowing this is arguably a really bad idea, anyway: IF EXISTS indicates that you're running destructive operations on a database with (to you) unknown structure. There may be situations where this is acceptable for quick-and-dirty local work, but if you're tempted to run such a statement against production data (in a migration etc.), you're playing with fire. But if you insist, it's not difficult to simply check for existence first in the client, or to catch the error. MariaDB also supports the following starting with 10.0.2: DROP [COLUMN] [IF EXISTS] col_name i. e. ALTER TABLE my_table DROP IF EXISTS my_column; But it's arguably a bad idea to rely on a non-standard feature supported by only one of several forks of MySQL. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173814",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7362/"
]
} |
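A sketch of the "check for existence first in the client" approach for servers without DROP COLUMN IF EXISTS, such as the MySQL 4.0 mentioned in the question. Database, table and column names are placeholders and connection credentials are omitted; SHOW COLUMNS works on old servers, whereas querying information_schema needs MySQL 5.0 or later.

# only issue the ALTER when the column is actually present
if mysql -N -e "SHOW COLUMNS FROM my_table LIKE 'my_column'" my_db | grep -q my_column; then
    mysql -e "ALTER TABLE my_table DROP COLUMN my_column" my_db
fi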
173,821 | I want to output the function name each time it is called, I can easily copy and paste the function name, however I wondered if there was a shortcut that would do the job for me? At the moment I am doing: SlideInfoHeader* lynxThreeFile::readSlideInfoHeader(QDataStream & in){ qDebug("lynxThreeFile::readSlideInfoHeader");} but what I want is something generic: SlideInfoHeader* lynxThreeFile::readSlideInfoHeader(QDataStream & in){ qDebug(this.className() + "::" + this.functionName());} | " __FUNCTION__ " is supported by both MSVC and GCC and should give you the information you need. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173821",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/24459/"
]
} |
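A small sketch of how the suggestion above could look in the Qt code from the question. Q_FUNC_INFO is Qt's portable wrapper (on GCC it expands to __PRETTY_FUNCTION__ and includes the class and signature), while plain __FUNCTION__ gives a shorter name whose exact form varies by compiler.

SlideInfoHeader* lynxThreeFile::readSlideInfoHeader(QDataStream & in)
{
    // function name; whether the class prefix is included varies by compiler
    qDebug("%s", __FUNCTION__);

    // fuller form, e.g. "SlideInfoHeader* lynxThreeFile::readSlideInfoHeader(QDataStream&)"
    qDebug("%s", Q_FUNC_INFO);

    // ... rest of the function ...
}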
173,846 | I'm trying to have my Struts2 app redirect to a generated URL. In this case, I want the URL to use the current date, or a date I looked up in a database. So /section/document becomes /section/document/2008-10-06 What's the best way to do this? | Here's how we do it: In Struts.xml, have a dynamic result such as: <result name="redirect" type="redirect">${url}</result> In the action: private String url;public String getUrl(){ return url;}public String execute(){ [other stuff to setup your date] url = "/section/document" + date; return "redirect";} You can actually use this same technology to set dynamic values for any variable in your struts.xml using OGNL. We've created all sorts of dynamic results including stuff like RESTful links. Cool stuff. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/173846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6400/"
]
} |
173,851 | I have a PHP script that needs to determine if it's been executed via the command-line or via HTTP, primarily for output-formatting purposes. What's the canonical way of doing this? I had thought it was to inspect SERVER['argc'] , but it turns out this is populated, even when using the 'Apache 2.0 Handler' server API. | Use the php_sapi_name() function. if (php_sapi_name() == "cli") { // In cli-mode} else { // Not in cli-mode} Here are some relevant notes from the docs: php_sapi_name — Returns the type of interface between web server and PHP Although not exhaustive, the possible return values include aolserver, apache, apache2filter, apache2handler, caudium, cgi (until PHP 5.3), cgi-fcgi, cli, cli-server, continuity, embed, isapi, litespeed, milter, nsapi, phttpd, pi3web, roxen, thttpd, tux, and webjames. In PHP >= 4.2.0, there is also a predefined constant, PHP_SAPI , that has the same value as php_sapi_name() . | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/173851",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5058/"
]
} |
173,866 | I have a web page which contains a select box. When I open a jQuery Dialog it is displayed partly behind the select box. How should I approach this problem? Should I hide the select box or does jQuery offer some kind of 'shim' solution. (I have Googled but didn't find anything) Here is some code: <!DOCTYPE html><html lang="en"><head><title>testJQuery</title><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><meta name="GENERATOR" content="Rational Application Developer"> <link rel="stylesheet" href="theme/smooth/theme.css" type="text/css" media="screen" /></head><body> <a class="pop" href="nix">Click me</a> <p/> <select size="20"> <option>s jl fjlkdjfldjf l*s ldkjsdlfkjsdl fkdjlfks dfldkfjdfkjlsdkf jdksdjf sd</option> <option>s jl fjlkdjfldjf l*s ldkjsdlfkjsdl fkdjlfks dfldkfjdfkjlsdkf jdksdjf sd</option> <option>s jl fjlkdjfldjf l*s ldkjsdlfkjsdl fkdjlfks dfldkfjdfkjlsdkf jdksdjf sd</option> <option>s jl fjlkdjfldjf l*s ldkjsdlfkjsdl fkdjlfks dfldkfjdfkjlsdkf jdksdjf sd</option> <option>s jl fjlkdjfldjf l*s ldkjsdlfkjsdl fkdjlfks dfldkfjdfkjlsdkf jdksdjf sd</option> </select> <div id="xyz" class="flora hiddenAsset"> <div id="dialog" title="Edit Link"> <p>Enter the link details:</p> <table width="80%" border="1"> <tr><td>URL</td><td><input id="url" style="width:100%" maxlength="200" value="{url}"/></td></tr> <tr><td>Title</td><td><input id="title" style="width:100%" maxlength="200" value="{title}"/></td></tr> <tr><td>Target</td><td><input id="target" size="20" maxlength="200" value="{target}"/></td></tr> </table> </div> </div><script type="text/javascript" src="../script/firebug/firebug.js"></script><script type="text/javascript" src="jquery-1.2.6.js"></script><script type="text/javascript" src="jquery-ui-1.5.2.js"></script><script type="text/javascript" src="jqSOAPClient.js"></script><script type="text/javascript">(function($){ $(document).ready(function(){ console.debug('ready'); $('.hiddenAsset').hide(); $('a.pop').bind('click', showDialog); console.debug('ready - done'); }); var showDialog = function(){ console.debug('show'); $('#dialog').dialog({ modal: true, overlay: { backgroundColor: '#666', opacity: '.3', filter: 'alpha(opacity=30)' }, width: '400px', height: '300px', buttons: { Ok: function() { $(this).dialog('close'); }, Cancel: function() { $(this).dialog('close'); } } }); console.debug('show-done'); return false; };})(jQuery);</script></body></html> | Use the php_sapi_name() function. if (php_sapi_name() == "cli") { // In cli-mode} else { // Not in cli-mode} Here are some relevant notes from the docs: php_sapi_name — Returns the type of interface between web server and PHP Although not exhaustive, the possible return values include aolserver, apache, apache2filter, apache2handler, caudium, cgi (until PHP 5.3), cgi-fcgi, cli, cli-server, continuity, embed, isapi, litespeed, milter, nsapi, phttpd, pi3web, roxen, thttpd, tux, and webjames. In PHP >= 4.2.0, there is also a predefined constant, PHP_SAPI , that has the same value as php_sapi_name() . | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/173866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11249/"
]
} |
173,868 | This is a question you can read everywhere on the web with various answers: $ext = end(explode('.', $filename));$ext = substr(strrchr($filename, '.'), 1);$ext = substr($filename, strrpos($filename, '.') + 1);$ext = preg_replace('/^.*\.([^.]+)$/D', '$1', $filename);$exts = split("[/\\.]", $filename);$n = count($exts)-1;$ext = $exts[$n]; etc. However, there is always "the best way" and it should be on Stack Overflow. | People from other scripting languages always think theirs is better because they have a built-in function to do that and not PHP (I am looking at Pythonistas right now :-)). In fact, it does exist, but few people know it. Meet pathinfo() : $ext = pathinfo($filename, PATHINFO_EXTENSION); This is fast and built-in. pathinfo() can give you other information, such as canonical path, depending on the constant you pass to it. Remember that if you want to be able to deal with non ASCII characters, you need to set the locale first. E.G: setlocale(LC_ALL,'en_US.UTF-8'); Also, note this doesn't take into consideration the file content or mime-type, you only get the extension. But it's what you asked for. Lastly, note that this works only for a file path, not a URL resources path, which is covered using PARSE_URL. Enjoy | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/173868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9951/"
]
} |
173,875 | Besides open-sourcing your project and legislation, are there ways to prevent, or at least minimize the damages of code leaking outside your company/group? We obviously can't block Internet access (to prevent emailing the code) because programmer's need their references. We also can't block peripheral devices (USB, Firewire, etc.) The code matters most when it has some proprietary algorithms and in-house developed knowledge (as opposed to regular routine code to draw GUIs, connect to databases, etc.), but some applications (like accounting software and CRMs) are just that: complex collections of routine code that are simple to develop in principle, but will take years to write from scratch. This is where leaked code will come in handy to competitors. As far as I see it, preventing leakage relies almost entirely on human process. What do you think? What precautions and measures are you taking? And has code leakage affected you before? | You can't stop it getting out. So two solutions - stop people wanting to hurt you, and have legal precautions. To stop people hating you treat them right (saying more is probably off topic for stack overflow). I'm not a lawyer, but to give yourself legal protection, if you believe in it, patent the ideas, put a copyright notice in the code, and make sure the contracts for your programmers specify carefully intellectual property rights. But at the end of the day, the answer is run quicker than the competition. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9629/"
]
} |
173,884 | When I am adding two textbox values that are 1.001 and 0.001 and then I do a parseFloat I get 1.0019999999 . I want it 1.002 . Can you help me? | The Javascript Number class has a toFixed() function that will get you what you want. So you could do parseFloat("1.0019999").toFixed(3) and that would give you 1.002 . The parameter (3 in this case) is the number of digits to show after the decimal point | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173884",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
173,898 | I'm trying to do an informal feasibility study for work on if Mono/Apache/Linux is realistically 'ready' and in any way comparable to a more traditional .NET/IIS/Windows stack. Any comparison of performance would be helpful too. | Take a look at companies using Mono . It even says that Wikipedia uses Mono for its search facilities. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5777/"
]
} |
173,919 | When merging topic branch "B" into "A" using git merge , I get some conflicts. I know all the conflicts can be solved using the version in "B". I am aware of git merge -s ours . But what I want is something like git merge -s theirs . Why doesn't it exist? How can I achieve the same result after the conflicting merge with existing git commands? ( git checkout every unmerged file from B) The "solution" of just discarding anything from branch A (the merge commit point to B version of the tree) is not what I am looking for. | A similar alternative is the --strategy-option (short form -X ) option, which accepts theirs . For example: git checkout branchAgit merge -X theirs branchB However, this is more equivalent to -X ours than -s ours . The key difference being that -X performs a regular recursive merge, resolving any conflicts using the chosen side, whereas -s ours changes the merge to just completely ignore the other side. In some cases, the main problem using -X theirs instead of the hypothetical -s theirs is deleted files. In this case, just run git rm with the name of any files that were deleted: git rm {DELETED-FILE-NAME} After that, the -X theirs may work as expected. Of course, doing the actual removal with the git rm command will prevent the conflict from happening in the first place. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/173919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1277510/"
]
} |
173,974 | In my existing (Pre-SVN 1.5) merge strategy, we create a copy of the Trunk (called BasePoint) at the moment of branch-creation for referencing later during the merge. When we need to merge a branch back into the trunk, we perform 2 operations. Merge from BasePoint to LatestTrunk (Trunk has likely moved on since the original branch) into Working copy of Branch and then commit. At this point we typically check that the merge into the branch has not damaged anything Merge from LatestTrunk to LatestBranch back into Working copy of trunk and then commit. Documentation suggests that I use the new reintegrate merge on the Trunk and Merge from the Branch. Do I need to merge from the trunk into the dev branch first or is this included in the new reintegrate option? To put it another way, does the new merge --reintegrate functionality represent 'each of my previous merges' or 'the whole operation' ? (FWIW I am using TortoiseSVN 1.5.1) | The short answer is, You still have to do both steps. The SVN book explains the process for merging as: svn merge http://trunk/path while in a branch working copy svn merge --reintegrate http://branch/path while in a trunk working copy Notice the lack of revision numbers. This probably doesn't feel like a huge win. The new coolness is the ability to re-run the merge as you are coding in your branch, allowing you to keep the branch up to date with changes in trunk (without recording revision numbers by hand!). SVN keeps track of what needs to be merged in from trunk and what changes are unique to the branch. When you are done with the branch, --reintegrate uses that data to automatically merge only the branch changes back to trunk. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/173974",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11356/"
]
} |
173,995 | As a function argument I get a vector<double>& vec (an output vector, hence non-const) with unknown length and values. I want to initialise this vector to a specific length n with all zeroes. This will work vec.clear();vec.resize( n, 0.0 ); And this will work as well: vec.resize( n );vec.assign( n, 0.0 ); Is the second more efficient (because no memory deallocation/allocation is involved)? Is there a more efficient way to do this? | std::vector<double>(n).swap(vec); After this, vec is guaranteed to have size and capacity n, with all values 0.0. Perhaps the more idiomatic way since C++11 is vec.assign(n, 0.);vec.shrink_to_fit(); with the second line optional. In the case where vec starts off with more than n elements, whether to call shrink_to_fit is a trade-off between holding onto more memory than is required vs performing a re-allocation. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/173995",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19863/"
]
} |
174,008 | I'm currently building a small web application that includes a fair amount of JavaScript. When I was prototyping the initial idea, I just hacked together a few functions to demonstrate how the application would eventually behave, intending to go forward re-writing the JavaScript in an object-oriented nature. Now that I'm getting into the implementation phase, I'm finding that creating object-oriented JavaScript for the sake of being object-oriented seems overkill - the project isn't likely to require any major modifications in the future that would warrant an object-oriented design. Instead, I'm finding that a set of concise, cohesive functions are working well. So, with that said and with attempting to adhere to the KISS principle, when a set of functions are providing a suitable solution to a problem, are there any other reasons worth considering to convert my code into an object-oriented design? | No, although I personally find OOP more tasty, it is a means to an end, and not an end in itself. There are many cases where procedural programming makes more sense than OOP, and converting for the sake of converting could be, as you said, overkill. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20/"
]
} |
174,024 | Consider the following method signatures: public fooMethod (Foo[] foos) { /*...*/ } and public fooMethod (Foo... foos) { /*...*/ } Explanation: The former takes an array of Foo-objects as an argument - fooMethod(new Foo[]{..}) - while the latter takes an arbitrary amount of arguments of type Foo, and presents them as an array of Foo:s within the method - fooMethod(fooObject1, fooObject2, etc... ). Java throws a fit if both are defined, claiming that they are duplicate methods. I did some detective work, and found out that the first declaration really requires an explicit array of Foo objects, and that's the only way to call that method. The second way actually accepts both an arbitrary amount of Foo arguments AND also accepts an array of Foo objects. So, the question is, since the latter method seems more flexible, are there any reasons to use the first example, or have I missed anything vital? | These methods are actually the same. This feature is called varargs and it is a compiler feature. Behind the scenes it translates to the former version. There is a pitfall if you define a method that accepts Object... and you send one parameter of type Object[]! | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2238/"
]
} |
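A small Java sketch of the Object... pitfall mentioned at the end of the answer above; the class and method names are made up. Because the varargs parameter is itself an Object[], an Object[] argument is taken as the whole argument array rather than as a single element, which regularly surprises people.

public class VarargsPitfall {
    static int count(Object... items) {
        return items.length;
    }

    public static void main(String[] args) {
        System.out.println(count("a", "b"));                          // 2, as expected
        System.out.println(count(new Object[] {"a", "b"}));           // 2: the array IS the varargs array
        System.out.println(count((Object) new Object[] {"a", "b"}));  // 1: cast forces it to be a single element
        System.out.println(count(new String[] {"a", "b"}));           // 2: a String[] is also an Object[]
    }
}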
174,025 | How do you trigger a javascript function using actionscript in flash? The goal is to trigger jQuery functionality from a flash movie | Take a look at the ExternalInterface class. From the AS3-Language Reference: The ExternalInterface class is the External API, an application programming interface that enables straightforward communication between ActionScript and the Flash Player container– for example, an HTML page with JavaScript. Adobe recommends using ExternalInterface for all JavaScript-ActionScript communication. And it works like this: ExternalInterface.addCallback("sendToActionScript", receivedFromJavaScript);ExternalInterface.call("sendToJavaScript", input.text); You can submit parameters and receive callbacks...pretty cool, right? ;) As I know it will also work on AS2... | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174025",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
]
} |
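A sketch of the browser side that pairs with the ActionScript calls shown above, since the question asks about reaching jQuery. The element id ("myFlashMovie") and function names mirror the example and are placeholders; how you obtain the movie element differs slightly depending on the browser and embed method.

// called BY Flash via ExternalInterface.call("sendToJavaScript", input.text)
function sendToJavaScript(text) {
    // any jQuery functionality can run here
    $("#output").text(text);
}

// calling INTO Flash: the callback registered with addCallback("sendToActionScript", ...)
// shows up as a method on the embedded movie element
function tellFlash(msg) {
    var movie = document.getElementById("myFlashMovie");
    if (movie && movie.sendToActionScript) {
        movie.sendToActionScript(msg);
    }
}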
174,059 | I'm developing a little project plan and I came to a point when I need to decide what local databse system to use. The input data is going to be stored on webserver (hosting - MySQL DB). The idea is to build a process to download all necessary data (for example at midnight) and process them. However, there are going to be many inputs and stages of processing, so I need to use some kind of local database to store the semi-product of the application What local database system would you recommend to work with C# (.NET) application? edit: The final product (information) should be easily being exported back to Hosting MySQL DB. As Will mentioned in his answer - yes, I'm for a performance AND comfort of use. | I want to say Microsoft Sql 2005 Express, as it (almost) comes as the obvious choice when developing in .NET. But it all depends on what previous db skills you have. If you already know MySql and as you already said, the data should be exported back to MySql. Why not use MySql all the way? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21209/"
]
} |
174,093 | Assuming I have an ArrayList ArrayList<MyClass> myList; And I want to call toArray, is there a performance reason to use MyClass[] arr = myList.toArray(new MyClass[myList.size()]); over MyClass[] arr = myList.toArray(new MyClass[0]); ? I prefer the second style, since it's less verbose, and I assumed that the compiler will make sure the empty array doesn't really get created, but I've been wondering if that's true. Of course, in 99% of the cases it doesn't make a difference one way or the other, but I'd like to keep a consistent style between my normal code and my optimized inner loops... | Counterintuitively, the fastest version, on Hotspot 8, is: MyClass[] arr = myList.toArray(new MyClass[0]); I have run a micro benchmark using jmh; the results and code are below, showing that the version with an empty array consistently outperforms the version with a presized array. Note that if you can reuse an existing array of the correct size, the result may be different. Benchmark results (score in microseconds, smaller = better): Benchmark (n) Mode Samples Score Error Unitsc.a.p.SO29378922.preSize 1 avgt 30 0.025 ± 0.001 us/opc.a.p.SO29378922.preSize 100 avgt 30 0.155 ± 0.004 us/opc.a.p.SO29378922.preSize 1000 avgt 30 1.512 ± 0.031 us/opc.a.p.SO29378922.preSize 5000 avgt 30 6.884 ± 0.130 us/opc.a.p.SO29378922.preSize 10000 avgt 30 13.147 ± 0.199 us/opc.a.p.SO29378922.preSize 100000 avgt 30 159.977 ± 5.292 us/opc.a.p.SO29378922.resize 1 avgt 30 0.019 ± 0.000 us/opc.a.p.SO29378922.resize 100 avgt 30 0.133 ± 0.003 us/opc.a.p.SO29378922.resize 1000 avgt 30 1.075 ± 0.022 us/opc.a.p.SO29378922.resize 5000 avgt 30 5.318 ± 0.121 us/opc.a.p.SO29378922.resize 10000 avgt 30 10.652 ± 0.227 us/opc.a.p.SO29378922.resize 100000 avgt 30 139.692 ± 8.957 us/op For reference, the code: @State(Scope.Thread)@BenchmarkMode(Mode.AverageTime)public class SO29378922 { @Param({"1", "100", "1000", "5000", "10000", "100000"}) int n; private final List<Integer> list = new ArrayList<>(); @Setup public void populateList() { for (int i = 0; i < n; i++) list.add(0); } @Benchmark public Integer[] preSize() { return list.toArray(new Integer[n]); } @Benchmark public Integer[] resize() { return list.toArray(new Integer[0]); }} You can find similar results, full analysis, and discussion in the blog post Arrays of Wisdom of the Ancients . To summarize: the JVM and JIT compiler contain several optimizations that enable it to cheaply create and initialize a new correctly sized array, and those optimizations cannot be used if you create the array yourself. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7581/"
]
} |
174,119 | I see this often in the build scripts of projects that use autotools (autoconf, automake). When somebody wants to check the value of a shell variable, they frequently use this idiom: if test "x$SHELL_VAR" = "xyes"; then... What is the advantage to this over simply checking the value like this: if test $SHELL_VAR = "yes"; then... I figure there must be some reason that I see this so often, but I can't figure out what it is. | If you're using a shell that does simple substitution and the SHELL_VAR variable does not exist (or is blank), then you need to watch out for the edge cases. The following translations will happen: if test $SHELL_VAR = yes; then --> if test = yes; then if test x$SHELL_VAR = xyes; then --> if test x = xyes; then The first of these will generate an error since the first argument to test has gone missing. The second does not have that problem. Your case translates as follows: if test "x$SHELL_VAR" = "xyes"; then --> if test "x" = "xyes"; then The x , at least for POSIX-compliant shells, is actually redundant since the quotes ensure that both an empty argument and one containing spaces are interpreted as a single object. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174119",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/78437/"
]
} |
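A tiny demonstration of the failure mode described in the answer above, runnable in a POSIX shell; the variable is deliberately left empty.

#!/bin/sh
SHELL_VAR=""

# unquoted: expands to `test = yes`, which errors out
# ("test: =: unary operator expected" or similar, depending on the shell)
if test $SHELL_VAR = yes; then echo "unquoted: yes"; fi

# quoted (or x-prefixed): always a well-formed comparison
if test "x$SHELL_VAR" = "xyes"; then echo "quoted: yes"; else echo "quoted: no"; fi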
174,143 | In SQL Server 2005, is there a way of deleting rows and being told how many were actually deleted? I could do a select count(*) with the same conditions, but I need this to be utterly trustworthy. My first guess was to use the @@ROWCOUNT variables - but that isn't set, e.g. delete from mytable where datefield = '5-Oct-2008' select @@ROWCOUNT always returns a 0. MSDN suggests the OUTPUT construction, e.g. delete from mytable where datefield = '5-Oct-2008' output datefield into #doomedselect count(*) from #doomed this actually fails with a syntax error. Any ideas? | Have you tried SET NOCOUNT OFF ? | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/174143",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2902/"
]
} |
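Two sketches for the record above, since the snippets in the question are nearly right: @@ROWCOUNT has to be read immediately after the DELETE in the same batch, because most subsequent statements reset it, and the OUTPUT clause needs the deleted. prefix plus a pre-created target table. Table and column names follow the question; datefield is assumed to be DATETIME.

-- variant 1: read @@ROWCOUNT straight after the DELETE
DELETE FROM mytable WHERE datefield = '5-Oct-2008';
SELECT @@ROWCOUNT AS rows_deleted;

-- variant 2: capture the removed rows with OUTPUT
CREATE TABLE #doomed (datefield DATETIME);
DELETE FROM mytable
OUTPUT deleted.datefield INTO #doomed
WHERE datefield = '5-Oct-2008';
SELECT COUNT(*) AS rows_deleted FROM #doomed;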
174,153 | In C#, does anybody know why the following will compile: int i = 1;++i;i++; but this will not compile? int i = 1;++i++; (Compiler error: The operand of an increment or decrement operator must be a variable, property or indexer.) | You are applying one of the operators to the result of the other: the result of an increment/decrement is a value, and you cannot use increment/decrement on a value - it has to be a variable that can be set. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174153",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
174,155 | Switch statement fallthrough is one of my personal major reasons for loving switch vs. if/else if constructs. An example is in order here: static string NumberToWords(int number){ string[] numbers = new string[] { "", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" }; string[] tens = new string[] { "", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety" }; string[] teens = new string[] { "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen", "seventeen", "eighteen", "nineteen" }; string ans = ""; switch (number.ToString().Length) { case 3: ans += string.Format("{0} hundred and ", numbers[number / 100]); case 2: int t = (number / 10) % 10; if (t == 1) { ans += teens[number % 10]; break; } else if (t > 1) ans += string.Format("{0}-", tens[t]); case 1: int o = number % 10; ans += numbers[o]; break; default: throw new ArgumentException("number"); } return ans;} The smart people are cringing because the string[] s should be declared outside the function: well, they are, this is just an example. The compiler fails with the following error: Control cannot fall through from one case label ('case 3:') to anotherControl cannot fall through from one case label ('case 2:') to another Why? And is there any way to get this sort of behaviour without having three if s? | (Copy/paste of an answer I provided elsewhere ) Falling through switch - case s can be achieved by having no code in a case (see case 0 ), or using the special goto case (see case 1 ) or goto default (see case 2 ) forms: switch (/*...*/) { case 0: // shares the exact same code as case 1 case 1: // do something goto case 2; case 2: // do something else goto default; default: // do something entirely different break;} | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/174155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15537/"
]
} |
174,163 | Is entity framework just a fancy name for another CRUD code generator? Or is there more to it? | (Copy/paste of an answer I provided elsewhere ) Falling through switch - case s can be achieved by having no code in a case (see case 0 ), or using the special goto case (see case 1 ) or goto default (see case 2 ) forms: switch (/*...*/) { case 0: // shares the exact same code as case 1 case 1: // do something goto case 2; case 2: // do something else goto default; default: // do something entirely different break;} | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/174163",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
174,193 | I'm doing some architectural cleanup that involves moving a bunch of classes into different projects and/or namespaces. Currently I'm moving the files by hand, building, and then manually adding using Foo statements as needed to resolve compilation errors. Anyone know of a smarter way of doing this? (We're a CodeRush and Refactor! shop, but I'd be interested to hear if Resharper has support for this) | As mentioned in the comments, this answer is now outdated. Please see the up-to-date answer below Resharper is the only tool I am aware of what has this ability. There is also a lot of other functionality that it has that is missing in CodeRush and Refactor! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174193",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/23632/"
]
} |
174,198 | With the new approach of having the get/set within the attribut of the class like that : public string FirstName { get; set; } Why simply not simply put the attribute FirstName public without accessor? | Two of the big problems with direct access to variable inside class (field/attribute) are: 1) You can't easily databind against fields. 2) If you expose public fields from your classes you can't later change them to properties (for example: to add validation logic to the setters) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174198",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21386/"
]
} |
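A short C# illustration of the second point above: because callers already go through the property, the auto-property can later grow validation (or change notification) without touching any calling code, which is not true of a public field. The Person class is made up.

public class Person
{
    // before: public string FirstName { get; set; }

    private string _firstName;
    public string FirstName
    {
        get { return _firstName; }
        set
        {
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("FirstName must not be empty.");
            _firstName = value;
        }
    }
}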
174,239 | I have inherited a c# class 'Button' (which I can't change) which clashes with the BCL class 'Windows.Forms.Button'. Normally, Id be very happy to go: MyPackage.MyClass.Button; But there are a large number or references to this class which is a pain to have to re-type. Is there any way to get the compiler (linker?) to default to using the customised version of Button over the BCL version? | Add this to the top of the file: using MyButton = MyPackage.MyClass.Button; Now you can reference your custom button using a distinct name. You may need to do something similar for the stock button if you use that anywhere in the same file. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174239",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1816/"
]
} |
174,263 | This may be a simple one, but 5 mins of Googling didn't give me the answer. How do you save and load bitmaps using .Net libraries? I have an Image object and I need to save it to disk in some format (preferably png) and load back in later. A C# example would be great. | Here's a really simple example. Top of code file using System.Drawing; In code Image test = new Bitmap("picture.bmp");test.Save("picture.png", System.Drawing.Imaging.ImageFormat.Png); Remember to give write permissions to the ASPNET user for the folder where the image is to be saved. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1660/"
]
} |
174,292 | The array has lots of data and I need to delete two elements. Below is the code snippet I am using, my @array = (1,2,3,4,5,5,6,5,4,9);my $element_omitted = 5;@array = grep { $_ != $element_omitted } @array; | Use splice if you already know the index of the element you want to delete. Grep works if you are searching. If you need to do a lot of these, you will get much better performance if you keep your array in sorted order, since you can then do binary search to find the necessary index. If it makes sense in your context, you may want to consider using a "magic value" for deleted records, rather than deleting them, to save on data movement -- set deleted elements to undef, for example. Naturally, this has its own issues (if you need to know the number of "live" elements, you need to keep track of it separately, etc), but may be worth the trouble depending on your application. Edit Actually now that I take a second look -- don't use the grep code above. It would be more efficient to find the index of the element you want to delete, then use splice to delete it (the code you have accumulates all the non-matching results..) my $index = 0;$index++ until $arr[$index] eq 'foo';splice(@arr, $index, 1); That will delete the first occurrence. Deleting all occurrences is very similar, except you will want to get all indexes in one pass: my @del_indexes = grep { $arr[$_] eq 'foo' } 0..$#arr; The rest is left as an exercise for the reader -- remember that the array changes as you splice it! Edit2 John Siracusa correctly pointed out I had a bug in my example.. fixed, sorry about that. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/174292",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21246/"
]
} |
174,309 | I'm building an interface much like the built-in Weather application's flipside view, or the Alarms view of the Clock application in editing mode. The table view is always in editing mode, so the delete icon appears on the left side of each cell. When the table view is in editing mode, my delegate doesn't receive didSelectRowAtIndexPath notifications. It receives accessoryButtonTappedForRowWithIndexPath notifications, but that's not what I want to do. I want my rows to stay selectable, even when the table view is in editing mode. Any ideas on how I can accomplish this? Thanks, P.S. Hooray for the lifted NDA. =) | Set table.allowsSelectionDuringEditing to YES . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174309",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2119/"
]
} |
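The corresponding one-liner in context, assuming a UITableViewController subclass; with this set, tableView:didSelectRowAtIndexPath: is delivered even while the table stays in editing mode.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.tableView.allowsSelectionDuringEditing = YES;
    [self.tableView setEditing:YES animated:NO];
}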
174,319 | What's the difference between the Enabled and the ReadOnly-properties of an asp:TextBox control? | If a control is disabled it cannot be edited and its content is excluded when the form is submitted. If a control is readonly it cannot be edited, but its content (if any) is still included with the submission. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/174319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11682/"
]
} |
174,322 | I would like to know how much disk space a directory is going to consume before I bring it over from the Perforce server. I don't see any way to do this other than getting the files and looking at the size of the directory in a file manager. This, of course, defeats the purpose. Is there a way to get file size info from Perforce without actually getting the files? | I don't know how I missed this command, but here's how you do it: p4 sizes -s //depot/directory/... | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174322",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4228/"
]
} |
174,348 | Will content requested over https still be cached by web browsers or do they consider this insecure behaviour? If this is the case is there anyway to tell them it's ok to cache? | By default web browsers should cache content over HTTPS the same as over HTTP, unless explicitly told otherwise via the HTTP Headers received. This link is a good introduction to setting cache setting in HTTP headers. is there anyway to tell them it's ok to cache? This can be achieved by setting the max-age value in the Cache-Control header to a non-zero value, e.g. Cache-Control: max-age=3600 will tell the browser that this page can be cached for 3600 seconds (1 hour) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21030/"
]
} |
174,349 | By default, in C++, a single-argument constructor can be used as an implicit conversion operator. This can be suppressed by marking the constructor as explicit. I'd prefer to make "explicit" be the default, so that the compiler cannot silently use these constructors for conversion. Is there a way to do this in standard C++? Failing that, is there a pragma (or similar) that'll work in Microsoft C++ to do this? What about g++ (we don't use it, but it might be useful information)? | Nope, you have to do it all by hand. It's a pain, but you certainly should get in the habit of making single argument constructors explicit. I can't imagine the pain you would have if you did find a solution and then had to port the code to another platform. You should usually shy away from compiler extensions like this because it will make the code less portable. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8446/"
]
} |
174,356 | I'm tidying up some older code that uses 'magic numbers' all over the place to set hardware registers, and I would like to use constants instead of these numbers to make the code somewhat more expressive (in fact they will map to the names/values used to document the registers). However, I'm concerned that with the volume of changes I might break the magic numbers. Here is a simplified example (the register set is more complex): const short mode0 = 0;const short mode1 = 1;const short mode2 = 2;const short state0 = 0;const short state1 = 4;const short state2 = 8; so instead of : set_register(5); we have: set_register(state1|mode1); What I'm looking for is a build time version of: ASSERT(5==(state1|mode1)); Update @Christian, thanks for the quick response, I'm interested on a C / non-boost environment answer too because this is driver/kernel code. | NEW ANSWER : In my original answer (below), I had to have two different macros to support assertions in a function scope and at the global scope. I wondered if it was possible to come up with a single solution that would work in both scopes. I was able to find a solution that worked for Visual Studio and Comeau compilers using extern character arrays. But I was able to find a more complex solution that works for GCC. But GCC's solution doesn't work for Visual Studio. :( But adding a '#ifdef __ GNUC __', it's easy to choose the right set of macros for a given compiler. Solution: #ifdef __GNUC__#define STATIC_ASSERT_HELPER(expr, msg) \ (!!sizeof \ (struct { unsigned int STATIC_ASSERTION__##msg: (expr) ? 1 : -1; }))#define STATIC_ASSERT(expr, msg) \ extern int (*assert_function__(void)) [STATIC_ASSERT_HELPER(expr, msg)]#else #define STATIC_ASSERT(expr, msg) \ extern char STATIC_ASSERTION__##msg[1]; \ extern char STATIC_ASSERTION__##msg[(expr)?1:2]#endif /* #ifdef __GNUC__ */ Here are the error messages reported for STATIC_ASSERT(1==1, test_message); at line 22 of test.c: GCC: line 22: error: negative width in bit-field `STATIC_ASSERTION__test_message' Visual Studio: test.c(22) : error C2369: 'STATIC_ASSERTION__test_message' : redefinition; different subscripts test.c(22) : see declaration of 'STATIC_ASSERTION__test_message' Comeau: line 22: error: declaration is incompatible with "char STATIC_ASSERTION__test_message[1]" (declared at line 22) ORIGINAL ANSWER : I do something very similar to what Checkers does. But I include a message that'll show up in many compilers: #define STATIC_ASSERT(expr, msg) \{ \ char STATIC_ASSERTION__##msg[(expr)?1:-1]; \ (void)STATIC_ASSERTION__##msg[0]; \} And for doing something at the global scope (outside a function) use this: #define GLOBAL_STATIC_ASSERT(expr, msg) \ extern char STATIC_ASSERTION__##msg[1]; \ extern char STATIC_ASSERTION__##msg[(expr)?1:2] | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174356",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4071/"
]
} |
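For what it's worth, compilers with C11 support provide this directly, which sidesteps the macro gymnastics above; the message must be a string literal, and in C the operands need to be integer constant expressions (enumerators or #defines, not const short variables), so the register constants are sketched as an enum here.

enum { mode1 = 1, state1 = 4 };  /* enumerators are integer constant expressions in C */

_Static_assert(5 == (state1 | mode1), "magic number 5 must equal state1|mode1");
/* or, after #include <assert.h>: static_assert(5 == (state1 | mode1), "..."); */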
174,365 | It seems that when I use a tool (such as winmerge) to update my codebase... my Visual Studio Team System (VSTS) integration with Team Foundation Server (TFS) doesn't seem to pick it up. How do I know which files to check out and check back in? Is there something I am missing? Is this a feature that isn't part of VSTS & TFS? | First, this is probably because the files have not yet been checked out. If you do that first before running your update, TFS will see those changes. Second, you can use TFS Power Tools (available from MS) to review local repository for changes that are not recognized. If there are found differences, power toys resets the status of the file so Pending Changes window sees the change. this does not require you to check-out the files, it will do that for you if there are differences. Pretty nifty. Power Tools for 2008 is here: http://www.microsoft.com/en-us/download/details.aspx?id=15836 and you are looking for the "Online" command: "Online Command - Use the online command to create pending edits on writable files that do not have pending edits." | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4481/"
]
} |
174,393 | This PHP code... 207 if (getenv(HTTP_X_FORWARDED_FOR)) {208 $ip = getenv('HTTP_X_FORWARD_FOR');209 $host = gethostbyaddr($ip);210 } else {211 $ip = getenv('REMOTE_ADDR');212 $host = gethostbyaddr($ip);213 } Throws this warning... Warning: gethostbyaddr() [function.gethostbyaddr]: Address is not in a.b.c.d form in C:\inetpub...\filename.php on line 212 It seems that $ip is blank. | on php.net it says the following: The function getenv does not work if your Server API is ASAPI (IIS). So, try to don't use getenv('REMOTE_ADDR') , but $_SERVER["REMOTE_ADDR"] . Did you maybe try to do it with $_SERVER ? | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174393",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/83/"
]
} |
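A possible rewrite of the snippet from the question using $_SERVER, which also irons out the HTTP_X_FORWARDED_FOR / HTTP_X_FORWARD_FOR spelling mismatch that leaves $ip empty. Note that X-Forwarded-For is client-supplied and may hold a comma-separated list, so it should not be trusted blindly.

if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    // may contain "client, proxy1, proxy2" - take the first entry
    $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $ip = trim($parts[0]);
} else {
    $ip = $_SERVER['REMOTE_ADDR'];
}
$host = ($ip !== '') ? gethostbyaddr($ip) : '';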
174,403 | In my vb.net program, I am using a webbrowser to show the user an HTML preview. I was previously hitting a server to grab the HTML, then returning on an asynchronous thread and raising an event to populate the WebBrowser.DocumentText with the HTML string I was returning. Now I set it up to grab all of the information on the client, without ever having to hit the server, and I'm trying to raise the same event. I watch the code go through, and it has the HTML string correct and everything, but when I try to do browser.DocumentText = _emailHTML the contents of DocumentText remain as " <HTML></HTML> " I was just wondering why the DocumentText was not being set. Anyone have any suggestions? | Try the following: browser.Navigate("about:blank");HtmlDocument doc = browser.Document;doc.Write(String.Empty);browser.DocumentText = _emailHTML; I've found that the WebBrowser control usually needs to be initialized to about:blank anyway. The same needs to be done between navigates to different types of content (like text/xml to text/html) because the renderer is different (mshtml for text/html, something else for text/xml). See Also : C# 2.0 WebBrowser control - bug in DocumentText? | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/174403",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13244/"
]
} |
174,430 | I decided to use log4net as a logger for a new webservice project. Everything is working fine, but I get a lot of messages like the one below, for every log4net tag I am using in my web.config : Could not find schema information for the element 'log4net'... Below are the relevant parts of my web.config : <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="C:\log.txt" /> <appendToFile value="true" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="10" /> <maximumFileSize value="100KB" /> <staticLogFileName value="true" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread] %-5level: %message%newline" /> </layout> </appender> <logger name="TIMServerLog"> <level value="DEBUG" /> <appender-ref ref="RollingFileAppender" /> </logger> </log4net> Solved: Copy every log4net specific tag to a separate xml -file. Make sure to use .xml as file extension. Add the following line to AssemblyInfo.cs : [assembly: log4net.Config.XmlConfigurator(ConfigFile = "xmlFile.xml", Watch = true)] nemo added: Just a word of warning to anyone follow the advice of the answers in this thread. There is a possible security risk by having the log4net configuration in an xml off the root of the web service, as it will be accessible to anyone by default. Just be advised if your configuration contains sensitive data, you may want to put it else where. @wcm: I tried using a separate file. I added the following line to AssemblyInfo.cs [assembly: log4net.Config.XmlConfigurator(ConfigFile = "log4net.config", Watch = true)] and put everything dealing with log4net in that file, but I still get the same messages. | You can bind in a schema to the log4net element. There are a few floating around, most do not fully provide for the various options available. I created the following xsd to provide as much verification as possible: http://csharptest.net/downloads/schema/log4net.xsd You can bind it into the xml easily by modifying the log4net element: <log4net xsi:noNamespaceSchemaLocation="http://csharptest.net/downloads/schema/log4net.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174430",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11387/"
]
} |
174,449 | Unquestionably, I would choose to use the STL for most C++ programming projects. The question was presented to me recently however, "Are there any cases where you wouldn't use the STL?"... The more I thought about it, the more I realized that perhaps there SHOULD be cases where I choose not to use the STL... For example, a really large, long term project whose codebase is expected to last years... Perhaps a custom container solution that precisely fits the projects needs is worth the initial overhead? What do you think, are there any cases where you would choose NOT to STL? | The main reasons not to use STL are that: Your C++ implementation is old and has horrible template support. You can't use dynamic memory allocation. Both are very uncommon requirements in practice. For a longterm project rolling your own containers that overlap in functionality with the STL is just going to increase maintenance and development costs. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174449",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3886/"
]
} |
174,498 | I am coming from Java and am currently working on a C# project. What is the recommended way to go about a) unit testing existing C# code and b) accomplishing TDD for C# development? Also, is there an equivalent to EMMA / EclEmma (free yet powerful code coverage tool) for Visual Studio and C# code? | 1. NUnit 2. NCover or 3. PartCover (I never used it) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6583/"
]
} |
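A minimal NUnit sketch to make the answer above concrete (the Calculator class and its test are hypothetical examples; assumes a project reference to nunit.framework):

    using NUnit.Framework;

    // Hypothetical class under test.
    public class Calculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_ReturnsSumOfOperands()
        {
            var calc = new Calculator();
            Assert.AreEqual(5, calc.Add(2, 3));
        }
    }

NCover or PartCover can then be run over the NUnit test execution to get EMMA-style coverage figures.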
174,502 | Seeing as Java doesn't have nullable types, nor does it have a TryParse(), how do you handle input validation without throwing an exception? The usual way: String userdata = /*value from gui*/int val;try{ val = Integer.parseInt(userdata);}catch (NumberFormatException nfe){ // bad data - set to sentinel val = Integer.MIN_VALUE;} I could use a regex to check if it's parseable, but that seems like a lot of overhead as well. What's the best practice for handling this situation? EDIT: Rationale: There's been a lot of talk on SO about exception handling, and the general attitude is that exceptions should be used for unexpected scenarios only. However, I think bad user input is EXPECTED, not rare. Yes, it really is an academic point. Further Edits: Some of the answers demonstrate exactly what is wrong with SO. You ignore the question being asked, and answer another question that has nothing to do with it. The question isn't asking about transition between layers. The question isn't asking what to return if the number is un-parseable. For all you know, val = Integer.MIN_VALUE; is exactly the right option for the application that this completely context-free code snippet was taken from. | That's pretty much it, although returning MIN_VALUE is kind of questionable, unless you're sure it's the right thing to use for what you're essentially using as an error code. At the very least I'd document the error code behavior, though. Might also be useful (depending on the application) to log the bad input so you can trace. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174502",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18907/"
]
} |
174,531 | What is the simplest way (least error-prone, least lines of code, however you want to interpret it) to open a file in C and read its contents into a string (char*, char[], whatever)? | I tend to just load the entire buffer as a raw memory chunk into memory and do the parsing on my own. That way I have best control over what the standard lib does on multiple platforms. This is a stub I use for this. you may also want to check the error-codes for fseek, ftell and fread. (omitted for clarity). char * buffer = 0;long length;FILE * f = fopen (filename, "rb");if (f){ fseek (f, 0, SEEK_END); length = ftell (f); fseek (f, 0, SEEK_SET); buffer = malloc (length); if (buffer) { fread (buffer, 1, length, f); } fclose (f);}if (buffer){ // start to process your data / extract strings here...} | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174531",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/422/"
]
} |
174,532 | I recently inherited a database on which one of the tables has the primary key composed of encoded values (Part1*1000 + Part2). I normalized that column, but I cannot change the old values. So now I have select ID from table order by ID ID 100001 100002 101001 ... I want to find the "holes" in the table (more precisely, the first "hole" after 100000) for new rows. I'm using the following select, but is there a better way to do that? select /* top 1 */ ID+1 as newID from table where ID > 100000 and ID + 1 not in (select ID from table) order by ID newID 100003 101029 ... The database is Microsoft SQL Server 2000. I'm ok with using SQL extensions. | select ID +1 From Table t1 where not exists (select * from Table t2 where t1.id +1 = t2.id); not sure if this version would be faster than the one you mentioned originally. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25324/"
]
} |
174,560 | How can you depend on test code from another module in Maven? Example, I have 2 modules: Base Main I would like a test case in Main to extend a base test class in Base. Is this possible? Update: Found an acceptable answer , which involves creating a test jar. | I recommend using type instead of classifier (see also: classifier ). It tells Maven a bit more explicitly what you are doing (and I've found that m2eclipse and q4e both like it better). <dependency> <groupId>com.myco.app</groupId> <artifactId>foo</artifactId> <version>1.0-SNAPSHOT</version> <type>test-jar</type> <scope>test</scope></dependency> | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/174560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12880/"
]
} |
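For completeness, the module that owns the shared test code (Base, in the question) also has to attach a test-jar artifact before Main can depend on it — a sketch of the usual maven-jar-plugin configuration in Base's pom.xml:

    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jar-plugin</artifactId>
          <executions>
            <execution>
              <goals>
                <goal>test-jar</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>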
174,582 | If I wish to simply rename a column (not change its type or constraints, just its name) in an SQL database using SQL, how do I do that? Or is it not possible? This is for any database claiming to support SQL, I'm simply looking for an SQL-specific query that will work regardless of actual database implementation. | On PostgreSQL (and many other RDBMS), you can do it with regular ALTER TABLE statement: => SELECT * FROM Test1; id | foo | bar ----+-----+----- 2 | 1 | 2=> ALTER TABLE Test1 RENAME COLUMN foo TO baz;ALTER TABLE=> SELECT * FROM Test1; id | baz | bar ----+-----+----- 2 | 1 | 2 | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8026/"
]
} |
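A hedged addendum to the answer above: ALTER TABLE ... RENAME COLUMN is not universal, so the exact statement varies by engine — for example (table and column names follow the answer's example; the MySQL type is illustrative):

    -- SQL Server
    EXEC sp_rename 'Test1.foo', 'baz', 'COLUMN';

    -- MySQL (CHANGE requires restating the column definition)
    ALTER TABLE Test1 CHANGE foo baz INTEGER;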
174,595 | What is the difference between ROWNUM and ROW_NUMBER ? | ROWNUM is a "pseudocolumn" that assigns a number to each row returned by a query: SQL> select rownum, ename, deptno 2 from emp; ROWNUM ENAME DEPTNO---------- ---------- ---------- 1 SMITH 99 2 ALLEN 30 3 WARD 30 4 JONES 20 5 MARTIN 30 6 BLAKE 30 7 CLARK 10 8 SCOTT 20 9 KING 10 10 TURNER 30 11 FORD 20 12 MILLER 10 ROW_NUMBER is an analytic function that assigns a number to each row according to its ordering within a group of rows: SQL> select ename, deptno, row_number() over (partition by deptno order by ename) rn 2 from emp;ENAME DEPTNO RN---------- ---------- ----------CLARK 10 1KING 10 2MILLER 10 3FORD 20 1JONES 20 2SCOTT 20 3ALLEN 30 1BLAKE 30 2MARTIN 30 3TURNER 30 4WARD 30 5SMITH 99 1 | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9581/"
]
} |
174,600 | In SQL Server 2005, is there a way for a trigger to find out what object is responsible for firing the trigger? I would like to use this to disable the trigger for one stored procedure. Is there any other way to disable the trigger only for the current transaction? I could use the following code, but if I'm not mistaken, it would affect concurrent transactions as well - which would be a bad thing. DISABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL } ON { object_name | DATABASE | ALL SERVER } [ ; ]ENABLE TRIGGER { [ schema_name . ] trigger_name [ ,...n ] | ALL } ON { object_name | DATABASE | ALL SERVER } [ ; ] If possible, I would like to avoid the technique of having a "NoTrigger" field in my table and doing a NoTrigger = null , because I would like to keep the table as small as possible. The reason I would like to avoid the trigger is because it contains logic that is important for manual updates to the table, but my stored procedure will take care of this logic. Because this will be a highly used procedure, I want it to be fast. Triggers impose additional overhead on the server because they initiate an implicit transaction. As soon as a trigger is executed, a new implicit transaction is started, and any data retrieval within a transaction will hold locks on affected tables. From: http://searchsqlserver.techtarget.com/tip/1,289483,sid87_gci1170220,00.html#trigger | I just saw this article recently highlighted on the SQL Server Central newsletter and it appears to offer a way which you may find useful using the Context_Info on the connection: http://www.mssqltips.com/tip.asp?tip=1591 EDIT by Terrapin: The above link includes the following code: USE AdventureWorks; GO -- creating the table in AdventureWorks database IF OBJECT_ID('dbo.Table1') IS NOT NULL DROP TABLE dbo.Table1 GO CREATE TABLE dbo.Table1(ID INT) GO -- Creating a trigger CREATE TRIGGER TR_Test ON dbo.Table1 FOR INSERT,UPDATE,DELETE AS DECLARE @Cinfo VARBINARY(128) SELECT @Cinfo = Context_Info() IF @Cinfo = 0x55555 RETURN PRINT 'Trigger Executed' -- Actual code goes here -- For simplicity, I did not include any code GO If you want to prevent the trigger from being executed you can do the following: SET Context_Info 0x55555 INSERT dbo.Table1 VALUES(100) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/174600",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/357/"
]
} |
174,653 | I'd like to use the Wiimote (accelerometers, gyroscopes, infrared camera, etc, etc, etc) on various applications. It's a bluetooth device, and I know others have connected it to their computer. What's the easiest way to start using it in my software - are there libraries for C#, for instance? I want my software to be usable and easily installable - what's the current easiest way to connect a wiimote to the computer? Can I make that process part of my software installation? -Adam | Have you seen Johnny Chung Lee's 'Procrastineering' Blog ? He's written a lot on the subject of using wii remotes and has some fantastic demonstration videos. [Edit] I just found out Mr Lee did a TED talk which gives a good introduction to the stuff he's done too... There's a wealth of information over on Wiibrew.org - check out their Wiimote Library page for some other APIs if you want to look beyond c#. As an avid Python fan, I'm quite curious to have a play with the pyWiimote library :-) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2915/"
]
} |
174,659 | I'm writing a little tile-based game, for which I'd like to support light sources. But my algorithm-fu is too weak, hence I come to you for help. The situation is like this: There is a tile-based map (held as a 2D array), containing a single light source and several items standing around. I want to calculate which tiles are lit up by the light source, and which are in shadow. A visual aid of what it would look like, approximately. The L is the light source, the Xs are items blocking the light, the 0s are lit tiles, and the -s are tiles in shadow. 0 0 0 0 0 0 - - 00 0 0 0 0 0 - 0 00 0 0 0 0 X 0 0 00 0 0 0 0 0 0 0 00 0 0 0 L 0 0 0 00 0 0 0 0 0 0 0 00 0 0 X X X X 0 00 0 0 - - - - - 00 0 - - - - - - - A fractional system would be even better, of course, where a tile can be in half-shadow due to being partially obscured. The algorithm wouldn't have to be perfect - just not obviously wrong and reasonably fast. (Of course, there would be multiple light sources, but that's just a loop.) Any takers? | The roguelike development community has a bit of an obsession with line-of-sight, field-of-view algorithms. Here's a link to a roguelike wiki article on the subject: http://roguebasin.roguelikedevelopment.org/index.php?title=Field_of_Vision For my roguelike game, I implemented a shadow casting algorithm ( http://roguebasin.roguelikedevelopment.org/index.php?title=Shadow_casting ) in Python. It was a bit complicated to put together, but ran reasonably efficiently (even in pure Python) and generated nice results. The "Permissive Field of View" seems to be gaining popularity as well: http://roguebasin.roguelikedevelopment.org/index.php?title=Permissive_Field_of_View | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15255/"
]
} |
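As a rougher alternative to the shadow-casting algorithms linked in the answer above, a per-tile Bresenham line-of-sight test is easy to drop into a small game. A sketch in C# (the blocked[,] grid and tile conventions are assumptions, and this gives only binary lit/shadow results, not partial shadow):

    using System;

    static class TileLighting
    {
        // blocked[x, y] marks tiles that stop light; returns whether tile (tx, ty) is lit by a light at (lx, ly).
        public static bool IsLit(bool[,] blocked, int lx, int ly, int tx, int ty)
        {
            int dx = Math.Abs(tx - lx), dy = Math.Abs(ty - ly);
            int sx = lx < tx ? 1 : -1, sy = ly < ty ? 1 : -1;
            int err = dx - dy, x = lx, y = ly;
            while (x != tx || y != ty)   // step along a Bresenham line toward the target
            {
                int e2 = 2 * err;
                if (e2 > -dy) { err -= dy; x += sx; }
                if (e2 < dx) { err += dx; y += sy; }
                if ((x != tx || y != ty) && blocked[x, y]) return false;   // occluder strictly between light and target
            }
            return true;
        }
    }

Calling IsLit for every tile costs roughly width * height * distance steps, which is fine for small maps but is exactly the repeated work the shadow-casting algorithms avoid.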
174,664 | I need to evaluate a mathematical expression that is presented to me as a string in C#. The example is noddy but gets the point across that the string is the expression. I need the evaluation to then populate an int. There is no Eval() in C# like in other languages... String myString = "3*4"; Edit: I am on VS2008 Tried the Microsoft.JScript route. It's a deprecated method (but still compiles - warning) However the Microsoft.JScript dll that I have doesn't work on public object InvokeMember(string name, BindingFlags invokeAttr, Binder binder, object target, object[] args); Complains that there is a missing ";" go figure... EDIT 2 Solution - was the CodeDom one - it worked, as there are no security issues - only me ever going to be running the code. Many thanks for the replies ... And the link to the new Dragon Book is awesome EDIT 3 Matt's DataTable.Compute() also works - even better for the security conscious. (parameter checking noted) | The way I see it, you have two options - use an expression evaluator or construct, compile and run C# code on the fly. I would go with an expression evaluator library, as you do not have to worry about any security issues. That is, you might not be able to use code generation in medium trust environments, such as most shared hosting servers. Here is an example for generating code to evaluate expressions: http://www.vbforums.com/showthread.php?t=397264 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174664",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
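A small sketch of the DataTable.Compute() route the asker settled on in EDIT 3 (note that Compute only understands the limited expression syntax of DataColumn expressions, so it is not a general evaluator):

    using System;
    using System.Data;

    class Program
    {
        static void Main()
        {
            string myString = "3*4";
            // Compute evaluates the expression; the second argument is a row filter, unused here.
            object result = new DataTable().Compute(myString, null);
            int value = Convert.ToInt32(result);
            Console.WriteLine(value);   // prints 12
        }
    }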
174,727 | Oracle FAQ defines temp table space as follows: Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if you join two large tables, and Oracle cannot do the sort in memory, space will be allocated in a temporary tablespace for doing the sort operation. That's great, but I need more detail about what exactly is using the space. Due to quirks of the application design most queries do some kind of sorting, so I need to narrow it down to client executable, target table, or SQL statement. Essentially, I'm looking for clues to tell me more precisely what might be wrong with this (rather large application). Any sort of clue might be useful, so long as it is more precise than "sorting". | I'm not sure exactly what information you have to hand already, but using the following query will point out which program/user/sessions etc are currently using your temp space. SELECT b.TABLESPACE , b.segfile# , b.segblk# , ROUND ( ( ( b.blocks * p.VALUE ) / 1024 / 1024 ), 2 ) size_mb , a.SID , a.serial# , a.username , a.osuser , a.program , a.status FROM v$session a , v$sort_usage b , v$process c , v$parameter p WHERE p.NAME = 'db_block_size' AND a.saddr = b.session_addr AND a.paddr = c.addrORDER BY b.TABLESPACE , b.segfile# , b.segblk# , b.blocks; Once you find out which session is doing the damage, then have a look at the SQL being executed, and you should be on the right path. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174727",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13935/"
]
} |
174,730 | Given a credit card number and no additional information, what is the best way in PHP to determine whether or not it is a valid number? Right now I need something that will work with American Express, Discover, MasterCard, and Visa, but it might be helpful if it will also work with other types. | There are three parts to the validation of the card number: PATTERN - does it match an issuers pattern (e.g. VISA/Mastercard/etc.) CHECKSUM - does it actually check-sum (e.g. not just 13 random numbers after "34" to make it an AMEX card number) REALLY EXISTS - does it actually have an associated account (you are unlikely to get this without a merchant account) Pattern MASTERCARD Prefix=51-55, Length=16 (Mod10 checksummed) VISA Prefix=4, Length=13 or 16 (Mod10) AMEX Prefix=34 or 37, Length=15 (Mod10) Diners Club/Carte Prefix=300-305, 36 or 38, Length=14 (Mod10) Discover Prefix=6011,622126-622925,644-649,65, Length=16, (Mod10) etc. ( detailed list of prefixes ) Checksum Most cards use the Luhn algorithm for checksums: Luhn Algorithm described on Wikipedia There are links to many implementations on the Wikipedia link, including PHP: <?/* Luhn algorithm number checker - (c) 2005-2008 shaman - www.planzero.org * * This code has been released into the public domain, however please * * give credit to the original author where possible. */function luhn_check($number) { // Strip any non-digits (useful for credit card numbers with spaces and hyphens) $number=preg_replace('/\D/', '', $number); // Set the string length and parity $number_length=strlen($number); $parity=$number_length % 2; // Loop through each digit and do the maths $total=0; for ($i=0; $i<$number_length; $i++) { $digit=$number[$i]; // Multiply alternate digits by two if ($i % 2 == $parity) { $digit*=2; // If the sum is two digits, add them together (in effect) if ($digit > 9) { $digit-=9; } } // Total up the digits $total+=$digit; } // If the total mod 10 equals 0, the number is valid return ($total % 10 == 0) ? TRUE : FALSE;}?> | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174730",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18986/"
]
} |
174,773 | I have TurtoiseSVN and ankhSVN installed. I created a repository on my computer.. "C:\Documents and Settings\user1\My Documents\Subversion\Repository\" I am trying to connect to this repository from my co-workers computer. What should this URL be? Any help would be great. Thanks. | You will need to run the svnserve daemon on your computer, or run an apache server with the necessary modules, to allow your colleague to access this locally stored repository. For a simple case like this I would recommend svnserve, it should be simpler to configure and run. The url would then be: svn://<your_ip>/<repository_name> As opposed to an http or file protocol URL for apache and local filesystem based repositories. Read this page for details on how to set up svnserve it on Windows: http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-serversetup-svnserve.html | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174773",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1316/"
]
} |
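For reference, starting svnserve for the repository in the question usually looks something like the following (run on the machine that hosts the repository; the -r root is the folder that contains the Repository directory):

    svnserve -d -r "C:\Documents and Settings\user1\My Documents\Subversion"

The co-worker would then check out from svn://<your_ip>/Repository, matching the URL form given in the answer.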
174,841 | Why is it that when I use a converter in my binding expression in WPF, the value is not updated when the data is updated. I have a simple Person data model: class Person : INotifyPropertyChanged{ public string FirstName { get; set; } public string LastName { get; set; }} My binding expression looks like this: <TextBlock Text="{Binding Converter={StaticResource personNameConverter}" /> My converter looks like this: class PersonNameConverter : IValueConverter{ public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { Person p = value as Person; return p.FirstName + " " + p.LastName; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { throw new NotImplementedException(); }} If I bind the data without a converter it works great: <TextBlock Text="{Binding Path=FirstName}" /><TextBlock Text="{Binding Path=LastName}" /> What am I missing? EDIT:Just to clarify a few things, both Joel and Alan are correct regarding the INotifyPropertyChanged interface that needs to be implemented. In reality I do actually implement it but it still doesn't work. I can't use multiple TextBlock elements because I'm trying to bind the Window Title to the full name, and the Window Title does not take a template. Finally, it is an option to add a compound property "FullName" and bind to it, but I'm still wondering why updating does not happen when the binding uses a converter. Even when I put a break point in the converter code, the debugger just doesn't get there when an update is done to the underlying data :-( Thanks,Uri | (see edits below; latest: #2) It isn't updating because your Person object is not capable of notifying anything that the value of FirstName or LastName has changed. See this Question . And here's how you implement INotifyPropertyChanged . ( Updated, see Edit 2 ) using System.ComponentModel;class Person : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; string _firstname; public string FirstName { get { return _firstname; } set { _firstname = value; onPropertyChanged( "FirstName", "FullName" ); } } string _lastname; public string LastName { get { return _lastname; } set { _lastname = value; onPropertyChanged( "LastName", "FullName" ); } } public string FullName { get { return _firstname + " " + _lastname; } } void onPropertyChanged( params string[] propertyNames ) { PropertyChangedEventHandler handler = PropertyChanged; if ( handler != null ) { foreach ( var pn in propertyNames ) { handler( this, new PropertyChangedEventArgs( pn ) ); } } }} Edit 1 Actually, since you're after the first name and last name updating, and Path=FirstName and such works just fine, I don't think you'll need the converter at all. Multiple TextBlocks are just as valid, and can actually work better when you're localizing to a right-to-left language. Edit 2 I've figured it out. It's not being notified that the properties have updated because it is binding to the object itself, not one of those properties. Even when I made Person a DependencyObject and made FirstName and LastName DependencyProperties , it wouldn't update. You will have to use a FullName property, and I've update the code of the Person class above to reflect that. Then you can bind the Title . ( Note: I've set the Person object as the Window 's DataContext .) 
Title="{Binding Path=FullName, Mode=OneWay}" If you're editing the names in a TextBox and want the name changed reflected immediately instead of when the TextBox loses focus, you can do this: <TextBox Name="FirstNameEdit" Text="{Binding Path=FirstName, UpdateSourceTrigger=PropertyChanged}" /> I know you didn't want to use a FullName property, but anything that would accomplish what you want would probably be a bit of a Rube Goldberg device. Such as implementing INotifyPropertyChanged and a Person property on the Window class itself, having the Window listen on the PropertyChanged event in order to fire the Window 's PropertyChanged event, and using a relative binding like the following. You'd also have set the Person property before InitializeComponent() or fire PropertyChanged after setting the Person property so that it shows up, of course. (Otherwise it will be null during InitializeComponent() and needs to know when it's a Person .) <Window.Resources> <loc:PersonNameConverter x:Key="conv" /></Window.Resources><Window.Title> <Binding RelativeSource="{RelativeSource Self}" Converter="{StaticResource conv}" Path="Person" Mode="OneWay" /></Window.Title> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174841",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/373/"
]
} |
174,853 | A Python script needs to spawn multiple sub-processes via fork(). All of those child processes should run simultaneously and the parent process should be waiting for all of them to finish. Having an ability to set some timeout on a "slow" child would be nice. The parent process goes on processing the rest of the script after all kids are collected. What is the best way to work it out? Thanks. | Simple example: import os children = []for job in jobs: child = os.fork() if child: children.append(child) else: pass # really should exec the job for child in children: os.waitpid(child, 0) Timing out a slow child is a little more work; you can use wait instead of waitpid, and cull the returned values from the list of children, instead of waiting on each one in turn (as here). If you set up an alarm with a SIGALRM handler, you can terminate the waiting after a specified delay. This is all standard UNIX stuff, not Python-specific... | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/140995/"
]
} |
174,890 | I've discovered that cElementTree is about 30 times faster than xml.dom.minidom and I'm rewriting my XML encoding/decoding code. However, I need to output XML that contains CDATA sections and there doesn't seem to be a way to do that with ElementTree. Can it be done? | After a bit of work, I found the answer myself. Looking at the ElementTree.py source code, I found there was special handling of XML comments and preprocessing instructions. What they do is create a factory function for the special element type that uses a special (non-string) tag value to differentiate it from regular elements. def Comment(text=None): element = Element(Comment) element.text = text return element Then in the _write function of ElementTree that actually outputs the XML, there's a special case handling for comments: if tag is Comment: file.write("<!-- %s -->" % _escape_cdata(node.text, encoding)) In order to support CDATA sections, I create a factory function called CDATA , extended the ElementTree class and changed the _write function to handle the CDATA elements. This still doesn't help if you want to parse an XML with CDATA sections and then output it again with the CDATA sections, but it at least allows you to create XMLs with CDATA sections programmatically, which is what I needed to do. The implementation seems to work with both ElementTree and cElementTree. import elementtree.ElementTree as etree#~ import cElementTree as etreedef CDATA(text=None): element = etree.Element(CDATA) element.text = text return elementclass ElementTreeCDATA(etree.ElementTree): def _write(self, file, node, encoding, namespaces): if node.tag is CDATA: text = node.text.encode(encoding) file.write("\n<![CDATA[%s]]>\n" % text) else: etree.ElementTree._write(self, file, node, encoding, namespaces)if __name__ == "__main__": import sys text = """ <?xml version='1.0' encoding='utf-8'?> <text> This is just some sample text. </text> """ e = etree.Element("data") cdata = CDATA(text) e.append(cdata) et = ElementTreeCDATA(e) et.write(sys.stdout, "utf-8") | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174890",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15109/"
]
} |
174,916 | I'm specifying my doctype as xhtml strict, but it's being sent over the wire as a content type of text/html. I'd like to specify that the content type is application/xhtml+xml, but I can't figure out where, or if, I can configure this from within my application | You can specify it in the @ page attributes section, like this: <%@ Page ContentType="application/xhtml+xml" %> ...more on MSDN. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/174916",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
174,933 | How can I get a single keyboard character from the terminal with Ruby without pressing enter? I tried Curses::getch, but that didn't really work for me. | http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/2999 #!/usr/bin/ruby begin system("stty raw -echo") str = STDIN.getc ensure system("stty -raw echo") end p str.chr (Tested on my OS X system, may not be portable to all Ruby platforms). See http://www.rubyquiz.com/quiz5.html for some additional suggestions, including for Windows. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/174933",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/25017/"
]
} |
174,942 | A colleague once told me that the last option when everything has failed to debug on Linux was to use strace . I tried to learn the science behind this strange tool, but I am not a system admin guru and I didn’t really get results. So, What is it exactly and what does it do? How and in which cases should it be used? How should the output be understood and processed? In brief, in simple words , how does this stuff work? | Strace Overview strace can be seen as a light weight debugger. It allows a programmer / user to quickly find out how a program is interacting with the OS. It does this by monitoring system calls and signals. Uses Good for when you don't have source code or don't want to be bothered to really go through it. Also, useful for your own code if you don't feel like opening up GDB, but are just interested in understanding external interaction. A good little introduction Here is a gentle introduction to using strace to debug process hangs: strace introduction | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/174942",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9951/"
]
} |
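A few representative strace invocations to make the usage described above concrete (program names and the PID are placeholders):

    strace ls                            # print every system call the command makes
    strace -o trace.log ./myprog         # write the trace to a file instead of stderr
    strace -p 1234                       # attach to an already-running process
    strace -c ./myprog                   # summarise call counts and time per system call
    strace -e trace=open,read ./myprog   # only show the listed system calls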
174,968 | Routines can have parameters, that's no news. You can define as many parameters as you may need, but too many of them will make your routine difficult to understand and maintain. Of course, you could use a structured variable as a workaround: putting all those variables in a single struct and passing it to the routine. In fact, using structures to simplify parameter lists is one of the techniques described by Steve McConnell in Code Complete . But as he says: Careful programmers avoid bundling data any more than is logically necessary. So if your routine has too many parameters or you use a struct to disguise a big parameter list, you're probably doing something wrong. That is, you're not keeping coupling loose. My question is, when can I consider a parameter list too big? I think that more than 5 parameters, are too many. What do you think? | When is something considered so obscene as to be something that can be regulated despite the 1st Amendment guarantee to free speech? According to Justice Potter Stewart, "I know it when I see it." The same holds here. I hate making hard and fast rules like this because the answer changes not only depending on the size and scope of your project, but I think it changes even down to the module level. Depending on what your method is doing, or what the class is supposed to represent, it's quite possible that 2 arguments is too many and is a symptom of too much coupling. I would suggest that by asking the question in the first place, and qualifying your question as much as you did, that you really know all of this. The best solution here is not to rely on a hard and fast number, but instead look towards design reviews and code reviews among your peers to identify areas where you have low cohesion and tight coupling. Never be afraid to show your colleagues your work. If you are afraid to, that's probably the bigger sign that something is wrong with your code, and that you already know it . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/174968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1679/"
]
} |