source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
126,524 | I could swear I've seen the function (or method) that takes a list, like this [3, 7, 19] and makes it into an iterable list of tuples, like so: [(0,3), (1,7), (2,19)] to use it instead of: for i in range(len(name_of_list)): name_of_list[i] = something but I can't remember the name and googling "iterate list" gets nothing. | >>> a = [3,4,5,6]
>>> for i, val in enumerate(a):
...     print i, val
...
0 3
1 4
2 5
3 6
>>> | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/126524",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/37141/"
]
} |
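A minimal, self-contained sketch of the enumerate() pattern described in the row above, using Python 3 syntax rather than the Python 2 print statement shown in the original answer; the list contents are illustrative only.

```python
# Iterate with an index without calling range(len(...)).
values = [3, 7, 19]

for i, val in enumerate(values):
    print(i, val)        # 0 3 / 1 7 / 2 19

# enumerate() also covers the in-place update case from the question.
for i, val in enumerate(values):
    values[i] = val * 2  # replaces name_of_list[i] = something

print(values)            # [6, 14, 38]
```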
126,562 | I want to replace a single file inside an MSI. How do I do it? | Use msi2xml . This command extracts the MSI files:

msi2xml -c OutputDir TestMSI.MSI

Open OutputDir and modify the file. To rebuild the MSI run:

xml2msi.exe -m TestMSI.xml

You need the -m to ignore the 'MD5 checksum test' that fails when an MSI's files are modified. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/126562",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5978/"
]
} |
126,611 | I am not too familiar with .NET desktop applications (using Visual Studio 2005 ). Is it possible to have the entire application run from a single .exe file? | Yes, you can use the ILMerge tool. It is also available as a NuGet package . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1368/"
]
} |
126,653 | Please respond with one by one . If you explain why it is not true then try to avoid general statements and provide particular examples. | That all the parentheses make code unreadable. After about two weeks, and with a decent text editor, you just stop noticing them. [ETA - just found a quote by long-time lisper Kenny Tilton: "Parentheses? What parentheses? I haven't noticed any parentheses since my first month of Lisp programming. I like to ask people who complain about parentheses in Lisp if they are bothered by all the spaces between words in a newspaper..."] | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20993/"
]
} |
126,737 | After watching The Dark Knight I became rather enthralled with the concept of the Prisoner's Dilemma. There must be an algorithm that maximizes one's own gain given a situation. For those that find this foreign: http://en.wikipedia.org/wiki/Prisoner%27s_dilemma Very, very interesting stuff. Edit: The question is, what is, if any, the most efficient algorithm that exists for the Prisoner's Dilemma? | Since there is only one choice to make, and in the absence of any changeable inputs, your algorithm is either going to be: cooperate = true; ...or... cooperate = false It's more interesting to find a strategy for the Iterated Prisoner's Dilemma, which is something many people have done. For example http://www.iterated-prisoners-dilemma.info/ Even then it's not 'solvable' since the other player is not predictable. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126737",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14877/"
]
} |
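The answer above points to the Iterated Prisoner's Dilemma; a hedged sketch of one well-known strategy (tit-for-tat) playing against always-defect follows. The payoff values (3/5/1/0) are the conventional ones and are an assumption, not part of the original answer.

```python
# Tit-for-tat: cooperate on the first round, then mirror the opponent's last move.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy only sees the opponent's moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14) over 10 rounds
```

As the answer notes, no single strategy is "the" solution, since the result depends entirely on what the other player does.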
126,751 | During a long compilation with Visual Studio 2005 (version 8.0.50727.762), I sometimes get the following error in several files in some project: fatal error C1033: cannot open program database 'v:\temp\apprtctest\win32\release\vc80.pdb' (The file mentioned is either vc80.pdb or vc80.idb in the project's temp dir.) The next build of the same project succeeds. There is no other Visual Studio open that might access the same files. This is a serious problem because it makes nightly compilation impossible. | It is possible that an antivirus or a similar program is touching the pdb file on write - an antivirus is the most likely suspect in this scenario. I'm afraid that I can only give you some general pointers, based on my past experience in setting nightly builds in our shop. Some of these may sound trivial, but I'm including them for the sake of completion. First and foremost: make sure you start up with a clean slate. That is, force-delete the output directory of the build before you start your nightly. If you have an antivirus, antispyware or other such programs on your nightly machine, consider removing them. If that's not an option, add your obj folder to the exclusion list of the program. (optional) Consider using tools such as VCBuild or MSBuild as part of your nightly. I think it's better to use MSBuild if you're on a multicore machine. We use IncrediBuild for nightlies and MSBuild for releases, and never encountered the problem you describe. If nothing else works, you can schedule a watchdog script a few hours after the build starts and check its status; if the build fails, the watchdog should restart it. This is an ugly hack, but it's better than nothing. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/126751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7224/"
]
} |
126,756 | Can anybody suggest programming examples that illustrate recursive functions? There are the usual old horses such as Fibonacci series and Towers of Hanoi , but anything besides them would be fun. | This illustration is in English, rather than an actual programming language, but is useful for explaining the process in a non-technical way: A child couldn't sleep, so her mother told a story about a little frog, who couldn't sleep, so the frog's mother told a story about a little bear, who couldn't sleep, so bear's mother told a story about a little weasel ...who fell asleep. ...and the little bear fell asleep; ...and the little frog fell asleep;...and the child fell asleep. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/126756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4021/"
]
} |
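A hedged Python rendering of the bedtime-story illustration above; the function and its output are illustrative only, showing how the nested stories wind up and then unwind exactly like recursive calls returning.

```python
def tell_story(characters):
    who = characters[0]
    if len(characters) == 1:
        # Base case: the innermost character simply falls asleep.
        print(f"...a little {who}, who fell asleep.")
        return
    print(f"A little {who} couldn't sleep, so its mother told a story about...")
    tell_story(characters[1:])   # the story nested inside this one
    print(f"...and the little {who} fell asleep.")

tell_story(["child", "frog", "bear", "weasel"])
```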
126,759 | I've created an implementation of the QAbstractListModel class in Qt Jambi 4.4 and am finding that using the model with a QListView results in nothing being displayed, however using the model with a QTableView displays the data correctly. Below is my implementation of QAbstractListModel :

public class FooListModel extends QAbstractListModel {
    private List<Foo> _data = new Vector<Foo>();

    public FooListModel(List<Foo> data) {
        if (data == null) {
            return;
        }
        for (Foo foo : data) {
            _data.add(Foo);
        }
        reset();
    }

    public Object data(QModelIndex index, int role) {
        if (index.row() < 0 || index.row() >= _data.size()) {
            return new QVariant();
        }
        Foo foo = _data.get(index.row());
        if (foo == null) {
            return new QVariant();
        }
        return foo;
    }

    public int rowCount(QModelIndex parent) {
        return _data.size();
    }
}

And here is how I set the model:

Foo foo = new Foo();
foo.setName("Foo!");
List<Foo> data = new Vector<Foo>();
data.add(foo);
FooListModel fooListModel = new FooListModel(data);
ui.fooListView.setModel(fooListModel);
ui.fooTableView.setModel(fooListModel);

Can anyone see what I'm doing wrong? I'd like to think it was a problem with my implementation because, as everyone says, select ain't broken! | This illustration is in English, rather than an actual programming language, but is useful for explaining the process in a non-technical way: A child couldn't sleep, so her mother told a story about a little frog, who couldn't sleep, so the frog's mother told a story about a little bear, who couldn't sleep, so bear's mother told a story about a little weasel ...who fell asleep. ...and the little bear fell asleep; ...and the little frog fell asleep;...and the child fell asleep. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/126759",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13678/"
]
} |
126,772 | Background I am writing and using a very simple CGI-based (Perl) content management tool for two pro-bono websites. It provides the website administrator with HTML forms for events where they fill the fields (date, place, title, description, links, etc.) and save it. On that form I allow the administrator to upload an image related to the event. On the HTML page displaying the form, I am also showing a preview of the picture uploaded (HTML img tag). The Problem The problem happens when the administrator wants to change the picture. He would just have to hit the "browse" button, pick a new picture and press ok. And this works fine. Once the image is uploaded, my back-end CGI handles the upload and reloads the form properly. The problem is that the image shown does not get refreshed. The old image is still shown, even though the database holds the right image. I have narrowed it down to the fact that the IMAGE IS CACHED in the web browser. If the administrator hits the RELOAD button in Firefox/Explorer/Safari, everything gets refreshed fine and the new image just appears. My Solution - Not Working I am trying to control the cache by writing a HTTP Expires instruction with a date very far in the past. Expires: Mon, 15 Sep 2003 1:00:00 GMT Remember that I am on the administrative side and I don't really care if the pages takes a little longer to load because they are always expired. But, this does not work either. Notes When uploading an image, its filename is not kept in the database. It is renamed as Image.jpg (to simply things out when using it). When replacing the existing image with a new one, the name doesn't change either. Just the content of the image file changes. The webserver is provided by the hosting service/ISP. It uses Apache. Question Is there a way to force the web browser to NOT cache things from this page, not even images? I am juggling with the option to actually "save the filename" with the database. This way, if the image is changed, the src of the IMG tag will also change. However, this requires a lot of changes throughout the site and I rather not do it if I have a better solution. Also, this will still not work if the new image uploaded has the same name (say the image is photoshopped a bit and re-uploaded). | Armin Ronacher has the correct idea. The problem is random strings can collide. I would use: <img src="picture.jpg?1222259157.415" alt=""> Where "1222259157.415" is the current time on the server. Generate time by Javascript with performance.now() or by Python with time.time() | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/126772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7984/"
]
} |
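The answer above appends a server timestamp to the image URL so the browser treats each upload as a new resource. A hedged sketch of that idea in Python (the original question uses Perl CGI; the function name below is illustrative only):

```python
import time

def cache_busted_src(image_path):
    # Append the current server time so the browser sees a brand-new URL
    # after every upload and bypasses its cached copy.
    return f"{image_path}?{time.time():.3f}"

print(f'<img src="{cache_busted_src("picture.jpg")}" alt="">')
# e.g. <img src="picture.jpg?1222259157.415" alt="">
```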
126,781 | I'm translating a library written in C++ to C#, and the keyword 'union' exists once. In a struct. What's the correct way of translating it into C#? And what does it do? It looks something like this:

struct Foo {
    float bar;
    union {
        int killroy;
        float fubar;
    } as;
}

| You can use explicit field layouts for that:

[StructLayout(LayoutKind.Explicit)]
public struct SampleUnion
{
    [FieldOffset(0)] public float bar;
    [FieldOffset(4)] public int killroy;
    [FieldOffset(4)] public float fubar;
}

Untested. The idea is that the two variables have the same position in your struct. You can of course only use one of them. More information about unions is in the struct tutorial. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/126781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15067/"
]
} |
126,790 | What is the added value for learning F# when you are already familiar with LISP? |
- Static typing (with type inference)
- Algebraic data types
- Pattern matching
- Extensible pattern matching with active patterns
- Currying (with a nice syntax)
- Monadic programming, called 'workflows', which provides a nice way to do asynchronous programming

A lot of these are relatively recent developments in the programming language world. This is something you'll see in F# that you won't in Lisp, especially Common Lisp, because the F# standard is still under development. As a result, you'll find there is quite a bit to learn. Of course things like ADTs, pattern matching, monads and currying can be built as a library in Lisp, but it's nicer to learn how to use them in a language where they are conveniently built-in. The biggest advantage of learning F# for real-world use is its integration with .NET. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126790",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6264/"
]
} |
126,794 | I'm trying to write a query that will pull back the two most recent rows from the Bill table where the Estimated flag is true. The catch is that these need to be consecutive bills. To put it shortly, I need to enter a row in another table if a Bill has been estimated for the last two bill cycles. I'd like to do this without a cursor, if possible, since I am working with a sizable amount of data and this has to run fairly often. Edit: There is an AUTOINCREMENT(1,1) column on the table. Without giving away too much of the table structure, the table is essentially of the structure:

CREATE TABLE Bills (
    BillId INT AUTOINCREMENT(1,1,) PRIMARY KEY,
    Estimated BIT NOT NULL,
    InvoiceDate DATETIME NOT NULL)

So you might have a set of results like:

BillId    AccountId   Estimated   InvoiceDate
-------   ---------   ---------   -----------------------
1111196   1234567     1           2008-09-03 00:00:00.000
1111195   1234567     0           2008-08-06 00:00:00.000
1111194   1234567     0           2008-07-03 00:00:00.000
1111193   1234567     0           2008-06-04 00:00:00.000
1111192   1234567     1           2008-05-05 00:00:00.000
1111191   1234567     0           2008-04-04 00:00:00.000
1111190   1234567     1           2008-03-05 00:00:00.000
1111189   1234567     0           2008-02-05 00:00:00.000
1111188   1234567     1           2008-01-07 00:00:00.000
1111187   1234567     1           2007-12-04 00:00:00.000
1111186   1234567     0           2007-11-01 00:00:00.000
1111185   1234567     0           2007-10-01 00:00:00.000
1111184   1234567     1           2007-08-30 00:00:00.000
1111183   1234567     0           2007-08-01 00:00:00.000
1111182   1234567     1           2007-07-02 00:00:00.000
1111181   1234567     0           2007-06-01 00:00:00.000
1111180   1234567     1           2007-05-02 00:00:00.000
1111179   1234567     0           2007-03-30 00:00:00.000
1111178   1234567     1           2007-03-02 00:00:00.000
1111177   1234567     0           2007-02-01 00:00:00.000
1111176   1234567     1           2007-01-03 00:00:00.000
1111175   1234567     0           2006-11-29 00:00:00.000

In this case, only records 1111188 and 1111187 would be consecutive. | Assuming the rows have sequential IDs, something like this may be what you're looking for:

select top 1 *
from Bills b1
inner join Bills b2 on b1.id = b2.id - 1
where b1.IsEstimate = 1 and b2.IsEstimate = 1
order by b1.BillDate desc

| {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126794",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11780/"
]
} |
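The self-join in the answer above pairs each bill with the one whose id is exactly one lower; the same adjacent-pair logic expressed outside SQL, as a hedged Python sketch with made-up sample data:

```python
def consecutive_estimated_pairs(bills):
    # bills: list of (bill_id, estimated) tuples for one account.
    ordered = sorted(bills, key=lambda b: b[0])
    return [
        (a[0], b[0])
        for a, b in zip(ordered, ordered[1:])
        if a[1] and b[1]            # both bills in the adjacent pair are estimated
    ]

bills = [(1111186, False), (1111187, True), (1111188, True), (1111189, False)]
print(consecutive_estimated_pairs(bills))   # [(1111187, 1111188)]
```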
126,801 | I'm confused about what the various testing appliances in Ruby on Rails are for. I have been using the framework for about 6 months but I've never understood the testing part of it. The only testing I've used is JUnit3 in Java and that only briefly. Everything I've read about it just shows testing validations. Shouldn't the validations in rails just work? It seems more like testing the framework than testing the your code. Why would you need to test validations? Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle? Third, writing test code seems to take alot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough? I've asked these questions before and I haven't gotten more than "automated testing is automated". I am smart enough to figure out the advantages of automating a task. My problem is that costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two. | Shouldn't the validations in rails just work? It seems more like testing the framework than testing the your code. Why would you need to test validations? The validations in Rails do work -- in fact, there are unit tests in the Rails codebase to ensure it. When you test a model's validation, you're testing the specifics of the validation: the length, the accepted values, etc. You're making sure the code was written as intended. Some validations are simple helpers and you may opt not to test them on the notion that "no one can mess up a validates_numericality_of call." Is that true? Does every developer always remember to write it in the first place? Does every developer never accidentally delete a line on a bad copy paste? In my personal opinion, you don't need to test every last combination of values for a Rails' validation helper, but you need a line to test that it's there with the right values passed, just in case some punk changes it in the future without proper forethought. Further, other validations are more complex, requiring lots of custom code -- they may warrant more thorough testing. Furthermore, the tests seem super fragile to any change in your code. So if you change anything in your models, you have to change your tests and fixtures to match. Doesn't this violate the DRY principle? I don't believe it violates DRY. They're communicating (that's what programming is, communication) two very different things. The test says the code should do something. The code says what it actually does. Testing is extremely important when there is a disconnect between those things. Test code and application code are intimately linked, obviously. I think of them as two sides of a coin. You wouldn't want a front without a back, or a back without a front. Good test code reinforces good application code, and vice versa. The two together are used to understand the whole problem that you're trying to solve. And well written test code is documentation -- it shows how the application code should be used. Third, writing test code seems to take alot of time. Is that normal? Wouldn't it just be faster to refresh my browser and see if it worked? 
I already have to play with my application just to see if it flows correctly and make sure my CSS hasn't exploded. Why wouldn't manual testing be enough? You've only worked on very small projects, for which that testing is arguably sufficient. However, when you work on a project with several developers, thousands or tens of thousands of lines of code, integration points with web services, third party libraries, multiple databases, months of development and requirements changes, etc, there are a lot of other factors in play. Manual testing is simply not enough. In a project of any real complexity, changes in one place can often have unforeseen results in others. Proper architecture helps mitigate this problem, but automated testing helps as well (and helps identify points where the architecture can be improved) by identifying when a change in one place breaks another. My problem is that costs of writing tests seem absurdly high compared to the benefits. That said, any detailed response is welcome because I probably missed a benefit or two. I'll list a few more benefits. If you test first (Test Driven Development) your code will probably be better. I haven't met a programmer who gave it a solid shot for whom this wasn't the case. Testing first forces you to think about the problem and actually design your solution, versus hacking it out. Further, it forces you to understand the problem domain well enough to where if you do have to hack it out, you know your code works within the limitations you've defined. If you have full test coverage, you can refactor with NO RISK. If a software problem is very complicated (again, real world projects that last for months tend to be complicated) then you may wish to simplify code that has previously been written. So, you can write new code to replace the old code, and if it passes all of your tests, you're done. It does exactly what the old code did with respect to the tests. For a project that plans to use an agile development method, refactoring is absolutely essential. Changes will always need to be made. To sum up, automated testing, especially test driven development, is basically a method of managing the complexity of software development. If your project isn't very complex, the cost may outweigh the benefits (although I doubt it). However, real world projects tend to be very complex, and the results of testing and TDD speak for themselves: they work. (If you're curious, I find Dan North's article on Behavior Driven Development to be very helpful in understanding a lot of the value in testing: http://dannorth.net/introducing-bdd ) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/126801",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16204/"
]
} |
126,804 | I have used "traditional" version control systems to maintain source code repositories on past projects. I am starting a new project with a distributed team and I can see advantages to using a distributed system. Given that I understand SourceSafe, CVS, and Subversion; what suggestions do you have for a Git newbie? | The Git - SVN Crash Course is a good read for getting going. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126804",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3014/"
]
} |
126,853 | I saw this on reddit, and it reminded me of one of my vim gripes: It shows the UI in German. I want English. But since my OS is set up in German (the standard at our office), I guess vim is actually trying to be helpful. What magic incantations must I perform to get vim to switch the UI language? I have tried googling on various occasions, but can't seem to find an answer. | For reference, in Windows (7) I just deleted the directory C:\Program Files (x86)\Vim\vim72\lang . That made it fallback to en_US. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/126853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2260/"
]
} |
126,868 | I'm developing a C# assembly which is to be called via COM from a Delphi 7 (iow, native win32, not .net) application. So far, it seems to work. I've exported a TLB file, imported that into my Delphi project, and I can create my C# object and call its functions. So that's great, but soon I'm going to really want to use Visual Studio to debug the C# code while it's running. Set breakpoints, step through code, all that stuff. I've tried breaking in the Delphi code after the COM object is created, then looking for a process for VS to attach to, but I can't find one. Is there a way to set VS2008 up to do this? I'd prefer to just be able to hit f5 and have VS start the Delphi executable, wait for the C# code to be called, and then attach itself to it.. But I could live with manually attaching to a process, I suppose. Just please don't tell me I have to make do with MessageBox.Show etc. | In the VS2008 project properties page, on the Debug tab, there's an option to set a different Start Action. This can be used to run an external program (e.g. your Delphi app) when you press F5. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126868",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/369/"
]
} |
126,876 | During a complicated update I might prefer to display all the changes at once. I know there is a method that allows me to do this, but what is it? | I think this.SuspendLayout() & ResumeLayout() should do it | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/126876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4694/"
]
} |
126,896 | we are using git-svn to manage branches of an SVN repo. We are facing the following problem: after a number of commits by user X in the branch, user Y would like to use git-svn to merge the changes in branch to trunk. The problem we're seeing is that the commit messages for all the individual merge operations look as if they were made by user Y, whereas the actual change in branch was made by user X. Is there a way to indicate to git-svn that when merging, use the original commit message/author for a given change rather than the person doing the merge? | The git-svn man page recommends that you don't use merge . ""It is recommended that you run git-svn fetch and rebase (not pull or merge)"". Having said that, you can do what you like :-) There are 2 issues here. First is that svn only stores the commiter , not the author of a patch as git does. So when Y commits the merges to trunk, svn only records her name, even though the patches were authored by X. This is an amazing feature of git, stunningly simple yet vital for open source projects were attributing changes to the author can avoid legal problems down the road. Secondly, git doesn't seem to use the relatively new svn merge features. This may be a temporary thing, as git is actively developed and new features are added all the time. But for now, it doesn't use them. I've just tried with git 1.6.0.2 and it "loses" information compared to doing the same operation with svn merge. In svn 1.5, a new feature was added to the logging and annotation methods, so that svn log -g on the trunk would output something like this for a merge: ------------------------------------------------------------------------r5 | Y | 2008-09-24 15:17:12 +0200 (Wed, 24 Sep 2008) | 1 lineMerged release-1.0 into trunk------------------------------------------------------------------------r4 | X | 2008-09-24 15:16:13 +0200 (Wed, 24 Sep 2008) | 1 lineMerged via: r5Return 1------------------------------------------------------------------------r3 | X | 2008-09-24 15:15:48 +0200 (Wed, 24 Sep 2008) | 2 linesMerged via: r5Create a branch Here, Y commits r5, which incorporates the changes from X on the branch into the trunk. The format of the log is not really that great, but it comes into its own on svn blame -g: 2 Y int main() 2 Y {G 4 X return 1; 2 Y } Here assuming Y only commits to trunk, we can see that one line was editted by X (on the branch) and merged. So, if you are using svn 1.5.2, you are possibly better of merging with the real svn client for now. Although you would lose merge info in git, it is usually clever enough not to complain. Update: I've just tried this with git 1.7.1 to see if there has been any advances in the interim. The bad news is that merge within git still does not populate the svn:mergeinfo values, so git merge followed by git svn dcommit will not set svn:mergeinfo and you will lose merge information if the Subversion repository is the canonical source, which it probably is. The good news is that git svn clone does read in svn:mergeinfo properties to construct a better merge history, so if you use svn merge correctly (it requires merging full branches) then the git clone will look correct to git users. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126896",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
126,917 | In ASPNET, I grew to love the Application and Cache stores. They're awesome. For the uninitiated, you can just throw your data-logic objects into them, and hey-presto, you only need query the database once for a bit of data. By far one of the best ASPNET features, IMO. I've since ditched Windows for Linux, and therefore PHP, Python and Ruby for webdev. I use PHP most because I dev several open source projects, all using PHP. Needless to say, I've explored what PHP has to offer in terms of caching data-objects. So far I've played with: Serializing to file (a pretty slow/expensive process) Writing the data to file as JSON/XML/plaintext/etc (even slower for read ops) Writing the data to file as pure PHP (the fastest read, but quite a convoluted write op) I should stress now that I'm looking for a solution that doesn't rely on a third party app (eg memcached) as the apps are installed in all sorts of scenarios, most of which don't have install rights (eg: a cheap shared hosting account). So back to what I'm doing now, is persisting to file secure? Rule 1 in production server security has always been disable file-writing, but I really don't see any way PHP could cache if it couldn't write. Are there any tips and/or tricks to boost the security? Is there another persist-to-file method that I'm forgetting? Are there any better methods of caching in "limited" environments? | Serializing is quite safe and commonly used. There is an alternative however, and that is to cache to memory. Check out memcached and APC , they're both free and highly performant. This article on different caching techniques in PHP might also be of interest. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/126917",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12870/"
]
} |
127,001 | I need to compress portions of our application's network traffic for performance. I presume this means I need to stay away from some of the newer algorithms like bzip2, which I think I have heard is slower. | You can use Deflater / Inflater which is built into the JDK. There are also GZIPInputStream and GZIPOutputStream, but it really depends on your exact use. Edit: Reading further comments it looks like the network traffic is HTTP. Depending on the server, it probably has support for compression (especially with deflate/gzip). The problem then becomes on the client. If the client is a browser it probably already supports it. If your client is a webservices client or an http client check the documentation for that package to see if it is supported. It looks like jakarta-commons httpclient may require you to manually do the compression. To enable this on the client side you will need to do something like .addRequestHeader("Accept-Encoding","gzip,deflate"); | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127001",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/18103/"
]
} |
127,027 | I use a byte to store some flags like 10101010 , and I would like to know how to verify that a specific bit is set to 1 or 0 . | Here's a function that can be used to test any bit:

bool is_bit_set(unsigned value, unsigned bitindex)
{
    return (value & (1 << bitindex)) != 0;
}

Explanation: The left shift operator << creates a bitmask. To illustrate:

(1 << 0) equals 00000001
(1 << 1) equals 00000010
(1 << 3) equals 00001000

So a shift of 0 tests the rightmost bit. A shift of 31 would be the leftmost bit of a 32-bit value. The bitwise-and operator ( & ) gives a result where all the bits that are 1 on both sides are set. Examples:

1111 & 0001 equals 0001
1111 & 0010 equals 0010
0000 & 0001 equals 0000

So the expression (value & (1 << bitindex)) will return the bitmask if the associated bit ( bitindex ) contains a 1 in that position, or else it will return 0 (meaning it does not contain a 1 at the associated bitindex ). To simplify, the expression tests if the result is greater than zero. If the result > 0 it returns true , meaning the byte has a 1 in the tested bitindex position. Otherwise it returns false , meaning the result was zero, which means there's a 0 in the tested bitindex position. Note the != 0 is not required in the statement since it's a bool , but I like to make it explicit. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127027",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21386/"
]
} |
127,038 | Why is it not a good idea to use SOAP for communicating with the front end? For example, a web browser using JavaScript. |
- Because it's bloated
- Because JSON is natively understood by JavaScript
- Because XML isn't fast to manipulate with JavaScript. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15985/"
]
} |
127,040 | In Internet Explorer I can use the clipboardData object to access the clipboard. How can I do that in FireFox, Safari and/or Chrome? | For security reasons, Firefox doesn't allow you to place text on the clipboard. However, there is a workaround available using Flash.

function copyIntoClipboard(text) {
    var flashId = 'flashId-HKxmj5';
    /* Replace this with your clipboard.swf location */
    var clipboardSWF = 'http://appengine.bravo9.com/copy-into-clipboard/clipboard.swf';
    if (!document.getElementById(flashId)) {
        var div = document.createElement('div');
        div.id = flashId;
        document.body.appendChild(div);
    }
    document.getElementById(flashId).innerHTML = '';
    var content = '<embed src="' + clipboardSWF + '" FlashVars="clipboard=' + encodeURIComponent(text) +
        '" width="0" height="0" type="application/x-shockwave-flash"></embed>';
    document.getElementById(flashId).innerHTML = content;
}

The only disadvantage is that this requires Flash to be enabled. The source is currently dead: http://bravo9.com/journal/copying-text-into-the-clipboard-with-javascript-in-firefox-safari-ie-opera-292559a2-cc6c-4ebf-9724-d23e8bc5ad8a/ (and so is its Google cache ) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127040",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11492/"
]
} |
127,055 | Is there a way to determine how many capture groups there are in a given regular expression? I would like to be able to do the following:

def groups(regexp, s):
    """ Returns the first result of re.findall, or an empty default

    >>> groups(r'(\d)(\d)(\d)', '123')
    ('1', '2', '3')
    >>> groups(r'(\d)(\d)(\d)', 'abc')
    ('', '', '')
    """
    import re
    m = re.search(regexp, s)
    if m:
        return m.groups()
    return ('',) * num_of_groups(regexp)

This allows me to do stuff like:

first, last, phone = groups(r'(\w+) (\w+) ([\d\-]+)', 'John Doe 555-3456')

However, I don't know how to implement num_of_groups . (Currently I just work around it.) EDIT: Following the advice from rslite , I replaced re.findall with re.search . sre_parse seems like the most robust and comprehensive solution, but requires tree traversal and appears to be a bit heavy. MizardX's regular expression seems to cover all bases, so I'm going to go with that. | def num_groups(regex): return re.compile(regex).groups | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7581/"
]
} |
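Putting the answer's one-liner to work inside the asker's helper gives something like the following hedged sketch; the helper name and defaults follow the question:

```python
import re

def num_of_groups(regexp):
    # Number of capture groups, read straight off the compiled pattern.
    return re.compile(regexp).groups

def groups(regexp, s):
    # First match's groups, or a tuple of empty strings of the right length.
    m = re.search(regexp, s)
    if m:
        return m.groups()
    return ('',) * num_of_groups(regexp)

print(groups(r'(\d)(\d)(\d)', '123'))   # ('1', '2', '3')
print(groups(r'(\d)(\d)(\d)', 'abc'))   # ('', '', '')
```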
127,073 | We have a production machine and are trying to slowly introduce some 3.0 and up features on our web application. 3.5 is installed but I would like to upgrade to SP1, I would just like to know if it requires a restart of the machine in the end so I can schedule some down time. Thanks! EDIT: so it did require the restart, thanks guys for the answer...but the hosting company didn't give us the rights to do so...LOL | 3.5 SP1 updates 3.0 to SP2 and 2.0 (which contains the CLR) to SP2. If the CLR is loaded in any process the DLLs will not be writable, and a reboot will be required. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127073",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1841427/"
]
} |
127,095 | I'm used to Atlas where the preferred (from what I know) method is to use XML comments such as:

/// <summary>
/// Method to calculate distance between two points
/// </summary>
///
/// <param name="pointA">First point</param>
/// <param name="pointB">Second point</param>
///
function calculatePointDistance(pointA, pointB) { ... }

Recently I've been looking into other third-party JavaScript libraries and I see syntax like:

/*
 * some comment here
 * another comment here
 * ...
 */
function blahblah() { ... }

As a bonus, are there API generators for JavaScript that could read the 'preferred' commenting style? | There's JSDoc :

/**
 * Shape is an abstract base class. It is defined simply
 * to have something to inherit from for geometric
 * subclasses
 * @constructor
 */
function Shape(color) {
    this.color = color;
}

| {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/127095",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6350/"
]
} |
127,116 | I was wondering if there was an easy way in SQL to convert an integer to its binary representation and then store it as a varchar. For example 5 would be converted to "101" and stored as a varchar. | Following could be coded into a function. You would need to trim off leading zeros to meet requirements of your question.

declare @intvalue int
set @intvalue=5

declare @vsresult varchar(64)
declare @inti int
select @inti = 64, @vsresult = ''

while @inti>0
begin
    select @vsresult=convert(char(1), @intvalue % 2)+@vsresult
    select @intvalue = convert(int, (@intvalue / 2)), @inti=@inti-1
end

select @vsresult

| {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127116",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4779/"
]
} |
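The T-SQL loop above is the standard repeated divide-by-two construction; the same idea in a short, hedged Python sketch (Python's built-in bin() is shown alongside for comparison, and is not part of the original answer):

```python
def to_binary(n):
    # Take n % 2 as the next least-significant digit, then halve n.
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits
        n //= 2
    return digits

print(to_binary(5))   # 101
print(bin(5)[2:])     # 101, using the built-in
```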
127,118 | I need to test whether various types of database objects exist in a given database, and I don't know how to formulate these tests in Firebird SQL. Each test has the form "Does object of type X with name Y exist?". For example, I need to test whether a table with a given name exists. The object types I need to test are: Table, View, Domain, Trigger, Procedure, Exception, Generate, UDF, Role. One can find how to query for a given table on the Internet, but the other types are more difficult to find ... | I think a lot of what you are asking can be found at this forum post . If you want to dive a little deeper, this site seems to have a graphical representation of the tables. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5782/"
]
} |
127,137 | Does anyone know a good online resource for example of R code? The programs do not have to be written for illustrative purposes, I am really just looking for some places where a bunch of R code has been written to give me a sense of the syntax and capabilities of the language? Edit: I have read the basic documentation on the main site, but was wondering if there was some code samples or even programs that show how R is used by different people. | I just found this question and thought I would add a few resources to it. I really like the Quick-R site: http://www.statmethods.net/ Muenchen has written a book about using R if you come from SAS or SPSS. Originally it was an 80 page online doc that Springer encouraged him to make a 400+ page book out of. The original short form as well as the book are here: http://rforsasandspssusers.com/ You've probably already seen these, but worth listing: http://cran.r-project.org/doc/manuals/R-intro.pdf http://cran.r-project.org/doc/contrib/Owen-TheRGuide.pdf http://cran.r-project.org/doc/contrib/Kuhnert+Venables-R_Course_Notes.zip I don't want to sound like a trite RTFM guy, but the help files generally have great short snips of working code as examples. I'm no R pro so I end up having to deconstruct the examples to understand them. That process, while tedious, is really useful. Good luck! EDIT: well I hesitated to be self linking (it feels a bit masturbatory) but here's my own list of R resources with descriptions and comments on each: http://www.cerebralmastication.com/?page_id=62 | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127137",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277/"
]
} |
127,156 | How do I check if an index exists on a table field in MySQL? I've needed to Google this multiple times, so I'm sharing my Q/A. | Use SHOW INDEX like so: SHOW INDEX FROM [tablename] Docs: https://dev.mysql.com/doc/refman/5.0/en/show-index.html | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/127156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5446/"
]
} |
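Beyond running SHOW INDEX interactively, the same check can be scripted; a hedged sketch using the mysql-connector-python package and information_schema, where the connection details and table/column names are placeholders, not part of the original answer:

```python
import mysql.connector  # assumes the mysql-connector-python package is installed

def index_exists(conn, table, column):
    # information_schema.statistics holds one row per indexed column.
    query = (
        "SELECT COUNT(*) FROM information_schema.statistics "
        "WHERE table_schema = DATABASE() "
        "AND table_name = %s AND column_name = %s"
    )
    cur = conn.cursor()
    cur.execute(query, (table, column))
    (count,) = cur.fetchone()
    return count > 0

conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb")
print(index_exists(conn, "mytable", "myfield"))
```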
127,188 | Can you explain STA and MTA in your own words? Also, what are apartment threads and do they pertain only to COM? If so, why? | The COM threading model is called an "apartment" model, where the execution context of initialized COM objects is associated with either a single thread (Single Thread Apartment) or many threads (Multi Thread Apartment). In this model, a COM object, once initialized in an apartment, is part of that apartment for the duration of its runtime. The STA model is used for COM objects that are not thread safe. That means they do not handle their own synchronization. A common use of this is a UI component. So if another thread needs to interact with the object (such as pushing a button in a form) then the message is marshalled onto the STA thread. The windows forms message pumping system is an example of this. If the COM object can handle its own synchronization then the MTA model can be used where multiple threads are allowed to interact with the object without marshalled calls. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/127188",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19490/"
]
} |
127,190 | I'm learning Haskell in the hope that it will help me get closer to functional programming. Previously, I've mostly used languages with C-like syntax, like C, Java, and D. I have a little question about the coding style of an if / else control block used by the tutorial on Wikibooks . The code looks like the following: doGuessing num = do putStrLn "Enter your guess:" guess <- getLine if (read guess) < num then do putStrLn "Too low!" doGuessing num else if (read guess) > num then do putStrLn "Too high!" doGuessing num else do putStrLn "You Win!" It makes me confused, because this coding style totally violates the recommended style in C-like languages, where we should indent if , else if , and else at the same column. I know it just does not work in Haskell, because it would be a parse error if I indented else at the same column as if . But what about the following style? I think it is much more clear than the above one. But since the above is used by Wikibooks and Yet Another Haskell Tutorial, which is marked "best tutorial available online" at the official Haskell website, I'm not sure whether this coding style is a convention in Haskell programs. doGuessing num = do putStrLn "Enter your guess:" guess <- getLine if (read guess) < num then do putStrLn "Too low!" doGuessing num else if (read guess) > num then do putStrLn "Too high!" doGuessing num else do putStrLn "You Win!" So, I'm curious about which coding style is used more often—or is there another coding style for this piece of code? | Haskell style is functional, not imperative! Rather than "do this then that," think about combining functions and describing what your program will do, not how. In the game, your program asks the user for a guess. A correct guess is a winner. Otherwise, the user tries again. The game continues until the user guesses correctly, so we write that: main = untilM (isCorrect 42) (read `liftM` getLine) This uses a combinator that repeatedly runs an action ( getLine pulls a line of input and read converts that string to an integer in this case) and checks its result: untilM :: Monad m => (a -> m Bool) -> m a -> m ()untilM p a = do x <- a done <- p x if done then return () else untilM p a The predicate (partially applied in main ) checks the guess against the correct value and responds accordingly: isCorrect :: Int -> Int -> IO BoolisCorrect num guess = case compare num guess of EQ -> putStrLn "You Win!" >> return True LT -> putStrLn "Too high!" >> return False GT -> putStrLn "Too low!" >> return False The action to be run until the player guesses correctly is read `liftM` getLine Why not keep it simple and just compose the two functions? *Main> :type read . getLine<interactive>:1:7: Couldn't match expected type `a -> String' against inferred type `IO String' In the second argument of `(.)', namely `getLine' In the expression: read . getLine The type of getLine is IO String , but read wants a pure String . The function liftM from Control.Monad takes a pure function and “lifts” it into a monad. The type of the expression tells us a great deal about what it does: *Main> :type read `liftM` getLineread `liftM` getLine :: (Read a) => IO a It's an I/O action that when run gives us back a value converted with read , an Int in our case. Recall that readLine is an I/O action that yields String values, so you can think of liftM as allowing us to apply read “inside” the IO monad. Sample game: 1Too low!100Too high!42You Win! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/242644/"
]
} |
127,233 | This is in C#. I have a class that I am using from someone else's DLL. It does not implement IEnumerable but has 2 methods that pass back an IEnumerator. Is there a way I can use a foreach loop on these? The class I am using is sealed. | foreach does not require IEnumerable , contrary to popular belief. All it requires is a method GetEnumerator that returns any object that has the method MoveNext and the get-property Current with the appropriate signatures. /EDIT: In your case, however, you're out of luck. You can trivially wrap your object, however, to make it enumerable:

class EnumerableWrapper {
    private readonly TheObjectType obj;

    public EnumerableWrapper(TheObjectType obj) {
        this.obj = obj;
    }

    public IEnumerator<YourType> GetEnumerator() {
        return obj.TheMethodReturningTheIEnumerator();
    }
}

// Called like this:
foreach (var xyz in new EnumerableWrapper(yourObj)) …;

/EDIT: The following method, proposed by several people, does not work if the method returns an IEnumerator :

foreach (var yz in yourObj.MethodA()) …;

| {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127233",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3208/"
]
} |
127,290 | Is there a side effect in doing this: C code: struct foo { int k;};int ret_foo(const struct foo* f){ return f.k; } C++ code: class bar : public foo { int my_bar() { return ret_foo( (foo)this ); }}; There's an extern "C" around the C++ code and each code is inside its own compilation unit. Is this portable across compilers? | This is entirely legal. In C++, classes and structs are identical concepts, with the exception that all struct members are public by default. That's the only difference. So asking whether you can extend a struct is no different than asking if you can extend a class. There is one caveat here. There is no guarantee of layout consistency from compiler to compiler. So if you compile your C code with a different compiler than your C++ code, you may run into problems related to member layout (padding especially). This can even occur when using C and C++ compilers from the same vendor. I have had this happen with gcc and g++. I worked on a project which used several large structs. Unfortunately, g++ packed the structs significantly looser than gcc, which caused significant problems sharing objects between C and C++ code. We eventually had to manually set packing and insert padding to make the C and C++ code treat the structs the same. Note however, that this problem can occur regardless of subclassing. In fact we weren't subclassing the C struct in this case. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127290",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21648/"
]
} |
127,318 | I want to programmatically edit file content using the Windows command line ( cmd.exe ). In *nix there is sed for these tasks. Is there any useful native equivalent in Windows? | Today PowerShell saved me. For grep there is:

get-content somefile.txt | where { $_ -match "expression"}

or

select-string somefile.txt -pattern "expression"

and for sed there is:

get-content somefile.txt | %{$_ -replace "expression","replace"}

For more detail about the replace PowerShell function, see this Microsoft article . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/127318",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2361/"
]
} |
127,386 | In Visual Studio, we've all had "baadf00d", and have seen "CC" and "CD" when inspecting variables in the debugger in C++ during run-time. From what I understand, "CC" is in DEBUG mode only to indicate when memory has been new()'d or alloc()'d and is uninitialized, while "CD" represents delete'd or free'd memory. I've only seen "baadf00d" in RELEASE build (but I may be wrong). Once in a while, we get into a situation of tracking memory leaks, buffer overflows, etc, and this kind of information comes in handy. Would somebody be kind enough to point out when and in what modes the memory is set to recognizable byte patterns for debugging purposes? | This link has more information: https://en.wikipedia.org/wiki/Magic_number_(programming)#Debug_values

* 0xABABABAB : Used by Microsoft's HeapAlloc() to mark "no man's land" guard bytes after allocated heap memory
* 0xABADCAFE : A startup to this value to initialize all free memory to catch errant pointers
* 0xBAADF00D : Used by Microsoft's LocalAlloc(LMEM_FIXED) to mark uninitialised allocated heap memory
* 0xBADCAB1E : Error Code returned to the Microsoft eVC debugger when connection is severed to the debugger
* 0xBEEFCACE : Used by Microsoft .NET as a magic number in resource files
* 0xCCCCCCCC : Used by Microsoft's C++ debugging runtime library to mark uninitialised stack memory
* 0xCDCDCDCD : Used by Microsoft's C++ debugging runtime library to mark uninitialised heap memory
* 0xDDDDDDDD : Used by Microsoft's C++ debugging heap to mark freed heap memory
* 0xDEADDEAD : A Microsoft Windows STOP Error code used when the user manually initiates the crash.
* 0xFDFDFDFD : Used by Microsoft's C++ debugging heap to mark "no man's land" guard bytes before and after allocated heap memory
* 0xFEEEFEEE : Used by Microsoft's HeapFree() to mark freed heap memory

| {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/127386",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7234/"
]
} |
127,426 | I have defined an interface in C++, i.e. a class containing only pure virtual functions. I want to explicitly forbid users of the interface to delete the object through a pointer to the interface, so I declared a protected and non-virtual destructor for the interface, something like: class ITest{public: virtual void doSomething() = 0;protected: ~ITest(){}};void someFunction(ITest * test){ test->doSomething(); // ok // deleting object is not allowed // delete test; } The GNU compiler gives me a warning saying: class 'ITest' has virtual functions but non-virtual destructor Once the destructor is protected, what is the difference in having it virtual or non-virtual? Do you think this warning can be safely ignored or silenced? | It's more or less a bug in the compiler. Note that in more recent versions of the compiler this warning does not get thrown (at least in 4.3 it doesn't). Having the destructor be protected and non-virtual is completely legitimate in your case. See here for an excellent article by Herb Sutter on the subject. From the article: Guideline #4: A base class destructor should be either public and virtual, or protected and nonvirtual. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15622/"
]
} |
127,477 | In WPF you can setup validation based on errors thrown in your Data Layer during Data Binding using the ExceptionValidationRule or DataErrorValidationRule . Suppose you had a bunch of controls set up this way and you had a Save button. When the user clicks the Save button, you need to make sure there are no validation errors before proceeding with the save. If there are validation errors, you want to holler at them. In WPF, how do you find out if any of your Data Bound controls have validation errors set? | This post was extremely helpful. Thanks to all who contributed. Here is a LINQ version that you will either love or hate.

private void CanExecute(object sender, CanExecuteRoutedEventArgs e)
{
    e.CanExecute = IsValid(sender as DependencyObject);
}

private bool IsValid(DependencyObject obj)
{
    // The dependency object is valid if it has no errors and all
    // of its children (that are dependency objects) are error-free.
    return !Validation.GetHasError(obj) &&
           LogicalTreeHelper.GetChildren(obj)
                            .OfType<DependencyObject>()
                            .All(IsValid);
}

| {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/127477",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4407/"
]
} |
127,498 | What guidelines can you give for rich HTML formatting in emails while maintaining good visual stability across many clients and web based email interfaces? An unrelated answer on a question on Stack Overflow suggested: http://www.campaignmonitor.com/blog/archives/2008/05/2008_email_design_guidelines.html Which contains the following guidelines: Place stylesheet in <body> instead of <head> Some email clients will strip CSS out of the head, but leave it if the style block is (invalidly) in the body. Use inline styles where ever possible Gmail will strip any stylesheet, whether in the <head> or in the <body> , but honor inline styles assigned using the style="" attribute Return to tables Email standards have actually taken a giant step backwards in recent years thanks to Outlook 2007 using the Microsoft Word rendering engine. Unlearn most of what you learned about positioning without stylesheets. Don't rely on images Most clients and most web based email clients will not display images unless the user specifically requests them to be displayed. I also have a few "unconfirmed" truths that I don't remember where I read them. Don't use more than two levels of nesting in tables Is this true. What is likely to happen if I do? Is there any particular client/clients that choke on this? Be careful of nesting background images in cells/tables As I understand you may encounter situations where the background image is applied in the descending table/cell completely anew, and not just "shining through". Again, true or not? Which clients? I would like to flesh out this list with more guidelines and experiences from the trenches. Can you offer any further suggestions? Update: I'm specifially asking for guidelines for the design part in HTML and consistency there of. Questions about general guidelines for avoiding spam filters , and common courtesy are already on SO. | It's actually really hard to make a decent HTML email, if you approach it from a 'modern HTML and CSS' perspective. For best results, imagine it's 1999. Go back to tables for layout (or preferably - don't attempt any complex layout) Be afraid of background images (they break in Outlook 2007 and Gmail). The style-tag-in-the-body thing is because Hotmail used to accept it that way - I'm pretty sure they strip it out now though. Use inline styles with the style attribute if you must use CSS. Forget entirely about float Remember your images will probably be blocked - use background and text colour to your advantage - make sure there is some readable text with images disabled Be very careful with links, be especially wary of anything that looks like a URL in the link text - you will anger 'phishing' filters (eg <a href="http://domain.tld">www.someotherdomain.tld</a> is bad ) Remember that the "fold" on webmail clients tends to be extremely high up the page (on a 1024x768 screen, most interfaces won't show more than a hundred pixels or so) - get your identity stuff in right at the top so the recipient knows who you are. Recent version of outlook have a "portrait" preview pane which is significantly narrower than you may be expecting - be very wary of fixed-width layouts, if you must use them, make them as narrow as you can. Don't even think about flash, Javascript, SVG, canvas, or anything like that. Test, a lot. Make sure you test in a recent Outlook (things have changed a lot! It now uses Word as its HTML rendering engine, and it's crippled: Word 2007 HTML/CSS support ). Gmail is pretty finicky also. 
Surprisingly Yahoo's webmail is extremely good, with nice CSS support. Good luck ;) Update to answer further questions: Don't use more than two levels of nesting in tables I believe this is an older guideline pertaining to Lotus Notes. Nested tables should be okay, but really, if you have a layout that's complicated enough to need them, you're probably going to have trouble anyway. Keep your layout simple . Be careful of nesting background images in cells/tables This may be related to the above, and the same applies, if you're getting that complicated then you will have problems. Recent versions of Outlook don't support background images at all, so you'd be best advised to forget about them entirely. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127498",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2105/"
]
} |
127,534 | We have a couple of developers asking for allow_url_fopen to be enabled on our server. What's the norm these days and if libcurl is enabled is there really any good reason to allow? Environment is: Windows 2003, PHP 5.2.6, FastCGI | You definitely want allow_url_include set to Off, which mitigates many of the risks of allow_url_fopen as well. But because not all versions of PHP have allow_url_include , best practice for many is to turn off fopen. Like with all features, the reality is that if you don't need it for your application, disable it. If you do need it, the curl module probably can do it better, and refactoring your application to use curl to disable allow_url_fopen may deter the least determined cracker. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127534",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/419/"
]
} |
127,564 | I have come to realize that Windbg is a very powerful debugger for the Windows platform & I learn something new about it once in a while. Can fellow Windbg users share some of their mad skills? ps: I am not looking for a nifty command, those can be found in the documentation. How about sharing tips on doing something that one couldn't otherwise imagine could be done with windbg? e.g. Some way to generate statistics about memory allocations when a process is run under windbg. | My favorite is the command .cmdtree <file> (undocumented, but referenced in previous release notes). This can assist in bringing up another window (that can be docked) to display helpful or commonly used commands. This can help make the user much more productive using the tool. Initially talked about here, with an example for the <file> parameter: http://blogs.msdn.com/debuggingtoolbox/archive/2008/09/17/special-command-execute-commands-from-a-customized-user-interface-with-cmdtree.aspx Example: (screenshot: http://blogs.msdn.com/photos/debuggingtoolbox/images/8954736/original.aspx) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15071/"
]
} |
127,587 | I'm trying to use JVLC but I can't seem to get it work. I've downloaded the jar, I installed VLC and passed the -D argument to the JVM telling it where VLC is installed. I also tried: NativeLibrary.addSearchPath("libvlc", "C:\\Program Files\\VideoLAN\\VLC"); with no luck. I always get: Exception in thread "main" java.lang.UnsatisfiedLinkError: Unable to load library 'libvlc': The specified module could not be found. Has anyone made it work? | My favorite is the command .cmdtree <file> (undocumented, but referenced in previous release notes). This can assist in bringing up another window (that can be docked) to display helpful or commonly used commands. This can help make the user much more productive using the tool. Initially talked about here, with an example for the <file> parameter: http://blogs.msdn.com/debuggingtoolbox/archive/2008/09/17/special-command-execute-commands-from-a-customized-user-interface-with-cmdtree.aspx Example: alt text http://blogs.msdn.com/photos/debuggingtoolbox/images/8954736/original.aspx | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20459/"
]
} |
127,591 | How do I make Caps Lock work like Esc in Mac OS X? | Edit: As described in this answer , newer versions of MacOS now have native support for rebinding Caps Lock to Escape . Thus it is no longer necessary to install third-party software to achieve this. Here's my attempt at a comprehensive, visual walk-through answer (with links) of how to achieve this using Seil (formerly known as PCKeyboardHack ). First, go into the System Preferences , choose Keyboard , then the Keyboard Tab (first tab), and click Modifier Keys : In the popup dialog set Caps Lock Key to No Action : 2) Now, click here to download Seil and install it: 3) After the installation you will have a new Application installed ( Mountain Lion and newer ) and if you are on an older OS you may have to check for a new System Preferences pane: 4) Check the box that says "Change Caps Lock" and enter "53" as the code for the escape key: And you're done! If it doesn't work immediately, you may need to restart your machine. Impressed? Want More Control? You may also want to check out KeyRemap4MacBook which is actually the flagship keyboard remapping tool from pqrs.org - it's also free. If you like these tools you can make a donation . I have no affiliation with them but I've been using these tools for a long time and have to say the guys over there have been doing an excellent job maintaining these, adding features and fixing bugs. Here's a screenshot to show a few of the (hundreds of) pre-selectable options: PQRS also has a great utility called NoEjectDelay that you can use in combination with KeyRemap4MacBook for reprogramming the Eject key. After a little tweaking I have mine set to toggle the AirPort Wifi. These utilities offer unlimited flexibility when remapping the Mac keyboard. Have fun! | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/127591",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7706/"
]
} |
127,630 | I use MySQL in a fairly complex web site (PHP driven). Ideally, there would be a tool I could use that would help me test the SQL queries I am using and suggest better table indexes that will improve performance and avoid table scans. Failing that, something that will tell me exactly what each query is up to, so I can perform the optimisation myself. Edit: a simple guide to understanding the output from EXPLAIN ... would also be useful. Thank you. | Here's some info about EXPLAIN (referenced from the High Performance MySQL book from O'Reilly): When you run an EXPLAIN on a query, it tells you everything MySQL knows about that query in the form of reports for each table involved in the query. Each of these reports will tell you... the ID of the table (in the query) the table's role in a larger selection (if applicable, might just say SIMPLE if it's only one table) the name of the table (duh) the join type (if applicable, defaults to const ) a list of indexes on the table (or NULL if none), possible_keys the name of the index that MySQL decided to use, key the size of the key value (in bytes) ref shows the cols or values used to match against the key rows is the number of rows that MySQL thinks it needs to examine in order to satisfy the query. This should be kept as close to your calculated minimum as possible! ...then any extra information MySQL wishes to convey The book is completely awesome at providing information like this, so if you haven't already, get your boss to sign off on a purchase. Otherwise, I hope some more knowledgeable SO user can help :) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4012/"
]
} |
127,679 | How do I view the grants (access rights) for a given user in MySQL? | mysql> show grants for 'user'@'host' | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127679",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14059/"
]
} |
127,692 | My project is currently using an svn repository which gains several hundred new revisions per day. The repository resides on a Win2k3-server and is served through Apache/mod_dav_svn. I now fear that over time the performance will degrade due to too many revisions. Is this fear reasonable? We are already planning to upgrade to 1.5, so having thousands of files in one directory will not be a problem in the long term. Subversion only stores the delta (differences) between 2 revisions, so this helps save a LOT of space, especially if you only commit code (text) and no binaries (images and docs). Does that mean that in order to check out revision 10 of the file foo.baz, svn will take revision 1 and then apply the deltas 2-10? | What type of repo do you have? FSFS or BDB? (Let's assume FSFS for now, since that's the default.) In the case of FSFS, each revision is stored as a diff against the previous. So, you would think that yes, after many revisions, it would be very slow. However, this isn't the case. FSFS uses what are called "skip deltas" to avoid having to do too many lookups on previous revs. (So, if you are using an FSFS repo, Brad Wilson's answer is wrong.) In the case of a BDB repo, the HEAD (latest) revision is full-text, but the earlier revisions are built as a series of diffs against the head. This means the previous revs have to be re-calculated after each commit. For more info: http://svn.apache.org/repos/asf/subversion/trunk/notes/skip-deltas P.S. Our repo is about 20GB, with about 35,000 revisions, and we have not noticed any performance degradation. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127692",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21684/"
]
} |
127,693 | Can a firefox XPCOM component read and write page content across multiple pages? Scenario:A bunch of local HTML and javascript files. A "Main.html" file opens a window "pluginWindow", and creates a plugin using: netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect'); var obj = Components.classes[cid].createInstance(); plugin = obj.QueryInterface(Components.interfaces.IPlugin); plugin.addObserver(handleEvent); The plugin that has 3 methods. IPlugin.Read - Read data from plugin IPlugin.Write - Write data to the plugin IPlugin.addObserver - Add a callback handler for reading. The "Main.html" then calls into the pluginWindow and tries to call the plugin method Write. I receive an error: Permission denied to call method UnnamedClass.Write | What type of repo do you have? FSFS or BDB? (Let's assume FSFS for now, since that's the default.) In the case of FSFS, each revision is stored as a diff against the previous. So, you would think that yes, after many revisions, it would be very slow. However, this isn't the case. FSFS uses what are called "skip deltas" to avoid having to do too many lookups on previous revs. (So, if you are using an FSFS repo, Brad Wilson's answer is wrong.) In the case of a BDB repo, the HEAD (latest) revision is full-text, but the earlier revisions are built as a series of diffs against the head. This means the previous revs have to be re-calculated after each commit. For more info: http://svn.apache.org/repos/asf/subversion/trunk/notes/skip-deltas P.S. Our repo is about 20GB, with about 35,000 revisions, and we have not noticed any performance degradation. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127693",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
127,704 | I want to write a function that takes an array of letters as an argument and a number of those letters to select. Say you provide an array of 8 letters and want to select 3 letters from that. Then you should get: 8! / ((8 - 3)! * 3!) = 56 Arrays (or words) in return consisting of 3 letters each. | Art of Computer Programming Volume 4: Fascicle 3 has a ton of these that might fit your particular situation better than how I describe. Gray Codes An issue that you will come across is of course memory and pretty quickly, you'll have problems by 20 elements in your set -- 20 C 3 = 1140. And if you want to iterate over the set it's best to use a modified gray code algorithm so you aren't holding all of them in memory. These generate the next combination from the previous and avoid repetitions. There are many of these for different uses. Do we want to maximize the differences between successive combinations? minimize? et cetera. Some of the original papers describing gray codes: Some Hamilton Paths and a Minimal Change Algorithm Adjacent Interchange Combination Generation Algorithm Here are some other papers covering the topic: An Efficient Implementation of the Eades, Hickey, Read Adjacent Interchange Combination Generation Algorithm (PDF, with code in Pascal) Combination Generators Survey of Combinatorial Gray Codes (PostScript) An Algorithm for Gray Codes Chase's Twiddle (algorithm) Phillip J Chase, 'Algorithm 382: Combinations of M out of N Objects' (1970) The algorithm in C ... Index of Combinations in Lexicographical Order (Buckles Algorithm 515) You can also reference a combination by its index (in lexicographical order). Realizing that the index should be some amount of change from right to left based on the index, we can construct something that should recover a combination. So, we have a set {1,2,3,4,5,6}... and we want three elements. Let's say {1,2,3}: we can say that the difference between the elements is one and in order and minimal. {1,2,4} has one change and is lexicographically number 2. So the number of 'changes' in the last place accounts for one change in the lexicographical ordering. The second place, with one change {1,3,4}, has one change but accounts for more change since it's in the second place (proportional to the number of elements in the original set). The method I've described is a deconstruction, as it seems, from set to the index; we need to do the reverse – which is much trickier. This is how Buckles solves the problem. I wrote some C to compute them, with minor changes – I used the index of the sets rather than a number range to represent the set, so we are always working from 0...n. Note: Since combinations are unordered, {1,3,2} = {1,2,3} -- we order them to be lexicographical. This method has an implicit 0 to start the set for the first difference. Index of Combinations in Lexicographical Order (McCaffrey) There is another way: its concept is easier to grasp and program, but it's without the optimizations of Buckles. Fortunately, it also does not produce duplicate combinations. The m-th lexicographic combination of k things is the set {c_1, c_2, ..., c_k} with c_1 > c_2 > ... > c_k >= 0, chosen by taking each c_i in turn as large as possible, so that m = C(c_1, k) + C(c_2, k-1) + ... + C(c_k, 1). For an example: 27 = C(6,4) + C(5,3) + C(2,2) + C(1,1) . So, the 27th lexicographical combination of four things is: {1,2,5,6}, those are the indexes of whatever set you want to look at.
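To make the maximization concrete, here is the greedy walk for that example (index 27, k = 4), in the same C(n,k) notation; at each step we take the largest c whose binomial coefficient still fits into what is left of the index:
    27: largest c with C(c,4) <= 27 is c = 6 (C(6,4) = 15), leaving 27 - 15 = 12
    12: largest c with C(c,3) <= 12 is c = 5 (C(5,3) = 10), leaving 12 - 10 = 2
     2: largest c with C(c,2) <= 2  is c = 2 (C(2,2) = 1),  leaving 2 - 1 = 1
     1: largest c with C(c,1) <= 1  is c = 1 (C(1,1) = 1),  leaving 0
which yields the indexes {6, 5, 2, 1}, i.e. the combination {1,2,5,6} once sorted.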
Example below (OCaml), requires choose function, left to reader: (* this will find the [x] combination of a [set] list when taking [k] elements *)let combination_maccaffery set k x = (* maximize function -- maximize a that is aCb *) (* return largest c where c < i and choose(c,i) <= z *) let rec maximize a b x = if (choose a b ) <= x then a else maximize (a-1) b x in let rec iterate n x i = match i with | 0 -> [] | i -> let max = maximize n i x in max :: iterate n (x - (choose max i)) (i-1) in if x < 0 then failwith "errors" else let idxs = iterate (List.length set) x k in List.map (List.nth set) (List.sort (-) idxs) A small and simple combinations iterator The following two algorithms are provided for didactic purposes. They implement an iterator and (a more general) folder overall combinations.They are as fast as possible, having the complexity O( n C k ). The memory consumption is bound by k . We will start with the iterator, which will call a user provided function for each combination let iter_combs n k f = let rec iter v s j = if j = k then f v else for i = s to n - 1 do iter (i::v) (i+1) (j+1) done in iter [] 0 0 A more general version will call the user provided function along with the state variable, starting from the initial state. Since we need to pass the state between different states we won't use the for-loop, but instead, use recursion, let fold_combs n k f x = let rec loop i s c x = if i < n then loop (i+1) s c @@ let c = i::c and s = s + 1 and i = i + 1 in if s < k then loop i s c x else f c x else x in loop 0 0 [] x | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/127704",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9191/"
]
} |
127,739 | $a = '{ "tag": "<b></b>" }'; echo json_encode( json_decode($a) ); This outputs: {"tag":"<b><\/b>"} when you would think it would output exactly the input. For some reason json_encode adds an extra slash. | Because it's part of the JSON standard http://json.org/
char
    any-Unicode-character-except-"-or-\-or-control-character
    \"
    \\
    \/    <---- see here?
    \b
    \f
    \n
    \r
    \t
    \u four-hex-digits | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19087/"
]
} |
127,776 | Where can I find the specifications for the various C# languages? (EDIT: it appears people voted down because you could 'google' this, however, my original intent was to put an answer with information not found on google. I've accepted the answer with the best google results, as they are relevant to people who haven't paid for VS) | Microsoft's version (probably what you want) The formal standardised versions (via ECMA, created just so they could say it was "standardised" by some external body. Even though ECMA "standards" are effectively "Insert cash, vend standard"). Further ECMA standards | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127776",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7116/"
]
} |
127,803 | I need to parse RFC 3339 strings like "2008-09-03T20:56:35.450686Z" into Python's datetime type. I have found strptime in the Python standard library, but it is not very convenient. What is the best way to do this? | isoparse function from python-dateutil
The python-dateutil package has dateutil.parser.isoparse to parse not only RFC 3339 datetime strings like the one in the question, but also other ISO 8601 date and time strings that don't comply with RFC 3339 (such as ones with no UTC offset, or ones that represent only a date).
>>> import dateutil.parser
>>> dateutil.parser.isoparse('2008-09-03T20:56:35.450686Z') # RFC 3339 format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
>>> dateutil.parser.isoparse('2008-09-03T20:56:35.450686') # ISO 8601 extended format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.isoparse('20080903T205635.450686') # ISO 8601 basic format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.isoparse('20080903') # ISO 8601 basic format, date only
datetime.datetime(2008, 9, 3, 0, 0)
The python-dateutil package also has dateutil.parser.parse . Compared with isoparse , it is presumably less strict, but both of them are quite forgiving and will attempt to interpret the string that you pass in. If you want to eliminate the possibility of any misreads, you need to use something stricter than either of these functions. Comparison with Python 3.7+’s built-in datetime.datetime.fromisoformat
dateutil.parser.isoparse is a full ISO-8601 format parser, but in Python ≤ 3.10 fromisoformat is deliberately not. In Python 3.11, fromisoformat supports almost all strings in valid ISO 8601. See fromisoformat 's docs for this cautionary caveat. (See this answer ). | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/127803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/70293/"
]
} |
127,807 | To quote wikipedia: Scrum is facilitated by a ScrumMaster, whose primary job is to remove impediments to the ability of the team to deliver the sprint goal. The ScrumMaster is not the leader of the team (as they are self-organizing) but acts as a buffer between the team and any distracting influences. The ScrumMaster ensures that the Scrum process is used as intended. The ScrumMaster is the enforcer of rules." Working on this basis, and the fact that most businesses are running 2-3 projects at a time, what actual work tasks does a SM do to fill a full time job? Or, is it not a full time job and that individual do other things such as development, sales etc? Do any SM's out there have anything to share? | Unfortunately we don't have the luxury of having dedicated scrum masters. I am also a team leader and senior developer which more than fills the day. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/127807",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4787/"
]
} |
127,817 | I'm having a little problem and I don't see why, it's easy to go around it, but still I want to understand. I have the following class : public class AccountStatement : IAccountStatement{ public IList<IAccountStatementCharge> StatementCharges { get; set; } public AccountStatement() { new AccountStatement(new Period(new NullDate().DateTime,newNullDate().DateTime), 0); } public AccountStatement(IPeriod period, int accountID) { StatementCharges = new List<IAccountStatementCharge>(); StartDate = new Date(period.PeriodStartDate); EndDate = new Date(period.PeriodEndDate); AccountID = accountID; } public void AddStatementCharge(IAccountStatementCharge charge) { StatementCharges.Add(charge); } } (note startdate,enddate,accountID are automatic property to...) If I use it this way : var accountStatement = new AccountStatement{ StartDate = new Date(2007, 1, 1), EndDate = new Date(2007, 1, 31), StartingBalance = 125.05m }; When I try to use the method "AddStatementCharge: I end up with a "null" StatementCharges list... In step-by-step I clearly see that my list get a value, but as soon as I quit de instantiation line, my list become "null" | This code: public AccountStatement(){ new AccountStatement(new Period(new NullDate().DateTime,newNullDate().DateTime), 0);} is undoubtedly not what you wanted. That makes a second instance of AccountStatement and does nothing with it. I think what you meant was this instead: public AccountStatement() : this(new Period(new NullDate().DateTime, new NullDate().DateTime), 0){} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127817",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7419/"
]
} |
127,932 | I get this error when I do an svn update : Working copy XXXXXXXX locked Please execute "Cleanup" command When I run cleanup, I get Cleanup failed to process the following paths: XXXXXXXX How do I get out of this loop? | One approach would be to: Copy edited items to another location. Delete the folder containing the problem path. Update the containing folder through Subversion. Copy your files back or merge changes as needed. Commit Another option would be to delete the top level folder and check out again. Hopefully it doesn't come to that though. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/127932",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/230/"
]
} |
127,936 | I have an application that has created a number of custom event log sources to help filter its output. How can I delete the custom sources from the machine WITHOUT writing any code as running a quick program using System.Diagnostics.EventLog.Delete is not possible. I've tried using RegEdit to remove the custom sources from [HKEY_LOCAL_MACHINE\SYSTEM\ControlSetXXX\Services\Eventlog] however the application acts as if the logs still exist behind the scenes. What else am I missing? | I also think you're in the right place... it's stored in the registry, under the name of the event log. I have a custom event log, under which are multiple event sources. HKLM\System\CurrentControlSet\Services\Eventlog\LOGNAME\LOGSOURCE1 HKLM\System\CurrentControlSet\Services\Eventlog\LOGNAME\LOGSOURCE2 Those sources have an EventMessageFile key, which is REG_EXPAND_SZ and points to: C:\Windows\Microsoft.NET\Framework\v2.0.50727\EventLogMessages.dll I think if you delete the Key that is the log source, LOGSOURCE1 in my example, that should be all that's needed. For what it's worth, I tried it through .NET and that's what it did. However, it does look like each custom event log also has a source of the same name. If you have a custom log, that could affect your ability to clear it. You'd have to delete the log outright, perhaps. Further, if your app has an installer, I can see that the application name also may be registered as a source in the application event log. One more place to clear. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/127936",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15570/"
]
} |
127,973 | I've been aware of Steve Yegge's advice to swap Ctrl and Caps Lock for a while now, although I don't use Emacs. I've just tried swapping them over as an experiment and I'm finding it difficult to adjust. There are several shortcuts that are second nature to me now and I hadn't realised quite how ingrained they are in how I use the keyboard. In particular, I keep going to the old Ctrl key for Ctrl + Z (undo), and for cut, copy & paste operations ( Ctrl + X , C and V ). Experimenting with going from the home position to Ctrl + Z I don't know which finger to put on Z , as it feels awkward with either my ring, middle or index finger. Is this something I'll get used to the same way I've got used to the original position and I should just give it time or is this arrangement not suited to windows keyboard shortcuts . I'd be interested to hear from people who have successfully made the transition as well as those who have tried it and move back, but particularly from people who were doing it on windows . Will it lead to any improvement in my typing speed or comfort when typing. Do you have any tips for finger positions or typing training to speed up the transition. | I actually don't swap control and caps and just make caps ANOTHER control key. I can't think of a single time in my life when I have ever hit caps-lock on purpose , so I haven't missed it. That way, you get used to using it, but if you slip up and use the old control, things still work. It's worked out very well for me. There's a .reg file to do this here . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/127973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2541/"
]
} |
128,008 | When I iterate over the values or keys are they going to correlate? Will the second key map to the second value? | No, not necessarily. You should really use the entrySet().iterator() for this purpose. With this iterator, you will be walking through all Map.Entry objects in the Map and can access each key and associated value. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128008",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2443/"
]
} |
128,012 | I want to create a UITableView with varying row heights, and I'm trying to accomplish this by creating UILabels inside the UITableViewCells. Here's my code so far: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *MyIdentifier = @"EntryCell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:MyIdentifier] autorelease]; } UILabel *textView = [[UILabel alloc] initWithFrame:CGRectMake(10, 0, 300, 40)]; textView.numberOfLines = 0; textView.text = [entries objectAtIndex:[indexPath row]]; [cell.contentView addSubview:textView]; [textView release]; return cell;} This gives me 2 lines of text per cell. However, each "entry" has a different number of lines, and I want the UITableViewCells to resize automatically, wrapping text as necessary, without changing the font size. [textView sizeToFit] and/or [cell sizeToFit] don't seem to work. Here's how I want the UITableView to look: ----------------Lorem ipsum----------------Lorem ipsumLorem ipsum----------------Lorem ipsumLorem ipsumLorem ipsum----------------Lorem ipsum----------------Lorem ipsumLorem ipsum---------------- Does anyone know how to do this properly? Thanks. | The UITableViewDelegate defines an optional method heightForRowAtIndexPath, which will get you started. You then need to use sizeWithFont. There is some discussion of your precise problem here: http://www.v2ex.com/2008/09/18/how-to-make-uitableviewcell-have-variable-height/ Text sizing was also discussed in this thread | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128012",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2119/"
]
} |
128,016 | I'm writing a custom file selection component. In my UI, first the user clicks a button, which pops a JFileChooser ; when it is closed, the absolute path of the selected file is written to a JTextField . The problem is, absolute paths are usually long, which causes the text field to enlarge, making its container too wide. I've tried this, but it didn't do anything, the text field is still too wide: fileNameTextField.setMaximumSize(new java.awt.Dimension(450, 2147483647)); Currently, when it is empty, it is already 400px long, because of GridBagConstraints attached to it. I'd like it to be like text fields in HTML pages, which have a fixed size and do not enlarge when the input is too long. So, how do I set the max size for a JTextField ? | It may depend on the layout manager your text field is in. Some layout managers expand and some do not. Some expand only in some cases, others always. I'm assuming you're doing
fileNameTextField = new JTextField(80); // 80 == columns
If so, for most reasonable layouts, the field should not change size (at least, it shouldn't grow). Often layout managers behave badly when put into JScrollPane s. In my experience, trying to control the sizes via setMaximumSize and setPreferredWidth and so on is precarious at best. Swing decides on its own with the layout manager and there's little you can do about it. All that being said, I have not had the problem you are experiencing, which leads me to believe that some judicious use of a layout manager will solve the problem. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128016",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15649/"
]
} |
128,028 | We have a project that generates a code snippet that can be used on various other projects. The purpose of the code is to read two parameters from the query string and assign them to the "src" attribute of an iframe. For example, the page at the URL http://oursite/Page.aspx?a=1&b=2 would have JavaScript in it to read the "a" and "b" parameters. The JavaScript would then set the "src" attribute of an iframe based on those parameters. For example, "<iframe src="http://someothersite/Page.aspx?a=1&b=2" />" We're currently doing this with server-side code that uses Microsoft's Anti Cross-Scripting library to check the parameters. However, a new requirement has come stating that we need to use JavaScript, and that it can't use any third-party JavaScript tools (such as jQuery or Prototype). One way I know of is to replace any instances of "<", single quote, and double quote from the parameters before using them, but that doesn't seem secure enough to me. One of the parameters is always a "P" followed by 9 integers. The other parameter is always 15 alpha-numeric characters. (Thanks Liam for suggesting I make that clear). Does anybody have any suggestions for us? Thank you very much for your time. | Update Sep 2022: Most JS runtimes now have a URL type which exposes query parameters via the searchParams property. You need to supply a base URL even if you just want to get URL parameters from a relative URL, but it's better than rolling your own.
let searchParams/*: URLSearchParams*/ = new URL(
    myUrl,
    // Supply a base URL whose scheme allows
    // query parameters in case `myUrl` is scheme or
    // path relative.
    'http://example.com/').searchParams;
console.log(searchParams.get('paramName')); // One value
console.log(searchParams.getAll('paramName'));
The difference between .get and .getAll is that the second returns an array, which can be important if the same parameter name is mentioned multiple times as in /path?foo=bar&foo=baz . Don't use escape and unescape, use decodeURIComponent. E.g.
function queryParameters(query) {
    var keyValuePairs = query.split(/[&?]/g);
    var params = {};
    for (var i = 0, n = keyValuePairs.length; i < n; ++i) {
        var m = keyValuePairs[i].match(/^([^=]+)(?:=([\s\S]*))?/);
        if (m) {
            var key = decodeURIComponent(m[1]);
            (params[key] || (params[key] = [])).push(decodeURIComponent(m[2]));
        }
    }
    return params;
}
and pass in document.location.search. As far as turning < into &lt;, that is not sufficient to make sure that the content can be safely injected into HTML without allowing script to run. Make sure you escape the following: <, >, &, and ". It will not guarantee that the parameters were not spoofed. If you need to verify that one of your servers generated the URL, do a search on URL signing. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128028",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21732/"
]
} |
128,034 | In a recent conversation, I mentioned that I was using JavaScript for a web application. That comment prompted a response: "You should use Flex instead. It will cut your development time down and JavaScript is too hard to debug and maintain. You need to use the right tool for the right job." Now, I don't know too much about Flex, but I personally don't feel like JavaScript is too hard to debug or maintain, especially if you use a framework. JavaScript is also one of the most used languages right now, so it would seem a better choice in that regard too. However, his reply piqued my interest. Would Flex be a good choice for a distributable web app for which 3rd party developers could build add-ons? What are the advantages of using it vs. a JavaScript framework? What are some of the disadvantages? | I would push you towards standard web development technologies in most cases. Javascript is no longer a great challenge to debug or maintain with good libs like jQuery/Prototype to iron out some of the browser inconsistencies and tools like Firebug and the MS script debugger to help with debugging. There are cases when Flash is a better option, but only in cases where you are doing complex animations. And, if you are willing to invest the effort, most animations can be achieved without resorting to flash. A couple of examples ... Flash content is not as accessible as other content. This will not only affect people with out flash, but also search engine spiders. There may be some hacks to help get around this now, but I think that most flash content will never be indexed by google. Flash breaks the web UI. For example: If I click my mouse wheel on a link,that link is opened in a backgroundtab. In a flash app there is no wayto simulate this behavior. If I select text in my browser andright-click I get options providedby the browser that include thingslike "Search Google for this text". In a flash app those options are nolonger there. If I right click on a link or animage I get a different set ofoptions that are not available in aflash app. This can be veryfrustrating to a user who is not"flash savvy". | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128034",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13281/"
]
} |
128,035 | Note: while the use-case described is about using submodules within a project, the same applies to a normal git clone of a repository over HTTP. I have a project under Git control. I'd like to add a submodule: git submodule add http://github.com/jscruggs/metric_fu.git vendor/plugins/metric_fu But I get ...got 1b0313f016d98e556396c91d08127c59722762d0got 4c42d44a9221209293e5f3eb7e662a1571b09421got b0d6414e3ca5c2fb4b95b7712c7edbf7d2becac7error: Unable to find abc07fcf79aebed56497e3894c6c3c06046f913a under http://github.com/jscruggs/metri...Cannot obtain needed commit abc07fcf79aebed56497e3894c6c3c06046f913awhile processing commit ee576543b3a0820cc966cc10cc41e6ffb3415658.fatal: Fetch failed.Clone of 'http://github.com/jscruggs/metric_fu.git' into submodule path 'vendor/plugins/metric_fu' I have my HTTP_PROXY set up: c:\project> echo %HTTP_PROXY%http://proxy.mycompany:80 I even have a global Git setting for the http proxy: c:\project> git config --get http.proxyhttp://proxy.mycompany:80 Has anybody gotten HTTP fetches to consistently work through a proxy? What's really strange is that a few project on GitHub work fine ( awesome_nested_set for example), but others consistently fail ( rails for example). | You can also set the HTTP proxy that Git uses in global configuration property http.proxy : git config --global http.proxy http://proxy.mycompany:80 To authenticate with the proxy: git config --global http.proxy http://mydomain\\myusername:mypassword@myproxyserver:8080/ (Credit goes to @EugeneKulabuhov and @JaimeReynoso for the authentication format.) | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/128035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1190/"
]
} |
128,038 | I have a Java process that opens a file using a FileReader. How can I prevent another (Java) process from opening this file, or at least notify that second process that the file is already opened? Does this automatically make the second process get an exception if the file is open (which solves my problem) or do I have to explicitly open it in the first process with some sort of flag or argument? To clarify: I have a Java app that lists a folder and opens each file in the listing for processing it. It processes each file after the other. The processing of each file consists of reading it and doing some calculations based on the contents and it takes about 2 minutes. I also have another Java app that does the same thing but instead writes on the file. What I want is to be able to run these apps at the same time so the scenario goes like this. ReadApp lists the folder and finds files A, B, C. It opens file A and starts the reading. WriteApp lists the folder and finds files A, B, C. It opens file A, sees that is is open (by an exception or whatever way) and goes to file B. ReadApp finishes file A and continues to B. It sees that it is open and continues to C. It is crucial that WriteApp doesn't write while ReadApp is reading the same file or vice versa. They are different processes. | FileChannel.lock is probably what you want. try ( FileInputStream in = new FileInputStream(file); java.nio.channels.FileLock lock = in.getChannel().lock(); Reader reader = new InputStreamReader(in, charset)) { ...} (Disclaimer: Code not compiled and certainly not tested.) Note the section entitled "platform dependencies" in the API doc for FileLock . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11906/"
]
} |
128,042 | In Java, I use a class in which some fields can be null . For example: class Foo { String bar; //....} I want to write a BarComparator for this class, private static class BarComparator implements Comparator<Foo> { public int compare( final Foo o1, final Foo o2 ) { // Implementation goes here } } Is there a standard way to deal with the fact that any of o1 , o2 , o1.bar , o2.bar can be null , without writing lots of nested if ... else ? Cheers! | I guess you could wrap the call to the field compareTo method with a small static method to sort nulls high or low: static <T extends Comparable<T>> int cp(T a, T b) { return a==null ? (b==null ? 0 : Integer.MIN_VALUE) : (b==null ? Integer.MAX_VALUE : a.compareTo(b));} Simple usage (multiple fields is as you would normally): public int compare( final Foo o1, final Foo o2 ) { return cp(o1.field, o2.field);} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2797/"
]
} |
128,043 | I have several log files of events (one event per line). The logs can possibly overlap. The logs are generated on separate client machines from possibly multiple time zones (but I assume I know the time zone). Each event has a timestamp that was normalized into a common time (by instantianting each log parsers calendar instance with the timezone appropriate to the log file and then using getTimeInMillis to get the UTC time). The logs are already sorted by timestamp. Multiple events can occur at the same time, but they are by no means equal events. These files can be relatively large, as in, 500000 events or more in a single log, so reading the entire contents of the logs into a simple Event[] is not feasible. What I am trying do is merge the events from each of the logs into a single log. It is kinda like a mergesort task, but each log is already sorted, I just need to bring them together. The second component is that the same event can be witnessed in each of the separate log files, and I want to "remove duplicate events" in the file output log. Can this be done "in place", as in, sequentially working over some small buffers of each log file? I can't simply read in all the files into an Event[], sort the list, and then remove duplicates, but so far my limited programming capabilities only enable me to see this as the solution. Is there some more sophisticated approach that I can use to do this as I read events from each of the logs concurrently? | Read the first line from each of the log files LOOP a. Find the "earliest" line. b. Insert the "earliest" line into the master log file c. Read the next line from the file that contained the earliest line You could check for duplicates between b and c, advancing the pointer for each of those files. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128043",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2204759/"
]
} |
128,057 | What do you think the benefits of functional programming are? And how do they apply to programmers today? What are the greatest differences between functional programming and OOP? | The style of functional programming is to describe what you want, rather than how to get it. ie: instead of creating a for-loop with an iterator variable and marching through an array doing something to each cell, you'd say the equivalent of "this label refers to a version of this array where this function has been done on all the elements." Functional programming moves more basic programming ideas into the compiler, ideas such as list comprehensions and caching. The biggest benefit of Functional programming is brevity, because code can be more concise. A functional program doesn't create an iterator variable to be the center of a loop, so this and other kinds of overhead are eliminated from your code. The other major benefit is concurrency, which is easier to do with functional programming because the compiler is taking care of most of the operations which used to require manually setting up state variables (like the iterator in a loop). Some performance benefits can be seen in the context of a single-processor as well, depending on the way the program is written, because most functional languages and extensions support lazy evaluation. In Haskell you can say "this label represents an array containing all the even numbers". Such an array is infinitely large, but you can ask for the 100,000th element of that array at any moment without having to know--at array initialization time--just what the largest value is you're going to need. The value will be calculated only when you need it, and no further. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/128057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21734/"
]
} |
128,104 | What is a good implementation of an IsLeapYear function in VBA? Edit: I ran the if-then and the DateSerial implementation with iterations wrapped in a timer, and the DateSerial was quicker on the average by 1-2 ms (5 runs of 300 iterations, with 1 average cell worksheet formula also working). | Public Function isLeapYear(Yr As Integer) As Boolean
    ' returns FALSE if not Leap Year, TRUE if Leap Year
    isLeapYear = (Month(DateSerial(Yr, 2, 29)) = 2)
End Function
I originally got this function from Chip Pearson's great Excel site. Pearson's site | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128104",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13295/"
]
} |
128,129 | The Database Tuning Advisor is recommending that I create a bunch of statistics in my Database. I'm something of a SQL n00b, so this was the first time I'd ever come across such a creature. The entry in MSDN was a little obtuse - could someone explain what exactly this does, and why it's a good idea? | Cost Based Query Optimisation is a technique that uses histograms and row counts to heuristically estimate the cost of executing a query plan. When you submit a query to SQL Server, it evaluates it and generates a series of Query Plans for which it uses heuristics to estimate the costs. It then selects the cheapest query plan. Statistics are used by the query optimiser to calculate the cost of the query plans. If the statistics are missing or out of date it does not have correct data to estimate the plan. In this case it can generate query plans that are moderately or highly sub-optimal. SQL Server will (under most circumstances) generate statistics on most tables and indexes automatically but you can supplement these or force refreshes. The query tuning wizard has presumably found some missing statistics or identified joins within the query that statistics should be added for. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128129",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19074/"
]
} |
128,162 | My program generates relatively simple PDF documents on request, but I'm having trouble with unicode characters, like kanji or odd math symbols. To write a normal string in PDF, you place it in brackets: (something) There is also the option to escape a character with octal codes: (\527) but this only goes up to 512 characters. How do you encode or escape higher characters? I've seen references to byte streams and hex-encoded strings, but none of the references I've read seem to be willing to tell me how to actually do it. Edit: Alternatively, point me to a good Java PDF library that will do the job for me. The one I'm currently using is a version of gnujpdf (which I've fixed several bugs in, since the original author appears to have gone AWOL), that allows you to program against an AWT Graphics interface, and ideally any replacement should do the same. The alternatives seem to be either HTML -> PDF, or a programmatic model based on paragraphs and boxes that feels very much like HTML. iText is an example of the latter. This would mean rewriting my existing code, and I'm not convinced they'd give me the same flexibility in laying out. Edit 2: I didn't realise before, but the iText library has a Graphics2D API and seems to handle unicode perfectly, so that's what I'll be using. Though it isn't an answer to the question as asked, it solves the problem for me. Edit 3: iText is working nicely for me. I guess the lesson is, when faced with something that seems pointlessly difficult, look for somebody who knows more about it than you. | The simple answer is that there's no simple answer. If you take a look at the PDF specification, you'll see an entire chapter — and a long one at that — devoted to the mechanisms of text display. I implemented all of the PDF support for my company, and handling text was by far the most complex part of exercise. The solution you discovered — use a 3rd party library to do the work for you — is really the best choice, unless you have very specific, special-purpose requirements for your PDF files. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128162",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1000/"
]
} |
128,277 | UPDATE I'm basically binding the query to a WinForms DataGridView . I want the column headers to be appropriate and have spaces when needed. For example, I would want a column header to be First Name instead of FirstName . How do you create your own custom column names in LINQ? For example: Dim query = From u In db.Users _ Select u.FirstName AS 'First Name' | As CQ states, you can't have a space for the field name, you can return new columns however. var query = from u in db.Users select new { FirstName = u.FirstName, LastName = u.LastName, FullName = u.FirstName + " " + u.LastName }; Then you can bind to the variable query from above or loop through it whatever.... foreach (var u in query){ // Full name will be available now Debug.Print(u.FullName); } If you wanted to rename the columns, you could, but spaces wouldn't be allowed. var query = from u in db.Users select new { First = u.FirstName, Last = u.LastName }; Would rename the FirstName to First and LastName to Last. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128277",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299/"
]
} |
128,342 | For a project of mine I would love to provide auto completion for a specific textarea. Similar to how intellisense/omnicomplete works. For that however I have to find out the absolute cursor position so that I know where the DIV should appear. Turns out: that's (nearly I hope) impossible to achieve. Does anyone has some neat ideas how to solve that problem? | Version 2 of My Hacky Experiment This new version works with any font, which can be adjusted on demand, and any textarea size. After noticing that some of you are still trying to get this to work, I decided to try a new approach. My results are FAR better this time around - at least on google chrome on linux. I no longer have a windows PC available to me, so I can only test on chrome / firefox on Ubuntu. My results work 100% consistently on Chrome, and let's say somewhere around 70 - 80% on Firefox, but I don't imagine it would be incredibly difficult to find the inconsistencies. This new version relies on a Canvas object. In my example , I actually show that very canvas - just so you can see it in action, but it could very easily be done with a hidden canvas object. This is most certainly a hack, and I apologize ahead of time for my rather thrown together code. At the very least, in google chrome, it works consistently, no matter what font I set it to, or size of textarea. I used Sam Saffron 's example to show cursor coordinates (a gray-background div). I also added a "Randomize" link, so you can see it work in different font / texarea sizes and styles, and watch the cursor position update on the fly. I recommend looking at the full page demo so you can better see the companion canvas play along. I'll summarize how it works ... The underlying idea is that we're trying to redraw the textarea on a canvas, as closely as possible. Since the browser uses the same font engine for both and texarea, we can use canvas's font measurement functionality to figure out where things are. From there, we can use the canvas methods available to us to figure out our coordinates. First and foremost, we adjust our canvas to match the dimensions of the textarea. This is entirely for visual purposes since the canvas size doesn't really make a difference in our outcome. Since Canvas doesn't actually provide a means of word wrap, I had to conjure (steal / borrow / munge together) a means of breaking up lines to as-best-as-possible match the textarea. This is where you'll likely find you need to do the most cross-browser tweaking. After word wrap, everything else is basic math. We split the lines into an array to mimic the word wrap, and now we want to loop through those lines and go all the way down until the point where our current selection ends. In order to do that, we're just counting characters and once we surpass selection.end , we know we have gone down far enough. Multiply the line count up until that point with the line-height and you have a y coordinate. The x coordinate is very similar, except we're using context.measureText . As long as we're printing out the right number of characters, that will give us the width of the line that's being drawn to Canvas, which happens to end after the last character written out, which is the character before the currentl selection.end position. When trying to debug this for other browsers, the thing to look for is where the lines don't break properly. You'll see in some places that the last word on a line in canvas may have wrapped over on the textarea or vice-versa. 
This has to do with how the browser handles word wraps. As long as you get the wrapping in the canvas to match the textarea, your cursor should be correct. I'll paste the source below. You should be able to copy and paste it, but if you do, I ask that you download your own copy of jquery-fieldselection instead of hitting the one on my server. I've also upped a new demo as well as a fiddle . Good luck! <!DOCTYPE html><html lang="en-US"> <head> <meta charset="utf-8" /> <title>Tooltip 2</title> <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script> <script type="text/javascript" src="http://enobrev.info/cursor/js/jquery-fieldselection.js"></script> <style type="text/css"> form { float: left; margin: 20px; } #textariffic { height: 400px; width: 300px; font-size: 12px; font-family: 'Arial'; line-height: 12px; } #tip { width:5px; height:30px; background-color: #777; position: absolute; z-index:10000 } #mock-text { float: left; margin: 20px; border: 1px inset #ccc; } /* way the hell off screen */ .scrollbar-measure { width: 100px; height: 100px; overflow: scroll; position: absolute; top: -9999px; } #randomize { float: left; display: block; } </style> <script type="text/javascript"> var oCanvas; var oTextArea; var $oTextArea; var iScrollWidth; $(function() { iScrollWidth = scrollMeasure(); oCanvas = document.getElementById('mock-text'); oTextArea = document.getElementById('textariffic'); $oTextArea = $(oTextArea); $oTextArea .keyup(update) .mouseup(update) .scroll(update); $('#randomize').bind('click', randomize); update(); }); function randomize() { var aFonts = ['Arial', 'Arial Black', 'Comic Sans MS', 'Courier New', 'Impact', 'Times New Roman', 'Verdana', 'Webdings']; var iFont = Math.floor(Math.random() * aFonts.length); var iWidth = Math.floor(Math.random() * 500) + 300; var iHeight = Math.floor(Math.random() * 500) + 300; var iFontSize = Math.floor(Math.random() * 18) + 10; var iLineHeight = Math.floor(Math.random() * 18) + 10; var oCSS = { 'font-family': aFonts[iFont], width: iWidth + 'px', height: iHeight + 'px', 'font-size': iFontSize + 'px', 'line-height': iLineHeight + 'px' }; console.log(oCSS); $oTextArea.css(oCSS); update(); return false; } function showTip(x, y) { $('#tip').css({ left: x + 'px', top: y + 'px' }); } // https://stackoverflow.com/a/11124580/14651 // https://stackoverflow.com/a/3960916/14651 function wordWrap(oContext, text, maxWidth) { var aSplit = text.split(' '); var aLines = []; var sLine = ""; // Split words by newlines var aWords = []; for (var i in aSplit) { var aWord = aSplit[i].split('\n'); if (aWord.length > 1) { for (var j in aWord) { aWords.push(aWord[j]); aWords.push("\n"); } aWords.pop(); } else { aWords.push(aSplit[i]); } } while (aWords.length > 0) { var sWord = aWords[0]; if (sWord == "\n") { aLines.push(sLine); aWords.shift(); sLine = ""; } else { // Break up work longer than max width var iItemWidth = oContext.measureText(sWord).width; if (iItemWidth > maxWidth) { var sContinuous = ''; var iWidth = 0; while (iWidth <= maxWidth) { var sNextLetter = sWord.substring(0, 1); var iNextWidth = oContext.measureText(sContinuous + sNextLetter).width; if (iNextWidth <= maxWidth) { sContinuous += sNextLetter; sWord = sWord.substring(1); } iWidth = iNextWidth; } aWords.unshift(sContinuous); } // Extra space after word for mozilla and ie var sWithSpace = (jQuery.browser.mozilla || jQuery.browser.msie) ? 
' ' : ''; var iNewLineWidth = oContext.measureText(sLine + sWord + sWithSpace).width; if (iNewLineWidth <= maxWidth) { // word fits on current line to add it and carry on sLine += aWords.shift() + " "; } else { aLines.push(sLine); sLine = ""; } if (aWords.length === 0) { aLines.push(sLine); } } } return aLines; } // http://davidwalsh.name/detect-scrollbar-width function scrollMeasure() { // Create the measurement node var scrollDiv = document.createElement("div"); scrollDiv.className = "scrollbar-measure"; document.body.appendChild(scrollDiv); // Get the scrollbar width var scrollbarWidth = scrollDiv.offsetWidth - scrollDiv.clientWidth; // Delete the DIV document.body.removeChild(scrollDiv); return scrollbarWidth; } function update() { var oPosition = $oTextArea.position(); var sContent = $oTextArea.val(); var oSelection = $oTextArea.getSelection(); oCanvas.width = $oTextArea.width(); oCanvas.height = $oTextArea.height(); var oContext = oCanvas.getContext("2d"); var sFontSize = $oTextArea.css('font-size'); var sLineHeight = $oTextArea.css('line-height'); var fontSize = parseFloat(sFontSize.replace(/[^0-9.]/g, '')); var lineHeight = parseFloat(sLineHeight.replace(/[^0-9.]/g, '')); var sFont = [$oTextArea.css('font-weight'), sFontSize + '/' + sLineHeight, $oTextArea.css('font-family')].join(' '); var iSubtractScrollWidth = oTextArea.clientHeight < oTextArea.scrollHeight ? iScrollWidth : 0; oContext.save(); oContext.clearRect(0, 0, oCanvas.width, oCanvas.height); oContext.font = sFont; var aLines = wordWrap(oContext, sContent, oCanvas.width - iSubtractScrollWidth); var x = 0; var y = 0; var iGoal = oSelection.end; aLines.forEach(function(sLine, i) { if (iGoal > 0) { oContext.fillText(sLine.substring(0, iGoal), 0, (i + 1) * lineHeight); x = oContext.measureText(sLine.substring(0, iGoal + 1)).width; y = i * lineHeight - oTextArea.scrollTop; var iLineLength = sLine.length; if (iLineLength == 0) { iLineLength = 1; } iGoal -= iLineLength; } else { // after } }); oContext.restore(); showTip(oPosition.left + x, oPosition.top + y); } </script> </head> <body> <a href="#" id="randomize">Randomize</a> <form id="tipper"> <textarea id="textariffic">Aliquam urna. Nullam augue dolor, tincidunt condimentum, malesuada quis, ultrices at, arcu. Aliquam nunc pede, convallis auctor, sodales eget, aliquam eget, ligula. Proin nisi lacus, scelerisque nec, aliquam vel, dictum mattis, eros. Curabitur et neque. Fusce sollicitudin. Quisque at risus. Suspendisse potenti. Mauris nisi. Sed sed enim nec dui viverra congue. Phasellus velit sapien, porttitor vitae, blandit volutpat, interdum vel, enim. Cras sagittis bibendum neque. Proin eu est. Fusce arcu. Aliquam elit nisi, malesuada eget, dignissim sed, ultricies vel, purus. Maecenas accumsan diam id nisi.Phasellus et nunc. Vivamus sem felis, dignissim non, lacinia id, accumsan quis, ligula. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Sed scelerisque nulla sit amet mi. Nulla consequat, elit vitae tempus vulputate, sem libero rhoncus leo, vulputate viverra nulla purus nec turpis. Nam turpis sem, tincidunt non, congue lobortis, fermentum a, ipsum. Nulla facilisi. Aenean facilisis. Maecenas a quam eu nibh lacinia ultricies. Morbi malesuada orci quis tellus.Sed eu leo. Donec in turpis. Donec non neque nec ante tincidunt posuere. Pellentesque blandit. Ut vehicula vestibulum risus. Maecenas commodo placerat est. Integer massa nunc, luctus at, accumsan non, pulvinar sed, odio. Pellentesque eget libero iaculis dui iaculis vehicula. 
Curabitur quis nulla vel felis ullamcorper varius. Sed suscipit pulvinar lectus.</textarea> </form> <div id="tip"></div> <canvas id="mock-text"></canvas> </body></html> Bug There's one bug I do recall. If you put the cursor before the first letter on a line, it shows the "position" as the last letter on the previous line. This has to do with how selection.end work. I don't think it should be too difficult to look for that case and fix it accordingly. Version 1 Leaving this here so you can see the progress without having to dig through the edit history. It's not perfect and it's most Definitely a hack, but I got it to work pretty well on WinXP IE, FF, Safari, Chrome and Opera. As far as I can tell there's no way to directly find out the x/y of a cursor on any browser. The IE method , mentioned by Adam Bellaire is interesting, but unfortunately not cross-browser. I figured the next best thing would be to use the characters as a grid. Unfortunately there's no font metric information built into any of the browsers, which means a monospace font is the only font type that's going to have a consistent measurement. Also, there's no reliable means of figuring out a font-width from the font-height. At first I'd tried using a percentage of the height, which worked great. Then I changed the font-size and everything went to hell. I tried one method to figure out character width, which was to create a temporary textarea and keep adding characters until the scrollHeight (or scrollWidth) changed. It seems plausable, but about halfway down that road, I realized I could just use the cols attribute on the textarea and figured there are enough hacks in this ordeal to add another one. This means you can't set the width of the textarea via css. You HAVE to use the cols for this to work. The next problem I ran into is that, even when you set the font via css, the browsers report the font differently. When you don't set a font, mozilla uses monospace by default, IE uses Courier New , Opera "Courier New" (with quotes), Safari, 'Lucida Grand' (with single quotes). When you do set the font to monospace , mozilla and ie take what you give them, Safari comes out as -webkit-monospace and Opera stays with "Courier New" . So now we initialize some vars. Make sure to set your line height in the css as well. Firefox reports the correct line height, but IE was reporting "normal" and I didn't bother with the other browsers. I just set the line height in my css and that resolved the difference. I haven't tested with using ems instead of pixels. Char height is just font size. Should probably pre-set that in your css as well. Also, one more pre-setting before we start placing characters - which really had me scratching my head. For ie and mozilla, texarea chars are < cols, everything else is <= chars. So Chrome can fit 50 chars across, but mozilla and ie would break the last word off the line. Now we're going to create an array of first-character positions for every line. We loop through every char in the textarea. If it's a newline, we add a new position to our line array. If it's a space, we try to figure out if the current "word" will fit on the line we're on or if it's going to get pushed to the next line. Punctuation counts as a part of the "word". I haven't tested with tabs, but there's a line there for adding 4 chars for a tab char. Once we have an array of line positions, we loop through and try to find which line the cursor is on. We're using hte "End" of the selection as our cursor. 
x = (cursor position - first character position of cursor line) * character width y = ((cursor line + 1) * line height) - scroll position I'm using jquery 1.2.6 , jquery-fieldselection , and jquery-dimensions The Demo: http://enobrev.info/cursor/ And the code: <?xml version="1.0" encoding="UTF-8" ?><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Tooltip</title> <script type="text/javascript" src="js/jquery-1.2.6.js"></script> <script type="text/javascript" src="js/jquery-fieldselection.js"></script> <script type="text/javascript" src="js/jquery.dimensions.js"></script> <style type="text/css"> form { margin: 20px auto; width: 500px; } #textariffic { height: 400px; font-size: 12px; font-family: monospace; line-height: 15px; } #tip { position: absolute; z-index: 2; padding: 20px; border: 1px solid #000; background-color: #FFF; } </style> <script type="text/javascript"> $(function() { $('textarea') .keyup(update) .mouseup(update) .scroll(update); }); function showTip(x, y) { y = y + $('#tip').height(); $('#tip').css({ left: x + 'px', top: y + 'px' }); } function update() { var oPosition = $(this).position(); var sContent = $(this).val(); var bGTE = jQuery.browser.mozilla || jQuery.browser.msie; if ($(this).css('font-family') == 'monospace' // mozilla || $(this).css('font-family') == '-webkit-monospace' // Safari || $(this).css('font-family') == '"Courier New"') { // Opera var lineHeight = $(this).css('line-height').replace(/[^0-9]/g, ''); lineHeight = parseFloat(lineHeight); var charsPerLine = this.cols; var charWidth = parseFloat($(this).innerWidth() / charsPerLine); var iChar = 0; var iLines = 1; var sWord = ''; var oSelection = $(this).getSelection(); var aLetters = sContent.split(""); var aLines = []; for (var w in aLetters) { if (aLetters[w] == "\n") { iChar = 0; aLines.push(w); sWord = ''; } else if (aLetters[w] == " ") { var wordLength = parseInt(sWord.length); if ((bGTE && iChar + wordLength >= charsPerLine) || (!bGTE && iChar + wordLength > charsPerLine)) { iChar = wordLength + 1; aLines.push(w - wordLength); } else { iChar += wordLength + 1; // 1 more char for the space } sWord = ''; } else if (aLetters[w] == "\t") { iChar += 4; } else { sWord += aLetters[w]; } } var iLine = 1; for(var i in aLines) { if (oSelection.end < aLines[i]) { iLine = parseInt(i) - 1; break; } } if (iLine > -1) { var x = parseInt(oSelection.end - aLines[iLine]) * charWidth; } else { var x = parseInt(oSelection.end) * charWidth; } var y = (iLine + 1) * lineHeight - this.scrollTop; // below line showTip(oPosition.left + x, oPosition.top + y); } } </script> </head> <body> <form id="tipper"> <textarea id="textariffic" cols="50">Aliquam urna. Nullam augue dolor, tincidunt condimentum, malesuada quis, ultrices at, arcu. Aliquam nunc pede, convallis auctor, sodales eget, aliquam eget, ligula. Proin nisi lacus, scelerisque nec, aliquam vel, dictum mattis, eros. Curabitur et neque. Fusce sollicitudin. Quisque at risus. Suspendisse potenti. Mauris nisi. Sed sed enim nec dui viverra congue. Phasellus velit sapien, porttitor vitae, blandit volutpat, interdum vel, enim. Cras sagittis bibendum neque. Proin eu est. Fusce arcu. Aliquam elit nisi, malesuada eget, dignissim sed, ultricies vel, purus. Maecenas accumsan diam id nisi.Phasellus et nunc. Vivamus sem felis, dignissim non, lacinia id, accumsan quis, ligula. 
Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Sed scelerisque nulla sit amet mi. Nulla consequat, elit vitae tempus vulputate, sem libero rhoncus leo, vulputate viverra nulla purus nec turpis. Nam turpis sem, tincidunt non, congue lobortis, fermentum a, ipsum. Nulla facilisi. Aenean facilisis. Maecenas a quam eu nibh lacinia ultricies. Morbi malesuada orci quis tellus.Sed eu leo. Donec in turpis. Donec non neque nec ante tincidunt posuere. Pellentesque blandit. Ut vehicula vestibulum risus. Maecenas commodo placerat est. Integer massa nunc, luctus at, accumsan non, pulvinar sed, odio. Pellentesque eget libero iaculis dui iaculis vehicula. Curabitur quis nulla vel felis ullamcorper varius. Sed suscipit pulvinar lectus. </textarea> </form> <p id="tip">Here I Am!!</p> </body></html> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128342",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19990/"
]
} |
128,343 | I am currently initializing a Hashtable in the following way: Hashtable filter = new Hashtable();filter.Add("building", "A-51");filter.Add("apartment", "210"); I am looking for a nicer way to do this. I tried something like Hashtable filter2 = new Hashtable() { {"building", "A-51"}, {"apartment", "210"}}; However the above code does not compile. | The exact code you posted: Hashtable filter2 = new Hashtable() { {"building", "A-51"}, {"apartment", "210"} }; Compiles perfectly in C# 3. Given you reported compilation problems, I'm guessing you are using C# 2? In this case you can at least do this: Hashtable filter2 = new Hashtable(); filter2["building"] = "A-51"; filter2["apartment"] = "210"; | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128343",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14755/"
]
} |
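As a side note to the answer above: if generics are available, the same C# 3 collection-initializer syntax works with a strongly typed dictionary. A rough sketch (variable name kept from the question; requires System.Collections.Generic):

    // Dictionary<TKey, TValue> accepts the same { key, value } initializer in C# 3
    var filter = new Dictionary<string, string>
    {
        { "building", "A-51" },
        { "apartment", "210" }
    };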
128,349 | Date coming out of a database, need to format as "mm/dd/yy" For Each dr as DataRow in ds.Tables(0).Rows : Response.Write(dr("CreateDate")) : Next | string.Format( "{0:MM/dd/yy}", dr("CreateDate") ) Edit: If dr("CreateDate") is DBNull, this returns "". | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/70/"
]
} |
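Dropped back into the loop from the question, the answer's format string would look roughly like this (VB.NET/ASP.NET sketch, untested):

    For Each dr As DataRow In ds.Tables(0).Rows
        ' {0:MM/dd/yy} renders e.g. 09/24/08; swap the pattern for other layouts
        Response.Write(String.Format("{0:MM/dd/yy}", dr("CreateDate")))
    Next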
128,352 | I didn't upgrade to Vista until May or so and one of the things I've always heard developers I know in real life say is "first thing you should do is turn off that UAC crap" Well, I've left it on this whole time for a few reasons. First, just as a failsafe in case I do something idiotic like have a momentary lapse of reason and run an attachment from an email, or in case I view a site which hits some unpatched exploit. Second, as a big of an experiment to see how good or bad it really is. Finally, I figure that it enforces some better practices. I used to develop every website in Windows directly in inetpub\wwwroot (Visual Studio .NET 2003 more or less required this) but now I develop them elsewhere because the UAC clickfest is a nightmare. I figure this is Microsoft's way of saying "you should really be doing it this way". By way of another analogy - if you wrote a web app which runs on XP and 2000 just fine but requires 50 different security features of Server 2003 to be turned off, the real solution might be instead to just fix the application such that it doesn't require the security features to be turned off. But now I'm having to work with an app which is really really NOT designed to be developed outside of inetpub/wwwroot and so UAC is really a nuisance. It's beyond the scope of the project to rectify this. I want to stick to my guns and leave UAC on but I'm also worried about being so autopilot about clicking "Yes" or "Allow" three times every time I need to modify a file. Am I just being hard headed? Do most developers on Vista leave the UAC on or off? And for the instance described above, is there a better/easier way? | I think it is necessary to leave UAC on on a test machine, so you can see what a real user would see using your app. However, I turn it off on my development machine since I find it distracting, and I trust myself enough to not need it. (Hopefully your test machine != your dev machine right?) All this being said, I support UAC, and I am not recommending anyone else turn it off, especially 'common users'. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128352",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2577/"
]
} |
128,365 | I have a server access log, with timestamps of each http request, I'd like to obtain a count of the number of requests at each second. Using sed , and cut -c , so far I've managed to cut the file down to just the timestamps, such as: 22-Sep-2008 20:00:21 +0000 22-Sep-2008 20:00:22 +0000 22-Sep-2008 20:00:22 +0000 22-Sep-2008 20:00:22 +0000 22-Sep-2008 20:00:24 +0000 22-Sep-2008 20:00:24 +0000 What I'd love to get is the number of times each unique timestamp appears in the file. For example, with the above example, I'd like to get output that looks like: 22-Sep-2008 20:00:21 +0000: 1 22-Sep-2008 20:00:22 +0000: 3 22-Sep-2008 20:00:24 +0000: 2 I've used sort -u to filter the list of timestamps down to a list of unique tokens, hoping that I could use grep like grep -c -f <file containing patterns> <file> but this just produces a single line of a grand total of matching lines. I know this can be done in a single line, stringing a few utilities together ... but I can't think of which. Anyone know? | I think you're looking for uniq --count -c, --count prefix lines by the number of occurrences | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4249/"
]
} |
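Putting the pieces of that question together, the whole thing fits in one pipeline; the file name below is hypothetical, and note that uniq -c prints the count before the timestamp rather than after it:

    # timestamps.txt holds one timestamp per line, as produced by the sed/cut steps
    sort timestamps.txt | uniq -c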
128,377 | I have an application - more like a utility - that sits in a corner and updates two different databases periodically. It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring. Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other. The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans. What is the 'best' way to put transactionality around the database updates? I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good. | The best way to distribute transactions over more than one database is: Don't. Some people will point you to XA but XA (or Two Phase Commit) is a lie (or marketese). Imagine: After the first phase have told the XA manager that it can send the final commit, the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: You can't roll back a commit and how do you know what happened to the second database? Maybe the network connection failed after it successfully committed the data and only the "success" message was lost? The best way is to copy the data in a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have or order the select by ID and request only records > MAX(ID) of your copy). Protect this with a transaction. This is not a problem since you're only reading data from the source, so when the transaction fails for any reason, you can ignore the source database. Therefore, this is a plain old single source transaction. After you have copied the data, process it locally. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15452/"
]
} |
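A sketch of the resumable copy described in that answer, assuming an integer primary key and hypothetical table/column names:

    -- run inside a single read-only transaction against the source database;
    -- :last_copied_id is the MAX(id) already present in the local copy
    SELECT id, payload
    FROM   source_table
    WHERE  id > :last_copied_id
    ORDER  BY id;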
128,389 | This is something that I always find a bit hard to explain to others: Why do XML namespaces exist? When should we use them and when should we not?What are the common pitfalls when working with namespaces in XML? Also, how do they relate to XML schemas? Should XSD schemas always be associated with a namespace? | They're for allowing multiple markup languages to be combined, without having to worry about conflicts of element and attribute names. For example, look at any bit of XSLT code, and then think what would happen if you didn't use namespaces and were trying to write an XSLT where the output has to contain "template", "for-each", etc, elements. Syntax errors, is what. I'll leave the advice and pitfalls to others with more experience than I. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128389",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3389/"
]
} |
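To make the XSLT example in that answer concrete, here is a minimal hypothetical stylesheet fragment: the xsl-prefixed elements belong to the XSLT namespace, so a literal output element that happens to be named template causes no conflict:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/">
        <template>plain output element, distinct from xsl:template</template>
      </xsl:template>
    </xsl:stylesheet>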
128,412 | We are using SQL Server 2005, but this question can be for any RDBMS . Which of the following is more efficient, when selecting all columns from a view? Select * from view or Select col1, col2, ..., colN from view | NEVER, EVER USE "SELECT *"!!!! This is the cardinal rule of query design! There are multiple reasons for this. One of which is, that if your table only has three fields on it and you use all three fields in the code that calls the query, there's a great possibility that you will be adding more fields to that table as the application grows, and if your select * query was only meant to return those 3 fields for the calling code, then you're pulling much more data from the database than you need. Another reason is performance. In query design, don't think about reusability as much as this mantra: TAKE ALL YOU CAN EAT, BUT EAT ALL YOU TAKE. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2539/"
]
} |
128,414 | I have seen a lot of discussions going on and people asking about DataGrid for WPF and complaining about Microsoft for not having one with their WPF framework till date. We know that WPF is a great UI technology and have the Concept of ItemsControl,DataTemplate, etc,etc to make great UX. Even WPF has got a more closely matching control- ListView, which can be easily templated to give better UX than a traditional Datagrid like display. And I would say a readymade DataGrid control will kill or hide a lot of creativity and it surely will decrease the innovations in User Experience field. So what is your opinion about the need of DataGrid in WPF as a Framework component? If you feel it is necessary then is it just because the world is so used to the DatGrid way of data display for many years? Some other threads having the discussion about DatGrid are here and here Link to WPF ToolKit - Latest WPF DatGrid | DataGrids are excellent for displaying large amounts of tabular data bound to a backing store. But what happened in the WinForms world was that people often used them for everything that required a multi-element scrolling list. Souped-up third-party DataGrids soon became available that allowed columns and fields to contain buttons and ComboBoxes and icons, etc. The DataGrid became a workhorse because there was a need for something it could be coaxed into behaving like. Similar happened to DataTables before generic collections came along--and when you're using lots of DataTables, presenting it in the UI with a DataGrid is the path of least resistance. I think that when WPF came out, a lot of programmers like me were still thinking in this fashion, and sought out WPF ports of the DataGrid concept. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128414",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8091/"
]
} |
128,430 | Suppose we have a 3D Space with a plane on it with an arbitrary equation: ax+by+cz+d=0now suppose that we pick 3 random points on that plane: (x0, y0, z0) (x1, y1, z1) (x1, y1, z1) now I have a different point of view(camera) for this plane. I mean I have a different camera that will look at this plane from a different point of view. From that camera point of view these points have different locations. for example (x0, y0, z0) will be (x0', y0')and (x1, y1, z1) will be (x1', y1') and (x2, y2, z2) will be (x2', y2') from the new camera point of view. I want to pick a point for example (X,Y) from the new camera point of view and tell where it will be on that plane. All I know is that 3 points and their locations on 3D space and their projection locations on the new camera view. Do you know the coefficients of the plane-equation and the camera positions (along with the projection), or do you only have the six points? I know the location of first 3 points. therefore we can calculate the coefficients of the plane. so we know exactly where the plane is from (0,0,0) point of view. and then we have the camera that can only see the points! So the only thing that camera sees is 3 points and also it knows their locations in 3D space (and for sure their locations on 2D camera view plane). and after all I want to look at camera view, pick a point (for example (x1, y1)) and tell where is that point on that plane. (for sure this (X,Y,Z) point should fit on the plane equation). Also I know nothing about the camera location. | DataGrids are excellent for displaying large amounts of tabular data bound to a backing store. But what happened in the WinForms world was that people often used them for everything that required a multi-element scrolling list. Souped-up third-party DataGrids soon became available that allowed columns and fields to contain buttons and ComboBoxes and icons, etc. The DataGrid became a workhorse because there was a need for something it could be coaxed into behaving like. Similar happened to DataTables before generic collections came along--and when you're using lots of DataTables, presenting it in the UI with a DataGrid is the path of least resistance. I think that when WPF came out, a lot of programmers like me were still thinking in this fashion, and sought out WPF ports of the DataGrid concept. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128430",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
128,431 | Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC. | If you build your images from scratch you can do it with VMware (or insert your favorite VM software here). Build and install your linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then, just keep backup copies of your VM image in sync with the different AMI's you upload. Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself, too (network, mounts, etc). If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMI's might be encrypted (I know they are at least signed). You may be able to do this by having a 'bootstrap' VM set up to specifically extract the AMI's into a virtual disk using the AMI tools, then boot that virtual disk separately. I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMI's to vdks. The Amazon forum is also helpful. For example, see this article . Oh, this article also talks about some of these processes in detail. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/681/"
]
} |
128,445 | I have an application that we're trying to migrate to 64bit from 32bit. It's .NET, compiled using the x64 flags. However, we have a large number of DLLs written in FORTRAN 90 compiled for 32bit. The functions in the FORTRAN DLLs are fairly simple: you put data in, you pull data out; no state of any sort. We also don't spend a lot of time there, a total of maybe 3%, but the calculation logic it performs is invaluable. Can I somehow call the 32bit DLLs from 64bit code? MSDN suggests that I can't, period. I've done some simple hacking and verified this. Everything throws an invalid entry point exception. The only possible solution i've found so far is to create COM+ wrappers for all of the 32bit DLL functions and invoke COM from the 64bit process. This seems like quite a headache. We can also run the process in WoW emulation, but then the memory ceiling wouldn't be increased, capping at around 1.6gb. Is there any other way to call the 32bit DLLs from a 64bit CLR process? | You'll need to have the 32-bit dll loaded into a separate 32-bit process, and have your 64 bit process communicate with it via interprocess communication. I don't think there is any way a 32-bit dll can be loaded into a 64 bit process otherwise. There is a pretty good article here: Accessing 32-bit DLLs from 64-bit code | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128445",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1390/"
]
} |
128,450 | I have some controller methods I'd like to share. What is the best practice for doing this in ruby on rails? Should I create an abstract class that my controllers extend, or should I create module and add it in to each controller? Below are the controller methods I want to share: def driving_directions @address_to = params[:address_to] @address_from = params[:address_from] @map_center = params[:map_center_start] # if we were not given a center point to start our map on # let's create one. if !@map_center && @address_to @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_to).ll elsif !@map_center && @address_from @map_center = GeoKit::Geocoders::MultiGeocoder.geocode(@address_from).ll endenddef printer_friendly starting_point = params[:starting_point].split(',').collect{|e|e.to_f} ne = params[:ne].split(',').collect{|e|e.to_f} sw = params[:sw].split(',').collect{|e|e.to_f} size = params[:size].split(',').collect{|e|e.to_f} address = params[:address] @markers = retrieve_points(ne,sw,size,false) @map = initialize_map([[sw[0],sw[1]],[ne[0],ne[1]]],[starting_point[0],starting_point[1]],false,@markers,true) @address_string = addressend | In my opinion, normal OO design principles apply: If the code is really a set of utilities that doesn't need access to object state, I would consider putting it in a module to be called separately. For instance, if the code is all mapping utilities, create a module Maps , and access the methods like: Maps::driving_directions . If the code needs state and is used or could be used in every controller, put the code in ApplicationController. If the code needs state and is used in a subset of all controllers that are closely and logically related (i.e. all about maps) then create a base class ( class MapController < ApplicationController ) and put the shared code there. If the code needs state and is used in a subset of all controllers that are not very closely related, put it in a module and include it in necessary controllers. In your case, the methods need state ( params ), so the choice depends on the logical relationship between the controllers that need it.In addition: Also: Use partials when possible for repeated code and either place in a common 'partials' directory or include via a specific path. Stick to a RESTful approach when possible (for methods) and if you find yourself creating a lot of non-RESTful methods consider extracting them to their own controller. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128450",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1486/"
]
} |
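A minimal sketch of the last option from that answer (the module name, file locations and controller are all hypothetical):

    # lib/map_actions.rb
    module MapActions
      def driving_directions
        # shared implementation moved here
      end
    end

    # app/controllers/directions_controller.rb
    class DirectionsController < ApplicationController
      include MapActions
    end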
128,463 | I am looking for a method to place some text onto the clipboard with VBScript . The VBScript in question will be deployed as part of our login script. I would like to avoid using anything that isn't available on a clean Windows XP system. Edit: In answer to the questions about what this is for. We wanted to encourage users inside our organization to use the file server to transfer documents instead of constantly sending attachments by email. One of the biggest barriers to this is that it isn't always obvious to people what the correct network path is to a file/folder. We developed a quick script, and attached it to the Windows context menu so that a user can right click on any file/folder, and get a URL that they can email to someone within our organization. I want the URL displayed in the dialog box to also be placed onto the clipboard. GetNetworkPath | Another solution I have found that isn't perfect in my opinion, but doesn't have the annoying security warnings is to use clip.exe from a w2k3 server. Set WshShell = WScript.CreateObject("WScript.Shell")WshShell.Run "cmd.exe /c echo hello world | clip", 0, TRUE Example with a multiline string as per question below : Link1 Dim stringString = "text here" &chr(13)& "more text here"Set WshShell = WScript.CreateObject("WScript.Shell")WshShell.Run "cmd.exe /c echo " & String & " | clip", 0, TRUE | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128463",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20267/"
]
} |
128,478 | PEP 8 states: Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed? Isn't this: class SomeClass(object): def not_often_called(self) from datetime import datetime self.datetime = datetime.now() more efficient than this? from datetime import datetimeclass SomeClass(object): def not_often_called(self) self.datetime = datetime.now() | Module importing is quite fast, but not instant. This means that: Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once. Putting the imports within a function will cause calls to that function to take longer. So if you care about efficiency, put the imports at the top. Only move them into a function if your profiling shows that would help (you did profile to see where best to improve performance, right??) The best reasons I've seen to perform lazy imports are: Optional library support. If your code has multiple paths that use different libraries, don't break if an optional library is not installed. In the __init__.py of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use bzrlib 's lazy-loading framework. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/128478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15676/"
]
} |
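The "optional library support" case mentioned in that answer typically looks something like this (the library name is only an example):

    try:
        import simplejson as json  # optional dependency
    except ImportError:
        json = None  # callers fall back to a code path that does not need it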
128,527 | We have our JBoss and Oracle on separate servers. The connections seem to be dropped and is causing issues with JBoss. How can I have the JBoss reconnect to Oracle if the connection is bad while we figure out why the connections are being dropped in the first place? | There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection. The JBoss Wiki documents the various attributes of the pool. <check-valid-connection-sql>select 1 from dual</check-valid-connection-sql> Seems like it should do the trick. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6013/"
]
} |
128,560 | When is it a good idea to use PHP_EOL ? I sometimes see this in code samples of PHP. Does this handle DOS/Mac/Unix endline issues? | Yes, PHP_EOL is ostensibly used to find the newline character in a cross-platform-compatible way, so it handles DOS/Unix issues. Note that PHP_EOL represents the endline character for the current system. For instance, it will not find a Windows endline when executed on a unix-like system. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/128560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3757/"
]
} |
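A typical place PHP_EOL earns its keep is when writing lines out for the machine the script runs on, e.g. appending to a log file (file name hypothetical):

    <?php
    // PHP_EOL expands to \n, \r\n, or \r depending on the current platform
    file_put_contents('app.log', $message . PHP_EOL, FILE_APPEND);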
128,561 | I have a new application written in WPF that needs to support an old API that allows it to receive a message that has been posted to a hidden window. Typically another application uses FindWindow to identify the hidden window using the name of its custom window class. 1) I assume to implement a custom window class I need to use old school win32 calls? My old c++ application used RegisterClass and CreateWindow to make the simplest possible invisible window. I believe I should be able to do the same all within c#. I don't want my project to have to compile any unmanaged code. I have tried inheriting from System.Windows.Interop.HwndHost and using System.Runtime.InteropServices.DllImport to pull in the above API methods. Doing this I can successfully host a standard win32 window e.g. "listbox" inside WPF.However when I call CreateWindowEx for my custom window it always returns null. My call to RegisterClass succeeds but I am not sure what I should be setting theWNDCLASS.lpfnWndProc member to. 2) Does anyone know how to do this successfully? | For the record I finally got this to work.Turned out the difficulties I had were down to string marshalling problems.I had to be more precise in my importing of win32 functions. Below is the code that will create a custom window class in c# - useful for supporting old APIs you might have that rely on custom window classes. It should work in either WPF or Winforms as long as a message pump is running on the thread. EDIT:Updated to fix the reported crash due to early collection of the delegate that wraps the callback. The delegate is now held as a member and the delegate explicitly marshaled as a function pointer. This fixes the issue and makes it easier to understand the behaviour. class CustomWindow : IDisposable{ delegate IntPtr WndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam); [System.Runtime.InteropServices.StructLayout( System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Unicode )] struct WNDCLASS { public uint style; public IntPtr lpfnWndProc; public int cbClsExtra; public int cbWndExtra; public IntPtr hInstance; public IntPtr hIcon; public IntPtr hCursor; public IntPtr hbrBackground; [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] public string lpszMenuName; [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] public string lpszClassName; } [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern System.UInt16 RegisterClassW( [System.Runtime.InteropServices.In] ref WNDCLASS lpWndClass ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern IntPtr CreateWindowExW( UInt32 dwExStyle, [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] string lpClassName, [System.Runtime.InteropServices.MarshalAs(System.Runtime.InteropServices.UnmanagedType.LPWStr)] string lpWindowName, UInt32 dwStyle, Int32 x, Int32 y, Int32 nWidth, Int32 nHeight, IntPtr hWndParent, IntPtr hMenu, IntPtr hInstance, IntPtr lpParam ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern System.IntPtr DefWindowProcW( IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam ); [System.Runtime.InteropServices.DllImport("user32.dll", SetLastError = true)] static extern bool DestroyWindow( IntPtr hWnd ); private const int ERROR_CLASS_ALREADY_EXISTS = 1410; private bool m_disposed; 
private IntPtr m_hwnd; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } private void Dispose(bool disposing) { if (!m_disposed) { if (disposing) { // Dispose managed resources } // Dispose unmanaged resources if (m_hwnd != IntPtr.Zero) { DestroyWindow(m_hwnd); m_hwnd = IntPtr.Zero; } } } public CustomWindow(string class_name){ if (class_name == null) throw new System.Exception("class_name is null"); if (class_name == String.Empty) throw new System.Exception("class_name is empty"); m_wnd_proc_delegate = CustomWndProc; // Create WNDCLASS WNDCLASS wind_class = new WNDCLASS(); wind_class.lpszClassName = class_name; wind_class.lpfnWndProc = System.Runtime.InteropServices.Marshal.GetFunctionPointerForDelegate(m_wnd_proc_delegate); UInt16 class_atom = RegisterClassW(ref wind_class); int last_error = System.Runtime.InteropServices.Marshal.GetLastWin32Error(); if (class_atom == 0 && last_error != ERROR_CLASS_ALREADY_EXISTS) { throw new System.Exception("Could not register window class"); } // Create window m_hwnd = CreateWindowExW( 0, class_name, String.Empty, 0, 0, 0, 0, 0, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero ); } private static IntPtr CustomWndProc(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam) { return DefWindowProcW(hWnd, msg, wParam, lParam); } private WndProc m_wnd_proc_delegate;} | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128561",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5427/"
]
} |
128,573 | I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter: class Foo(object): _var = 5 @classmethod def getvar(cls): return cls._var @classmethod def setvar(cls, value): cls._var = value var = property(getvar, setvar) I can demonstrate the class methods, but they don't work as properties: >>> f = Foo()>>> f.getvar()5>>> f.setvar(4)>>> f.getvar()4>>> f.varTraceback (most recent call last): File "<stdin>", line 1, in ?TypeError: 'classmethod' object is not callable>>> f.var=5Traceback (most recent call last): File "<stdin>", line 1, in ?TypeError: 'classmethod' object is not callable Is it possible to use the property() function with @classmethod decorated functions? | 3.8 < Python < 3.11 Can use both decorators together. See this answer . Python < 3.9 A property is created on a class but affects an instance. So if you want a classmethod property, create the property on the metaclass. >>> class foo(object):... _var = 5... class __metaclass__(type): # Python 2 syntax for metaclasses... pass... @classmethod... def getvar(cls):... return cls._var... @classmethod... def setvar(cls, value):... cls._var = value... >>> foo.__metaclass__.var = property(foo.getvar.im_func, foo.setvar.im_func)>>> foo.var5>>> foo.var = 3>>> foo.var3 But since you're using a metaclass anyway, it will read better if you just move the classmethods in there. >>> class foo(object):... _var = 5... class __metaclass__(type): # Python 2 syntax for metaclasses... @property... def var(cls):... return cls._var... @var.setter... def var(cls, value):... cls._var = value... >>> foo.var5>>> foo.var = 3>>> foo.var3 or, using Python 3's metaclass=... syntax, and the metaclass defined outside of the foo class body, and the metaclass responsible for setting the initial value of _var : >>> class foo_meta(type):... def __init__(cls, *args, **kwargs):... cls._var = 5... @property... def var(cls):... return cls._var... @var.setter... def var(cls, value):... cls._var = value...>>> class foo(metaclass=foo_meta):... pass...>>> foo.var5>>> foo.var = 3>>> foo.var3 | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/128573",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9940/"
]
} |
128,580 | I'm trying to get the contents of an XML document element, but the element has a colon in its name. This line works for every element but the ones with a colon in the name: $(this).find("geo:lat").text(); I assume that the colon needs escaping. How do I fix this? | Use a backslash, which itself should be escaped so JavaScript doesn't eat it: $(this).find("geo\\:lat").text(); | {

"score": 6,
"source": [
"https://Stackoverflow.com/questions/128580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/399/"
]
} |
128,588 | Seems so basic, I can't believe I don't know this! I just need a scratch folder to dump some temporary files to. I don't care if it gets wiped out between usages or not, and I don't think I should have to go through the hassle of creating one and maintaining it myself from within my application. Is that too much to ask? | Use System.IO.Path.GetTempPath() . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128588",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5469/"
]
} |
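GetTempPath() only hands back the directory; a scratch file inside it can be composed like this (C# sketch, requires System.IO):

    string scratchFile = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
    // or let the framework create (and name) an empty temp file for you:
    string tempFile = Path.GetTempFileName();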
128,618 | Is there any easy way to create a class that uses IFormatProvider that writes out a user-friendly file-size? public static string GetFileSizeString(string filePath){ FileInfo info = new FileInfo(@"c:\windows\notepad.exe"); long size = info.Length; string sizeString = size.ToString(FileSizeFormatProvider); // This is where the class does its magic...} It should result in strings formatted something like " 2,5 MB ", " 3,9 GB ", " 670 bytes " and so on. | I use this one, I get it from the web public class FileSizeFormatProvider : IFormatProvider, ICustomFormatter{ public object GetFormat(Type formatType) { if (formatType == typeof(ICustomFormatter)) return this; return null; } private const string fileSizeFormat = "fs"; private const Decimal OneKiloByte = 1024M; private const Decimal OneMegaByte = OneKiloByte * 1024M; private const Decimal OneGigaByte = OneMegaByte * 1024M; public string Format(string format, object arg, IFormatProvider formatProvider) { if (format == null || !format.StartsWith(fileSizeFormat)) { return defaultFormat(format, arg, formatProvider); } if (arg is string) { return defaultFormat(format, arg, formatProvider); } Decimal size; try { size = Convert.ToDecimal(arg); } catch (InvalidCastException) { return defaultFormat(format, arg, formatProvider); } string suffix; if (size > OneGigaByte) { size /= OneGigaByte; suffix = "GB"; } else if (size > OneMegaByte) { size /= OneMegaByte; suffix = "MB"; } else if (size > OneKiloByte) { size /= OneKiloByte; suffix = "kB"; } else { suffix = " B"; } string precision = format.Substring(2); if (String.IsNullOrEmpty(precision)) precision = "2"; return String.Format("{0:N" + precision + "}{1}", size, suffix); } private static string defaultFormat(string format, object arg, IFormatProvider formatProvider) { IFormattable formattableArg = arg as IFormattable; if (formattableArg != null) { return formattableArg.ToString(format, formatProvider); } return arg.ToString(); }} an example of use would be: Console.WriteLine(String.Format(new FileSizeFormatProvider(), "File size: {0:fs}", 100));Console.WriteLine(String.Format(new FileSizeFormatProvider(), "File size: {0:fs}", 10000)); Credits for http://flimflan.com/blog/FileSizeFormatProvider.aspx There is a problem with ToString(), it's expecting a NumberFormatInfo type that implements IFormatProvider but the NumberFormatInfo class is sealed :( If you're using C# 3.0 you can use an extension method to get the result you want: public static class ExtensionMethods{ public static string ToFileSize(this long l) { return String.Format(new FileSizeFormatProvider(), "{0:fs}", l); }} You can use it like this. long l = 100000000;Console.WriteLine(l.ToFileSize()); Hope this helps. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2429/"
]
} |
128,623 | How can I disable all table constrains in Oracle with a single command?This can be either for a single table, a list of tables, or for all tables. | It is better to avoid writing out temporary spool files. Use a PL/SQL block. You can run this from SQL*Plus or put this thing into a package or procedure. The join to USER_TABLES is there to avoid view constraints. It's unlikely that you really want to disable all constraints (including NOT NULL, primary keys, etc). You should think about putting constraint_type in the WHERE clause. BEGIN FOR c IN (SELECT c.owner, c.table_name, c.constraint_name FROM user_constraints c, user_tables t WHERE c.table_name = t.table_name AND c.status = 'ENABLED' AND NOT (t.iot_type IS NOT NULL AND c.constraint_type = 'P') ORDER BY c.constraint_type DESC) LOOP dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" disable constraint ' || c.constraint_name); END LOOP;END;/ Enabling the constraints again is a bit tricker - you need to enable primary key constraints before you can reference them in a foreign key constraint. This can be done using an ORDER BY on constraint_type. 'P' = primary key, 'R' = foreign key. BEGIN FOR c IN (SELECT c.owner, c.table_name, c.constraint_name FROM user_constraints c, user_tables t WHERE c.table_name = t.table_name AND c.status = 'DISABLED' ORDER BY c.constraint_type) LOOP dbms_utility.exec_ddl_statement('alter table "' || c.owner || '"."' || c.table_name || '" enable constraint ' || c.constraint_name); END LOOP;END;/ | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128623",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9435/"
]
} |
128,636 | .NET has a lot of complex data structures. Unfortunately, some of them are quite similar and I'm not always sure when to use one and when to use another. Most of my C# and VB books talk about them to a certain extent, but they never really go into any real detail. What's the difference between Array, ArrayList, List, Hashtable, Dictionary, SortedList, and SortedDictionary? Which ones are enumerable (IList -- can do 'foreach' loops)? Which ones use key/value pairs (IDict)? What about memory footprint? Insertion speed? Retrieval speed? Are there any other data structures worth mentioning? I'm still searching for more details on memory usage and speed (Big-O notation) | Off the top of my head: Array * - represents an old-school memory array - kind of like a alias for a normal type[] array. Can enumerate. Can't grow automatically. I would assume very fast insert and retrival speed. ArrayList - automatically growing array. Adds more overhead. Can enum., probably slower than a normal array but still pretty fast. These are used a lot in .NET List - one of my favs - can be used with generics, so you can have a strongly typed array, e.g. List<string> . Other than that, acts very much like ArrayList Hashtable - plain old hashtable. O(1) to O(n) worst case. Can enumerate the value and keys properties, and do key/val pairs Dictionary - same as above only strongly typed via generics, such as Dictionary<string, string> SortedList - a sorted generic list. Slowed on insertion since it has to figure out where to put things. Can enum., probably the same on retrieval since it doesn't have to resort, but deletion will be slower than a plain old list. I tend to use List and Dictionary all the time - once you start using them strongly typed with generics, its really hard to go back to the standard non-generic ones. There are lots of other data structures too - there's KeyValuePair which you can use to do some interesting things, there's a SortedDictionary which can be useful as well. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128636",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21244/"
]
} |
128,674 | Pretty simple scenario. I have a web service that receives a byte array that is to be saved as a particular file type on disk. What is the most efficient way to do this in C#? | That would be File.WriteAllBytes() . | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/128674",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5469/"
]
} |
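In the web-service scenario from the question that is roughly the following (method, path and parameter names are hypothetical; requires System.IO):

    [WebMethod]
    public void SaveFile(byte[] data, string fileName)
    {
        File.WriteAllBytes(Path.Combine(@"C:\uploads", fileName), data);
    }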
128,818 | I have seen people say that it is bad form to use catch with no arguments, especially if that catch doesn't do anything: StreamReader reader=new StreamReader("myfile.txt");try{ int i = 5 / 0;}catch // No args, so it will catch any exception{}reader.Close(); However, this is considered good form: StreamReader reader=new StreamReader("myfile.txt");try{ int i = 5 / 0;}finally // Will execute despite any exception{ reader.Close();} As far as I can tell, the only difference between putting cleanup code in a finally block and putting cleanup code after the try..catch blocks is if you have return statements in your try block (in that case, the cleanup code in finally will run, but code after the try..catch will not). Otherwise, what's so special about finally? | The big difference is that try...catch will swallow the exception, hiding the fact that an error occurred. try..finally will run your cleanup code and then the exception will keep going, to be handled by something that knows what to do with it. | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/128818",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/21727/"
]
} |
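A footnote to that answer: for disposable resources such as the StreamReader in the question, C#'s using statement compiles to exactly this try/finally shape, so the cleanup runs and the exception still propagates:

    using (StreamReader reader = new StreamReader("myfile.txt"))
    {
        int i = 5 / 0;
    } // reader.Dispose() runs here even though the exception escapes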
128,853 | I'm sure there's some trivial one-liner with perl, ruby, bash whatever that would let me run a command in a loop until I observe some string in stdout, then stop. Ideally, I'd like to capture stdout as well, but if it's going to console, that might be enough. The particular environment in question at the moment is RedHat Linux but need same thing on Mac sometimes too. So something, generic and *nixy would be best. Don't care about Windows - presumably a *nixy thing would work under cygwin. UPDATE: Note that by "observe some string" I mean "stdout contains some string" not "stdout IS some string". | In Perl: #!/usr/local/bin/perl -wif (@ARGV != 2){ print "Usage: watchit.pl <cmd> <str>\n"; exit(1);}$cmd = $ARGV[0];$str = $ARGV[1];while (1){ my $output = `$cmd`; print $output; # or dump to file if desired if ($output =~ /$str/) { exit(0); }} Example: [bash$] ./watchit.pl ls stopwatchit.plwatchit.pl~watchit.plwatchit.pl~... # from another terminal type "touch stop"stop watchit.plwatchit.pl~ You might want to add a sleep in there, though. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128853",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7671/"
]
} |
128,857 | I have a user reporting that when they use the back button to return to a web page that they come back as a different person. It seems like they may be accessing a different users profile. Here are the important parts of the code: //here's the code on the web pagepublic static WebProfile p = null;protected void Page_Load(object sender, EventArgs e){ p = ProfileController.GetWebProfile(); if (!this.IsPostBack) { PopulateForm(); } }//here's the code in the "ProfileController" (probably misnamed)public static WebProfile GetWebProfile(){ //get shopperID from cookie string mscsShopperID = GetShopperID(); string userName = new tpw.Shopper(Shopper.Columns.ShopperId, mscsShopperID).Email; p = WebProfile.GetProfile(userName); return p;} I'm using static methods and a static WebProfile because I need to use the profile object in a static WebMethod (ajax pageMethod ). Could this lead to the profile object being "shared" by different users? Am I not using static methods and objects correctly? The reason I changed WebProfile object to a static object was because I need to access the profile object within a [WebMethod] (called from javascript on the page). Is there a way to access a profile object within a [WebMethod] ? If not, what choices do I have? | In Perl: #!/usr/local/bin/perl -wif (@ARGV != 2){ print "Usage: watchit.pl <cmd> <str>\n"; exit(1);}$cmd = $ARGV[0];$str = $ARGV[1];while (1){ my $output = `$cmd`; print $output; # or dump to file if desired if ($output =~ /$str/) { exit(0); }} Example: [bash$] ./watchit.pl ls stopwatchit.plwatchit.pl~watchit.plwatchit.pl~... # from another terminal type "touch stop"stop watchit.plwatchit.pl~ You might want to add a sleep in there, though. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/128857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4888/"
]
} |
128,919 | I'm working on a web app that is somewhere between an email service and a social network. I feel it has the potential to grow really big in the future, so I'm concerned about scalability. Instead of using one centralized MySQL/InnoDB database and then partitioning it when that time comes, I've decided to create a separate SQLite database for each active user: one active user per 'shard'. That way backing up the database would be as easy as copying each user's small database file to a remote location once a day. Scaling up will be as easy as adding extra hard disks to store the new files. When the app grows beyond a single server I can link the servers together at the filesystem level using GlusterFS and run the app unchanged, or rig up a simple SQLite proxy system that will allow each server to manipulate sqlite files in adjacent servers. Concurrency issues will be minimal because each HTTP request will only touch one or two database files at a time, out of thousands, and SQLite only blocks on reads anyway. I'm betting that this approach will allow my app to scale gracefully and support lots of cool and unique features. Am I betting wrong? Am I missing anything? UPDATE I decided to go with a less extreme solution, which is working fine so far. I'm using a fixed number of shards - 256 sqlite databases, to be precise. Each user is assigned and bound to a random shard by a simple hash function. Most features of my app require access to just one or two shards per request, but there is one in particular that requires the execution of a simple query on 10 to 100 different shards out of 256, depending on the user. Tests indicate it would take about 0.02 seconds, or less, if all the data is cached in RAM. I think I can live with that! UPDATE 2.0 I ported the app to MySQL/InnoDB and was able to get about the same performance for regular requests, but for that one request that requires shard walking, innodb is 4-5 times faster. For this reason, and other reason, I'm dropping this architecture, but I hope someone somewhere finds a use for it...thanks. | The place where this will fail is if you have to do what's called "shard walking" - which is finding out all the data across a bunch of different users. That particular kind of "query" will have to be done programmatically, asking each of the SQLite databases in turn - and will very likely be the slowest aspect of your site. It's a common issue in any system where data has been "sharded" into separate databases. If all the of the data is self-contained to the user, then this should scale pretty well - the key to making this an effective design is to know how the data is likely going to be used and if data from one person will be interacting with data from another (in your context). You may also need to watch out for file system resources - SQLite is great, awesome, fast, etc - but you do get some caching and writing benefits when using a "standard database" (i.e. MySQL, PostgreSQL, etc) because of how they're designed. In your proposed design, you'll be missing out on some of that. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128919",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6475/"
]
} |
128,921 | I have a document library with a custom column named "compound" which is just text. I want to put a filter (input text box) on that document library page so the view shows only the items where the compound column contains my typed-in text. Optimally, wildcards such as * or ? or full regular expressions could be supported... but for now, I just need a "contains". The out-of-the-box text filter seems to only support an exact match. The result output would be identical to what I would see if I created a new view, and added a filter with a "contains" clause. Third party solutions are acceptable. | The place where this will fail is if you have to do what's called "shard walking" - which is finding out all the data across a bunch of different users. That particular kind of "query" will have to be done programmatically, asking each of the SQLite databases in turn - and will very likely be the slowest aspect of your site. It's a common issue in any system where data has been "sharded" into separate databases. If all the of the data is self-contained to the user, then this should scale pretty well - the key to making this an effective design is to know how the data is likely going to be used and if data from one person will be interacting with data from another (in your context). You may also need to watch out for file system resources - SQLite is great, awesome, fast, etc - but you do get some caching and writing benefits when using a "standard database" (i.e. MySQL, PostgreSQL, etc) because of how they're designed. In your proposed design, you'll be missing out on some of that. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/128921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |