Q: ASP.Net: How to do pagination with a Repeater?

I'm using the Repeater control on my site to display data from the database. I need to do pagination ("now displaying page 1 of 10", 10 items per page, etc.) but I'm not sure I'm going about it the best way possible.

I know the Repeater control doesn't have any built-in pagination, so I'll have to make my own. Is there a way to tell the DataSource control to return rows 10-20 of a much larger result set? If not, how do I write that into a query (SQL Server 2005)? I'm currently using the TOP keyword to only return the first 10 rows, but I'm not sure how to return rows 10-20.

A: You have to use the PagedDataSource; it allows you to turn a standard data source into one that can be paged. Here's an example article.

A: This isn't a way to page the data, but have you looked into the ListView control? It gives the flexibility of the Repeater/DataList but with built-in paging like the GridView. And for paging in SQL, you would want to do something like this.

A: This was answered here.
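For reference, a minimal sketch of the PagedDataSource approach from the first answer; the control names, the data-loading step, and how the current page index is tracked are assumptions, not part of the original answer:

    // Code-behind; assumes a Repeater named rptItems, a Label named lblPageInfo,
    // and System.Web.UI.WebControls / System.Data in scope.
    private void BindRepeater(DataTable data, int currentPage)
    {
        PagedDataSource pds = new PagedDataSource();
        pds.DataSource = data.DefaultView;
        pds.AllowPaging = true;
        pds.PageSize = 10;
        pds.CurrentPageIndex = currentPage; // e.g. kept in ViewState between postbacks

        rptItems.DataSource = pds;
        rptItems.DataBind();

        lblPageInfo.Text = string.Format("Now displaying page {0} of {1}",
            currentPage + 1, pds.PageCount);
    }

Note that this pages in memory on the web server; the SQL-side paging the second answer alludes to does the equivalent work in the database, which scales better for large result sets.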
{ "language": "en", "url": "https://stackoverflow.com/questions/22981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Form post doesn't contain textbox data [ASP.NET C#]

I have several "ASP:TextBox" controls on a form (about 20). When the form loads, the text boxes are populated from a database. The user can change the populated values, and when they submit the form, I take the values posted to the server and conditionally save them (determined by some business logic).

All but 1 of the text boxes work as intended. The odd box out, upon postback, does not contain the updated value that the user typed into the box. When debugging the application, it is clear that myTextBox.Text reflects the old, pre-populated value, not the new, user-supplied value. Every other box properly shows its respective user-supplied value.

I did find a workaround. My solution was to basically extract the text box's value out of the Request.Form object: Request.Form[myTextBox.UniqueID], which does contain the user-supplied value.

What could be going on here? As I mentioned, the other text boxes receive the user-supplied values just fine, and this particular problematic text box doesn't have any logic associated with it -- it just takes the value and saves it. The main difference between this text box and the others is that this is a multi-line box (for inputting notes), which I believe is rendered as an HTML "textarea" tag instead of an "input" tag in ASP.NET.

A: Are you initially loading the data only when !Page.IsPostBack? Also, is view state enabled for the text box?

A: This happens to me all the time.

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // populate text boxes from database
        }
    }

A: I would second Jonathan's response: check your databinding settings. If you do not need ViewState for the textboxes (i.e. no postback occurs until form submit) then you should disable it. It sounds like you are not having problems saving the data (since you said you have managed to get the control to read the correct data back). Therefore, I would say the problem lies in your databinding code.

A: Remember the order of the page lifecycle, and where you are databinding your form:

* PreInit
* Init
* Load
* Your control event handler

If you are reading the value in the control event handler, yet databinding in Init or Load, you'll have the old value. The trick is to always databind in the correct event, or check for postback and not databind then.

A: "Are you initially loading the data only when !Page.IsPostBack? Also, is view state enabled for the text box?"

I had almost forgotten to check the ViewState, but ended up remembering to verify that it wasn't disabled before making my post here on SO. I even set EnableViewState="true" to make sure.

I did find the solution, and it coincided with most of the answers here. The form was indeed loading its data more than once (which is intentional behavior). I implemented some special code for this field, and all is well. Thanks for your replies, all!
{ "language": "en", "url": "https://stackoverflow.com/questions/22988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Any good tools to automate SQL Server management tasks?

I know I could write scripts and create jobs to run them, but at least some of what I'm wanting it to do is beyond my programming abilities for that to be an option.

What I'm imagining is something that can run on a regular schedule, examine all the databases on a server, and automatically shrink data and log files (after a backup, of course) when they've reached a file size that contains too much free space. It would be nice if it could defrag index files when they've become too fragmented as well.

I guess what I'm probably looking for is a DBA in a box! Or it could just be that I need better performance monitoring tools instead. I know how to take care of both of those issues, but it's more that I forget to check for them until I start seeing performance issues with my apps.

A: That stuff is all built in; it is called a maintenance plan.

A: If you are using SQL Server 2005, fire up Management Studio and look at the Maintenance Plan section. See http://msdn.microsoft.com/en-us/library/ms187658.aspx for an overview and http://msdn.microsoft.com/en-us/library/ms189036.aspx for details on the Maintenance Plan wizard. Finally, http://msdn.microsoft.com/en-us/library/ms140255.aspx is a list of all the maintenance tasks available. I am pretty sure this is all available even in the Express Edition. I can't speak to whether anything has changed in 2008; I haven't used it yet.

A: Yeah, everything you described (except maybe perf monitoring) can be done with database maintenance plans: backups, shrinking log files, etc.

A: I guess the tool I was looking for was under my nose the whole time! I've used Maintenance Plans for backups, but I think I set those up at least 4 years ago or more, long before I knew anything about shrinking files and defragging indexes. Thanks!
{ "language": "en", "url": "https://stackoverflow.com/questions/23001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Checklist for testing a new site

What are the most common things to test in a new site? For instance, to prevent exploits by bots, malicious users, massive load, etc.?

And just as importantly, what tools and approaches should you use? (Some stress test tools are really expensive/hard to use; do you write your own? etc.) Common exploits that should be checked for.

Edit: the reason for this question is partially from being in SO beta, however please refrain from SO beta discussion; SO beta got me thinking about my own site, and a good thing too. This is meant to be a checklist for things that I, you, or someone else hasn't thought of before.

A: Try and break your own site before someone else does. Your web site is basically a publicly accessible API that allows access to a database and other backend systems. Test the URLs as if they were any other API. I like to start by cataloging all URLs that have some sort of permanent effect on the state of the system - this is easy if you are doing Ruby on Rails development or trying to follow a RESTful design pattern. For each of those URLs, try running GET, POST, PUT or DELETE HTTP methods with different parameters, so that you can ensure that you're only giving access to what you want to give access to. This of course is in addition to the obvious: functional testing, load testing, SQL injection, XSS, etc.

A: Turn off JavaScript and make sure your site can still be navigated. Even if you want to ignore the small but significant number of people who have it disabled, this will impact search engines as well.

A: Check what friendly bots see (e.g. Google), using Google Webmaster Tools.

A: YSlow can give you a quick analysis of different metrics.

A: Regarding tools for running functional tests of web pages, I've found Selenium IDE to be useful. The Firefox (version 2 only compatible at the moment) plug-in lets you capture almost all web events, save them, and replay them in the same browser. In conjunction with another Firefox add-on, Firebug (https://addons.mozilla.org/en-US/firefox/addon/1843), you can create some very powerful tests. If you set up Selenium Remote Control, you can then convert the Selenium IDE tests into NUnit tests, which you can run automatically. I use CruiseControl and run these web tests as part of a daily build. The nice thing about using Selenium Remote Control is that it can run the same functional tests on multiple browsers and operating systems, something that you can't do with the IDE. Although the web tests will take ages to run, there is a version of Selenium called Selenium Grid that lets you use any old hardware you have spare to run the tests in parallel as part of a computing grid. Not tried this myself, but it sounds interesting. All of the above is open source and free, which helped me convince management to use it :-)

A: For checking the cross-browser and cross-platform look of your site, browsershots.org is maybe the best free tool; it can save a lot of time and cost.

A: There are separate stages for this one. First there's the technical testing, where you check all technical functionality:

* SQL injection
* Cross-site scripting (XSS)
* load times
* stress levels

Then there's the phase where you have someone completely computer-illiterate sit down and ask them to find something. Not only does it show you where there are flaws in your navigational logic (I find that developers look upon things way differently than 'other people'), but they're also guaranteed to find some way to break your site.
{ "language": "en", "url": "https://stackoverflow.com/questions/23016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: CruiseControl.Net Build Publisher - Only publish compiled files

While setting up CruiseControl, I added a buildpublisher block to the publisher tasks:

    <buildpublisher>
        <sourceDir>C:\MyBuild\</sourceDir>
        <publishDir>C:\MyBuildPublished\</publishDir>
        <alwaysPublish>false</alwaysPublish>
    </buildpublisher>

This works, but it copies the entire file contents of the build. I only want to copy the DLLs and .aspx pages; I don't need the source code to get published. Does anyone know of a way to filter this, or do I need to set up a task to run a RoboCopy script instead?

A: I set up a task to do this. I'm not aware of any way to make CruiseControl be that specific. I usually just chain a batch file to the CC.NET task to do the copy.

A: I'm not sure with a web project, but for our winforms app, you can grab the TargetOutputs from the MSBuild task like so:

    <MSBuild Projects="@(VSProjects)" Properties="Configuration=$(Configuration)">
        <Output TaskParameter="TargetOutputs" ItemName="BuildTargetOutputs"/>
    </MSBuild>

and then do a copy:

    <Copy SourceFiles="@(BuildTargetOutputs)"
          DestinationFolder="bin"
          SkipUnchangedFiles="true" />

Not sure what the TargetOutputs are for a web project, but for winforms and class libraries, it's the .dll or .exe.

A: The default build publisher in CC.NET does not provide a way to do this. You have a few options:

* Create your own build publisher with the desired functionality
* Create a custom NAnt/MSBuild task
* Use a scripting technology (RoboCopy, batch file, etc.) to create a script file and run an "Executable" task for CC.NET, or an "exec" task for NAnt/MSBuild

A: A CC.NET PowerShell task can be used for this as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/23027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: When/how frequently should I test?

As a novice developer who is getting into the rhythm of my first professional project, I'm trying to develop good habits as soon as possible. However, I've found that I often forget to test, put it off, or do a whole bunch of tests at the end of a build instead of one at a time.

My question is: what rhythm do you like to get into when working on large projects, and where does testing fit into it?

A: Well, if you want to follow the TDD guys, before you start to code ;)

I am very much in the same position as you. I want to get more into testing, but I am currently in a position where we are working to "get the code out" rather than "get the code out right", which scares the crap out of me. So I am slowly trying to integrate testing processes into my development cycle.

Currently, I test as I code, trying to bust the code as I write it. I do find it hard to get into the TDD mindset. It's taking time, but that is the way I would want to work.

EDIT: I thought I should probably expand on this. This is my basic "working process"...

* Plan what I want from the code: possible object design, whatever.
* Create my first class, adding a huge comment to the top outlining what my "vision" for the class is.
* Outline the basic test scenarios. These will basically become the unit tests.
* Create my first method, also writing a short comment explaining how it is expected to work.
* Write an automated test to see if it does what I expect.
* Repeat steps 4-6 for each method (note the automated tests are in a huge list that runs on F5).
* I then create some beefy tests to emulate the class in the working environment, obviously fixing any issues.
* If any new bugs come to light following this, I go back and write the new test in, make sure it fails (this also serves as proof-of-concept for the bug), then fix it.

I hope that helps. Open to comments on how to improve this; as I said, it is a concern of mine.

A: Before you check the code in.

A: First and often. If I'm creating some new functionality for the system, I'll be looking to initially define the interfaces and then write unit tests for those interfaces. To work out what tests to write, consider the API of the interface and the functionality it provides, then get out a pen and paper and think for a while about potential error conditions or ways to prove that it is doing the correct job. If this is too difficult, then it's likely that your API isn't good enough.

In regards to the tests, see if you can avoid writing "integration" tests that test more than one specific object, and keep them as "unit" tests.

Then create a default implementation of your interface (one that does nothing, returns rubbish values, but doesn't throw exceptions) and plug it into the tests to make sure that the tests fail (this tests that your tests work! :) ). Then write in the functionality and re-run the tests. This mechanism isn't perfect, but it will cover a lot of simple coding mistakes and provide you with an opportunity to run your new feature without having to plug it into the entire application.

Following this, you then need to test it in the main application in combination with the existing features. This is where testing is more difficult, and if possible it should be partially outsourced to a good QA tester, as they'll have the knack of breaking things. Although it helps if you have these skills too.

Getting testing right is a knack that you have to pick up, to be honest. My own experience comes from my own naive deployments and the subsequent bugs that were reported by the users when they used the software in anger.

At first, when this happened to me, I found it irritating that the user was intentionally trying to break my software, and I wanted to mark all the "bugs" down as "training issues". However, after reflecting on it, I realised that it is our role (as developers) to make the application as simple and reliable to use as possible, even by idiots. It is our role to empower idiots, and that's why we get paid the dollar. Idiot handling.

To effectively test like this, you have to get into the mindset of trying to break everything. Assume the mantle of a user that bashes the buttons and generally attempts to destroy your application in weird and wonderful ways. Assume that if you don't find flaws, they will be discovered in production, to your company's serious loss of face. Take full responsibility for all of these issues, and curse yourself when a bug you are responsible (or even part responsible) for is discovered in production.

If you do most of the above, you should start to produce much more robust code. However, it is a bit of an art form and requires a lot of experience to be good at.

A: A good key to remember is "Test early, test often, and test again when you think you are done."

A: When to test? When it's important that the code works correctly!

A: When hacking something together for myself, I test at the end. Bad practice, but these are usually small things that I'll use a few times and that's it. On a larger project, I write tests before I write a class, and I run the tests after every change to that class.

A: I test constantly. After I finish even a loop inside of a function, I run the program and hit a breakpoint at the top of the loop, then run through it. This is all just to make sure that the process is doing exactly what I want it to. Then, once a function is finished, you test it in its entirety. You probably want to set a breakpoint just before the function is called, and check your debugger to make sure that it works perfectly. I guess I would say: "Test often."

A: I've only recently added unit testing to my regular workflow, but I write unit tests:

* to express the requirements for each new code module (right after I write the interface but before writing the implementation)
* every time I think "it had better ... by the time I'm done"
* when something breaks, to quantify the bug and prove that I've fixed it
* when I write code which explicitly allocates or deallocates memory -- I loathe hunting for memory leaks...

I run the tests on most builds, and always before running the code.

A: Start with unit testing. Specifically, check out TDD, Test-Driven Development. The concept behind TDD is that you write the unit tests first, then write your code. If the test fails, you go back and rework your code. If it passes, you move on to the next one.

I take a hybrid approach to TDD. I don't like to write tests against nothing, so I usually write some of the code first, then put the unit tests in. It's an iterative process, one which you're never really done with. You change the code, you run your tests. If there are any failures, fix and repeat.

The other sort of testing is integration testing, which comes along later in the process and might typically be done by a QA testing team. In any case, integration testing addresses the need to test the pieces as a whole. It's the working product you're concerned with testing. This one is more difficult to deal with because it usually involves having automated testing tools (like Robot, for example).

Also, take a look at a product like CruiseControl.NET to do continuous builds. CC.NET is nice because it will run your unit tests with each build, notifying you immediately of any failures.

A: We don't do TDD here (though some have advocated it), but our rule is that you're supposed to check your unit tests in with your changes. It doesn't always happen, but it's easy to go back and look at a specific changeset and see whether or not tests were written.

A: I find that if I wait until the end of writing some new feature to test, I forget many of the edge cases that I thought might break the feature. This is OK if you are doing things to learn for yourself, but in a professional environment I find my flow to be the classic form of: Red, Green, Refactor (see the NUnit sketch after these answers).

Red: Write your test so that it fails. That way you know the test is asserting against the correct variable.

Green: Make your new test pass in the easiest way possible. If that means hard-coding it, that's OK. This is great for those that just want something to work right away.

Refactor: Now that your test passes, you can go back and change your code with confidence. Your new change broke your test? Great, your change had an implication you didn't realize; now your test is telling you.

This rhythm has sped up my development over time, because I basically have a history compiler for all the things I thought needed to be checked in order for a feature to work! This, in turn, leads to many other benefits that I won't get to here...

A: Lots of great answers here! I try to test at the lowest level that makes sense:

* If a single computation or conditional is difficult or complex, add test code while you're writing it and ensure each piece works. Comment out the test code when you're done, but leave it there to document how you tested the algorithm.
* Test each function.
  * Exercise each branch at least once.
  * Exercise the boundary conditions -- input values at which the code changes its behavior -- to catch "off by one" errors.
  * Test various combinations of valid and invalid inputs.
  * Look for situations that might break the code, and test them.
* Test each module with the same strategy as above.
* Test the body of code as a whole, to ensure the components interact properly. If you've been diligent about lower-level testing, this is essentially a "confidence test" to ensure nothing broke during assembly.

Since most of my code is for embedded devices, I pay particular attention to robustness, interaction between various threads, tasks, and components, and unexpected use of resources: memory, CPU, filesystem space, etc.

In general, the earlier you encounter an error, the easier it is to isolate, identify, and fix it -- and the more time you get to spend creating, rather than chasing your tail.*

* I know, -1 for the gratuitous buffer-pointer reference!
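A minimal sketch of the Red, Green, Refactor rhythm described above, using NUnit; the Calculator class and its test are hypothetical examples, not from any answer:

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        // Red: write this first and watch it fail (Calculator.Add doesn't exist yet).
        [Test]
        public void Add_TwoNumbers_ReturnsSum()
        {
            var calc = new Calculator();
            Assert.AreEqual(5, calc.Add(2, 3));
        }
    }

    public class Calculator
    {
        // Green: the simplest implementation that makes the test pass.
        // Refactor: once the bar is green, reshape the code freely --
        // the test catches any regression.
        public int Add(int a, int b)
        {
            return a + b;
        }
    }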
{ "language": "en", "url": "https://stackoverflow.com/questions/23031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: XML Collection Best Practices

I'm creating an application that will store a hierarchical collection of items in an XML file, and I'm wondering about the industry standard for storing collections in XML. Which of the following two formats is preferred? (If there is another option I'm not seeing, please advise.)

Option A:

    <School>
        <Student Name="Jack" />
        <Student Name="Jill" />
        <Class Name="English 101" />
        <Class Name="Math 101" />
    </School>

Option B:

    <School>
        <Students>
            <Student Name="Jack" />
            <Student Name="Jill" />
        </Students>
        <Classes>
            <Class Name="English 101" />
            <Class Name="Math 101" />
        </Classes>
    </School>

A: I'm no XML expert, but I find Option B to be more human-readable, and I think it's just as machine-readable as Option A. I believe that XML is designed to be both human- and machine-readable, so I would go for Option B myself.

I just realized something else after Ryan Farley's post. If the Students or Classes section becomes too big and must be moved to another XML file, it seems like it would be easier to copy the node and create a new XML file out of that node with Option B.

A: Definitely Option B. I wouldn't mix students and classes in the XML, just the same way that I wouldn't mix students and classes in the same table in a database.

A: Option B, absolutely. When there's a logical grouping of similar items, it should have a parent item. That way, my parser won't have to step through all 500 student records checking to see if there are class records mixed in.

A: Another compelling reason to use Option B is error checking. If the original file is modified outside an XML application, or if no XSD schema is applied, there could be the case where you have an uneven number of students and classes. At least if you have the students and classes grouped together, you will easily be able to tell if each record is complete, independently of any other record.
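To illustrate the "parser won't have to step through" point, a minimal sketch in C# using LINQ to XML, assuming the Option B layout (the code and file name are illustrative, not from any answer):

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class OptionBDemo
    {
        static void Main()
        {
            XDocument doc = XDocument.Load("school.xml"); // assumed file name

            // With Option B, each group is addressed directly; there is no
            // need to filter class records out of a mixed list of children.
            var studentNames = doc.Root
                .Element("Students")
                .Elements("Student")
                .Select(s => (string)s.Attribute("Name"));

            foreach (string name in studentNames)
                Console.WriteLine(name);
        }
    }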
{ "language": "en", "url": "https://stackoverflow.com/questions/23064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to solve the Single stepping problem with VS2008 SP1

Debugging in Visual Studio seems to have been broken with SP1. Single stepping randomly does not work, and the program just starts to run. Sometimes breakpoints are ignored. It is unpredictable and unusable. It will generally hit the first breakpoint, but after that it is totally unpredictable. Any idea what needs to be done to correct this behavior?

A: Make sure you are debugging using the Debug configuration, not the Release one. Also make sure optimizations are disabled in the Debug configuration. Optimizations must be off when you debug, or else they can lead to very erratic behaviours like these. For C# projects, which I am assuming the question is about looking at the tags, the optimization option is located in the "Build" tab of "Project > Properties...": the last option of "General", called "Optimize Code".

A: There is a fix which for some reason isn't included in the update process: http://code.msdn.microsoft.com/KB957912/Release/ProjectReleases.aspx?ReleaseId=1796 It worked for me, although some people say they still have the same problem.

A: We are using C# as the language. The problem has been identified by Microsoft. Quote from the forums: "We have identified the root cause of this issue and are currently working on a solution. We apologize for the inconvenience that this is causing you. We will let you know as soon as we have a solution. In the mean time, if we discover any workarounds, we will post them here."
{ "language": "en", "url": "https://stackoverflow.com/questions/23078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How Did You Decide Between WISA and LAMP?

Did you ever have to choose between WISA or LAMP at the beginning of a web project? While pros and cons are littered around the net, it would be helpful to know about your real experience in coming up with criteria, evaluating, deciding, and reflecting upon your decision to go with either platform.

A: Cost is our biggest thing pushing us towards the LAMP environment, no question about it. Trying to go through corporate procurement for Windows and SQL Server licenses is horrific.

A: WISA can be cheap: if your application doesn't need anything beyond shared hosting, there is little cost. It can also be expensive, but then again so can LAMP once you get to the same size. Personally, I like the WISA stack, but it's more out of familiarity than anything. Two things that stand out:

* SQL Server - Only Oracle comes close to this; none of the free RDBMSes can even hold a candle to it.
* C# - Performance-wise, it's far better than any of the big three P's in LAMP (Perl, PHP and Python). Of course, if you use Java, it's comparable.

There is no need to be religious about one or the other. Do what fits your needs best, and do what you prefer to work in.

A: Something that people don't tend to figure in is the time savings in developer hours between platforms. Take, for example, a WISA app vs. a LAMP app. The initial cost of the environment may be a $2000 difference, but that is made up in just 20 developer hours. So, if by using .NET you are able to trim 20 hours from development or maintenance of the project, you have already made up the difference. This is never more apparent than when you need to scale the platform out and you suddenly realize you need to sink mountains of developer time into making a scripting language as fast as a compiled one.

A: This is basically ASP.NET vs PHP. If you (or the developers) have lots of experience with PHP, you use LAMP, or if they have used ASP.NET a lot, you choose WISA.

That said, while not strictly LAMP, Apache/MySQL/PHP will run on pretty much any platform you can name, which I would consider a big plus.

"This is never more apparent than when you need to scale the platform out and you suddenly realize you need to sink mountains of developer time into making a scripting language as fast as a compiled one."

Arguing the benefits of a compiled language for web applications is a bit silly, really. The language itself shouldn't ever limit the application, if it's designed sensibly. Many big sites are coded in PHP, for example. Again, that said, if the developers are familiar with ASP.NET, they are going to code better in that, so it will scale better. Same with PHP. Basically, choose a reasonable language that the developer(s) know, and then the appropriate server.

A: I personally use both stacks, and the reason really depends on the client. If a client can support LAMP, it is certainly cheaper, but what matters is what the client or company can support. As an independent developer, I would not recommend LAMP when all of the client's assets exist on Windows. It is really a comfort level, as either platform works equally well to solve any problem.

A: @Thomas WISA is:

* W = Windows
* I = IIS
* S = SQL (Microsoft SQL Server)
* A = ASP (or ASP.NET)

As for choosing between them, I would think that the available resources and talent would be the deciding factor. If you can get great ASP.NET and MS SQL devs, go that route. If you've got a bunch of PHP/MySQL gurus on hand, go LAMP. The reality is, regardless of the pros and cons of the platform, you'll struggle to get a great system on WISA out of a primarily PHP dev team, and vice versa.

A: I think the first part is your application. If you decide to go PHP, you almost automatically end up with LAMP, as WIMP or WISP stacks are quite rare (I think blog.stackoverflow.com runs on WIMP), and with .NET you definitely want to go WISA. So normally, it boils down to .NET vs. PHP (ignoring Ruby, Python and all the other stuff for a moment). When you've made that decision, the rest comes naturally or adapts to your environment (i.e. if all the admins in your company are Windows admins, maybe WAMP works better for you).

I switched from PHP to .NET about a year ago and I never looked back at PHP, but I never had to look at the bill for Windows and SQL Server licenses, to be fair. Deployment on WISA has a much higher initial cost due to the licenses involved, whereas a LAMP stack is free (yes, MySQL is also free for commercial use).

Addendum: All the funny acronyms stand for the combination of technologies: (L)inux or (W)indows, (A)pache or (I)IS, (M)ySQL or (S)QL Server, (P)hp or (A)SP.net.

A: I've used PHP/MySQL for a while, and I've used Rails, and I'm getting into ASP.NET right now. My incentive for switching to ASP.NET at the moment is similar to my incentive for digging into Rails: I find C# and Ruby to be much more enjoyable languages to code in. The object models are much more mature, and it feels like I'm fighting with the tool a lot less. I can't really compare MySQL to SQL Server yet, because I haven't done too much with the latter yet.

A: My answer is: let your developers choose the tools they are best with.

A: My decision was based on two things. First and foremost, I hated programming in ASP. I did it for an old job, and when given a choice I would choose PHP. I also tend to enjoy Linux over Windows. When it came to actually picking, though, the corporate heads chose LAMP due to cost. Because let's be honest, as developers, language isn't that big of a deal.

One thing I didn't get into: apparently MySQL isn't exactly free in business situations. I don't know the details, but you should look into it before getting sued.

A: FYI: MySQL is $599/year/server for basic, up to $4999/year/server for everything. MS SQL is $212/processor/month for server web apps. If you have a dual-processor machine, that's just over $5k for either MySQL or MS SQL; however, if you have more than two processors or only need MySQL basic, the cost is cheaper than MS. Pricing as of July 2010.

A: I think the team is the biggest issue. WISA isn't universally worse or better than LAMP for any particular job. My expertise is in LAMP. I have very little experience with WISA, so I would never pick it. It's more along the lines of photography: if all your lenses were Canon's, why would you buy a Nikon body for a big gig?

A: That is true; MySQL is $599 (one license is required per database server) for commercial use.
{ "language": "en", "url": "https://stackoverflow.com/questions/23082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What's an alternative to GWL_USERDATA for storing an object pointer?

In the Windows applications I work on, we have a custom framework that sits directly above Win32 (don't ask). When we create a window, our normal practice is to put this in the window's user data area via SetWindowLong(hwnd, GWL_USERDATA, this), which allows us to have an MFC-like callback or a tightly integrated WndProc, depending. The problem is that this will not work on 64-bit Windows, since LONG is only 32 bits wide. What's a better solution to this problem that works on both 32- and 64-bit systems?

A: SetWindowLongPtr was created to replace SetWindowLong in these instances. Its LONG_PTR parameter allows you to store a pointer for 32-bit or 64-bit compilations.

    LONG_PTR SetWindowLongPtr(
        HWND hWnd,
        int nIndex,
        LONG_PTR dwNewLong
    );

Remember that the constants have changed too, so usage now looks like:

    SetWindowLongPtr(hWnd, GWLP_USERDATA, this);

Also don't forget that to retrieve the pointer, you must now use GetWindowLongPtr:

    LONG_PTR GetWindowLongPtr(
        HWND hWnd,
        int nIndex
    );

And usage would look like (again, with changed constants):

    LONG_PTR lpUserData = GetWindowLongPtr(hWnd, GWLP_USERDATA);
    MyObject* pMyObject = (MyObject*)lpUserData;

A: The other alternative is SetProp/RemoveProp (when you are subclassing a window that already uses GWLP_USERDATA).

Another good alternative is ATL-style thunking of the WNDPROC; for more info on that, see:

* http://www.ragestorm.net/blogs/?cat=20
* http://www.hackcraft.net/cpp/windowsThunk/
{ "language": "en", "url": "https://stackoverflow.com/questions/23083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How to write a spec that is productive?

I've seen different program managers write specs in different formats. Almost every one has had his/her own style of writing a spec.

On one hand are those wordy documents which, given to a programmer, are likely to cause him/her to miss a few things. I personally dread the Word-document spec... I think it's because of my reading style: I am always speed-reading things, which I think will cause me to miss out on key points.

On the other hand, I have seen an innovative spec written in Excel by one of our clients. The way he used to write the spec was to create a kind of mock application in Excel and use some VBA to mock it up. He would do things like, on button click, note where the form should go or what action it should perform (in comments). On a data form, he would display a form in cells, and on each data entry cell he would comment on what the valid values are, what validation it should perform, etc.

I think that using this technique, it was less likely that things that needed to be done would be missed. Also, it was much easier for the developer to unit test. The tester too had a better understanding of the system, as it 'performed' before actually being written.

Visio is another tool for doing screen design, but I still think Excel has an edge over it considering its VBA support and its functions.

Do you think this should become a more popular way of writing specs? I know it involves a bit of extra work on the part of the project manager (or whoever is writing the spec), but the payoff is huge... I myself could see a lot of productivity gain from using it. And are there any better formats of specs that would actually help the programmer?

A: Joel on Software is particularly good at these and has some good articles about the subject... A specific case: the write-up and the spec.

A: Two approaches have worked well for me.

One is the "working prototype", which you sort of described in your question. In my experience, the company contracted a user interface expert to create fully functional HTML mocks. The data on the page was static, but it allowed developers and management to see and play with a "functional" version of the site. All that was left to do was replace the static data on the pages with dynamic content - this prototype was our spec for the initial version of our product. The designer even included detailed explanations of some subtle behavior in popup dialogs that would appear when hovering over mock links. It worked well for our team.

On a subsequent project, we didn't have the luxury of the UI expert, but we used a similar approach. We used a wiki to mock a version of the site. We created links between the functional aspects of the system and documented each piece of functionality in detail. Each piece of functionality could, in turn, link to detailed design and architecture decisions. We also used the wiki to hold our feature list for each release (which became our release notes). These documents linked back to the detailed feature pages. The wiki became a living document, describing our releases and the evolution of our system in great detail. It was an invaluable resource.

I prefer the wiki to the working prototype because it's more easily extensible, growing and becoming more valuable as your system evolves.

A: You may want to have a look at Test-Driven Requirements, which is a technique for making executable specifications. There are some great tools like FIT, Fitnesse, GreenPepper or Concordion for that purpose.

A: One of the Microsoft Press books has excellent examples of various documents, including an SRS (which I think is what you are talking about). It might be one of the requirements books by Wiegers (I think that's his name, I'm blanking on it right now). I've seen US government organizations use that as a template, and from my three work experiences with the government, they like to make their own wherever they can, so if they are reusing it, it must be good.

Also - a spec should contain NO CODE, in my opinion. It should focus on what the system must do, should do, and can not do, using text and diagrams.
{ "language": "en", "url": "https://stackoverflow.com/questions/23091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the best way to deal with cache and the browser back button?

What's the best way to handle a user going back to a page that had cached items in an ASP.NET app? Is there a good way to capture the back button (event?) and handle the cache that way?

A: You can try using the HttpResponse.Cache property if that would help:

    Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetValidUntilExpires(false);
    Response.Cache.VaryByParams["Category"] = true;

    if (Response.Cache.VaryByParams["Category"])
    {
        //...
    }

Or you could block caching of the page altogether with HttpResponse.CacheControl, but it's been deprecated in favor of the Cache property above:

    Response.CacheControl = "No-Cache";

Edit: OR you could really go nuts and do it all by hand:

    Response.ClearHeaders();
    Response.AppendHeader("Cache-Control", "no-cache"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "private"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "no-store"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "must-revalidate"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "max-stale=0"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "post-check=0"); // HTTP 1.1
    Response.AppendHeader("Cache-Control", "pre-check=0"); // HTTP 1.1
    Response.AppendHeader("Pragma", "no-cache"); // HTTP 1.1
    Response.AppendHeader("Keep-Alive", "timeout=3, max=993"); // HTTP 1.1
    Response.AppendHeader("Expires", "Mon, 26 Jul 1997 05:00:00 GMT"); // HTTP 1.1

A: As far as I know (or at least have read), it's best to try not to work in response to user events, but rather to think "in the page". Architect your application so it doesn't care if the back button is pushed; it will just deal with it. This may mean a little extra work from a development point of view, but overall it will make the application a lot more robust. I.e., if step 3 performs some data changes, then the user clicks back (to step 2) and clicks next again, the application checks to see if the changes have been made. Or, ideally, it doesn't make any hard changes until the user clicks "OK" at the end. This way, all the changes are stored and you can repopulate the form based on previously entered values on load, each and every time. I hope that makes sense :)

A: RFC 2616 §13.13 says that history and cache are different things. There should be absolutely no way for the cache to affect the Back button. If any combination of HTTP headers affects the Back button, it's a bug in the browser... with one exception. In HTTPS, browsers interpret Cache-Control: must-revalidate as a request to refresh pages when the Back button is used (Mozilla calls it "silly bank mode"). This isn't supported in plain HTTP.

A: The best way to deal with it is probably to put a no-cache directive in your ASP.NET pages (or a master page if you're using one). I don't think there's a way to deal with this directly in your ASP.NET code (since the cache decision is happening on the client). As for MVC, I don't know how you would accomplish that (assuming it's different from Web Forms-based ASP.NET); I haven't used it.

A: The following code worked for me in IE9+, FF21 and the latest Chrome:

    Response.Cache.SetCacheability(HttpCacheability.NoCache | HttpCacheability.Private);
    Response.Cache.AppendCacheExtension("must-revalidate");
    Response.Cache.AppendCacheExtension("max-age=0");
    Response.Cache.SetNoStore();

You can place this in the Page_Load() event handler in the MasterPage so that every page in your app requires a round trip to the server when pressing the back button.
{ "language": "en", "url": "https://stackoverflow.com/questions/23094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What common web exploits should I know about?

I'm pretty green still when it comes to web programming; I've spent most of my time on client applications. So I'm curious about the common exploits I should fear/test for in my site.

A: I'm posting the OWASP Top 10 2007 abbreviated list here so people don't have to look through to another link, and in case the source goes down.

Cross Site Scripting (XSS)

* XSS flaws occur whenever an application takes user-supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser, which can hijack user sessions, deface web sites, possibly introduce worms, etc.

Injection Flaws

* Injection flaws, particularly SQL injection, are common in web applications. Injection occurs when user-supplied data is sent to an interpreter as part of a command or query. The attacker's hostile data tricks the interpreter into executing unintended commands or changing data.

Malicious File Execution

* Code vulnerable to remote file inclusion (RFI) allows attackers to include hostile code and data, resulting in devastating attacks, such as total server compromise. Malicious file execution attacks affect PHP, XML and any framework which accepts filenames or files from users.

Insecure Direct Object Reference

* A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, database record, or key, as a URL or form parameter. Attackers can manipulate those references to access other objects without authorization.

Cross Site Request Forgery (CSRF)

* A CSRF attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks.

Information Leakage and Improper Error Handling

* Applications can unintentionally leak information about their configuration or internal workings, or violate privacy, through a variety of application problems. Attackers use this weakness to steal sensitive data or conduct more serious attacks.

Broken Authentication and Session Management

* Account credentials and session tokens are often not properly protected. Attackers compromise passwords, keys, or authentication tokens to assume other users' identities.

Insecure Cryptographic Storage

* Web applications rarely use cryptographic functions properly to protect data and credentials. Attackers use weakly protected data to conduct identity theft and other crimes, such as credit card fraud.

Insecure Communications

* Applications frequently fail to encrypt network traffic when it is necessary to protect sensitive communications.

Failure to Restrict URL Access

* Frequently, an application only protects sensitive functionality by preventing the display of links or URLs to unauthorized users. Attackers can use this weakness to access and perform unauthorized operations by accessing those URLs directly.

The Open Web Application Security Project

-Adam

A: SQL INJECTION ATTACKS. They are easy to avoid but all too common. NEVER EVER EVER EVER (did I mention "ever"?) trust user information passed to you from form elements. If your data is not vetted before being passed into other logical layers of your application, you might as well give the keys to your site to a stranger on the street.

You do not mention what platform you are on, but if on ASP.NET get a start with good ol' Scott Guthrie and his article "Tip/Trick: Guard Against SQL Injection Attacks". After that you need to consider what type of data you will permit users to submit into, and eventually out of, your database. If you permit HTML to be inserted and then later presented, you are wide open for Cross-Site Scripting attacks (known as XSS).

Those are the two that come to mind for me, but our very own Jeff Atwood had a good article at Coding Horror with a review of the book "19 Deadly Sins of Software Security".

A: Most people here have mentioned SQL Injection and XSS, which is kind of correct, but don't be fooled - the most important thing you need to worry about as a web developer is INPUT VALIDATION, which is where XSS and SQL Injection stem from.

For instance, if you have a form field that will only ever accept integers, make sure you're implementing something on both the client side AND the server side to sanitise the data. Check and double-check any input data, especially if it's going to end up in an SQL query. I suggest building an escaper function and wrapping it around anything going into a query. For instance:

    $query = "SELECT field1, field2 FROM table1 WHERE field1 = '" . myescapefunc($userinput) . "'";

Likewise, if you're going to display any user-inputted information on a webpage, make sure you've stripped any <script> tags or anything else that might result in Javascript execution (such as onLoad= or onMouseOver= attributes on tags).

A: OWASP keeps a list of the Top 10 web attacks to watch out for, in addition to a ton of other useful security information for web development.

A: These three are the most important:

* Cross Site Request Forgery
* Cross Site Scripting
* SQL injection

A:

    bool UserCredentialsOK(User user)
    {
        if (user.Name == "modesty")
            return false;
        // else: perform other checks
        return true;
    }

A: Everyone's going to say "SQL Injection", because it's the scariest-sounding vulnerability and the easiest one to get your head around. Cross-Site Scripting (XSS) is going to come in second place, because it's also easy to understand. "Poor input validation" isn't a vulnerability, but rather an evaluation of a security best practice.

Let's try this from a different perspective. Here are features that, when implemented in a web application, are likely to mess you up:

* Dynamic SQL (for instance, UI query builders). By now, you probably know that the only reliably safe way to use SQL in a web app is to use parameterized queries, where you explicitly bind each parameter in the query to a variable. The places where I see web apps most frequently break this rule are when the malicious input isn't an obvious parameter (like a name), but rather a query attribute. An obvious example is the iTunes-like "Smart Playlist" query builders you see on search sites, where things like where-clause operators are passed directly to the backend. Another great rock to turn over are table column sorts, where you'll see things like DESC exposed in HTTP parameters.

* File upload. File upload messes people up because file pathnames look suspiciously like URL pathnames, and because web servers make it easy to implement the "download" part just by aiming URLs at directories on the filesystem. 7 out of 10 upload handlers we test allow attackers to access arbitrary files on the server, because the app developers assumed the same permissions were applied to the filesystem "open()" call as are applied to queries.

* Password storage. If your application can mail me back my raw password when I lose it, you fail. There's a single safe, reliable answer for password storage, which is bcrypt; if you're using PHP, you probably want PHPpass.

* Random number generation. A classic attack on web apps: reset another user's password, and, because the app is using the system's "rand()" function, which is not crypto-strong, the password is predictable. This also applies anywhere you're doing cryptography. Which, by the way, you shouldn't be doing: if you're relying on crypto anywhere, you're very likely vulnerable.

* Dynamic output. People put too much faith in input validation. Your chances of scrubbing user inputs of all possible metacharacters, especially in the real world, where metacharacters are necessary parts of user input, are low. A much better approach is to have a consistent regime of filtering database outputs and transforming them into HTML entities, like quot, gt, and lt. Rails will do this for you automatically.

* Email. Plenty of applications implement some sort of outbound mail capability that enables an attacker to either create an anonymous account, or use no account at all, to send attacker-controlled email to arbitrary email addresses.

Beyond these features, the #1 mistake you are likely to make in your application is to expose a database row ID somewhere, so that user X can see data for user Y simply by changing a number from "5" to "6".

A: This is also a short little presentation on security by one of WordPress's core developers: Security in WordPress. It covers all of the basic security problems in web apps.

A: The most common are probably database injection attacks and cross-site scripting attacks, mainly because those are the easiest to accomplish (that's likely because those are the ones programmers are laziest about).

A: You can see even on this site that the most damaging things you'll be looking after involve code injection into your application, so XSS (Cross Site Scripting) and SQL injection (@Patrick's suggestions) are your biggest concerns. Basically, you're going to want to make sure that if your application allows a user to inject any code whatsoever, it's regulated and tested so that only things you're sure you want to allow (an HTML link, image, etc.) are passed, and nothing else is executed.

A: SQL Injection. Cross Site Scripting.

A: Using stored procedures and/or parameterized queries will go a long way in protecting you from SQL injection (a sketch follows at the end of these answers). Also, do NOT have your web app access the database as sa or dbo - set up a standard user account and set the permissions.

As for XSS (cross-site scripting), ASP.NET has some built-in protections. The best thing is to filter input using validation controls and Regex.

A: I'm no expert, but from what I have learned so far the golden rule is not to trust any user data (GET, POST, COOKIE). Common attack types and how to save yourself:

* SQL Injection Attack: Use prepared queries
* Cross Site Scripting: Send no user data to the browser without filtering/escaping it first. This also includes user data stored in the database, which originally came from users.
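A minimal sketch of the parameterized-query advice above, in C# with ADO.NET; it assumes an ASP.NET page context, and the connection string, table, and column names are hypothetical:

    using System.Data.SqlClient;

    string userInput = Request.QueryString["field1"]; // untrusted input

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT field1, field2 FROM table1 WHERE field1 = @field1", conn))
    {
        // The value is bound as data, never spliced into the SQL text,
        // so hostile input cannot alter the structure of the query.
        cmd.Parameters.AddWithValue("@field1", userInput);

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // process reader["field1"], reader["field2"]
            }
        }
    }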
{ "language": "en", "url": "https://stackoverflow.com/questions/23102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: Best method to parse various custom XML documents in Java

What is the best method to parse multiple, discrete, custom XML documents with Java?

A: I would use StAX to parse XML; it's fast and easy to use. I've been using it on my last project to parse XML files up to 24MB. There's a nice introduction on java.net, which tells you everything you need to know to get started.

A: Basically, you have two main XML parsing methods in Java:

* SAX, where you use a handler to grab only what you want in your XML and ditch the rest
* DOM, which parses your file all along and allows you to grab all elements in a more tree-like fashion

Another very useful XML parsing method, albeit a little more recent than these, and included in the JRE only since Java 6, is StAX. StAX was conceived as a middle ground between the tree-based approach of DOM and the event-based approach of SAX. It is quite similar to SAX in that parsing very large documents is easy, but in this case the application "pulls" info from the parser, instead of the parser "pushing" events to the application. You can find more explanation on this subject here.

So, depending on what you want to achieve, you can use one of these approaches.

A: You will want to use org.xml.sax.XMLReader (http://docs.oracle.com/javase/7/docs/api/org/xml/sax/XMLReader.html).

A: Use the dom4j library. First read the document:

    import java.net.URL;

    import org.dom4j.Document;
    import org.dom4j.DocumentException;
    import org.dom4j.Node;
    import org.dom4j.io.SAXReader;

    public class Foo {

        public Document parse(URL url) throws DocumentException {
            SAXReader reader = new SAXReader();
            Document document = reader.read(url);
            return document;
        }
    }

Then use XPath to get to the values you need:

    public String get_author(Document document) {
        Node node = document.selectSingleNode("//AppealRequestProcessRequest/author");
        String author = node.getText();
        return author;
    }

A: If you only need to parse, then I would recommend using an XPath library. Here is a nice reference: http://www.ibm.com/developerworks/library/x-javaxpathapi.html

But you may want to consider turning XML into objects, and then the sky is the limit. For that you may use XStream; this is a great library which I use a lot.

A: Below is code that extracts some values using vtd-xml:

    import java.io.IOException;

    import com.ximpleware.*;

    public class extractValue {
        public static void main(String s[]) throws VTDException, IOException {
            VTDGen vg = new VTDGen();
            if (!vg.parseFile("input.xml", false))
                return;
            VTDNav vn = vg.getNav();
            AutoPilot ap = new AutoPilot(vn);
            ap.selectXPath("/aa/bb[name='k1']/value");
            int i = 0;
            while ((i = ap.evalXPath()) != -1) {
                System.out.println(" value ===> " + vn.toString(i));
            }
        }
    }
{ "language": "en", "url": "https://stackoverflow.com/questions/23106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Need to test an ajax timeout condition

As the title mentions, I have a timeout callback handler on an ajax call, and I want to be able to test that condition, but nothing is coming to mind immediately on ways I can force my application to hit that state. Any suggestions?

A: You could always run a server-side script that keeps running for a period of time. For example:

    <?php
    sleep(10); // sleep for 10 seconds.
    print "This script has finished.";
    ?>

A: First off, I think you need to be clearer in your question - what technology are you using, and where is the process that is timing out, server-side or client-side?

If you want to have the server-side code take a long time and you are using .NET, place this line in the method you call server-side:

    System.Threading.Thread.Sleep(timeoutMilliseconds);

As long as you use a number sufficient for your client-side code to assume the server has timed out, you should be good.

A: YUI Connection Manager allows you to introduce slowdown in your JavaScript to test AJAX against latency.
{ "language": "en", "url": "https://stackoverflow.com/questions/23124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's a good beginning text on functional programming?

I like to study languages outside my comfort zone, but I've had a hard time finding a place to start for functional languages. I heard a lot of good things about Structure and Interpretation of Computer Programs, but when I tried to read through it a couple of years ago it just seemed to whiz over my head. I do way better with books than web sites, but when I visit the local book store the books on LISP look kind of scary.

So what's a good starting point? My goal is to be able to use a functional programming language to solve simple problems in 6 months or so, and to have the ability to move to more advanced topics, recognize when a functional language is the right tool for the job, and use the language to solve more problems over the course of 2-3 years. I like books that are heavy on examples but also include challenges to work through. Does such a thing exist for functional languages?

A: I really like Thompson's "Haskell: The Craft of Functional Programming" because it's well written and Haskell allows an easier start than other functional languages while being completely pure (unlike Lisp or Scheme).

A: Since there are a bunch of different functional programming languages, it's hard to recommend books. But if you're interested in Common Lisp, recently I've been reading "Practical Common Lisp" by Peter Seibel, which you can check out online for free before dropping your hard-earned cash on it. It's a pretty gentle introduction to CL, with great explanations and tons of examples. Seibel's a great writer (example: read the story of Mac); he's good at keeping you engaged, which is really where SICP falls down, I think. It's just so dry! But while Practical Common Lisp is pretty example-heavy, it doesn't really have challenges to work through, although the examples are mostly designed to let you continue to work and build on them.

Another good book, this one Scheme-oriented: How to Design Programs (available online). I haven't had as much time with this book, being more of a Lisper than a Schemer myself, but it's well written, has good explanations and examples, and has lots of exercises to work on. It seems pretty popular in the Scheme crowd.

A: The Schemer's Guide and related software - seriously good stuff: http://www.schemers.com/tsg.html

A: Check out Introduction to Functional Programming. It offers a different perspective.

A: I found The Little Schemer a great, great introduction to functional programming. It's entirely based on simple, bite-sized examples which are built upon as the book goes on.

A: I learned from Jeffrey Ullman's Elements of ML Programming, which is pretty good. It loses points for being about Standard ML, when OCaml, F#, and Haskell are (seemingly) more popular.

A: I feel Purely Functional Data Structures by Chris Okasaki is worth a look. FYI: http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf

A: The Little Schemer teaches recursion really well, and it's fun and simple to read. I also liked The Scheme Programming Language for a broader introduction into the language.

A: Try Real World Haskell. It's free online.

A: Haskell is a very good functional programming language for beginners. Someone had asked about good resources for Haskell, so I will point you there. If you are looking for a good book on functional programming, I would recommend "Functional Programming: Practice and Theory" by Bruce J. MacLennan. It is, however, required that you brush up on your set theory and logic before giving it a read. It includes examples in LISP, Haskell and other languages.

A: SICP is a great book. This is probably my bias, but I thought OCaml was pretty easy to get into. You have the option of programming in a few different styles until you're completely comfortable. I posted a bunch of links to Haskell and OCaml references that are books, with examples and so on, that seem right up your alley.

If you prefer Lisp, you can try to power through the 99 problems in Lisp (which you can do in any language, really), or you can watch the lectures from the people who wrote SICP.

Further down the road, get a hold of "Purely Functional Data Structures", as it gets into the hard-core deep design and considerations you have to take into account in functional languages - it uses ML (which OCaml derived from).

A: I really recommend "On Lisp" by Paul Graham. It is concise and very readable even for beginners in functional programming (as I was when I read it). It contains a lot of very short examples, each of which helps you understand a single thing.

I often thought while reading this book: this is just the language containing exactly the features I ever wanted in other (non-functional) languages, but never got. :-( And this is exactly the book to learn it from - always comprehensible, sometimes even funny! You may get it for free at the author's site!

A: If you have experience with .NET, Expert F# is good. F# is derived from OCaml. Lisp is more pure as functional languages go.

A: Real-World Functional Programming (with examples in F# and C#).

A: I have heard good things about Haskell Functional Programming, but I also found this list of functional programming books at Amazon that might be helpful to you.
{ "language": "en", "url": "https://stackoverflow.com/questions/23166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: HTML using Groovy MarkupBuilder, how do I elegantly mix tags and text? When using Groovy MarkupBuilder, I have places where I need to output text into the document, or call a function which outputs text into the document. Currently, I'm using the undefined tag "text" to do the output. Is there a better way to write this code? li { text("${type.getAlias()} blah blah ") function1(type.getXYZ()) if (type instanceof Class1) { text(" implements ") ft.getList().each { if (it == '') return text(it) if (!function2(type, it)) text(", ") } } } A: Actually, the recommended way now is to use mkp.yield, e.g., src.p { mkp.yield 'Some element that has a ' strong 'child element' mkp.yield ' which seems pretty basic.' } to produce <p>Some element that has a <strong>child element</strong> which seems pretty basic.</p> A: Include a method: void text(n){ builder.yield n } Most likely you (I) copied this code from somewhere that had a text method, but you didn't also copy the text method. Since MarkupBuilder accepts any name for the name of a tag and browsers ignore unknown markup, it just happened to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/23169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Caching Schemes for Managed Languages This is mostly geared toward desktop application developers. How do I design a caching block which plays nicely with the GC? How do I tell the GC that I have just done a cache sweep and it is time to do a GC? How do I get an accurate measure of when it is time to do a cache sweep? Are there any prebuilt caching schemes which I could borrow some ideas from? A: All you'll ever need to know (and then some): http://msdn.microsoft.com/en-us/library/ee817645.aspx Oh, and GC.Collect() forces a collect. A: While I obviously cannot speak to the specifics of your application, in most instances you should not tie your caching implementation to some perceived expectation for how the GC will work. As Stu mentions, calling GC.Collect() will force a collection (with overloads for a specific generation), but more often than not doing so will result in worse performance than just letting the GC manage itself. If you do find (after doing some real performance testing) that you need to interact with the GC, make sure you take into account the different types of GCs that the framework currently has (see here for more information).
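To make the "let the GC manage itself" advice concrete, here is a minimal sketch of a cache that cooperates with the GC instead of trying to schedule sweeps around it: it holds values through WeakReference, so the runtime can reclaim entries under memory pressure without any GC.Collect() calls. This is an illustrative sketch only; the class and member names are invented, and a production cache (like the Caching Application Block covered in the MSDN link above) would usually add size limits and expiration policies on top of strong references.

using System;
using System.Collections.Generic;

// Hypothetical example: values are held weakly, so the GC decides when to evict.
public class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> _entries = new Dictionary<TKey, WeakReference>();

    public void Put(TKey key, TValue value)
    {
        _entries[key] = new WeakReference(value);
    }

    public bool TryGet(TKey key, out TValue value)
    {
        value = null;
        WeakReference weakRef;
        if (_entries.TryGetValue(key, out weakRef))
        {
            value = weakRef.Target as TValue; // null if the GC already reclaimed it
            if (value == null)
            {
                _entries.Remove(key); // drop the dead entry; no manual GC.Collect() needed
            }
        }
        return value != null;
    }
}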
{ "language": "en", "url": "https://stackoverflow.com/questions/23175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Suggestions on Ajax development environment for PHP I am a C/C++ programmer professionally, but I've created a couple of personal web sites using PHP and MySQL. They're pretty basic, and I'd like to jazz them up using Ajax, but I've never done any Ajax. I've done all the development so far manually, i.e. no IDE or anything like that. Does anyone have suggestions on Ajax development environments that can help me? Shareware or freeware would be preferable as I'd find it hard to justify spending more than a minimal amount of money on this... A: If you want an IDE, try Aptana Studio. It supports HTML, CSS, JavaScript, PHP, XML, Ruby, Ruby on Rails, and more.... A: As T.O. says, try Aptana. There's a very good free version, and they really push the AJAX. They even have Jaxer, an "AJAX Server" that they're working on. If nothing else, the plugins are great, and, other than a few quirks, I really like working in it. A: Aptana is supposedly a decent IDE for JavaScript development. I myself just use Eclipse and a decent JavaScript framework like jQuery that has an easy syntax. A: Rolling your own AJAX has become somewhat outdated in the presence of JavaScript libraries like Prototype and jQuery. I would recommend looking into one of those libraries (Jeff used jQuery for SO and he's been really impressed with it from what I understand). As far as a development environment goes, I don't know that there's much. A typical text editor with syntax highlighting would do the trick for writing (like Notepad++). For debugging, take a look at the Firebug extension for Firefox (though if you use jQuery, a debugging tool may not be as useful). A: First off, make sure you understand the basics of the HTTP protocol. Then learn how the JavaScript XMLHttpRequest object works. Once you've covered those, pick an Ajax library - Prototype is good. Then look at a few examples, and follow the API. Job done. I seriously have no idea how they manage to write entire books on this subject. Edit: Why vote me down? Learning the basics first leads to a much better understanding of the way it works. And yes, I believe Jeff should learn C too ;-P A: Sajax is another good toolkit with PHP support. Mostly, though, I prefer to use a JavaScript framework like jQuery or Prototype
{ "language": "en", "url": "https://stackoverflow.com/questions/23176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: "All Users" Folder Is there a .NET variable that returns the "All Users" directory? A: You'll want to use the system.environment variables. Most of the predefined ones are shown here. For the "All Users" you would use: System.Environment.GetEnvironmentVariable("ALLUSERSPROFILE") I know I got a lot of upmods and a correct answer for my other stuff, but this actually works. where as the other environment variables I linked to previously don't seem to work with that function call. A: Or, Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) You can then pass this result to System.IO.Directory.GetParent() to get the root "All Users" folder. A: Is this any use? Oops: http://msdn.microsoft.com/en-us/library/bb774096(VS.85).aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/23178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How Does One Sum Dimensions of an Array Specified at Run-Time? I am working on a function to establish the entropy of a distribution. It uses a copula, if any are familiar with that. I need to sum up the values in the array based on which dimensions are "cared about." Example: Consider the following example... Dimension 0 (across) _ _ _ _ _ _ _ _ _ _ _ _ _ |_ 0 _|_ 0 _|_ 0 _|_ 2 _| Dimension 1 |_ 1 _|_ 0 _|_ 2 _|_ 0 _| (down) |_ 0 _|_ 3 _|_ 0 _|_ 6 _| |_ 0 _|_ 0 _|_ 0 _|_ 0 _| I "care about" dimension 0 only, and "don't care" about the rest (dim 1). Summing this array with the above specifications will "collapse" the "stacks" of dimension 1 down to a single 4 x 1 array: _ _ _ _ _ _ _ _ _ _ _ _ _ |_ 1 _|_ 3 _|_ 2 _|_ 8 _| This can then be summed, or have any operation performed. I need to do this with an array of 'n' dimensions, which could feasibly be 20. Also, I need to be able to do this, caring about certain dimensions, and collapsing the rest. I am having an especially hard time with this because I can't visualize 20 dimensions :p . If anyone could help me set up some C/C++ code to collapse/sum, I would be very very grateful. Update: Just got home. Here is some info to answer your questions: * *Sorry for rolling back the edits, I was hoping when I clicked roll-back it would show me the changes so I could see what I messed up, a bit like Wikipedia. This wasn't the case, as I found out. *@jeff - What doesn't make sense? I am using this great service for (what I think is) a legit reason. I want to get better at my hobby, which is all it is, as I am in high school. Many of my posts regard implementing a genetic algorithm (This post, sparsearray, rank an array, pointer manipulation). *I am using a sparse array representation, as it is possible to exceed the number of molecules in the universe using a traditional (dense) array. For now, the implementation of the sparsearray itself doesn't matter a whole lot, as I am working to make it work with a standard array before going to a sparse representation. For those who haven't seen my previous questions, I am using a binary search tree as the structure to contain the sparse array points, and a "driver" function to traverse the tree as necessary, returning whatever the function is designed to do. This is flexible, so I can accommodate a lot of different methods of accessing the array. *The structure is a hypercube, and the number of dimensions is specified at run time, as well as the length of each dimension (which are all the same, as it is a hypercube). Thanks everyone for your input. A: @Jeff I actually think this is an interesting question. I'm not sure how useful it is, but it is a valid question. @Ed Can you provide a little more info on this question? You said the dimension of the array is dynamic, but is the number of elements dynamic as well? EDIT: I'm going to try and answer the question anyways. I can't give you the code off the top of my head (it would take a while to get it right without any compiler here on this PC), but I can point you in the right direction ... Let's use 8 dimensions (0-7) with indexes 0 to 3 as an example. You care about only 1, 2 and 6. This means you have two arrays: the input array, and array_care[4][4][4] for dimensions 1, 2, and 6, which will hold the end result. Next, we want to iterate in a very specific way. We have the array input[4][4][4][4][4][4][4][4] to parse through, and we care about dimensions 1, 2, and 6. 
We need to define some temporary indexes: int dim[8] = {0,0,0,0,0,0,0,0}; We also need to store the order in which we want to increase the indexes: int increase_index_order[8] = {7,5,4,3,0,6,2,1}; int i = 0; This order is important for doing what you requested. Define a termination flag: bool terminate=false; Now we can create our loop (assuming array_care starts out zero-initialized): while (!terminate) { array_care[dim[1]][dim[2]][dim[6]] += input[dim[0]][dim[1]][dim[2]][dim[3]][dim[4]][dim[5]][dim[6]][dim[7]]; while ((i < 8) && (dim[increase_index_order[i]] == 3)) { dim[increase_index_order[i]]=0; i++; } if (i < 8) { dim[increase_index_order[i]]++; i=0; } else { terminate=true; } } That should work for 8 dimensions, caring about 3 dimensions. It would take a bit more time to make it dynamic, and I don't have the time. Hope this helps. I apologize, but I haven't learned the code markups yet. :( A: This kind of thing is much easier if you use STL containers, or maybe Boost.MultiArray. But if you must use an array: #include <iostream> #include <boost/foreach.hpp> #include <vector> int sum(int x) { return x; } template <class T, unsigned N> int sum(const T (&x)[N]) { int r = 0; for(int i = 0; i < N; ++i) { r += sum(x[i]); } return r; } template <class T, unsigned N> std::vector<int> reduce(const T (&x)[N]) { std::vector<int> result; for(int i = 0; i < N; ++i) { result.push_back(sum(x[i])); } return result; } int main() { int x[][2][2] = { { { 1, 2 }, { 3, 4 } }, { { 5, 6 }, { 7, 8 } } }; BOOST_FOREACH(int v, reduce(x)) { std::cout<<v<<"\n"; } } A: This could have applications. Let's say you implemented a 2D Conway's Game of Life (which defines a 2D plane, 1 for 'alive', 0 for 'dead') and you stored the Games history for every iteration (which then defines a 3D cube). If you wanted to know how many bacteria were alive over history, you would use the above algorithm. You could use the same algorithm for a 3D (and 4D, 5D, etc.) version of the Game of Life grid. I'd say this was a question for recursion, I'm not yet a C programmer but I know it is possible in C. In Python,
def iter_arr(array):
    total = 0
    for i in array:
        if isinstance(i, list):
            total += iter_arr(i)
        else:
            total += i
    return total
* *Iterate over each element in array *If element is another array, call the function again *If element is not an array, add it to the total *Return the total You would then apply this to each element in the 'cared about' dimension. This is easier in Python due to duck-typing though ... A: Actually, by collapsing the columns you already summed them, so the dimension doesn't matter at all for your example. Did I miss something or did you? A: I think the best thing to do here would be one/both of two things: *Rethink the design, if it's too complex, find a less-complex way. *Stop trying to visualise it.. :P Just store the dimensions in question that you need to sum, then do them one at a time. Once you have the base code, then look at improving the efficiency of your algorithm. A: I beg to differ, there is ALWAYS another way.. And if you really cannot refactor, then you need to break the problem down into smaller parts.. Like I said, establish which dimensions you need to sum, then hit them one at a time.. Also, stop changing the edits, they are correcting your spelling errors, they are trying to help you ;) A: When you say you don't know how many dimensions there are, how exactly are you defining the data structures? At some point, someone needs to create this array, and to do that, they need to know the dimensions of the array. 
You can force the creator to pass in this data along with the array. Unless the question is to define such a data structure... A: You're doing this in C/C++... so you have an array of array of array... you don't have to visualize 20 dimensions since that isn't how the data is laid out in memory, for a 2-dimensional array: [1] --> [1,2,3,4,5,6,...] [2] --> [1,2,3,4,5,6,...] [3] --> [1,2,3,4,5,6,...] [4] --> [1,2,3,4,5,6,...] [5] --> [1,2,3,4,5,6,...] . . . . . . so, why can't you iterate across the first one summing its contents? If you are trying to find the size, then sizeof(array)/sizeof(int) is a risky approach. You must know the dimension to be able to process this data, and set the memory up, so you know the depth of recursion to sum. Here is some pseudo code of what it seems you should do: sum( n_matrix, depth ) running_total = 0 if depth = 0 then foreach element in the array running_total += element else foreach element in the array running_total += sum( element, depth-1 ) return running_total A: x = number_of_dimensions; while (x > 1) { switch (x) { case 20: reduce20DimensionArray(); x--; break; case 19: ..... } } (Sorry, couldn't resist.) A: If I understand correctly, you want to sum all values in the cross section defined at each "bin" along 1 dimension. I suggest making a 1D array for your destination, then looping through each element in your array adding the value to the destination with the index of the dimension of interest. If you are using an arbitrary number of dimensions, you must have a way of addressing elements (I would be curious how you are implementing this). Your implementation of this will affect how you set the destination index. But an obvious way would be with if statements checked in the iteration loops.
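To make the index bookkeeping in these answers concrete, here is a short sketch, written in C# purely for illustration (the loop structure ports directly to C/C++). It assumes the hypercube is stored flat in row-major order with data.Length equal to the product of the dimension lengths; all names here are invented:

using System;

static class HypercubeDemo
{
    // Collapse an N-dimensional array (stored flat, row-major) onto the
    // "cared about" dimensions by summing over all the others.
    static int[] Collapse(int[] data, int[] dimLengths, int[] caredDims)
    {
        int n = dimLengths.Length;

        // Size of the result: product of the lengths of the cared-about dimensions.
        int resultSize = 1;
        foreach (int d in caredDims) resultSize *= dimLengths[d];
        int[] result = new int[resultSize];

        int[] index = new int[n]; // current coordinate in the full array

        for (int flat = 0; flat < data.Length; flat++)
        {
            // Map the full coordinate onto a flat index into the result,
            // using only the cared-about dimensions (mixed-radix arithmetic).
            int resultIndex = 0;
            foreach (int d in caredDims)
                resultIndex = resultIndex * dimLengths[d] + index[d];

            result[resultIndex] += data[flat];

            // Odometer-style increment of the full coordinate,
            // kept in lockstep with the flat index.
            for (int d = n - 1; d >= 0; d--)
            {
                if (++index[d] < dimLengths[d]) break;
                index[d] = 0;
            }
        }
        return result;
    }

    static void Main()
    {
        // The 4x4 example from the question, stored row by row as data[row*4 + col].
        // Keeping storage index 1 (the question's "dimension 0", across) and
        // summing away the rows yields {1, 3, 2, 8}.
        int[] data = { 0, 0, 0, 2, 1, 0, 2, 0, 0, 3, 0, 6, 0, 0, 0, 0 };
        int[] collapsed = Collapse(data, new[] { 4, 4 }, new[] { 1 });
        Console.WriteLine(string.Join(", ", collapsed));
    }
}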
{ "language": "en", "url": "https://stackoverflow.com/questions/23190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Find out which process has an exclusive lock on a USB device handle I have a library that reads/writes to a USB-device using CreateFile() API. The device happens to implement the HID-device profile, such that it's compatible with Microsoft's HID class driver. Some other application installed on the system is opening the device in read/write mode with no share mode, which prevents my library (and anything that consumes it) from working with the device. I suppose that's the rub with being an HID-compatible device -- other driver software (mice, controllers, PHIDGETS, etc) can be uncooperative. Anyway, the device file path is of the form: 1: "\\?\hid#hpqremhiddevice&col01#5&21ff20e7&0&0000#{4d1e55b2-f16f-11cf-88cb-001111000030}". 2: "\\?\hid#vid_045e&pid_0023#7&34aa9ece&0&0000#{4d1e55b2-f16f-11cf-88cb-001111000030}". 3: "\\?\hid#vid_056a&pid_00b0&col01#6&5b05f29&0&0000#{4d1e55b2-f16f-11cf-88cb-001111000030}". And I'm trying to open it using code, like: // First, open it with minimum permissions, this device may not be ours. // we'll re-open it later in read/write hid_device_ref = CreateFile( device_path, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); I've considered a tool like FileMon or Process Monitor from SysInternals. But I can't seem to get it to report usage on device file handles like the one listed above. A: Have you tried the tool called Handle from Sysinternals? Anyway, Windows doesn't do this either (display the name of the application that locked the device): when you try to eject a USB device, Windows just says that the device is currently in use and cannot be removed right now. A: This is what I use to read from a Magtek card reader: //Open file on the device deviceHandle = CreateFile (deviceDetail->DevicePath, GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, 0, NULL); Try those options and see if you can at least read from the device. I understand your pain here... I found the USB HID documentation to be basically wrong in several places. [Edit] There's not much out there on this problem. Here's a CodeProject link that lightly touches on the subject in a thread at the bottom. Sounds like maybe if it's a keyboard or mouse Windows grabs it exclusively. A: There's a trick you can do where you open the device handle requesting neither read nor write permission and interact with it using only feature reports. Jan Axelson mentions this trick in her books about USB HID devices. I believe this gets around the problem with the exclusive lock, which you would encounter (for example) when trying to open a handle to a device that Windows considers a system keyboard or mouse. Even though you can't read or write the handle, you can still send a feature report to the device using HidD_SetFeature and read a report from the device using HidD_GetFeature. I don't know offhand of a way to read input reports or send output reports under these circumstances, and perhaps it's impossible to do so, but you might not need either of those, especially if the device is "your" device in the sense that you control the firmware. Strictly speaking this does nothing to answer your question as asked, but it seemed potentially relevant so I figured I'd throw it out there. A: Cool - I'll try those options, as they're probably better defaults given my intentions. Unfortunately, I know my device is there and I'll eventually need read/write access later on (once I inspect the descriptors and have verified it is in fact my device). 
Which means that my real goal IS to know what's using it, so I can inform the customer/user: "Hey man, 'iexplore.exe' is currently using your SuperWidget device. You'll have to close that down in order to use the SuperWidget application." (if not at the application level, then at least at the phone support level.) I forgot to mention that the Windows error reported by GetLastError() is: 0x20. The process cannot access the file because it is being used by another process. (So your sharing alternatives will probably get the file open, assuming no FILE_SHARE_NONE on behalf of the other process). [edit] Yeah, it's painful alright. I have seen mice and keyboards get locked by whatever Windows uses to read from them. I've also seen a lot of people have trouble inside a VM like Parallels on OS X, where the HID class driver has the device open exclusively, preventing the VM from using standard USB requests. I've seen some code that recreates what Process Monitor does. Maybe Sysinternals is just electing to ignore device handles, but the same method (or a slight variation) can be employed here to determine the PID. Mike
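As a concrete illustration of the zero-access trick described above, here is a hedged C# P/Invoke sketch. It requests neither read nor write access while sharing both, which is what lets it coexist with another process's exclusive read/write handle. The device path, report ID, and buffer size are placeholders; a real implementation would discover the path via the SetupDi*/HID enumeration APIs and size the buffer from HidP_GetCaps:

using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class HidSketch
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern SafeFileHandle CreateFile(
        string fileName,
        uint desiredAccess,      // 0 = neither GENERIC_READ nor GENERIC_WRITE
        uint shareMode,
        IntPtr securityAttributes,
        uint creationDisposition,
        uint flagsAndAttributes,
        IntPtr templateFile);

    [DllImport("hid.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.U1)]
    static extern bool HidD_GetFeature(SafeFileHandle device, byte[] reportBuffer, int bufferLength);

    const uint FILE_SHARE_READ = 0x1, FILE_SHARE_WRITE = 0x2, OPEN_EXISTING = 3;

    static void Main()
    {
        // Placeholder: obtain the real path from device enumeration.
        string devicePath = @"\\?\hid#vid_0000&pid_0000#placeholder";

        using (SafeFileHandle handle = CreateFile(devicePath, 0,
                   FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero,
                   OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (handle.IsInvalid)
                throw new IOException("CreateFile failed", Marshal.GetLastWin32Error());

            byte[] report = new byte[64]; // size is device-specific (placeholder)
            report[0] = 0;                // byte 0 carries the report ID (placeholder)
            bool ok = HidD_GetFeature(handle, report, report.Length);
            Console.WriteLine("HidD_GetFeature: " + ok);
        }
    }
}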
{ "language": "en", "url": "https://stackoverflow.com/questions/23197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Boundary Tests For a Networked App Besides "no connection", what other failure modes should I test for? How do I simulate a high-latency link, an unreliable link, or all the other sorts of crazy stuff that will undoubtedly happen "in the wild"? How about wireless applications? How do I test the performance in a less-than-ideal WL environment? A: To add to TimK's answer, if you have a router, test pulling the upstream link on the router; this will test a bad connection without your system knowing that you lost the physical link. Also if you plug it back in after a few seconds it's possible that the connection won't be lost*. This can simulate a very high latency. *This depends on your ISP and your router. A: If you're using Linux, try Virtual Distributed Ethernet (VDE). VDE gives you virtualised switches/hubs and Ethernet cables. You can tune network characteristics such as latency, delay, MTU, errored bits per MB, bandwidth, duplicates, etc., on individual cables - all in real time! A: You definitely want to test physically pulling the cable out. Lots of networking code will throw different exceptions in that scenario vs. when the connection has just been lost. A: Our network/server closet is a spaghetti-mess of wires; I'm not going to walk in there and start unplugging stuff lest I hit something mission-critical. (At least I have access to it; I'm sure many readers don't even know where their routers are.) Similarly, both ends of the Ethernet cable require a hands-and-knees adventure to reach. I tested enabling/disabling the network adapter, and I'm going to test from my cable internet connection from home as well. Also, I had the idea of installing Tor to create a high-latency connection. For wireless connections, I have a metal box to test what happens when the signal dies, but I notice that network connection behavior is very different depending on how I test: *put the transmitter/receiver in a metal box *go stand next to the microwave in the kitchen and turn it on *go stand in a little closet which has concrete walls
{ "language": "en", "url": "https://stackoverflow.com/questions/23205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: C++ linker unresolved external symbols I'm building an application against some legacy, third-party libraries, and having problems with the linking stage. I'm trying to compile with Visual Studio 9. My compile command is: cl -DNT40 -DPOMDLL -DCRTAPI1=_cdecl -DCRTAPI2=cdecl -D_WIN32 -DWIN32 -DWIN32_LEAN_AND_MEAN -DWNT -DBYPASS_FLEX -D_INTEL=1 -DIPLIB=none -I. -I"D:\src\include" -I"C:\Program Files\Microsoft Visual Studio 9.0\VC\include" -c -nologo -EHsc -W1 -Ox -Oy- -MD mymain.c The code compiles cleanly. The link command is: link -debug -nologo -machine:IX86 -verbose:lib -subsystem:console mymain.obj wsock32.lib advapi32.lib msvcrt.lib oldnames.lib kernel32.lib winmm.lib [snip large list of dependencies] D:\src\lib\app_main.obj -out:mymain.exe The errors that I'm getting are: app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" (__imp_?_Register@facet@locale@std@@QAEXXZ) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<char>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@D@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<char> const & __cdecl std::use_facet<class std::ctype<char> >(class std::locale const &)" (??$use_facet@V?$ctype@D@std@@@std@@YAABV?$ctype@D@0@ABVlocale@0@@Z) app_main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: static unsigned int __cdecl std::ctype<unsigned short>::_Getcat(class std::locale::facet const * *)" (__imp_?_Getcat@?$ctype@G@std@@SAIPAPBVfacet@locale@2@@Z) referenced in function "class std::ctype<unsigned short> const & __cdecl std::use_facet<class std::ctype<unsigned short> >(class std::locale const &)" (??$use_facet@V?$ctype@G@std@@@std@@YAABV?$ctype@G@0@ABVlocale@0@@Z) mymain.exe : fatal error LNK1120: 3 unresolved externals Notice that these errors are coming from the legacy code, not my code - app_main.obj is part of the legacy code, while mymain.c is my source. I've done some searching around, and what I've read says that this type of error is caused by a mismatch with the -MD switch between my code and the library that I'm linking to. Since I'm dealing with legacy code, a solution has to come from my environment. It's been a long time since I've done C++ work, and even longer since I've used Visual Studio, so I'm hoping that this is just some ignorance on my part. Any ideas on how to get these resolved? A: These are standard library references. Make sure that all libraries (including the standard library) are using the same linkage. E.g. you can't link statically while linking the standard lib dynamically. The same goes for the threading model used. Take special care that you and the 3rd party library use the same linkage options. This can be a real pain in the *ss. A: Check this on MSDN: * */MD Causes your application to use the multithread- and DLL-specific version of the run-time library. */MT Causes your application to use the multithread, static version of the run-time library. Note: "... so that the linker will use LIBCMT.lib to resolve external symbols" So you'll need a different set of libraries. 
How I went about finding out which libraries to link: *Find a configuration that does link, and add the /verbose option. *Pipe the output to a text file. *Try the configuration that doesn't link. *Look in the verbose output from step 2 for the symbols that are unresolved ("__declspec(dllimport) public: void __thiscall std::locale::facet::_Register(void)" in your case) and find the used libraries. *Add those libraries to the list of libraries you're linking to. Old skool but it worked for me. Jan A: If you still wish to get the project to compile using VS2008 (or in the future) I can suggest using a binary editor to view the object file in question, app_main.obj. Here is an example from a small project of mine. The zdbException.obj contains the following excerpt DEFAULTLIB:"libc pmtd" /DEFAULTLI B:"uuid.lib" /DE FAULTLIB:"uuid.l ib" /include:?id @?$num_put@DV?$o streambuf_iterat or@DU?$char_trai ts@D@std@@@std@@ @std@@2V0locale@ 2@A /include:?id @?$numpunct@D@st d@@2V0locale@2@A /DEFAULTLIB:"LI BCMTD" /DEFAULTL IB:"OLDNAMES" /E DITANDCONTINUE Note the entry /DEFAULTLIB:"LIBCMTD". This indicates the object file was compiled with the static multi-threaded debug C run-time. There is also the possibility that the functions referenced in the obj are deprecated in the standard run-time lib shipped with VS2008. A: After trying to get this stuff to compile under VS 2008, I tried earlier versions of VS - 2005 worked with warnings, and 2003 just worked. I double-checked the linkages and couldn't find any problems, so either I just couldn't find it, or that wasn't the problem. So to reiterate, downgrading to VS 2003 fixed it.
{ "language": "en", "url": "https://stackoverflow.com/questions/23209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Summary of differences in regular expression syntax for various tools and languages? I can never remember the differences in regular expression syntax used by tools like grep and AWK, or languages like Python and PHP. Generally, Perl has the most expansive syntax, but I'm often hamstrung by the limitations of even egrep ("extended" grep). Is there a site that lists the differences in a concise and easy-to-read fashion? A: Mastering Regular Expressions devotes the last four chapters to Java, PHP, Perl, and .NET, one chapter for each. From what I know, the pocket edition contains just those final four chapters. A: I find this site helpful: http://www.regular-expressions.info/ Other than that, I use the corresponding documentation extensively and I believe, all said and done, there's no way around that. A: For my own future reference, I'll offer the Regexp Syntax Summary page which contrasts the syntax for grep, egrep, Emacs, Perl, Python, and Tcl. As expected, Perl supports the greatest variety of operators, but Python looks equally capable, if not more so.
{ "language": "en", "url": "https://stackoverflow.com/questions/23216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What's the purpose (if any) of "javascript:" in event handler tags? I've been making a concerted effort to improve my javascript skills lately by reading as much javascript code as I can. In doing this I've sometimes seen the javascript: prefix appended to the front of event handler attributes in HTML element tags. What's the purpose of this prefix? Basically, is there any appreciable difference between: onchange="javascript: myFunction(this)" and onchange="myFunction(this)" ? A: "It should only be used in the href tag." That's ridiculous. The accepted way is this: <a href="/non-js-version/" onclick="someFunction(); return false">Blah</a> But to answer the OP, there is generally no reason to use javascript: anymore. In fact, you should attach the javascript event from your script, and not inline in the markup. But, that's a purist thing I think :-D A: The origin of javascript: in an event handler is actually just an IE-specific thing so that you can specify the language in addition to the handler. This is because vbscript is also a supported client side scripting language in IE. Here's an example of "vbscript:". In other browsers (as has been said by Shadow2531) javascript: is just a label and is basically ignored. href="javascript:..." can be used in links to execute javascript code as DannySmurf points out. A: Probably nothing in your example. My understanding is that javascript: is for anchor tags (in place of an actual href). You'd use it so that your script can execute when the user clicks the link, but without initiating a navigation back to the page (which a blank href coupled with an onclick will do). For example: <a href="javascript:someFunction();">Blah</a> Rather than: <a href="" onclick="someFunction();">Blah</a> A: It should not be used in event handlers (though most browsers work defensively, and will not punish you). I would also argue that it should not be used in the href attribute of an anchor. If a browser supports javascript, it will use the properly defined event handler. If a browser does not, a javascript: link will appear broken. IMO, it is better to point them to a page explaining that they need to enable javascript to use that functionality, or better yet a non-javascript required version of the functionality. So, something like: <a href="non-ajax.html" onclick="niftyAjax(); return false;">Ajax me</a> Edit: Thought of a good reason to use javascript:. Bookmarklets. For instance, this one sends you to Google Reader to view the RSS feeds for a page: var b=document.body; if(b&&!document.xmlVersion){ void(z=document.createElement('script')); void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js'); void(b.appendChild(z)); }else{ location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href) } To have a user easily add this Bookmarklet, you would format it like so: <a href="javascript:var%20b=document.body;if(b&&!document.xmlVersion){void(z=document.createElement('script'));void(z.src='http://www.google.com/reader/ui/subscribe-bookmarklet.js');void(b.appendChild(z));}else{location='http://www.google.com/reader/view/feed/'+encodeURIComponent(location.href)}">Drag this to your bookmarks, or right click and bookmark it!</a> A: I am no authority in JavaScript, and perhaps more of a dunce than the asker, but AFAIK, the difference is that the javascript: prefix is preferred/required in URI-contexts, where the argument may just as well be a traditional HTTP URL as a JavaScript trigger. 
So, my intuitive answer would be that, since onChange expects JavaScript, the javascript: prefix is redundant (if not downright erroneous). You can, however, write javascript:myFunction(this) in your address bar, and that function is run. Without the javascript:, your browser would try to interpret myFunction(this) as a URL, try to fetch the DNS info, browse to that server, etc... A: javascript: in JS code (like in an onclick attribute) is just a label for use with continue/goto label statements that may or may not be supported by the browser (probably not anywhere). It could be zipzambam: instead. Even if the label can't be used, browsers still accept it so it doesn't cause an error. This means that if someone's throwing a useless label in an onclick attribute, they probably don't know what they're doing and are just copying and pasting or doing it out of habit from doing the below. javascript: in the href attribute signifies a JavaScript URI. Example: javascript:(function()%7Balert(%22test%22)%3B%7D)()%3B A: I don't know if the javascript: prefix means anything within the onevent attributes but I know they are annoying in anchor tags when trying to open the link in a new tab. The href should be used as a fallback and never to attach javascript to links. A: @mercutio "That's ridiculous." No, it's not ridiculous, javascript: is a pseudo-protocol that can indeed only be used as the subject of a link, so he's quite right. Your suggestion is indeed better, but the best way of all is to use unobtrusive javascript techniques to iterate over HTML elements and add behaviour programmatically, as used in libraries like jQuery. A: Basically, is there any appreciable difference between: onchange="javascript: myFunction(this)" and onchange="myFunction(this)" ? Assuming you meant href="javascript: myFunction(this)", yes there is, especially when loading content using JavaScript. Using the javascript: pseudo-protocol makes the content inaccessible to some humans and all search engines, whereas using a real href and then changing the behaviour of the link using javascript makes the content accessible if javascript is turned off or not available in the particular client. A: Flubba: Use of javascript: in HREF breaks "Open in New Window" and "Open in New Tab" in Firefox and other browsers. It isn't "wrong", but if you want to make your site hard to navigate...
{ "language": "en", "url": "https://stackoverflow.com/questions/23217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Why is String.Format static? Compare String.Format("Hello {0}", "World"); with "Hello {0}".Format("World"); Why did the .Net designers choose a static method over an instance method? What do you think? A: Well I guess you have to be rather particular about it, but like people are saying, it makes more sense for String.Format to be static because of the implied semantics. Consider: "Hello {0}".Format("World"); // this makes it sound like Format *modifies* // the string, which is not possible as // strings are immutable. string[] parts = "Hello World".Split(' '); // this however sounds right, // because it implies that you // split an existing string into // two *new* strings. A: The first thing I did when I got to upgrade to VS2008 and C#3, was to do this public static string F( this string format, params object[] args ) { return String.Format(format, args); } So I can now change my code from String.Format("Hello {0}", Name); to "Hello {0}".F(Name); which I preferred at the time. Nowadays (2014) I don't bother because it's just another hassle to keep re-adding that to each random project I create, or link in some bag-of-utils library. As for why the .NET designers chose it? Who knows. It seems entirely subjective. My money is on either *Copying Java *The guy writing it at the time subjectively liked it more. There aren't really any other valid reasons that I can find A: I think it is because Format doesn't take a string per se, but a "format string". Most strings are equal to things like "Bob Smith" or "1010 Main St" or what have you and not to "Hello {0}", generally you only put those format strings in when you are trying to use a template to create another string, like a factory method, and therefore it lends itself to a static method. A: "Because the Format method has nothing to do with a string's current value." That's true for all string methods because .NET strings are immutable. "If it was non-static, you would need a string to begin with." It does: the format string. I believe this is just another example of the many design flaws in the .NET platform (and I don't mean this as a flame; I still find the .NET framework superior to most other frameworks). A: I think it's because it's a creator method (not sure if there's a better name). All it does is take what you give it and return a single string object. It doesn't operate on an existing object. If it was non-static, you would need a string to begin with. A: Maybe the .NET designers did it this way because JAVA did it this way... Embrace and extend. :) See: http://discuss.techinterview.org/default.asp?joel.3.349728.40 A: ".NET Strings are Immutable. Therefore having an instance method makes absolutely no sense." By that logic the string class should have no instance methods which return modified copies of the object, yet it has plenty (Trim, ToUpper, and so on). Furthermore, lots of other objects in the framework do this too. I agree that if they were to make it an instance method, Format seems like it would be a bad name, but that doesn't mean the functionality shouldn't be an instance method. Why not this? It's consistent with the rest of the .NET framework "Hello {0}".ToString("Orion"); A: Because the Format method has nothing to do with a string's current value. The value of the string isn't used. It takes a string and returns one. A: I don't actually know the answer but I suspect that it has something to do with the aspect of invoking methods on string literals directly. 
If I recall correctly (I didn't actually verify this because I don't have an old IDE handy), early versions of the C# IDE had trouble detecting method calls against string literals in IntelliSense, and that has a big impact on the discoverability of the API. If that was the case, typing the following wouldn't give you any help: "{0}".Format(12); If you were forced to type new String("{0}").Format(12); it would be clear that there was no advantage to making the Format method an instance method rather than a static method. The .NET libraries were designed by a lot of the same people that gave us MFC, and the String class in particular bears a strong resemblance to the CString class in MFC. MFC does have an instance Format method (that uses printf style formatting codes rather than the curly-brace style of .NET) which is painful because there's no such thing as a CString literal. So in an MFC codebase that I worked on I see a lot of this: CString csTemp = ""; csTemp.Format("Some string: %s", szFoo); which is painful. (I'm not saying that the code above is a great way to do things even in MFC, but that does seem to be the way that most of the developers on the project learned how to use CString::Format). Coming from that heritage, I can imagine that the API designers were trying to avoid that sort of situation again. A: Instance methods are good when you have an object that maintains some state; the process of formatting a string does not affect the string you are operating on (read: does not modify its state); it creates a new string. With extension methods, you can now have your cake and eat it too (i.e. you can use the latter syntax if it helps you sleep better at night). A: I think it looks better in general to use String.Format, but I could see a point in wanting to have a non-static function for when you already have a string stored in a variable that you want to "format". As an aside, none of the string class's functions act on the string itself; they return a new string object, because strings are immutable. A: @Jared: "Non-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c))" No, they aren't. "(Assuming it compiles to the same CIL, which it should.)" That's your mistake. The CIL produced is different. The distinction is that member methods can't be invoked on null values so the CIL inserts a check against null values. This obviously isn't done in the static variant. However, String.Format does not allow null values so the developers had to insert a check manually. From this point of view, the member method variant would be technically superior. A: This is to avoid confusion with .ToString() methods. For instance: double test = 1.54d; //string.Format pattern string.Format("This is a test: {0:F1}", test ); //ToString pattern "This is a test: " + test.ToString("F1"); If Format was an instance method on string this could cause confusion, as the patterns are different. String.Format() is a utility method to turn multiple objects into a formatted string. An instance method on a string does something to that string. 
Of course, you could do: public static string FormatInsert( this string input, params object[] args) { return string.Format( input, args ); } "Hello {0}, I have {1} things.".FormatInsert( "world", 3); A: I don't know why they did it, but it doesn't really matter anymore: public static class StringExtension { public static string FormatWith(this string format, params object[] args) { return String.Format(format, args); } } public class SomeClass { public string SomeMethod(string name) { return "Hello, {0}".FormatWith(name); } } That flows a lot easier, IMHO. A: Another reason for String.Format is the similarity to the function printf from C. It was supposed to let C developers have an easier time switching languages. A: A big design goal for C# was to make the transition from C/C++ to it as easy as possible. Using dot syntax on a string literal would look very strange to someone with only a C/C++ background, and formatting strings is something a developer will likely do on day one with the language. So I believe they made it static to make it closer to familiar territory. A: I see nothing wrong with it being static.. The semantics of the static method seem to make a lot more sense to me. Perhaps it is because it is a primitive. Where primitives are used so often, you want to make the utility code for working with them as light as possible.. Also, I think the semantics are a lot better with String.Format over "MyString BLAH BLAH {0}".Format ... A: I haven't tried it yet but you could make an extension method for what you want. I wouldn't do it, but I think it would work. Also I find String.Format() more in line with other patterned static methods like Int32.Parse(), long.TryParse(), etc. You could also just use a StringBuilder if you want a non-static format. StringBuilder.AppendFormat() A: Non-overloaded, non-inherited static methods (like Class.b(a,c)) that take an instance as the first variable are semantically equivalent to a method call (like a.b(c)) so the platform team made an arbitrary, aesthetic choice. (Assuming it compiles to the same CIL, which it should.) The only way to know would be to ask them why. Possibly they did it to keep the two strings close to each other lexicographically, i.e. String.Format("Foo {0}", "Bar"); instead of "Foo {0}".Format("bar"); You want to know what the indexes are mapped to; perhaps they thought that the ".Format" part just adds noise in the middle. Interestingly, the ToString method (at least for numbers) is the opposite: number.ToString("000") with the format string on the right hand side. A: ".NET Strings are Immutable. Therefore having an instance method makes absolutely no sense." String foo = ""; foo.Format("test {0}",1); // Makes it look like foo should be modified by the Format method. string newFoo = String.Format(foo, 1); // Indicates that a new string will be returned, and foo will be unaltered. A: String.Format takes at least one String and returns a different String. It doesn't need to modify the format string in order to return another string, so it makes little sense to do that (ignoring your formatting of it). On the other hand, it wouldn't be that much of a stretch to make String.Format be a member function, except I don't think C# allows for const member functions like C++ does. [Please correct me and this post if it does.] A: String.Format has to be a static method because strings are immutable. Making it an instance method would imply you could use it to "format" or modify the value of an existing string. 
This you can't do, and making it an instance method that returned a new string would make no sense. Hence, it's a static method.
{ "language": "en", "url": "https://stackoverflow.com/questions/23228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: When do you use the "this" keyword? I was curious about how other people use the this keyword. I tend to use it in constructors, but I may also use it throughout the class in other methods. Some examples: In a constructor: public Light(Vector v) { this.dir = new Vector(v); } Elsewhere public void SomeMethod() { Vector vec = new Vector(); double d = (vec * vec) - (this.radius * this.radius); } A: I only use it when absolutely necessary, i.e., when a local variable is shadowing a member. Such as here: class Vector3 { float x; float y; float z; public Vector3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; } } Or as Ryan Fox points out, when you need to pass this as a parameter. (Local variables have precedence over member variables) A: I tend to use it everywhere as well, just to make sure that it is clear that it is instance members that we are dealing with. A: Personally, I try to always use this when referring to member variables. It helps clarify the code and make it more readable. Even if there is no ambiguity, someone reading through my code for the first time doesn't know that, but if they see this used consistently, they will know if they are looking at a member variable or not. A: I use it anywhere there might be ambiguity (obviously). Not just compiler ambiguity (it would be required in that case), but also ambiguity for someone looking at the code. A: Another somewhat rare use for the this keyword is when you need to invoke an explicit interface implementation from within the implementing class. Here's a contrived example: class Example : ICloneable { private void CallClone() { object clone = ((ICloneable)this).Clone(); } object ICloneable.Clone() { throw new NotImplementedException(); } } A: Here's when I use it: *Accessing Private Methods from within the class (to differentiate) *Passing the current object to another method (or as a sender object, in case of an event) *When creating extension methods :D I don't use this for Private fields because I prefix private field variable names with an underscore (_). A: [C++] I agree with the "use it when you have to" brigade. Decorating code unnecessarily with this isn't a great idea because the compiler won't warn you when you forget to do it. This introduces potential confusion for people expecting this to always be there, i.e. they'll have to think about it. So, when would you use it? I've just had a look around some random code and found these examples (I'm not passing judgement on whether these are good things to do or otherwise): *Passing "yourself" to a function. *Assigning "yourself" to a pointer or something like that. *Casting, i.e. up/down casting (safe or otherwise), casting away constness, etc. *Compiler-enforced disambiguation. A: I use it every time I refer to an instance variable, even if I don't need to. I think it makes the code more clear. A: I can't believe all of the people that say using it always is a "best practice" and such. Use "this" when there is ambiguity, as in Corey's example, or when you need to pass the object as a parameter, as in Ryan's example. There is no reason to use it otherwise, because resolving a variable through the scope chain should be clear enough that qualifying variables with this is unnecessary. 
EDIT: The C# documentation on "this" indicates one more use, besides the two I mentioned, for the "this" keyword - for declaring indexers. EDIT: @Juan: Huh, I don't see any inconsistency in my statements - there are 3 instances when I would use the "this" keyword (as documented in the C# documentation), and those are times when you actually need it. Sticking "this" in front of variables in a constructor when there is no shadowing going on is simply a waste of keystrokes and a waste of my time when reading it; it provides no benefit. A: You should always use it. I use it to differentiate private fields and parameters (because our naming conventions state that we don't use prefixes for member and parameter names (and they are based on information found on the internet, so I consider that a best practice)) A: In Jakub Šturc's answer his #5 about passing data between constructors probably could use a little explanation. This is in overloading constructors and is the one case where use of this is mandatory. In the following example we can call the parameterized constructor from the parameterless constructor with a default parameter. class MyClass { private int _x; public MyClass() : this(5) {} public MyClass(int v) { _x = v; } } I've found this to be a particularly useful feature on occasion. A: I use it when, in a function that accepts a reference to an object of the same type, I want to make it perfectly clear which object I'm referring to, where. For example class AABB { // ... members bool intersects( AABB other ) { return other.left() < this->right() && this->left() < other.right() && // +y increases going down other.top() < this->bottom() && this->top() < other.bottom() ; } } ; (vs) class AABB { bool intersects( AABB other ) { return other.left() < right() && left() < other.right() && // +y increases going down other.top() < bottom() && top() < other.bottom() ; } } ; At a glance which AABB does right() refer to? The this adds a bit of a clarifier. A: I use it whenever StyleCop tells me to. StyleCop must be obeyed. Oh yes. A: I don't mean this to sound snarky, but it doesn't matter. Seriously. Look at the things that are important: your project, your code, your job, your personal life. None of them are going to have their success rest on whether or not you use the "this" keyword to qualify access to fields. The this keyword will not help you ship on time. It's not going to reduce bugs, it's not going to have any appreciable effect on code quality or maintainability. It's not going to get you a raise, or allow you to spend less time at the office. It's really just a style issue. If you like "this", then use it. If you don't, then don't. If you need it to get correct semantics then use it. The truth is, every programmer has his own unique programming style. That style reflects that particular programmer's notions of what the "most aesthetically pleasing code" should look like. By definition, any other programmer who reads your code is going to have a different programming style. That means there is always going to be something you did that the other guy doesn't like, or would have done differently. At some point some guy is going to read your code and grumble about something. I wouldn't fret over it. I would just make sure the code is as aesthetically pleasing as possible according to your own tastes. If you ask 10 programmers how to format code, you are going to get about 15 different opinions. A better thing to focus on is how the code is factored. Are things abstracted right? 
Did I pick meaningful names for things? Is there a lot of code duplication? Are there ways I can simplify stuff? Getting those things right, I think, will have the greatest positive impact on your project, your code, your job, and your life. Coincidentally, it will probably also cause the other guy to grumble the least. If your code works, is easy to read, and is well factored, the other guy isn't going to be scrutinizing how you initialize fields. He's just going to use your code, marvel at its greatness, and then move on to something else. A: There are several usages of this keyword in C#. * *To qualify members hidden by similar name *To have an object pass itself as a parameter to other methods *To have an object return itself from a method *To declare indexers *To declare extension methods *To pass parameters between constructors *To internally reassign value type (struct) value. *To invoke an extension method on the current instance *To cast itself to another type *To chain constructors defined in the same class You can avoid the first usage by not having member and local variables with the same name in scope, for example by following common naming conventions and using properties (Pascal case) instead of fields (camel case) to avoid colliding with local variables (also camel case). In C# 3.0 fields can be converted to properties easily by using auto-implemented properties. A: I got in the habit of using it liberally in Visual C++ since doing so would trigger IntelliSense once I hit the '>' key, and I'm lazy. (and prone to typos) But I've continued to use it, since I find it handy to see that I'm calling a member function rather than a global function. A: Any time you need a reference to the current object. One particularly handy scenario is when your object is calling a function and wants to pass itself into it. Example: void onChange() { screen.draw(this); } A: I tend to underscore fields with _ so don't really ever need to use this. Also R# tends to refactor them away anyway... A: I pretty much only use this when referencing a type property from inside the same type. As another user mentioned, I also underscore local fields so they are noticeable without needing this. A: I use it only when required, except for symmetric operations which due to single-argument polymorphism have to be put into methods of one side: boolean sameValue (SomeNum other) { return this.importantValue == other.importantValue; } A: [C++] this is used in the assignment operator where most of the time you have to check and prevent strange (unintentional, dangerous, or just a waste of time for the program) things like: A a; a = a; Your assignment operator will be written: A& A::operator=(const A& a) { if (this == &a) return *this; // we know both sides of the = operator are different, do something... return *this; } A: this on a C++ compiler The C++ compiler will silently look up a symbol if it does not find it immediately. Sometimes, most of the time, it is good: *using the mother class's method if you did not overload it in the child class. *promoting a value of a type into another type But sometimes, you just don't want the compiler to guess. You want the compiler to pick up the right symbol and not another. For me, those times are when, within a method, I want to access a member method or member variable. I just don't want some random symbol picked up just because I wrote printf instead of print. this->printf would not have compiled. 
The point is that, with C legacy libraries (§), legacy code written years ago (§§), or whatever could happen in a language where copy/pasting is an obsolete but still active feature, sometimes telling the compiler not to play wits is a great idea. These are the reasons I use this.

(§) it's still a kind of mystery to me, but I now wonder if including the <windows.h> header in your source is the reason all the legacy C library symbols pollute your global namespace

(§§) realizing that "you need to include a header, but that including this header will break your code because it uses some dumb macro with a generic name" is one of those Russian roulette moments of a coder's life

A: 'this.' helps find members on 'this' class with a lot of members (usually due to a deep inheritance chain). Hitting CTRL+Space doesn't help with this, because it also includes types; whereas 'this.' includes members ONLY. I usually delete it once I have what I was after: but this is just my style breaking through. In terms of style, if you are a lone ranger -- you decide; if you work for a company, stick to the company policy (look at the stuff in source control and see what other people are doing). In terms of using it to qualify members, neither is right or wrong. The only wrong thing is inconsistency -- that is the golden rule of style. Leave the nit-picking to others. Spend your time pondering real coding problems -- and obviously coding -- instead.

A: I use it every time I can. I believe it makes the code more readable, and more readable code equals fewer bugs and more maintainability.

A: When you are many developers working on the same code base, you need some code guidelines/rules. Where I work we've decided to use 'this' on fields, properties and events. To me it makes good sense to do it like this; it makes the code easier to read when you differentiate between class variables and method variables.

A: It depends on the coding standard I'm working under. If we are using _ to denote an instance variable then "this" becomes redundant. If we are not using _ then I tend to use this to denote instance variables.

A: I use it to invoke IntelliSense just like JohnMcG, but I'll go back and erase "this->" when I'm done. I follow the Microsoft convention of prefixing member variables with "m_", so leaving it as documentation would just be redundant.

A: 1 - Common Java setter idiom:

public void setFoo(int foo) {
    this.foo = foo;
}

2 - When calling a function with this object as a parameter:

notifier.addListener(this);

A: There is one use that has not already been mentioned in C++, and that is not to refer to the object itself or to disambiguate a member from a received variable. You can use this to convert a non-dependent name into an argument-dependent name inside template classes that inherit from other templates.

template <typename T>
struct base {
    void f() {}
};

template <typename T>
struct derived : public base<T>
{
    void test() {
        //f();          // [1] error
        base<T>::f();   // quite verbose if there is more than one argument, but valid
        this->f();      // f is now an argument-dependent symbol
    }
};

Templates are compiled with a two-pass mechanism. During the first pass, only non-argument-dependent names are resolved and checked, while dependent names are checked only for coherence, without actually substituting the template arguments.
At that step, without actually substituting the type, the compiler has almost no information about what base<T> could be (note that specialization of the base template can turn it into completely different types, even undefined types), so it just assumes that it is a type. At this stage the non-dependent call f, which seems just natural to the programmer, is a symbol that the compiler must find as a member of derived or in enclosing namespaces -- which does not happen in the example -- and it will complain. The solution is turning the non-dependent name f into a dependent name. This can be done in a couple of ways. The first is explicitly stating the type where it is implemented (base<T>::f): adding the base<T> makes the symbol dependent on T, and the compiler will just assume that it exists and postpone the actual check to the second pass, after argument substitution. The second way, much shorter if you inherit from templates that have more than one argument, or long names, is just adding a this-> before the symbol. As the template class you are implementing does depend on an argument (it inherits from base<T>), this-> is argument-dependent, and we get the same result: this->f is checked in the second round, after template parameter substitution.

A: Never. Ever. If you have variable shadowing, your naming conventions are on crack. I mean, really, no distinguishing naming for member variables? Facepalm

A: You should not use "this" unless you absolutely must. There IS a penalty associated with unnecessary verbosity. You should strive for code that is exactly as long as it needs to be, and no longer.
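As an aside, the shadowing problem that motivates most of these answers is not specific to languages with an implicit this. Here is a quick Python analogue (my own illustration, not from the answers above; Python's explicit self plays the same disambiguating role):

class Point:
    def __init__(self, x, y):
        # "self.x" is the instance attribute; bare "x" is the
        # constructor parameter that would otherwise shadow it.
        self.x = x
        self.y = y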
{ "language": "en", "url": "https://stackoverflow.com/questions/23250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "248" }
Q: How IE7 determines a site's Security Zone Does anyone know how IE7 determines what Security Zone to use for a site? I see the basics for IE6 here, but I can't find the equivalent for IE7.

A: I could use a little more information to narrow down my answer, but here is what I have: Internet Explorer has 5 different security zones by default: Local Machine Zone, Intranet, Internet, Trusted, and Restricted. These are determined in urlmon.dll (the URL Moniker library). More information here: http://msdn.microsoft.com/en-us/library/ms537183(VS.85).aspx But you can also implement your own custom security zone: http://msdn.microsoft.com/en-us/library/ms537182(VS.85).aspx The way that IE determines the security zones should not have changed between IE6 and IE7 (or IE8 for that matter). Intranet sites are determined by:

* URLs whose host names do not contain any dots (http://stackoverflow vs http://stackoverflow.com)
* Sites using the file:// scheme where the resource is retrieved from a UNC path

A: Security zones are configured by, among other things, an ADS (Alternate Data Stream) attached to the file. When IE7 downloads a file from the internet, it attaches an ADS stream that describes the zone the file belongs to. Check out the Streams tool from http://technet.microsoft.com/en-us/sysinternals/default.aspx.

A: The way the zone is determined did change between IE6 and IE7. There were bugs in how IE6 did it. Unfortunately I know of no documentation on exactly how it does it. If you posted the URLs that are giving you trouble, or gave some indication of the problem you're trying to solve, we may be able to help in some other way.

A: Not sure what the confusion is. Sites on your intranet are in the intranet zone, web sites are in the internet zone, and sites on your computer are in the local zone, unless you've specifically overridden something in the browser's preferences.
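As a concrete illustration of the Zone.Identifier stream mentioned in the ADS answer above, here is a minimal Python sketch (my own example; Windows/NTFS only, and the file path is hypothetical):

# Read the Zone.Identifier alternate data stream that IE attaches
# to downloaded files on NTFS. ZoneId=3 corresponds to the Internet zone.
with open(r"C:\Downloads\setup.exe:Zone.Identifier") as ads:
    print(ads.read())

# Typical contents:
# [ZoneTransfer]
# ZoneId=3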
{ "language": "en", "url": "https://stackoverflow.com/questions/23270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the difference between procedural programming and functional programming? I've read the Wikipedia articles for both procedural programming and functional programming, but I'm still slightly confused. Could someone boil it down to the core?

A: Functional Programming

num = 1
def function_to_add_one(num):
    num += 1
    return num

function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)
function_to_add_one(num)

#Final Output: 2

Procedural Programming

num = 1
def procedure_to_add_one():
    global num
    num += 1
    return num

procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()
procedure_to_add_one()

#Final Output: 6

function_to_add_one is a function; procedure_to_add_one is a procedure. Even if you run the function five times, every time it will return 2. If you run the procedure five times, at the end of the fifth run it will give you 6. DISCLAIMER: Obviously this is a hyper-simplified view of reality. This answer just gives a taste of "functions" as opposed to "procedures". Nothing more. Once you have tasted this superficial yet deeply penetrative intuition, start exploring the two paradigms, and you will start to see the difference quite clearly. Helps my students, hope it helps you too.

A: Konrad said:

As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.

The order of evaluation in a purely functional program may be hard(er) to reason about (especially with laziness) or even unimportant, but I think that saying it is not well defined makes it sound like you can't tell if your program is going to work at all! Perhaps a better explanation would be that control flow in functional programs is based on when the values of a function's arguments are needed. The good thing about this is that in well-written programs, state becomes explicit: each function lists its inputs as parameters instead of arbitrarily munging global state. So on some level, it is easier to reason about order of evaluation with respect to one function at a time. Each function can ignore the rest of the universe and focus on what it needs to do. When combined, functions are guaranteed to work the same[1] as they would in isolation.

... uncertain values like user input or random values are hard to model in purely functional languages.

The solution to the input problem in purely functional programs is to embed an imperative language as a DSL using a sufficiently powerful abstraction. In imperative (or non-pure functional) languages this is not needed because you can "cheat" and pass state implicitly and order of evaluation is explicit (whether you like it or not). Because of this "cheating" and forced evaluation of all parameters to every function, in imperative languages 1) you lose the ability to create your own control flow mechanisms (without macros), 2) code isn't inherently thread safe and/or parallelizable by default, 3) and implementing something like undo (time travel) takes careful work (the imperative programmer must store a recipe for getting the old value(s) back!), whereas pure functional programming buys you all these things—and a few more I may have forgotten—"for free". I hope this doesn't sound like zealotry, I just wanted to add some perspective.
Imperative programming and especially mixed paradigm programming in powerful languages like C# 3.0 are still totally effective ways to get things done and there is no silver bullet.

[1] ... except possibly with respect to memory usage (cf. foldl and foldl' in Haskell).

A: In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast with the procedural programming style that emphasizes changes in state.

A: To expand on Konrad's comment:

and the order of evaluation is not well-defined

Some functional languages have what is called Lazy Evaluation, which means a function is not executed until the value is needed. Until that time the function itself is what is passed around. Procedural languages are step 1, step 2, step 3... if in step 2 you say add 2 + 2, it does it right then. In lazy evaluation you would say add 2 + 2, but if the result is never used, it never does the addition.

A: If you have a chance, I would recommend getting a copy of Lisp/Scheme, and doing some projects in it. Most of the ideas that have lately become bandwagons were expressed in Lisp decades ago: functional programming, continuations (as closures), garbage collection, even XML. So that would be a good way to get a head start on all these current ideas, and a few more besides, like symbolic computation. You should know what functional programming is good for, and what it isn't good for. It isn't good for everything. Some problems are best expressed in terms of side-effects, where the same question gives different answers depending on when it is asked.

A: @Creighton: In Haskell there is a library function called product:

product list = foldr (*) 1 list

or simply:

product = foldr (*) 1

so the "idiomatic" factorial

fac n = foldr (*) 1 [1..n]

would simply be

fac n = product [1..n]

A: Procedural programming divides sequences of statements and conditional constructs into separate blocks called procedures that are parameterized over arguments that are (non-functional) values. Functional programming is the same except that functions are first-class values, so they can be passed as arguments to other functions and returned as results from function calls. Note that functional programming is a generalization of procedural programming in this interpretation. However, a minority interpret "functional programming" to mean side-effect-free, which is quite different but irrelevant for all major functional languages except Haskell.

A: I believe that procedural/functional/objective programming are about how to approach a problem. The first style would plan everything into steps, and solve the problem by implementing one step (a procedure) at a time. On the other hand, functional programming would emphasize the divide-and-conquer approach, where the problem is divided into sub-problems, then each sub-problem is solved (creating a function to solve that sub-problem) and the results are combined to create the answer for the whole problem. Lastly, objective programming would mimic the real world by creating a mini-world inside the computer with many objects, each of which has (somewhat) unique characteristics and interacts with others. From those interactions the result would emerge. Each style of programming has its own advantages and weaknesses. Hence, doing something such as "pure programming" (i.e.
purely procedural - no one does this, by the way, which is kind of weird - or purely functional or purely objective) is very difficult, if not impossible, except for some elementary problems specially designed to demonstrate the advantage of a programming style (hence, we call those who like pureness "weenie" :D). Then, from those styles, we have programming languages that are designed to be optimized for each style. For example, Assembly is all about procedural. Okay, most early languages are procedural, not only Asm: C, Pascal (and Fortran, I heard). Then, we have the famous Java in the objective school (actually, Java and C# are also in a class called "money-oriented," but that is subject for another discussion). Also objective is Smalltalk. In the functional school, we would have the "nearly functional" (some consider them to be impure) Lisp family and ML family and the "purely functional" Haskell, Erlang, etc. By the way, there are many general-purpose languages such as Perl, Python, Ruby.

A: None of the answers here show idiomatic functional programming. The recursive factorial answer is great for representing recursion in FP, but the majority of code is not recursive so I don't think that answer is fully representative. Say you have an array of strings, and each string represents an integer like "5" or "-200". You want to check this input array of strings against your internal test case (using integer comparison). Both solutions are shown below.

Procedural

arr_equal(a : [Int], b : [Str]) -> Bool {
    if(a.len != b.len) {
        return false;
    }

    bool ret = true;
    for( int i = 0; i < a.len /* Optimized with && ret*/; i++ ) {
        int a_int = a[i];
        int b_int = parseInt(b[i]);
        ret &= a_int == b_int;
    }
    return ret;
}

Functional

eq = i, j => i == j          # This is usually a built-in
toInt = i => parseInt(i)     # Of course, parseInt === toInt here, but this is for visualization

arr_equal(a : [Int], b : [Str]) -> Bool =
    zip(a, b.map(toInt))     # Combines into [Int, Int]
    .map(eq)
    .reduce(true, (i, j) => i && j)  # Start with true, and continuously && it with each value

While pure functional languages are generally research languages (as the real world likes its side effects unrestricted), real-world procedural languages will use the much simpler functional syntax when appropriate. This is usually implemented with an external library like Lodash, or available built-in with newer languages like Rust. The heavy lifting of functional programming is done with functions/concepts like map, filter, reduce, currying, and partial application, the last three of which you can look up for further understanding.

Addendum

In order to be used in the wild, the compiler will normally have to work out how to convert the functional version into the procedural version internally, as function call overhead is too high. Recursive cases such as the factorial shown will use tricks such as tail calls to remove O(n) memory usage. The fact that there are no side effects allows functional compilers to implement the && ret optimization even when the .reduce is done last. Using Lodash in JS obviously does not allow for any optimization, so it is a hit to performance (which isn't usually a concern with web development). Languages like Rust will optimize internally (and have functions such as try_fold to assist the && ret optimization).

A: A functional language (ideally) allows you to write a mathematical function, i.e. a function that takes n arguments and returns a value.
If the program is executed, this function is logically evaluated as needed.1 A procedural language, on the other hand, performs a series of sequential steps. (There's a way of transforming sequential logic into functional logic called continuation passing style.) As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined; which means that uncertain values like user input or random values are hard to model in purely functional languages.

1 Like everything else in this answer, that's a generalisation. This property, evaluating a computation when its result is needed rather than sequentially where it's called, is known as "laziness". Not all functional languages are actually universally lazy, nor is laziness restricted to functional programming. Rather, the description given here provides a "mental framework" to think about different programming styles that are not distinct and opposite categories but rather fluid ideas.

A: To expand on Konrad's comment:

As a consequence, a purely functional program always yields the same value for an input, and the order of evaluation is not well-defined;

Because of this, functional code is generally easier to parallelize. Since there are (generally) no side effects of the functions, and they (generally) just act on their arguments, a lot of concurrency issues go away. Functional programming is also used when you need to be capable of proving your code is correct. This is much harder to do with procedural programming (not easy with functional, but still easier). Disclaimer: I haven't used functional programming in years, and only recently started looking at it again, so I might not be completely correct here. :)

A: One thing I hadn't seen really emphasized here is that modern functional languages such as Haskell rely more on first-class functions for flow control than on explicit recursion. You don't need to define factorial recursively in Haskell, as was done above. I think something like

fac n = foldr (*) 1 [1..n]

is a perfectly idiomatic construction, and much closer in spirit to using a loop than to using explicit recursion.

A: Functional programming is essentially procedural programming in which global variables are not used.

A: Basically the two styles are like Yin and Yang. One is organized, while the other chaotic. There are situations when functional programming is the obvious choice, and other situations where procedural programming is the better choice. This is why there are at least two languages that have recently come out with a new version that embraces both programming styles. ( Perl 6 and D 2 )

#Procedural:#

* The output of a routine does not always have a direct correlation with the input.
* Everything is done in a specific order.
* Execution of a routine may have side effects.
* Tends to emphasize implementing solutions in a linear fashion.

##Perl 6##

sub factorial ( UInt:D $n is copy ) returns UInt {
    # modify "outside" state
    state $call-count++;
    # in this case it is rather pointless as
    # it can't even be accessed from outside

    my $result = 1;

    loop ( ; $n > 0 ; $n-- ) {
        $result *= $n;
    }

    return $result;
}

##D 2##

int factorial( int n ){
    int result = 1;

    for( ; n > 0 ; n-- ){
        result *= n;
    }

    return result;
}

#Functional:#

* Often recursive.
* Always returns the same output for a given input.
* Order of evaluation is usually undefined.
* Must be stateless, i.e. no operation can have side effects.
* Good fit for parallel execution.
* Tends to emphasize a divide-and-conquer approach.
* May have the feature of Lazy Evaluation.

##Haskell## (copied from Wikipedia):

fac :: Integer -> Integer
fac 0 = 1
fac n | n > 0 = n * fac (n-1)

or in one line:

fac n = if n > 0 then n * fac (n-1) else 1

##Perl 6##

proto sub factorial ( UInt:D $n ) returns UInt {*}

multi sub factorial ( 0 ) { 1 }
multi sub factorial ( $n ) { $n * samewith $n-1 } # { $n * factorial $n-1 }

##D 2##

pure int factorial( invariant int n ){
    if( n <= 1 ){
        return 1;
    }else{
        return n * factorial( n-1 );
    }
}

#Side note:#

Factorial is actually a common example to show how easy it is to create new operators in Perl 6 the same way you would create a subroutine. This feature is so ingrained into Perl 6 that most operators in the Rakudo implementation are defined this way. It also allows you to add your own multi candidates to existing operators.

sub postfix:< ! > ( UInt:D $n --> UInt )
  is tighter(&infix:<*>)
{ [*] 2 .. $n }

say 5!; # 120␤

This example also shows range creation (2..$n) and the list reduction meta-operator ([ OPERATOR ] LIST) combined with the numeric infix multiplication operator (*). It also shows that you can put --> UInt in the signature instead of returns UInt after it. (You can get away with starting the range with 2, as the multiply "operator" will return 1 when called without any arguments.)

A: I've never seen this definition given elsewhere, but I think this sums up the differences given here fairly well:

Functional programming focuses on expressions

Procedural programming focuses on statements

Expressions have values. A functional program is an expression whose value is a sequence of instructions for the computer to carry out. Statements don't have values and instead modify the state of some conceptual machine. In a purely functional language there would be no statements, in the sense that there's no way to manipulate state (they might still have a syntactic construct named "statement", but unless it manipulates state I wouldn't call it a statement in this sense). In a purely procedural language there would be no expressions; everything would be an instruction which manipulates the state of the machine. Haskell would be an example of a purely functional language because there is no way to manipulate state. Machine code would be an example of a purely procedural language because everything in a program is a statement which manipulates the state of the registers and memory of the machine. The confusing part is that the vast majority of programming languages contain both expressions and statements, allowing you to mix paradigms. Languages can be classified as more functional or more procedural based on how much they encourage the use of statements vs expressions. For example, C would be more functional than COBOL because a function call is an expression, whereas calling a sub program in COBOL is a statement (that manipulates the state of shared variables and doesn't return a value). Python would be more functional than C because it allows you to express conditional logic as an expression using short-circuit evaluation (test && path1 || path2 as opposed to if statements). Scheme would be more functional than Python because everything in Scheme is an expression. You can still write in a functional style in a language which encourages the procedural paradigm and vice versa. It's just harder and/or more awkward to write in a paradigm which isn't encouraged by the language.
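To make the statements-versus-expressions distinction above concrete, here is a small Python sketch of my own (not from any of the answers); it computes the same sum of squares both ways:

from functools import reduce

nums = [1, 2, 3, 4, 5]

# Statement-oriented (procedural): a sequence of steps mutating a variable.
total = 0
for n in nums:
    total += n * n

# Expression-oriented (functional): a single expression whose value is the answer.
total_fp = reduce(lambda acc, n: acc + n * n, nums, 0)

assert total == total_fp == 55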
A: Procedural languages tend to keep track of state (using variables) and tend to execute as a sequence of steps. Purely functional languages don't keep track of state, use immutable values, and tend to execute as a series of dependencies. In many cases the status of the call stack will hold the information that would be equivalent to that which would be stored in state variables in procedural code. Recursion is a classic example of functional-style programming (a short illustration follows at the end of this thread).

A: To understand the difference, one needs to understand that "the godfather" paradigm of both procedural and functional programming is imperative programming. Basically, procedural programming is merely a way of structuring imperative programs in which the primary method of abstraction is the "procedure" (or "function" in some programming languages). Even object-oriented programming is just another way of structuring an imperative program, where the state is encapsulated in objects, becoming an object with a "current state," plus this object has a set of functions, methods, and other stuff that let you, the programmer, manipulate or update the state. Now, in regards to functional programming, the gist of its approach is that it identifies what values to take and how these values should be transformed (so there is no state, and no mutable data, as it takes functions as first-class values and passes them as parameters to other functions). PS: understanding what every programming paradigm is used for should clarify the differences between all of them. PSS: At the end of the day, programming paradigms are just different approaches to solving problems. PSS: this quora answer has a great explanation.
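As a quick illustration of the earlier point about state living on the call stack rather than in mutable variables, compare these two factorial sketches (illustrative Python, not taken from the answers above):

# Procedural: state is a variable that each loop iteration updates.
def fact_loop(n):
    result = 1
    while n > 1:
        result *= n
        n -= 1
    return result

# Functional style: no mutation; the intermediate "state" is carried by the call stack.
def fact_rec(n):
    return 1 if n <= 1 else n * fact_rec(n - 1)

assert fact_loop(5) == fact_rec(5) == 120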
{ "language": "en", "url": "https://stackoverflow.com/questions/23277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "312" }
Q: Algorithm to find Largest prime factor of a number What is the best approach to calculating the largest prime factor of a number? I'm thinking the most efficient would be the following:

* Find the lowest prime number that divides cleanly
* Check if the result of the division is prime
* If not, find the next lowest
* Go to 2.

I'm basing this assumption on it being easier to calculate the small prime factors. Is this about right? What other approaches should I look into? Edit: I've now realised that my approach is futile if there are more than 2 prime factors in play, since step 2 fails when the result is a product of two other primes; therefore a recursive algorithm is needed. Edit again: And now I've realised that this does still work, because the last found prime number has to be the highest one, therefore any further testing of the non-prime result from step 2 would result in a smaller prime.

A: JavaScript code:

'use strict';

function largestPrimeFactor(val, divisor = 2) {
    let square = (val) => Math.pow(val, 2);

    while ((val % divisor) != 0 && square(divisor) <= val) {
        divisor++;
    }

    return square(divisor) <= val
        ? largestPrimeFactor(val / divisor, divisor)
        : val;
}

Usage Example:

let result = largestPrimeFactor(600851475143);

A: Similar to @Triptych's answer but also different. In this example no list or dictionary is used. The code is written in Ruby:

def largest_prime_factor(number)
  i = 2
  while number > 1
    if number % i == 0
      number /= i
    else
      i += 1
    end
  end
  return i
end

largest_prime_factor(600851475143) # => 6857

A: The simplest solution is a pair of mutually recursive functions. The first function generates all the prime numbers:

* Start with a list of all natural numbers greater than 1.
* Remove all numbers that are not prime, that is, numbers that have prime factors other than themselves. See below.

The second function returns the prime factors of a given number n in increasing order.

* Take a list of all the primes (see above).
* Remove all the numbers that are not factors of n.

The largest prime factor of n is the last number given by the second function. This algorithm requires a lazy list or a language (or data structure) with call-by-need semantics. For clarification, here is one (inefficient) implementation of the above in Haskell:

import Control.Monad

-- All the primes
primes = 2 : filter (ap (<=) (head . primeFactors)) [3,5..]

-- Gives the prime factors of its argument
primeFactors = factor primes
  where
    factor [] n = []
    factor xs@(p:ps) n =
        if p*p > n then [n]
        else let (d,r) = divMod n p
             in if r == 0 then p : factor xs d
                else factor ps n

-- Gives the largest prime factor of its argument
largestFactor = last . primeFactors

Making this faster is just a matter of being more clever about detecting which numbers are prime and/or factors of n, but the algorithm stays the same.

A: All numbers can be expressed as the product of primes, eg:

102 = 2 x 3 x 17
712 = 2 x 2 x 2 x 89

You can find these by starting at 2 and simply continuing to divide until the result isn't a multiple of your number:

712 / 2 = 356 .. 356 / 2 = 178 .. 178 / 2 = 89 .. 89 / 89 = 1

Using this method you don't have to actually calculate any primes: they'll all be primes, based on the fact that you've already factorised the number as much as possible with all preceding numbers.

number = 712;
currNum = number;   // the value we'll actually be working with
for (currFactor in 2 .. number) {
    while (currNum % currFactor == 0) {
        // keep on dividing by this number until we can divide no more!
        currNum = currNum / currFactor;    // reduce the currNum
    }
    if (currNum == 1) return currFactor;   // once it hits 1, we're done.
}

A: //this method skips unnecessary trial divisions and makes
//trial division more feasible for finding large primes

public static void main(String[] args) {
    long n = 1000000000039L; // this is a large prime number
    long i = 2L;
    int test = 0;
    while (n > 1) {
        while (n % i == 0) {
            n /= i;
        }
        i++;
        if (i * i > n && n > 1) {
            System.out.println(n); // prints n if it's prime
            test = 1;
            break;
        }
    }
    if (test == 0)
        System.out.println(i - 1); // prints i-1, the largest prime factor
}

A: n = abs(number);
result = 1;
if (n mod 2 == 0) {
  result = 2;
  while (n mod 2 == 0) n /= 2;
}
for (i = 3; i <= sqrt(n); i += 2) {
  if (n mod i == 0) {
    result = i;
    while (n mod i == 0) n /= i;
  }
}
return max(n, result)

There are some modulo tests that are superfluous, as n can never be divided by 6 if all factors 2 and 3 have been removed. You could only allow primes for i, which is shown in several other answers here. You could actually intertwine the Sieve of Eratosthenes here:

* First create the list of integers up to sqrt(n).
* In the for loop mark all multiples of i up to the new sqrt(n) as not prime, and use a while loop instead.
* Set i to the next prime number in the list.

Also see this question.

A: I'm aware this is not a fast solution. Posting it as a hopefully easier-to-understand slow solution.

public static long largestPrimeFactor(long n) {
    // largest composite factor must be smaller than sqrt
    long sqrt = (long)Math.ceil(Math.sqrt((double)n));

    long largest = -1;

    for(long i = 2; i <= sqrt; i++) {
        if(n % i == 0) {
            long test = largestPrimeFactor(n/i);
            if(test > largest) {
                largest = test;
            }
        }
    }

    if(largest != -1) {
        return largest;
    }

    // number is prime
    return n;
}

A: My answer is based on Triptych's, but improves a lot on it. It is based on the fact that beyond 2 and 3, all the prime numbers are of the form 6n-1 or 6n+1.

var largestPrimeFactor;
if (n mod 2 == 0) {
    largestPrimeFactor = 2;
    while (n mod 2 == 0) n = n / 2;
}
if (n mod 3 == 0) {
    largestPrimeFactor = 3;
    while (n mod 3 == 0) n = n / 3;
}

multOfSix = 6;
while (multOfSix - 1 <= n) {
    if (n mod (multOfSix - 1) == 0) {
        largestPrimeFactor = multOfSix - 1;
        while (n mod largestPrimeFactor == 0) n = n / largestPrimeFactor;
    }

    if (n mod (multOfSix + 1) == 0) {
        largestPrimeFactor = multOfSix + 1;
        while (n mod largestPrimeFactor == 0) n = n / largestPrimeFactor;
    }

    multOfSix += 6;
}

I recently wrote a blog article explaining how this algorithm works. I would venture that a method in which there is no need for a test for primality (and no sieve construction) would run faster than one which does use those. If that is the case, this is probably the fastest algorithm here.

A: Here's the best algorithm I know of (in Python):

def prime_factors(n):
    """Returns all the prime factors of a positive integer"""
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n /= d
        d = d + 1

    return factors

pfs = prime_factors(1000)
largest_prime_factor = max(pfs) # The largest element in the prime factor list

The above method runs in O(n) in the worst case (when the input is a prime number). EDIT: Below is the O(sqrt(n)) version, as suggested in the comment. Here is the code, once more.
def prime_factors(n):
    """Returns all the prime factors of a positive integer"""
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n /= d
        d = d + 1
        if d*d > n:
            if n > 1: factors.append(n)
            break
    return factors

pfs = prime_factors(1000)
largest_prime_factor = max(pfs) # The largest element in the prime factor list

A: Actually there are several more efficient ways to find factors of big numbers (for smaller ones trial division works reasonably well). One method which is very fast if the input number has two factors very close to its square root is known as Fermat factorisation. It makes use of the identity N = (a + b)(a - b) = a^2 - b^2 and is easy to understand and implement. Unfortunately it's not very fast in general. The best known method for factoring numbers up to 100 digits long is the Quadratic sieve. As a bonus, part of the algorithm is easily done with parallel processing. Yet another algorithm I've heard of is Pollard's Rho algorithm. It's not as efficient as the Quadratic Sieve in general but seems to be easier to implement. Once you've decided on how to split a number into two factors, here is the fastest algorithm I can think of to find the largest prime factor of a number: Create a priority queue which initially stores the number itself. Each iteration, you remove the highest number from the queue, and attempt to split it into two factors (not allowing 1 to be one of those factors, of course). If this step fails, the number is prime and you have your answer! Otherwise you add the two factors into the queue and repeat. (A Python sketch of this approach appears at the end of this thread.)

A: Python: an iterative approach that removes all prime factors from the number

def primef(n):
    if n <= 3:
        return n
    if n % 2 == 0:
        return primef(n/2)
    elif n % 3 == 0:
        return primef(n/3)
    else:
        for i in range(5, int((n)**0.5) + 1, 6):
            #print i
            if n % i == 0:
                return primef(n/i)
            if n % (i + 2) == 0:
                return primef(n/(i+2))
    return n

A: I am using an algorithm which keeps dividing the number by its current prime factor. My solution in Python 3:

def PrimeFactor(n):
    m = n
    while n % 2 == 0:
        n = n // 2
    if n == 1:             # check if 2 is the only (and largest) prime factor
        return 2
    i = 3
    sqrt = int(m**(0.5))   # loop till the square root of the number
    last = 0               # to store the last prime factor, i.e. the largest prime factor
    while i <= sqrt:
        while n % i == 0:
            n = n // i     # reduce the number by dividing it by its prime factor
            last = i
        i += 2
    if n > last:           # the remaining number (n) is also a factor of the number
        return n
    else:
        return last

print(PrimeFactor(int(input())))

Input: 10
Output: 5

Input: 600851475143
Output: 6857

A: Inspired by your question I decided to implement my own version of factorization (and finding the largest prime factor) in Python. Probably the simplest to implement, yet quite efficient, factoring algorithm that I know is Pollard's Rho algorithm. It has a running time of O(N^(1/4)) at most, which is much faster than the O(N^(1/2)) time of the trial division algorithm. Both algorithms have these running times only in the case of composite (non-prime) numbers; that's why a primality test should be used to filter out prime (non-factorable) numbers. I used the following algorithms in my code: Fermat Primality Test ..., Pollard's Rho Algorithm ..., Trial Division Algorithm. The Fermat primality test is used before running Pollard's Rho in order to filter out prime numbers. Trial Division is used as a fallback because Pollard's Rho in very rare cases may fail to find a factor, especially for some small numbers.
Obviously after fully factorizing a number into a sorted list of prime factors, the largest prime factor will be the last element in this list. In the general case (for any random number) I don't know of any other way to find the largest prime factor besides fully factorizing the number. As an example, in my code I'm factoring the first 190 fractional digits of Pi; the code factorizes this number within 1 second and shows the largest prime factor, which is 165 digits (545 bits) in size! Try it online!

def is_fermat_probable_prime(n, *, trials = 32):
    # https://en.wikipedia.org/wiki/Fermat_primality_test
    import random
    if n <= 16:
        return n in (2, 3, 5, 7, 11, 13)
    for i in range(trials):
        if pow(random.randint(2, n - 2), n - 1, n) != 1:
            return False
    return True

def pollard_rho_factor(N, *, trials = 16):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import random, math
    for j in range(trials):
        i, stage, y, x = 0, 2, 1, random.randint(1, N - 2)
        while True:
            r = math.gcd(N, x - y)
            if r != 1:
                break
            if i == stage:
                y = x
                stage <<= 1
            x = (x * x + 1) % N
            i += 1
        if r != N:
            return [r, N // r]
    return [N] # Pollard-Rho failed

def trial_division_factor(n, *, limit = None):
    # https://en.wikipedia.org/wiki/Trial_division
    fs = []
    while n & 1 == 0:
        fs.append(2)
        n >>= 1
    d = 3
    while d * d <= n and (limit is None or d <= limit):
        q, r = divmod(n, d)
        if r == 0:
            fs.append(d)
            n = q
        else:
            d += 2
    if n > 1:
        fs.append(n)
    return fs

def factor(n):
    if n <= 1:
        return []
    if is_fermat_probable_prime(n):
        return [n]
    fs = trial_division_factor(n, limit = 1 << 12)
    if len(fs) >= 2:
        return sorted(fs[:-1] + factor(fs[-1]))
    fs = pollard_rho_factor(n)
    if len(fs) >= 2:
        return sorted([e1 for e0 in fs for e1 in factor(e0)])
    return trial_division_factor(n)

def demo():
    import time, math
    # http://www.math.com/tables/constants/pi.htm
    # pi = 3.
    # 1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679
    # 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196
    # n = first 190 fractional digits of Pi
    n = 1415926535_8979323846_2643383279_5028841971_6939937510_5820974944_5923078164_0628620899_8628034825_3421170679_8214808651_3282306647_0938446095_5058223172_5359408128_4811174502_8410270193_8521105559_6446229489

    print('Number:', n)
    tb = time.time()
    fs = factor(n)
    print('All Prime Factors:', fs)
    print('Largest Prime Factor:', f'({math.log2(fs[-1]):.02f} bits, {len(str(fs[-1]))} digits)', fs[-1])
    print('Time Elapsed:', round(time.time() - tb, 3), 'sec')

if __name__ == '__main__':
    demo()

Output:

Number: 1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489
All Prime Factors: [3, 71, 1063541, 153422959, 332958319, 122356390229851897378935483485536580757336676443481705501726535578690975860555141829117483263572548187951860901335596150415443615382488933330968669408906073630300473]
Largest Prime Factor: (545.09 bits, 165 digits) 122356390229851897378935483485536580757336676443481705501726535578690975860555141829117483263572548187951860901335596150415443615382488933330968669408906073630300473
Time Elapsed: 0.593 sec

A: Here is my attempt in C#. The last printout is the largest prime factor of the number. I checked and it works.

using System;

namespace Problem_Prime {
  class Program {
    static void Main(string[] args) {
      /* The prime factors of 13195 are 5, 7, 13 and 29.
         What is the largest prime factor of the number 600851475143 ?
      */
      long x = 600851475143;
      long y = 2;
      while (y < x) {
        if (x % y == 0) {
          // y is a factor of x, but is it prime
          if (IsPrime(y)) {
            Console.WriteLine(y);
          }
          x /= y;
        }
        y++;
      }
      Console.WriteLine(y);
      Console.ReadLine();
    }

    static bool IsPrime(long number) {
      // check for evenness
      if (number % 2 == 0) {
        if (number == 2) {
          return true;
        }
        return false;
      }
      // don't need to check past the square root
      long max = (long)Math.Sqrt(number);
      for (int i = 3; i <= max; i += 2) {
        if ((number % i) == 0) {
          return false;
        }
      }
      return true;
    }
  }
}

A: # Python implementation

import math

n = 600851475143
i = 2
factors = set([])
while i <= math.sqrt(n):
    while n % i == 0:
        n = n / i
        factors.add(i)
    i += 1
if n > 1:
    factors.add(n)
largest = max(factors)
print factors
print largest

A: Calculates the largest prime factor of a number using recursion in C++. The working of the code is explained below:

int getLargestPrime(int number) {
    int factor = number; // assumes that the largest prime factor is the number itself
    for (int i = 2; (i * i) <= number; i++) { // iterates to the square root of the number till it finds the first (smallest) factor
        if (number % i == 0) { // checks if the current number (i) is a factor
            factor = max(i, number / i); // stores the larger number among the factors
            break; // breaks the loop when a factor is found
        }
    }
    if (factor == number) // base case of recursion
        return number;
    return getLargestPrime(factor); // recursively calls itself
}

A: Here is my approach to quickly calculate the largest prime factor. It is based on the fact that the modified x never contains non-prime factors. To achieve that, we divide x as soon as a factor is found. Then, the only thing left is to return the largest factor. It will already be prime.

The code (Haskell):

f max' x i
  | i > x          = max'
  | x `rem` i == 0 = f i (x `div` i) i  -- Divide x by its factor
  | otherwise      = f max' x (i + 1)   -- Check for the next possible factor

g x = f 2 x 2

A: The following C++ algorithm is not the best one, but it works for numbers under a billion and it's pretty fast:

#include <iostream>
using namespace std;

// ------ is_prime ------
// Determines if the integer accepted is prime or not
bool is_prime(int n){
    int i,count=0;
    if(n==1 || n==2)
      return true;
    if(n%2==0)
      return false;
    for(i=1;i<=n;i++){
        if(n%i==0)
            count++;
    }
    if(count==2)
      return true;
    else
      return false;
}

// ------ nextPrime -------
// Finds and returns the next prime number
int nextPrime(int prime){
    bool a = false;
    while (a == false){
        prime++;
        if (is_prime(prime))
            a = true;
    }
    return prime;
}

// ----- M A I N ------
int main(){
    int value = 13195;
    int prime = 2;
    bool done = false;

    while (done == false){
        if (value%prime == 0){
            value = value/prime;
            if (is_prime(value)){
                done = true;
            }
        } else {
            prime = nextPrime(prime);
        }
    }
    cout << "Largest prime factor: " << value << endl;
}

A: Found this solution on the web by "James Wang":

public static int getLargestPrime( int number) {
    if (number <= 1) return -1;

    for (int i = number - 1; i > 1; i--) {
        if (number % i == 0) {
            number = i;
        }
    }
    return number;
}

A: Prime factor using a sieve:

#include <bits/stdc++.h>
using namespace std;
#define N 10001
typedef long long ll;
bool visit[N];
vector<int> prime;

void sieve()
{
    memset( visit , 0 , sizeof(visit));
    for( int i=2;i<N;i++ )
    {
        if( visit[i] == 0)
        {
            prime.push_back(i);
            for( int j=i*2; j<N; j=j+i )
            {
                visit[j] = 1;
            }
        }
    }
}

void sol(long long n, vector<int>&prime)
{
    ll ans = n;
    for(int i=0; i<(int)prime.size() && prime[i]<=n; i++)
    {
        while(n%prime[i]==0)
        {
            n=n/prime[i];
            ans = prime[i];
        }
    }
    ans = max(ans, n);
    cout<<ans<<endl;
}

int main()
{
    ll tc, n;
    sieve();
    cin>>n;
    sol(n, prime);
    return 0;
}

A: Here is my attempt in Clojure. Only walking the odds for prime? and only the primes for prime factors, i.e. a sieve. Using lazy sequences helps produce the values just before they are needed.

(defn prime?
  ([n]
   (let [oddNums (iterate #(+ % 2) 3)]
     (prime? n (cons 2 oddNums))))
  ([n [i & is]]
   (let [q (quot n i)
         r (mod n i)]
     (cond (< n 2)       false
           (zero? r)     false
           (> (* i i) n) true
           :else         (recur n is)))))

(def primes
  (let [oddNums (iterate #(+ % 2) 3)]
    (lazy-seq (cons 2 (filter prime? oddNums)))))

;; Sieve of Eratosthenes
(defn sieve
  ([n] (sieve primes n))
  ([[i & is :as ps] n]
   (let [q (quot n i)
         r (mod n i)]
     (cond (< n 2)       nil
           (zero? r)     (lazy-seq (cons i (sieve ps q)))
           (> (* i i) n) (when (> n 1) (lazy-seq [n]))
           :else         (recur is n)))))

(defn max-prime-factor [n]
  (last (sieve n)))

A: I guess there is no immediate way other than performing a factorization, as the examples above do: in each iteration you identify a "small" factor f of the number N, then continue with the reduced problem "find the largest prime factor of N' := N/f with factor candidates >= f". Beyond a certain size of f, the expected search time is less if you do a primality test on the reduced N', which, if it confirms primality, tells you that your N' is already the largest prime factor of the initial N.

A: Recursion in C

The algorithm could be:

* Check if n is a factor of t
* Check if n is prime. If so, remember n
* Increment n
* Repeat until n > sqrt(t)

Here's an example of a (tail)recursive solution to the problem in C:

#include <stdio.h>
#include <stdbool.h>

bool is_factor(long int t, long int n){
    return ( t%n == 0);
}

bool is_prime(long int n0, long int n1, bool acc){
    if ( n1 * n1 > n0 || acc < 1 )
        return acc;
    else
        return is_prime(n0, n1+2, acc && (n0%n1 != 0));
}

int gpf(long int t, long int n, long int acc){
    if (n * n > t)
        return acc;
    if (is_factor(t, n)){
        if (is_prime(n, 3, true))
            return gpf(t, n+2, n);
        else
            return gpf(t, n+2, acc);
    }
    else
        return gpf(t, n+2, acc);
}

int main(int argc, char ** argv){
    printf("%d\n", gpf(600851475143, 3, 0));
    return 0;
}

The solution is composed of three functions. One to test if the candidate is a factor, another to test if that factor is prime, and finally one to compose those two together. Some key ideas here are:

1- Stopping the recursion at sqrt(600851475143)
2- Only testing odd numbers for factorness
3- Only testing candidate factors for primeness with odd numbers

A: #include<stdio.h>
#include<conio.h>
#include<math.h>
#include <time.h>

factor(long int n)
{
    long int i,j;
    while(n>=4)
    {
        if(n%2==0) { n=n/2; i=2; }
        else
        {
            i=3; j=0;
            while(j==0)
            {
                if(n%i==0)
                { j=1; n=n/i; }
                i=i+2;
            }
            i-=2;
        }
    }
    return i;
}

void main()
{
    clock_t start = clock();
    long int n,sp;
    clrscr();
    printf("enter value of n");
    scanf("%ld",&n);
    sp=factor(n);
    printf("largest prime factor is %ld",sp);
    printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    getch();
}

A: I think it would be good to store somewhere all possible primes smaller than n and just iterate through them to find the biggest divisor. You can get primes from prime-numbers.org.
Of course I assume that your number isn't too big :)

A: This is probably not always faster, but it is more optimistic in the sense that it goes looking for a big prime divisor first:

* N is your number
* If it is prime then return(N)
* Calculate primes up until Sqrt(N)
* Go through the primes in descending order (largest first)
  * If N is divisible by Prime then Return(Prime)

Edit: In step 3 you can use the Sieve of Eratosthenes or Sieve of Atkins or whatever you like, but by itself the sieve won't find you the biggest prime factor. (That's why I wouldn't choose SQLMenace's post as an official answer...)

A: Here is the same function @Triptych provided, as a generator, which has also been simplified slightly.

def primes(n):
    d = 2
    while (n > 1):
        while (n % d == 0):
            yield d
            n /= d
        d += 1

The max prime can then be found using:

n = 373764623
max(primes(n))

and a list of factors found using:

list(primes(n))

A: It seems to me that step #2 of the algorithm given isn't going to be all that efficient an approach. You have no reasonable expectation that it is prime. Also, the previous answer suggesting the Sieve of Eratosthenes is utterly wrong. I just wrote two programs to factor 123456789. One was based on the Sieve, one was based on the following:

1) Test = 2
2) Current = Number to test
3) If Current Mod Test = 0 then
   3a) Current = Current Div Test
   3b) Largest = Test
   3c) Goto 3.
4) Inc(Test)
5) If Test <= Current goto 3
6) Return Largest

This version was 90x faster than the Sieve. The thing is, on modern processors the type of operation matters far less than the number of operations, not to mention that the algorithm above can run in cache; the Sieve can't. The Sieve uses a lot of operations striking out all the composite numbers. Note, also, that my dividing out factors as they are identified reduces the space that must be tested.

A: Compute a list storing prime numbers first, e.g. 2 3 5 7 11 13 ... Every time you prime factorize a number, use the implementation by Triptych but iterate over this list of prime numbers rather than the natural integers.

A: With Java:

For int values:

public static int[] primeFactors(int value) {
    int[] a = new int[31];
    int i = 0, j;
    int num = value;
    while (num % 2 == 0) {
        a[i++] = 2;
        num /= 2;
    }
    j = 3;
    while (j <= Math.sqrt(num) + 1) {
        if (num % j == 0) {
            a[i++] = j;
            num /= j;
        } else {
            j += 2;
        }
    }
    if (num > 1) {
        a[i++] = num;
    }
    int[] b = Arrays.copyOf(a, i);
    return b;
}

For long values:

static long[] getFactors(long value) {
    long[] a = new long[63];
    int i = 0;
    long num = value;
    while (num % 2 == 0) {
        a[i++] = 2;
        num /= 2;
    }
    long j = 3;
    while (j <= Math.sqrt(num) + 1) {
        if (num % j == 0) {
            a[i++] = j;
            num /= j;
        } else {
            j += 2;
        }
    }
    if (num > 1) {
        a[i++] = num;
    }
    long[] b = Arrays.copyOf(a, i);
    return b;
}
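Returning to the priority-queue approach described in an earlier answer (pop the largest number, split it, push the factors back until the popped number is prime), here is a minimal Python sketch of my own; split_once is a hypothetical stand-in for whichever splitting method you prefer (trial division here, but Fermat or Pollard's Rho would slot in the same way):

import heapq

def split_once(n):
    """Try to split n into two nontrivial factors by trial division.
    Returns None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

def largest_prime_factor(n):
    heap = [-n]  # max-heap via negated values; start with the number itself
    while heap:
        m = -heapq.heappop(heap)        # remove the highest number
        pair = split_once(m)
        if pair is None:
            return m                    # split failed: m is prime, and it's the answer
        heapq.heappush(heap, -pair[0])  # otherwise push both factors back
        heapq.heappush(heap, -pair[1])

assert largest_prime_factor(13195) == 29
assert largest_prime_factor(600851475143) == 6857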
{ "language": "en", "url": "https://stackoverflow.com/questions/23287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "197" }
Q: Free ASP.Net and/or CSS Themes Where can I get some decent looking free ASP.Net or CSS themes?

A: Microsoft hired one of the kids from A List Apart to whip some out. The .Net projects are free of charge for download. http://msdn.microsoft.com/en-us/asp.net/aa336613.aspx

A: I have used Open source Web Design in the past. They have quite a few CSS themes; I don't know about ASP.Net.

A: I wouldn't bother looking for ASP.NET stuff specifically (you probably won't find any anyway). If you find a good CSS theme, it can easily be used in ASP.NET. Here are some sites that I love for CSS goodness:

http://www.freecsstemplates.org/
http://www.oswd.org/
http://www.openwebdesign.org/
http://www.styleshout.com/
http://www.freelayouts.com/

A: As always, http://www.csszengarden.com/. Note that the images aren't public domain.
{ "language": "en", "url": "https://stackoverflow.com/questions/23288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Source Control Beginners What would be the best version control system to learn as a beginner to source control?

A: I'd suggest you try Subversion, for example with the 1-click SVN installer. Try searching SO for "Subversion", and you'll find loads of questions with answers that point to good tutorials. Good luck!

A: I'd go straight for Git. I've used Subversion before, but always felt like I was doing it wrong. Git made sense from day one. Useful resources:

* Linus Torvalds on Git
* Scott Chacon "Getting Git"

A: There are a few core concepts that I think are important to learn:

* Check-ins/check-outs (obviously)
* Local versions vs. server versions
* Mapping/binding a local workspace to a remote store or repository
* Merging your changes back into a file that contains changes from others
* Branching (what it is, when/why to use it)
* Merging changes from a branch back into a main branch or trunk

Most modern source control systems require some knowledge of the above topics and should help facilitate your learning them. Then you have distributed source control, which I don't have any experience with, but which is supposed to be fairly complicated and may not be suitable for a beginner. Subversion is great because it has all of the modern features you'd want and is free. Git is also becoming an increasingly popular option and is another free or very low cost alternative to Subversion. Knowledge of the concepts of branching and merging becomes critical for using Git, however. You can use unfuddle as a free and easy way to experiment with both Git and Subversion. I use it to host a couple of Subversion repositories for some side projects I've worked on in the past.

A: I'm not an advanced source control user, but I'm learning. Here is my experience with source control products:

* A long time ago, the company I was working for at the time decided to use source control. They introduced the concept to developers and got everyone willing to give it a try. They chose to use PVCS, and implemented it. Before too long, developers would have to coordinate to lock/unlock modules and objects, and we really didn't see much benefit.
* A few years later, I was playing around with making an open source project, and at the time RubyForge was offering CVS repositories. I tried it out and it was marginally better than PVCS. Granted, I was the only one using the repository. I did however become frustrated when I tried to rearrange the structure of my files because I didn't like the way I had initially imported them. It didn't really work out in CVS.
* A few years after that I was working on another personal project and my web hosting provider offered easy-to-set-up Subversion (SVN) repositories. It took me a little bit of research to get it up and running correctly, but once I got past the initial learning curve, I liked it.
* Not long after that I realized that I liked having source control and that my current job didn't have it. So I evangelized, and after a long period of time, my team implemented Source Safe because we work in Visual Studio and are generally a Microsoft shop. I was eager to use it, but before long I found that I was losing files, that Visual Studio was putting things in the wrong place, and that I'd work on a project for a while and then go to export my work to another location and find that it either wouldn't export or would only export some of the projects in a solution.
This made me realize that even though I thought I was using a "version control system", the copy of the code that was most secure, robust and complete was my working copy. The exact opposite of what source control is supposed to do.

* So last week I was so fed up with Source Safe that I went searching. After looking into a few solutions, I decided to try Git. I won't say it's all been roses, since I have again had some learning curve to get it to do what I want it to do. However, I have liked it enough to convert all of my work and personal projects over to it. One of the really nice things about it is that I don't need a centralized repository, so I can use it without going through a ton of red tape at work to get it installed.

So in short I would recommend Git. I use msysgit on Windows, and it has the added bonus of giving me a bash shell. On Linux you can just install it from your package manager. If you don't like Git, try Subversion. If you don't like either of those, you probably won't like CVS or PVCS either. Under no circumstances try Source Safe, it's awful.

A: Anything but Visual Source Safe; preferably one which supports the concepts of branching and merging. As others have said, Subversion is a great choice, especially with the TortoiseSVN client. Be sure to check out (pardon the pun) Eric Sink's classic series of Source Control HOWTO articles.

A: I found http://unfuddle.com saved me messing about with installing SVN or Git. You can get a free account in there and use either of those - plus you can use your OpenID there. Then you avoid having to mess about setting it up right and focus on how you're going to use it!

A: Vault from SourceGear.com is superb. It is free for single users and provides a superb VS 2005/2008 interface. I love it! rp

A: @Ian Nelson: I agree with you that Source Safe is bad as a source control system, but keep in mind that using Source Safe is a lot better than "carrying around floppy disks" as Joel Spolsky said. For a beginner it might not be a bad idea, since the cost of having no source control at all is a lot higher.

A: Each tool has its strengths and weaknesses. It's very much a question of what your requirements are. Unfortunately with this issue, like many others, it's often not the best tool that is selected but the one that someone is familiar with. For instance, if you don't require many branches and your team is small and local, almost any VCS will do the job (except Source Safe). Things change if you need branches (which almost by necessity means you also need to do merges), your team is distributed, you need advanced security (subcontractors are not allowed access to the entire source tree), task tracking, etc. There is also the question of cost in three different ways: the cost of licenses, the cost of maintenance (some tools are so complicated that you in practice need someone just to control the repositories) and the cost of training. Therefore suggesting one tool over another is like suggesting what would be the best programming language. Just some pointers:

* StarTeam is the easiest of the tools I have used. It required very little training. I got a one-day training since I was to be the maintainer. This maintaining took me less than 30 minutes per week. Users I "trained" by writing a two-page manual and after that I had very few questions to answer.
* Continuus was the other end of the scale as far as ease of use is concerned. On the other hand task handling was great and it offered good support for release management.
Trouble is, even as a release manager I never thought ease of making releases (it was once you learned how, but that took a considerable amount of time) should be more important than the daily work that developers do. *Merging and branch creating differs wildly between tools. Some tools make this simple, like git and ClearCase (although the latter is very slow) some basically force you to do the merge by hand. If you need to do merges a lot, the cost can get high. ClearCase was also expensive in all three categories mentioned before (although it has to be said we used all the advanced stuff which isn't necessary). Git on the other hand lacks a good UI and some of concepts differ from what you might be used to. Git's security features are also lacking (gitosis addresses some issues but not all). *Most tools I have used are also quite slow. Tools like PVCS/Dimensions was just slow, no matter what (basic things like opening a directory in the repository), some very slow in more specific ways (like ClearCase). From the tools I have used I would select StarTeam if your developers are not very experienced (and if you don't mind paying the license, which is quite expensive) and git if you have some experienced vcs guys onboard who can set up the environment to other guys. Mercurial also looks like an interesting competitor and seems to have slightly better UI's. Continuus, PVCS/Dimensions and ClearCase are just too slow, too complex and too expensive for almost any project. If someone insist on selecting one of these, I would go for ClearCase. I haven't used Subversion which many seem to like (yet, I have a feeling this is about to change in the near future) so can't comment how it compares to the other tools I have used (usually as a build and/or release manager). As for the first tool to choose, problem with Git, Bazaar and Mercurial is they are distributed vcs's. This is different from the traditional server-client model where you have a central repository. For just learning the stuff I would recommend also reading about the concepts. Branching for instance is something that you might not understand correctly just by trying yourself (there are different branch strategies for different situations). Plus it is very different if you are the only one accessing the repository, merge conflicts for instance wouldn't be a problem (you might get to see them but you would easily also fix them since you know the code in both branches). Of course you would learn about check outs, check ins, and such but I don't think these issues are particularily difficult in the first place. Added problem with vcs's is that they tend to use different terms. In StarTeam which is otherwise easy to use they for some reason insist on using the terms "check out" and "check out and lock". The latter is what most people think the first does. There is a reason for this (you can edit files even if you don't have an exclusive lock), but it would still make much more sense to call these "Get" and "Check out" to avoid confusion. A: Anything, but I would learn a modern system like git or subversion myself. My first VCS was RCS, but I got the basics down. A: Well, if you are just wanting to learn on your own, I would say you should go with something free, like subversion. If you are a company who has never used source control before, then it really depends on your needs. A: My first exposure was CVS with WinCVS as a client. it was horrid. Next was Subversion, with TortoiseSVN and Eclipse's integration. It was intuitive, and heavenly. 
I think that using CVS with TortoiseCVS and Eclipse's would be nice as well, though I prefer the way SVN handles revisioning. The entire repository is versioned with each check in, not individual files. A: I'd also recommend Subversion. It does not take too long to set up, it is free, and there is a really good book available online that goes over the basics as well as some advanced topics: http://svnbook.red-bean.com/ A: Subversion with tortoisesvn. (tortoisesvn because you can see a lot of what goes on visually and will provide a good jumping off point for the command line stuff. ) There is tons of documentation out there and most likely you will see it at least one point in your career. Almost every company I have worked for and interviewed with runs SVN. A: If you're looking to learn a commercial product while getting started Perforce provides a free client and server, with the server supporting two users and five client workspaces. At my previous place of employment it was used religiously not only for code by our programmers, but for art assets and game levels, and my own documentation. A: Subversion is good place to start with. It is very stable and modern version control system. Best online resource to start learning about Subversion would be Version Control with Subversion. There are lot of choices as far as server and client softwares are concerned. I personally prefer (for Windows environment). * *VisualSVN server *TortoiseSVN shell-integrated client and *AnkhSVN Visual Studio Subversion Add-On Again, with Subversion there are lot of options available. Also, it is a continually evolving version control system (unlike outdated SourceSafe). It could be easily integrated with numerous automated build tools (CruiseControl, FinalBuilder) and bug/issue tracking systems (JIRA). If you are looking for state-of-the-art version control systems, go for Git(developed by Linus Torvalds). But if you are totally new to version control systems, I would suggest start with subversion.
{ "language": "en", "url": "https://stackoverflow.com/questions/23310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Notification of drop in drag-drop in Windows My C# program has a list of files that can be dragged from it and dropped into another program. My requirements are that the file be copied to a different directory first. So, can I be notified of the drop operation so that I can only copy the file if the operation succeeds? I'd rather wait till I know it needs to be copied before actually performing the copy. Also, is it possible to know what program the drop operation is occurring in? Ideally I'd like to alter the filepath based on who or what it's being dropped on. The solution to this can be in any .NET language or C/C++ with COM. A: There are a few ambiguities in your question. What operation needs to be successful? For everything you want to know about drag and drop, browse through these search results (multiple pages' worth): Raymond Chen on drag and drop A: So, you intend to modify the data being dropped based on the drop target? I don't think this is possible; after all, you populate the data when the drag is initiated.
{ "language": "en", "url": "https://stackoverflow.com/questions/23370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Painting javax.microedition.lcdui.Graphics on LWUIT Component What would be the best method for getting a custom element (that is using J2ME native Graphics) painted on LWUIT elements? The custom element is an implementation from a mapping library, which paints its content (for example a Google map) to a Graphics object. How would it be possible to paint the result directly on LWUIT elements (at the moment I am trying to paint it on a Component)? Is the only way to write a wrapper in the LWUIT package, which would expose the internal implementation of it? Edit: John: your solution looks like a lot of engineering :P What I ended up using is the following wrapper: package com.sun.lwuit; public class ImageWrapper { private final Image image; public ImageWrapper(final Image lwuitBuffer) { this.image = lwuitBuffer; } public javax.microedition.lcdui.Graphics getGraphics() { return image.getGraphics().getGraphics(); } } Now I can get the 'native' Graphics element from LWUIT. Paint on it - effectively painting on a LWUIT image. And I can use the image to paint on a component. And it still looks like a hack :) But the real problem is 50kB of code overhead, even after obfuscation. But this is an issue for another post :) /JaanusSiim A: I do not think any hacking is necessary. You can subclass the LWUIT Component class and then you can paint whatever you want onto the graphics context of the component. You do not get the native lcdui.Graphics object but an object with the same interface that is easy to use. If you really need to pass a lcdui.Graphics to some underlying library to display its output then I would suggest this: Somewhere in your component code (do this only when the component contents really need to be changed):

private Image buffer = null; // keep this

int[] bufferArray = new int[desiredWidth * desiredHeight];
javax.microedition.lcdui.Image bufferImage =
    javax.microedition.lcdui.Image.createImage(desiredWidth, desiredHeight);
thirdPartyComponent.paint(bufferImage.getGraphics());
bufferImage.getRGB(bufferArray, 0, desiredWidth, 0, 0, desiredWidth, desiredHeight);
bufferImage = null; // no longer needed
buffer = Image.createImage(bufferArray, desiredWidth, desiredHeight);

In the component paint(g) method:

g.drawImage(buffer, 0, 0);

By doing the hack you did you are losing portability, and also, since you are exposing an implementation-private object, you might break other things. Hope this helps. A: Based on the javadoc for LWUIT and J2ME, and guessing that the custom J2ME class is a Canvas, it looks like you would have to: *Subclass LWUIT's Component class, wrapping the custom J2ME component *Override the paint() method of the LWUIT Component *Subclass the J2ME Graphics class, wrapping the LWUIT Graphics class, and pass all the method calls through *Pass in the wrapped J2ME Graphics implementation to the custom J2ME component's paint method That third step is an ugly one. Check on the LWUIT mailing list to see if anyone has done this before. From the published APIs I don't see another way to do it. Edit: The hack added in the question looks better than my hack for an Image. What I have may be better for a general case, but I don't know either LWUIT or J2ME well enough to really say that.
{ "language": "en", "url": "https://stackoverflow.com/questions/23372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Create an EXE from a SWF using Flex 3 without requiring AIR? I have a simple little test app written in Flex 3 (MXML and some AS3). I can compile it to a SWF just fine, but I'd like to make it into an EXE so I can give it to a couple of my coworkers who might find it useful. With Flash 8, I could just target an EXE instead of a SWF and it would wrap the SWF in a projector, and everything worked fine. Is there an equivalent to that using the Flex 3 SDK that doesn't end up requiring AIR? Note: I don't have Flex Builder, I'm just using the free Flex 3 SDK. A: In your Flex SDK folders you should see a 'runtimes\player\win\FlashPlayer.exe' which is a stand-alone Flash player. Open your SWF with that and you'll see a 'Create Projector...' menu item in the File menu which will create the stand-alone EXE. A: imaginaryboy gets it right, I believe. Btw, since you don't have Flex Builder, you might look into the free and open source FlashDevelop if you're on Windows. It's my favorite environment for developing anything ActionScript (the Flex support is pretty great, too). A: There's also Zinc, which also provides APIs for accessing the filesystem and other things that AIR does, but is less restrictive.
{ "language": "en", "url": "https://stackoverflow.com/questions/23373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best/fastest compression format for (sqlserver) databases? Has anyone found a good compression format for MS SQL Server databases? If so, what do you use and are you pleased with how it performs? My company frequently will compress a database snapshot from one of our clients and download it so we have a local copy for testing and dev purposes. We tried zip in the past, but once the database files crossed the 4GB boundary we had to use rar (zip is 32-bit only). The problem is rar takes a lot of time to compress, and we don't know if it gives us the best compression ratio either. This isn't a question about the compression utility so much as the compression format. We use WinRAR, but are considering 7zip, which supports a number of formats. A: In SQL 2008 you have native compression. If you have to do this a lot and don't have SQL Server 2008, then take a look at something like Quest LiteSpeed, which compresses the backup automatically A: In the no-cost category, newer versions of gzip and bzip2 are supposed to include large file support (someone on the internet tells me that bzip2 1.0.1 and beyond is large-file compatible thanks to Cyril Pilsko, while gzip 1.2 can be patched, which the downloadable binaries are, and the gzip 1.3 beta includes support). While I use 7zip on my Windows PC for convenience, I tend to prefer bzip2 for speed vs compression. I have also heard of tricking the non-large-file versions by doing something like cat file | gzip > file.gz. Generally you're trading off time with compression level, but one of bzip2's claims is that it decompresses very quickly, which in a disaster recovery situation should be your most important metric. In that regard, I believe EMC's tape backup solution (ELM?) used to skip compression on DB partitions by default. Also, if you're really serious about packing it into a tiny space, you might try something like rzip, but I've never known anyone to actually use it. A: There's always UCL compression and/or LZO compression; both are GPLed, so be sure you know how you're using them if it is a commercial project. A: I've tried most compression algorithms for SQL bak compression and the clear winner is the old and all-but-forgotten FreeArc! Its speed/compression ratio is the best I've seen. It even adds recovery blocks to the archive, so if it gets corrupted (e.g. bytes got lost) it can self-recover. Version 0.67 can be retrieved from the PeaZip package (sourceforge.net). Download the package peazip_portable-8.7.0.WIN64.zip and extract it to grab the FreeArc compression tool. Command example: Arc.exe create -m3 --recovery "c:\temp\SQL.bak.arc" "c:\temp\SQL.bak" Example with a 444MB SQL bak file: arc, level 3: archive size 48MB, time 0:38 min (even with recovery blocks!); 7z, level normal: archive size 57MB, time 1:54 min (both with 4 threads). It's a sad thing it kind of disappeared from the world and no one took over the project. But I think it was just too feature-rich; the project was simply too comprehensive. Let me know if you need more links and source... I've tried to collect as much as possible; some of it has disappeared from the web since :-(
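A: If you want to measure the time-versus-ratio trade-off mentioned above for yourself before settling on a format, a rough benchmark is only a few lines of Python 3 with the standard gzip and bz2 modules (the file name here is made up; point it at a real .bak):

import bz2, gzip, time

data = open('backup.bak', 'rb').read()  # hypothetical backup file
for name, compress in [('gzip -1', lambda d: gzip.compress(d, 1)),
                       ('gzip -9', lambda d: gzip.compress(d, 9)),
                       ('bzip2 -9', lambda d: bz2.compress(d, 9))]:
    start = time.time()
    size = len(compress(data))  # compressed size in bytes
    print(name, size, 'bytes,', round(time.time() - start, 1), 'seconds')

Run it at a couple of compression levels and you quickly see where the extra minutes stop buying meaningful space.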
{ "language": "en", "url": "https://stackoverflow.com/questions/23376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Scheduled Tasks for Web Applications What are the different approaches for creating scheduled tasks for web applications, with or without a separate web/desktop application? A: If we're talking Microsoft platform, then I'd always develop a separate Windows Service to handle such batch tasks. You can always reference the same assemblies that are being used by your web application to avoid any nasty code duplication. A: Jeff discussed this on the Stack Overflow blog - https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/ Basically, Jeff proposed using the CacheItemRemovedCallback as a timer for calling certain tasks. I personally believe that automated tasks should be handled as a service, a Windows scheduled task, or a job in SQL Server. Under Linux, check out cron. A: I think Stack Overflow itself is using an ApplicationCache expiration to run background code at intervals. A: If you're on a Linux host, you'll almost certainly be using cron. A: Under Linux you can use cron jobs (http://www.unixgeeks.org/security/newbie/unix/cron-1.html) to schedule tasks. Use URL fetchers like wget or curl to make HTTP GET requests. Secure your URLs with authentication so that no one can execute the tasks without knowing the user/password. A: I think Windows' built-in Task Scheduler is the suggested tool for this job. That requires an outside application. A: This may or may not be what you're looking for, but read this article, "Simulate a Windows Service using ASP.NET to run scheduled jobs". I think StackOverflow may use this method, or at least using it was talked about. A: A very simple method that we've used where I work is this: *Set up a webservice/web method that executes the task. This webservice can be secured with username/pass if desired. *Create a console app that calls this web service. If desired, you can have the console app send parameters and/or get back some sort of metrics for output to the console or external logging. *Schedule this executable in the task scheduler of choice. It's not pretty, but it is simple and reliable. Since the console app is essentially just a heartbeat to tell the app to go do its work, it does not need to share any libraries with the application. Another plus of this methodology is that it's fairly trivial to kick off manually when needed. A: Use URL fetchers like wget or curl to make HTTP GET requests. Secure your URLs with authentication so that no one can execute the tasks without knowing the user/password. You can also tell cron to run PHP scripts directly, for example. And you can set the permissions on the PHP files to prevent other people accessing them, or better yet, don't have these utility scripts in a web-accessible directory... A: Java and Spring -- use Quartz. Very nice and reliable -- http://static.springframework.org/spring/docs/1.2.x/reference/scheduling.html A: I think there are easier ways than using cron (Linux) or Task Scheduler (Windows). You can build this into your web app using: (a) the Quartz scheduler, or, if you don't want to integrate another 3rd-party library into your application: (b) create a thread on startup which uses the standard Java 'java.util.Timer' class to run your tasks. A: I recently worked on a project that does exactly this (obviously it is an external service, but I thought I would share). https://anticipated.io/ You can receive a webhook or an SQS event at a specific scheduled time.
Dealing with these schedulers can be a pain, so I thought I'd share in case someone is looking to offload those concerns.
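A: For what it's worth, here is a minimal sketch in Python 3 of the "heartbeat" console client described in the webservice-plus-scheduler answer above. The URL and header are made up; the real endpoint would be whatever web method you expose:

import urllib.request

TASK_URL = 'https://example.com/tasks/nightly-cleanup'  # hypothetical web method

# The console app is just a heartbeat: the web application does the real work
# when this request arrives.
request = urllib.request.Request(TASK_URL)
request.add_header('X-Task-Key', 'some-shared-secret')  # secure the URL, as advised above
response = urllib.request.urlopen(request, timeout=60)
print(response.getcode(), response.read())  # the scheduler can capture this output as a log

Schedule the script with cron or the Windows Task Scheduler, and the web application itself never needs its own timer.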
{ "language": "en", "url": "https://stackoverflow.com/questions/23382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What should I learn to increase my skills? My path to a 'full-time' developer started as an analyst using VBA with Excel, Access, and then on to C#. I went to college part time once I discovered I had a passion for coding, not business. I do most of my coding in C#, but being an ASP.NET developer I also write in HTML, JavaScript, SQL, etc. - the usual suspects. I like to keep moving forward, finding the edge that will get me to the next level, the next job, and of course more money. Most importantly, I just want to learn something new and challenge myself. I have spent time recently learning LINQ, but was wondering what I should learn next? Something on the .NET Framework or a new language technology? A: If you want to be one of the best you need to specialise. If you become very good in many skills then you may never become truly excellent in one. I know because I have taken this route myself and have found it difficult to get employment at times. After all, who wants someone who is capable at many languages when there is someone who excels at the specific thing they need. If a company develops in C#, then who would want someone who is OK at C# but also is good at C, Visual Basic, Perl and Cobol, when all they really want is the best possible C# developer for the money they can afford. After all, you will only ever be employed for one, maybe two of your skills. There are very few jobs for people who are good in 10 or 15 skills. If you are looking for a new skill, then maybe check out the job boards and find which skills are particularly in need, but be aware that what is the flavour of the month this year may not even be on the scene next year, which will make all of that effort to learn the skill futile and wasted. What I would say is: * *do one thing, and do it well. This may include supporting skills (C#, ASP.NET, SQL, LINQ etc). *If you want to choose something else, then choose something complementary. *Possibly most importantly, choose something you will enjoy. Maybe Ruby on Rails is the current flavour of the month, but if you don't enjoy doing it, then don't do it. Really, it's not worth it. You will never wish, on your deathbed, that you had worked more in something you didn't enjoy. Another direction you could look at is maybe not a particular development skill but something else: soft skills like people management, better business understanding, or even something like literary skills to help improve your communication. All of these will help to allow you to do more of what you want to do, and cut down on the stuff you really don't enjoy, thus helping to make your job more enjoyable. Apologies for the waffling here. Hope you are still awake :) A: Yeah, the more I get into software, the more I see myself focusing less on the language and more on the design.. Yeah there are framework bits we need to get our head around but most of the time (most, not all) you can look those up as-and-when you need them.. But a good design head? That takes years of experience to start getting it working right.. And that is what the companies really pay for.. "Build it and they will come" and all that... A: As you continue to gain more experience in ASP.NET, C#, etc., it's always good to go check out the competition and see if it sparks ideas on how you can do things better in what you're doing. Taking a look at something like Rails or Django might change how you look at designing or building your apps.
A: If you're now proficient with the languages and technologies you use, then start spending more time focusing on the design, solution architecture, and systems integration. The "bigger picture" that will set you apart from your contemporaries. Check out some Martin Fowler books like "Patterns of Enterprise Application Architecture", or Eric Evans' "Domain-Driven Design". A: Maybe learn more about Usability (best practices, testing, etc.) if you haven't already done so. Steve Krug's "Don't Make Me Think" is a good book to start with. Jakob Nielsen always has interesting stuff as well. A: The more languages you know, the more marketable you are. Look and see what the more popular (market for, not fan base) languages are, then add on some cutting-edge tech that is not in much use yet, rounded out by general programming skill. With your skill set I would recommend (as far as languages): * *Java as a starting point *For .NET, add in the .NET MVC (you have LINQ, or that would be here also) Language-agnostic skills: * *Design Patterns (includes the MVC) *Domain Driven Design *Test Driven Design A: Here would be my suggestions: 1) Design Patterns - These are really neat as well as being very useful in some situations. 2) AJAX - Assuming you haven't already done some of this, it is an interesting part of web development from my view. 3) Determine which parts of the chain you enjoy the most: front-end work (HTML, CSS, JavaScript), middleware (C# for business logic parts), or back-end (MS-SQL with stored procedures, indexes, triggers, and all that stuff). If it is all of it, then try to stay where the team doing web development is small, as otherwise you may be asked to choose. 4) Algorithm design and analysis - Do you know various sorting algorithms? Do you know various techniques to create an algorithm, e.g. greedy, recursion, divide and conquer, dynamic programming, using custom data types like the heap in heapsort, etc.? This can be new and cool. 5) Determine if there is a part of the development process you favor: analyst, designer, programmer, tester, debugger? All can have varying degrees of being near the code, IMO.
@Michael: DSL = Domain Specific Language. As for what you should learn, that depends on what you're interested in. Are you looking to challenge yourself while staying in the same medium (web-centric applications)? I would suggest learning about Apache and the LAMP (Linux, Apache, MySQL, PHP) architecture, and challenge yourself to use it to build a web application that you could readily build with ASP.NET. Want to learn something completely different? Try Prolog or LISP and see what you can do with those. Maybe you'd like to get into embedded software? Learn C to start. You have a wide variety of ways to improve your skills, and each one has career paths attached to them. (Well, maybe not Prolog, but it's fun!) A: Why don't you swap stacks and look at the LAMP stack? Or how about a functional language like Haskell? Or write a DSL? Or an app for your phone?
Q: What's the best way to duplicate fork() in Windows? How do I implement some logic that will allow me to reproduce on Windows the functionality that I have on Linux with the fork() system call, using Python? I'm specifically trying to execute a method on the SAPI COM component, while continuing the other logic in the main thread without blocking or waiting. A: Have a look at the process management functions in the os module. There are functions for starting new processes in many different ways, both synchronously and asynchronously. I should note also that Windows doesn't provide functionality that is exactly like fork() on other systems. To do multiprocessing on Windows, you will need to use the threading module. A: In addition to the process management code in the os module that Greg pointed out, you should also take a look at the threading module: https://docs.python.org/library/threading.html

from threading import Thread

def separate_computations(x, y):
    print sum(x for i in range(y)) # really expensive multiplication

Thread(target=separate_computations, args=[57, 83]).start()
print "I'm continuing while that other function runs in another thread!"

A: The Threading example from Eli will run the thread, but not do any of the work after that line. I'm going to look into the processing module and the subprocess module. I think the COM method I'm running needs to be in another process, not just in another thread. A: You might also like using the processing module (http://pypi.python.org/pypi/processing). It has lots of functionality for writing parallel systems with the same API as the threading module... A: Use the Python multiprocessing module, which will work everywhere. Here is an IBM developerWorks article that shows how to convert from os.fork() to the multiprocessing module. A: fork() has in fact been duplicated in Windows, under Cygwin, but it's pretty hairy. The fork call in Cygwin is particularly interesting because it does not map well on top of the Win32 API. This makes it very difficult to implement correctly. See The Cygwin User's Guide for a description of this hack. A: Possibly a version of spawn() for Python? http://en.wikipedia.org/wiki/Spawn_(operating_system)
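A: To make the multiprocessing suggestion above concrete, here is a minimal sketch mirroring the earlier threading example (the numbers are just for illustration). multiprocessing.Process has the same API as threading.Thread, but runs the target in a separate process:

from multiprocessing import Process

def separate_computations(x, y):
    print(sum(x for i in range(y)))  # x added up y times, i.e. x*y

if __name__ == '__main__':
    p = Process(target=separate_computations, args=[57, 83])
    p.start()
    print('The main process continues while the child computes.')
    p.join()  # optional: wait for the child to finish

Note the __main__ guard: because Windows has no fork(), multiprocessing starts a fresh interpreter and re-imports your module, so the process-creation code must be protected from running again on import.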
{ "language": "en", "url": "https://stackoverflow.com/questions/23397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Is it better to structure an SQL table to have a match, or return no result I've got an interesting design question. I'm designing the security side of our project, to allow us to have different versions of the program for different costs and also to allow Manager-type users to grant or deny access to parts of the program to other users. It's going to be web-based and hosted on our servers. I'm using a simple Allow or Deny option for each 'Resource' or screen. We're going to have a large number of resources, and the user will be able to set up many different groups to put users in to control access. Each user can only belong to a single group. I've got two approaches to this in mind, and was curious which would be better for the SQL server in terms of performance. Option A The presence of an entry in the access table means access is allowed. This will not need a column in the database to store information. If no results are returned, then access is denied. I think this will mean a smaller table, but would queries search the whole table to determine there is no match? Option B A bit column is included in the database that controls the Allow/Deny. This will mean there is always a result to be found, and makes for a larger table. Thoughts? A: If it's only going to be Allow/Deny, then a simple linking table between Users and Resources would work fine. If there is an entry keyed to the User-Resource in the linking table, allow access.

UserResources
-------------
UserId FK->Users
ResourceId FK->Resources

and the SQL would be something like:

if exists (select 1 from UserResources where UserId = @uid and ResourceId = @rid) set @allow = 1;

With a clustered index on (UserId and ResourceId), the query would be blindingly fast even with millions of records. A: I would vote for Option B. If you go with Option A and the assumption that if a user exists, they can get in, then you'll eventually run into the problem that you'll want to deny access to a user without removing the user record. There will be lots of cases where you'll want to lock a user out, but won't want to completely destroy their account. One such instance (not necessarily linked to your use case) is when you fail to pay, and they cut off your account until you start paying again. They don't want to delete the record, because they still want to enable it when you pay up again, instead of recreating the account from scratch and losing all user history. A: B. It allows for much better checks on whether the data is complete (for example, when you add an allowable/deniable feature). Also, table size should only be a consideration for tables that you know will contain many records (as in, 100,000+). Even the time you took to type the table-size consideration into this question already cost more than the extra hard drive space it would take. A: Approach A, but I would also include an explicit deny in addition to your implicit deny. I would write some use cases to be sure your end logic works, but here are some examples. User1 is in group1 and group2. User2 is in group1. User3 is in group2. Folder1 allows group1 and denies group2. User1 is denied. User2 is allowed. User3 is denied. I believe with your approach User1 would be allowed.
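A: To illustrate the explicit-deny idea from the last answer, here is a small Python sketch of the evaluation logic (the table shape is made up; the tuples stand in for rows of the access table). A matching deny always wins, and no match at all falls through to the implicit deny:

# entries: rows of the access table as (group, resource, allowed) tuples
def has_access(user_groups, resource, entries):
    allowed = False
    for group in user_groups:
        if (group, resource, False) in entries:
            return False  # an explicit deny always wins
        if (group, resource, True) in entries:
            allowed = True
    return allowed  # no matching row at all means the implicit deny

entries = {('group1', 'Folder1', True), ('group2', 'Folder1', False)}
print(has_access(['group1', 'group2'], 'Folder1', entries))  # User1: False
print(has_access(['group1'], 'Folder1', entries))            # User2: True
print(has_access(['group2'], 'Folder1', entries))            # User3: False

This reproduces the use cases above: User1 ends up denied even though group1 grants access, because group2's explicit deny takes precedence.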
{ "language": "en", "url": "https://stackoverflow.com/questions/23399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Are there any alternatives to Gigaspaces? Anything that's as good and as stable and as feature-rich as GigaSpaces? A: GigaSpaces is top notch as far as a JavaSpaces implementation goes for scalability and performance. Are you restricted to a JavaSpaces implementation? Blitz JavaSpaces is top notch for a free product. A: Hazelcast Some comparisons: Comparison1 Comparison2 Cheers! P.S.: I am also interested in people's opinions regarding this matter. A: As the first answer says, what are your criteria? There are a number of commercial and open-source tools in the same space (but not JavaSpaces-based): * *Oracle Coherence *Gemstone's Gemfire *Terracotta (a degree of overlap, but not quite in the same space) *GridGain (does the grid bit, but not the distributed cache bit) A: I'd suggest taking a look at Gartner's "Competitive Landscape: In-Memory Data Grids" at http://www.gartner.com/technology/reprints.do?id=1-1HCCIMJ&ct=130718&st=sb
{ "language": "en", "url": "https://stackoverflow.com/questions/23402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: FLVPlayback component memory issues My website is entirely Flash-based; it moves around a 3D model which was given to me as chunks of video that I've converted to FLV files. I'm using the FLVPlayback component to control the video inside of my program. While running memory checks using System.totalMemory I've noticed that whenever a video is loaded, it will eat up a chunk of memory, and even when I remove all the event listeners from it (they are all weakly referenced), remove the component from its parent, stop the video and null the component instance, it still will not give that memory back. This has been bothering me since I started working on this project because of the huge amount of video a user can potentially instantiate and load. Currently every video is loaded into a new FLVPlayback instance whenever it is required, but I have read that perhaps the best way to go about this problem is to simply have a global FLVPlayback instance and just reload the new video into the old instance; that way there would only be one FLVPlayback component in the application's memory. Has anyone else run into this problem as well? Have you found a better solution than using a global instance that you just re-use for every new video? A: I've never really liked the components; they're a bit dodgy. This particular problem seems to be common, and the somewhat annoying solution is, as you're suggesting, to only have one FLVPlayback and reuse that. Here's a blog post about it A: You can't help the memory problems much until Flash adds destructors and explicit object deletion, unfortunately. See this thread: Unloading a ByteArray in Actionscript 3 There's a limit to how much memory Flash applets can use; the GC seems to fire upon reaching that limit. I've seen my memory-easy applets use as much as ~200MB, just because they run for hours on end and the GC doesn't want to kick in. Oh, and I don't think using a single instance is an elegant solution, either. Currently I just write a dispose() function for my custom classes, waiting for some day when it can be turned into a proper destructor. A: Thanks for the responses; the links to the other blog questions were helpful as well. I had read all of Grant Skinner's info on garbage collection too, but searching through those links and going back and re-reading what he had originally said about GC helped refresh the old noggin. In addition to nulling and re-instantiating that single FLVPlayback component, I also realized that I wasn't correctly unloading and destroying my Loader instances either, so I got them cleaned up and now the program is running much more efficiently. I would say the memory usage has improved by around 90% for the site. @aib I will admit the single-instance solution isn't elegant, but because Flash just won't let go of those FLV files, I'm kind of stuck with it. @grapefrukt I detest the Flash components; they typically cause more grief than time saved. However, in this case I had a lot of cue points and navigation stuff going on with the video files and the FLVPlayback component was the best solution I found. Of course I'm still fairly new to the ActionScript world so perhaps I overlooked something. A: From what I gather after a lot of testing, Flash dynamically loads in libraries and components as needed, but never garbage collects that data.
For instance, if I have a website or an AIR app that uses the FLVPlayback component, the actual component and libraries associated with it aren't loaded until a new FLVPlayback() instance is created. It will then load the library and component into memory, but you will never get that space back until the program/website is closed. That specific instance with the video inside of it will get garbage collected and release some memory, as long as you remove listeners from it, take it off the stage, and set it to null. Also, if you are doing individual videos, the VideoPlayer is much lighter weight, and cleans up more nicely. A: Unfortunately, that's just the way Flash handles it. Not particularly smart, but it works for most people.
{ "language": "en", "url": "https://stackoverflow.com/questions/23439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How Best to Compare Two Collections in Java and Act on Them? I have two collections of the same object, Collection<Foo> oldSet and Collection<Foo> newSet. The required logic is as follows: *if foo is in(*) oldSet but not newSet, call doRemove(foo) *else if foo is not in oldSet but in newSet, call doAdd(foo) *else if foo is in both collections but modified, call doUpdate(oldFoo, newFoo) *else if !foo.activated && foo.startDate >= now, call doStart(foo) *else if foo.activated && foo.endDate <= now, call doEnd(foo) (*) "in" means the unique identifier matches, not necessarily the content. The current (legacy) code does many comparisons to figure out removeSet, addSet, updateSet, startSet and endSet, and then loops to act on each item. The code is quite messy (partly because I have left out some spaghetti logic already) and I am trying to refactor it. Some more background info: *As far as I know, the oldSet and newSet are actually backed by ArrayList *Each set contains less than 100 items, most likely maxing out at 20 *This code is called frequently (measured in millions/day), although the sets seldom differ My questions: *If I convert oldSet and newSet into HashMap<Foo> (order is not of concern here), with the IDs as keys, would it make the code easier to read and easier to compare? How much time & memory performance is lost in the conversion? *Would iterating the two sets and performing the appropriate operation be more efficient and concise? A: I think the easiest way to do that is by using the Apache Collections API - CollectionUtils.subtract(list1, list2) - as long as the lists are of the same type. A: Apache's commons.collections library has a CollectionUtils class that provides easy-to-use methods for Collection manipulation/checking, such as intersection, difference, and union. The org.apache.commons.collections.CollectionUtils API docs are here. A: You can use Java 8 streams, for example set1.stream().filter(s -> set2.contains(s)).collect(Collectors.toSet()); or the Sets class from Guava: Set<String> intersection = Sets.intersection(set1, set2); Set<String> difference = Sets.difference(set1, set2); Set<String> symmetricDifference = Sets.symmetricDifference(set1, set2); Set<String> union = Sets.union(set1, set2); A: I'd move to lists and solve it this way: *Sort both lists by id ascending, using a custom Comparator if the objects in the lists aren't Comparable *Iterate over the elements in both lists as in the merge phase of the merge sort algorithm, but instead of merging the lists, you check your logic. The code would be more or less like this:

/* Main method */
private void execute(Collection<Foo> oldSet, Collection<Foo> newSet) {
    List<Foo> oldList = asSortedList(oldSet);
    List<Foo> newList = asSortedList(newSet);

    int oldIndex = 0;
    int newIndex = 0;
    // Iterate over both collections, but not always at the same pace
    while (oldIndex < oldList.size() && newIndex < newList.size()) {
        Foo oldObject = oldList.get(oldIndex);
        Foo newObject = newList.get(newIndex);

        // Your logic here
        if (oldObject.getId() < newObject.getId()) {
            doRemove(oldObject);
            oldIndex++;
        } else if (oldObject.getId() > newObject.getId()) {
            doAdd(newObject);
            newIndex++;
        } else if (oldObject.getId() == newObject.getId()
                && isModified(oldObject, newObject)) {
            doUpdate(oldObject, newObject);
            oldIndex++;
            newIndex++;
        } else {
            ...
        }
    }// while

    // Check if there are any objects left in *oldList* or *newList*
    for (; oldIndex < oldList.size(); oldIndex++) {
        doRemove(oldList.get(oldIndex));
    }// for( oldIndex )

    for (; newIndex < newList.size(); newIndex++) {
        doAdd(newList.get(newIndex));
    }// for( newIndex )
}// execute( oldSet, newSet )

/** Create a sorted list from a collection.
    If you actually perform any actions on the input collections, then you should
    always return a new instance of the list to keep the algorithm simple. */
private List<Foo> asSortedList(Collection<Foo> data) {
    List<Foo> resultList;
    if (data instanceof List) {
        resultList = (List<Foo>) data;
    } else {
        resultList = new ArrayList<Foo>(data);
    }
    Collections.sort(resultList);
    return resultList;
}

A: I have created an approximation of what I think you are looking for just using the Collections Framework in Java. Frankly, I think it is probably overkill, as @Mike Deck points out. For such a small set of items to compare and process I think arrays would be a better choice from a procedural standpoint, but here is my pseudo-coded (because I'm lazy) solution. I have an assumption that the Foo class is comparable based on its unique id and not all of the data in its contents:

Collection<Foo> oldSet = ...;
Collection<Foo> newSet = ...;

private Collection difference(Collection a, Collection b) {
    Collection result = a.clone();
    result.removeAll(b);
    return result;
}

private Collection intersection(Collection a, Collection b) {
    Collection result = a.clone();
    result.retainAll(b);
    return result;
}

public doWork() {
    // if foo is in(*) oldSet but not newSet, call doRemove(foo)
    Collection removed = difference(oldSet, newSet);
    if (!removed.isEmpty()) {
        loop removed {
            Foo foo = removedIter.next();
            doRemove(foo);
        }
    }
    // else if foo is not in oldSet but in newSet, call doAdd(foo)
    Collection added = difference(newSet, oldSet);
    if (!added.isEmpty()) {
        loop added {
            Foo foo = addedIter.next();
            doAdd(foo);
        }
    }
    // else if foo is in both collections but modified, call doUpdate(oldFoo, newFoo)
    Collection matched = intersection(oldSet, newSet);
    Comparator comp = new Comparator() {
        int compare(Object o1, Object o2) {
            Foo f1, f2;
            if (o1 instanceof Foo) f1 = (Foo) o1;
            if (o2 instanceof Foo) f2 = (Foo) o2;
            return f1.activated == f2.activated
                ? f1.startdate.compareTo(f2.startdate) == 0
                    ? ...
                    : f1.startdate.compareTo(f2.startdate)
                : f1.activated ? 1 : 0;
        }
        boolean equals(Object o) {
            // equal to this Comparator..not used
        }
    }
    loop matched {
        Foo foo = matchedIter.next();
        Foo oldFoo = oldSet.get(foo);
        Foo newFoo = newSet.get(foo);
        if (comp.compareTo(oldFoo, newFoo) != 0) {
            doUpdate(oldFoo, newFoo);
        } else {
            // else if !foo.activated && foo.startDate >= now, call doStart(foo)
            if (!foo.activated && foo.startDate >= now) doStart(foo);
            // else if foo.activated && foo.endDate <= now, call doEnd(foo)
            if (foo.activated && foo.endDate <= now) doEnd(foo);
        }
    }
}

As far as your questions: If I convert oldSet and newSet into HashMap (order is not of concern here), with the IDs as keys, would it make the code easier to read and easier to compare? How much time & memory performance is lost in the conversion? I think that you would probably make the code more readable by using a Map, BUT... you would probably use more memory and time during the conversion. Would iterating the two sets and performing the appropriate operation be more efficient and concise?
Yes, this would be the best of both worlds, especially if you followed @Mike Sharek's advice of rolling your own List with the specialized methods, or following something like the Visitor design pattern to run through your collection and process each item. A: public static boolean doCollectionsContainSameElements( Collection<Integer> c1, Collection<Integer> c2){ if (c1 == null || c2 == null) { return false; } else if (c1.size() != c2.size()) { return false; } else { return c1.containsAll(c2) && c2.containsAll(c1); } } A: For comparing a list or set we can use Arrays.equals(object[], object[]). It will check for the values only. To get the Object[] we can use the Collection.toArray() method. A: For a set that small it is generally not worth it to convert from an Array to a HashMap/set. In fact, you're probably best off keeping them in an array and then sorting them by key and iterating over both lists simultaneously to do the comparison.
{ "language": "en", "url": "https://stackoverflow.com/questions/23445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How do I format Visual Studio Test results file (.trx) into a more readable format? Have just started using Visual Studio Professional's built-in unit testing features, which, as I understand, use MSTest to run the tests. The .trx file that the tests produce is XML, but I was wondering if there was an easy way to convert this file into a more "manager-friendly" format? My ultimate goal is to be able to automate the unit testing and be able to produce a nice looking document that shows the tests run and how 100% of them passed :) A: Since this file is XML you could and should use XSL to transform it to another format. The IAmUnknown blog has an entry about decoding/transforming the trx file into HTML. You can also use .NetSpecExporter from Bekk to create nice reports. Their product also uses XSL, so you could probably steal it from the downloaded file and apply it with whatever XSL application you want. A: You can also try trx2html A: If you are using VS2008, I also have an answer on IAmUnknown which updates the above answer (which is based on the VS 2005 trx format). Here is a style sheet that creates a readable HTML file: <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:t="http://microsoft.com/schemas/VisualStudio/TeamTest/2006" > <xsl:template match="/"> <html> <head> <style type="text/css"> h2 {color: sienna} p {margin-left: 20px} .resultsHdrRow { font-face: arial; padding: 5px } .resultsRow { font-face: arial; padding: 5px } </style> </head> <body> <h2>Test Results</h2> <h3>Summary</h3> <ul> <li>Tests found: <xsl:value-of select="t:TestRun/t:ResultSummary/t:Counters/@total"/></li> <li>Tests executed: <xsl:value-of select="t:TestRun/t:ResultSummary/t:Counters/@executed"/></li> <li>Tests passed: <xsl:value-of select="t:TestRun/t:ResultSummary/t:Counters/@passed"/></li> <li>Tests Failed: <xsl:value-of select="t:TestRun/t:ResultSummary/t:Counters/@failed"/></li> </ul> <table border="1" width="80%" > <tr class="resultsHdrRow"> <th align="left">Test</th> <th align="left">Outcome</th> </tr> <xsl:for-each select="/t:TestRun/t:Results/t:UnitTestResult" > <tr valign="top" class="resultsRow"> <td width='30%'><xsl:value-of select="@testName"/></td> <td width='70%'> <Div>Message: <xsl:value-of select="t:Output/t:ErrorInfo/t:Message"/></Div> <br/> <Div>Stack: <xsl:value-of select="t:Output/t:ErrorInfo/t:StackTrace"/></Div> <br/> <Div>Console: <xsl:value-of select="t:Output/t:StdOut"/></Div> </td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet> A: If you need to validate the schema before parsing/transforming it, you can find the XSD file in the Visual Studio install dir (via http://blogs.msdn.com/b/dhopton/archive/2008/06/12/helpful-internals-of-trx-and-vsmdi-files.aspx): Note that the XSD schemas are available with all Visual Studio installs in the %VSINSTALLDIR%\xml\Schemas\vstst.xsd file, along with many other schemas. A: Recently I wrote a trx-to-HTML converter which is Python-based; have a look: https://github.com/avinash8526/Murgi
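A: If you would rather post-process the results in code than through XSL, the .trx file is plain XML, so a few lines of Python cover the basics. This sketch uses the VS 2008 namespace and the attribute names from the style sheet above (the file name is made up; adjust the namespace for other Visual Studio versions):

import xml.etree.ElementTree as ET

NS = {'t': 'http://microsoft.com/schemas/VisualStudio/TeamTest/2006'}
root = ET.parse('results.trx').getroot()  # hypothetical results file

# Pull the same summary counters the XSL above reports
counters = root.find('t:ResultSummary/t:Counters', NS)
print('executed:', counters.get('executed'),
      'passed:', counters.get('passed'),
      'failed:', counters.get('failed'))

# List each test by name, plus any error message
for result in root.findall('t:Results/t:UnitTestResult', NS):
    message = result.find('t:Output/t:ErrorInfo/t:Message', NS)
    print(result.get('testName'), '' if message is None else message.text)

From there it is easy to emit whatever "manager-friendly" HTML or CSV you want.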
{ "language": "en", "url": "https://stackoverflow.com/questions/23446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: DSLs (Domain Specific Languages) in Finance Has anyone worked with DSLs (Domain Specific Languages) in the finance domain? I am planning to introduce some kind of DSL support in the application that I am working on and would like to share some ideas. I am at the stage of identifying which are the most stable domain elements and selecting the features which would be better implemented with the DSL. I have not yet defined the syntax for this first feature. A: Jay Fields and Obie Fernandez have written and talked extensively on the subject. *Jay Fields intro on Domain Specific Languages *Jay Fields' series on Business Natural Language *Obie Fernandez Expressing Contract Terms in a DSL *A very good presentation on InfoQ by Jay Fields You'll also find general stuff on implementing DSLs in Martin Fowler's writings (but not specific to finance). *DSL A: Domain specific languages (DSLs) are most commonly used to represent financial instruments. The canonical paper is Simon Peyton Jones' Composing Contracts: an Adventure in Financial Engineering, which represents contracts using a combinator library in Haskell. The most prominent use of the combinator approach is LexiFi's MLFi language, which is built on top of OCaml (their CEO, Jean-Marc Eber, is a co-author on the Composing Contracts paper). Barclays at one point copied the approach and described some additional benefits, such as the ability to generate human-readable mathematical pricing formulas (Commercial Uses: Going Functional on Exotic Trades). DSLs for financial contracts are typically built as an embedding in a functional language such as Haskell, Scala, or OCaml. The uptake of functional programming languages in the financial industry will continue to make this approach attractive. In addition to representing financial instruments, DSLs are also used in finance for: *Modeling financial entities with ontology languages (Financial Industry Business Ontology) *Replacing computations typically described using spreadsheets (http://doi.acm.org/10.1145/1411204.1411236) *Modeling pension plans (Case Study: Financial Services) *Analyzing financial market data (The Hedgehog Language) I maintain a complete list of financial DSL papers, talks, and other resources at http://www.dslfin.org/resources.html. If you'd like to meet professionals and researchers working with DSLs for financial systems, there's an upcoming workshop on October 1st at the MODELS 2013 conference in Miami, Florida: http://www.dslfin.org/ A: Financial contracts have been modeled elegantly as a DSL by Simon Peyton Jones and Jean-Marc Eber. Their DSL, embedded in Haskell, is presented in the paper How to write a financial contract. A: We worked on the idea of creating a financial valuation DSL with Fairmat (http://www.fairmat.com): it exposes a DSL which can be used to express pay-offs and payment dependencies, and it contains an extension model for creating new types of analytics and implementations of theoretical dynamics using .NET/C# with our underlying math library (see some open source examples at https://github.com/fairmat). A: I think that the work of Simon Peyton Jones and Jean-Marc Eber is the most impressive, because of "Composing Contracts: an Adventure in Financial Engineering" and everything derived from that: LexiFi and MLFi. I found Shahbaz Chaudhary's Scala implementation the most attractive, given that MLFi is not generally available (and because Scala as a functional language is more accessible than Haskell).
See "Adventures in financial and software engineering" and the other material referenced from there. I will dare to replicate a snippet to give an idea of what this implementation can do.

object Main extends App {
  //Required for doing LocalDate comparisons...a scalaism
  implicit val LocalDateOrdering = scala.math.Ordering.fromLessThan[java.time.LocalDate]{case (a,b) => (a compareTo b) < 0}

  //custom contract
  def usd(amount:Double) = Scale(Const(amount),One("USD"))
  def buy(contract:Contract, amount:Double) = And(contract,Give(usd(amount)))
  def sell(contract:Contract, amount:Double) = And(Give(contract),usd(amount))
  def zcb(maturity:LocalDate, notional:Double, currency:String) = When(maturity, Scale(Const(notional),One(currency)))
  def option(contract:Contract) = Or(contract,Zero())
  def europeanCallOption(at:LocalDate, c1:Contract, strike:Double) = When(at, option(buy(c1,strike)))
  def europeanPutOption(at:LocalDate, c1:Contract, strike:Double) = When(at, option(sell(c1,strike)))
  def americanCallOption(at:LocalDate, c1:Contract, strike:Double) = Anytime(at, option(buy(c1,strike)))
  def americanPutOption(at:LocalDate, c1:Contract, strike:Double) = Anytime(at, option(sell(c1,strike)))

  //custom observable
  def stock(symbol:String) = Scale(Lookup(symbol),One("USD"))
  val msft = stock("MSFT")

  //Tests
  val exchangeRates = collection.mutable.Map(
    "USD" -> LatticeImplementation.binomialPriceTree(365,1,0),
    "GBP" -> LatticeImplementation.binomialPriceTree(365,1.55,.0467),
    "EUR" -> LatticeImplementation.binomialPriceTree(365,1.21,.0515)
  )
  val lookup = collection.mutable.Map(
    "MSFT" -> LatticeImplementation.binomialPriceTree(365,45.48,.220),
    "ORCL" -> LatticeImplementation.binomialPriceTree(365,42.63,.1048),
    "EBAY" -> LatticeImplementation.binomialPriceTree(365,53.01,.205)
  )
  val marketData = Environment(
    LatticeImplementation.binomialPriceTree(365,.15,.05), //interest rate (use a universal rate for now)
    exchangeRates, //exchange rates
    lookup
  )

  //portfolio test
  val portfolio = Array(
    One("USD")
    ,stock("MSFT")
    ,buy(stock("MSFT"),45)
    ,option(buy(stock("MSFT"),45))
    ,americanCallOption(LocalDate.now().plusDays(5),stock("MSFT"),45)
  )

  for(contract <- portfolio){
    println("===========")
    val propt = LatticeImplementation.contractToPROpt(contract)
    val rp = LatticeImplementation.binomialValuation(propt, marketData)
    println("Contract: "+contract)
    println("Random Process(for optimization): "+propt)
    println("Present val: "+rp.startVal())
    println("Random Process: \n"+rp)
  }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/23448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Resources for an Oracle beginner Can anyone recommend some good resources that highlight the differences between Oracle and the AS/400 database? I am trying to help someone with a lot of AS/400 experience implement an Oracle installation, and they need some guidance. A book or online resource would be ideal. A: I've done this a fair few times and the solutions out there really depend on the environment (enterprise / mission critical or development). The BEST way would be the Oracle AS/400 Gateway. Here are some important links in that area: Allow AS/400 apps to access Oracle with the Oracle Access Manager: Installation Guide for the AS/400 Oracle Access Manager Allow your Oracle apps to access AS/400 tables and be queried using Oracle: Oracle Transparent Gateway for DB/2 ^^^Those products are fairly expensive but super powerful.^^^ Alternately, here are some more academic approaches to the situation: Here's a technical comparison of the two technologies... It's a little propagandaish*. Technical comparisons of Oracle and DB/2 Here's a document written from the opposite point of view - someone moving from Oracle to DB2. I still find it useful information: Leverage your Oracle 10g skills to learn DB2... And another IBM link that has some really great information all around: IBM Developer Network Search Results Hope this helps! *Yes, I know propagandaish is not a real word. A: These links are not AS/400 specific, but are generally a good place to start with Oracle: http://tahiti.oracle.com http://asktom.oracle.com A: Simple Talk Publishing have just set up http://www.oracleoverflow.com/, which is a dedicated Stack Exchange site. You could try posting your questions there.
{ "language": "en", "url": "https://stackoverflow.com/questions/23472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: CruiseControl.NET and NAnt I have a CC.NET project configured to call a common NAnt build file, which does some stuff, and then calls a child NAnt build file. The child build file name is specified by CC.NET to the common build file using a property. The hurdle that I am trying to get over is that the common build file log gets overwritten by the child build file log, so I don't get the common build log in the CC.NET build log. Anyone have any ideas on how to fix this? I thought about changing the child build's log, but from reading up on the NAnt <nant> task it doesn't allow me to change the child's output log. A: Use the nant task, so you get one single build file. A: Is there any way that you could include the child NAnt file, as opposed to executing it as a full-fledged child NAnt project? This would prevent the overwrite, but I'm not sure if it's possible in your situation.
{ "language": "en", "url": "https://stackoverflow.com/questions/23503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Algorithm behind MD5Crypt I'm working with Subversion on Windows and would like to write an easy utility in .NET for working with the Apache password file. I understand that it uses a function referred to as MD5Crypt, but I can't seem to find a description of the algorithm beyond that at some point it uses MD5 to create a hash. Can someone describe the MD5Crypt algorithm and password line format? A: A precise textual description of the crypt algorithm updated for use with sha256 and sha512 is at http://www.akkadia.org/drepper/SHA-crypt.txt It includes contrasts with the MD5 algorithm, so it should give you what you're looking for. A: You can find an implementation of md5crypt in the tcllib package. Download is available from SourceForge. You can also find an example of an Apache-compatible md5crypt in the source code for the CAS Generic Handler A: MD5Crypt is basically a replacement for the old-fashioned Unix crypt function. It was introduced in FreeBSD, and has been adopted by other groups as well. The basic idea is this: * *a hash is a good way to store a password * *you take the user's entered password and hash it *compare it to the stored hash *if the hash is the same, the passwords match But there's a problem: * *Suppose you pick the password "jeff" and I also pick the password "jeff". *Now both of our password hashes are the same. *So if I see the stored hash codes, I will know your password is the same as mine, "jeff". So, we can add a "salt" string to the password. * *This can be any random thing. *Suppose for your account it is "zuzu" and for my account it is "rjrj". *Now we hash the string "jeffzuzu" for your password, and "jeffrjrj" for my password. *Now we have different hash values for our passwords. *We can safely store the salt value with the hashed password, since even knowing the salt value won't help to decode the hash. You mention .NET; there's a pointer over in another forum to this:
System.Security.Cryptography.MD5CryptoServiceProvider md5 = new System.Security.Cryptography.MD5CryptoServiceProvider();
string hash = BitConverter.ToString(md5.ComputeHash(System.Text.ASCIIEncoding.Default.GetBytes(stringtohash)));
HTH! A: The process is rather involved... the salt and the password are hashed together not once, but 1000 times. Also, the base64 encoding uses a different alphabet, and the padding is removed from the end. The best thing would probably be to find a library to use, like glibc under cygwin. Since you code against Apache anyway, have a look at Apache's implementation of crypt-md5. The original algorithm (I think) in C can be found here. It differs from the above implementation only by the different magic number.
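To make the password line format concrete, here is a hedged C# sketch that just splits an Apache-style MD5Crypt entry into its parts. The user name and hash below are invented; only the $magic$salt$checksum layout is the point:

// Apache htpasswd MD5 lines look like: user:$apr1$<salt>$<22-char checksum>
// (classic md5crypt uses the magic "$1$" instead of Apache's "$apr1$")
string line = "alice:$apr1$Vb1qFhwG$0Yza9CLPQ5sCzY0AJyV8D0"; // made-up sample

string[] userAndHash = line.Split(new char[] { ':' }, 2);
string user = userAndHash[0];                // "alice"
string[] fields = userAndHash[1].Split('$'); // "", "apr1", salt, checksum
string magic = fields[1];                    // "apr1" (or "1" for classic md5crypt)
string salt = fields[2];                     // up to 8 characters, stored in the clear
string checksum = fields[3];                 // 22 characters from the alphabet [./0-9A-Za-z]

// Verification re-runs the md5crypt algorithm with the stored salt and the
// candidate password, then compares checksums; the hash is never decoded.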
{ "language": "en", "url": "https://stackoverflow.com/questions/23511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Silverlight programmatic access to Sony RZ30N Video Feed I would like to bypass the web-server functionality of a Sony SNC-RZ30N network attached web cam and display the video feed in a Silverlight application. I can't seem to find any examples of interfacing with the camera programmatically. Any leads would be much appreciated. Thx. Update 09/09/2008: Found a good site with Javascript examples to control the camera, but still no means to embed the video in an iFrame or the like: http://www2.zdo.com/archives/3-JavaScript-API-to-Control-SONY-SNC-RZ30N-Network-Camera.html Doug A: I don't know the details of the Sony network camera and the server side software. But what do you mean by web-server functionality - is that the UI that gets served up to the users in the form of an HTML page? Or is it something more, like a server capturing the video stream and transcoding it? I think the direction you need to take is to first find the URL end-point of your video stream. Since it's a network camera I assume the camera has a built-in IP stack/HTTP server serving up the video stream. Once you have that feed you probably have to transcode it into a video format consumable by Silverlight. There are multiple tools you can use, but for Silverlight the preferred tool is Microsoft Expression Encoder. It supports live transcoding of webcam video streams. I think it supports both DirectShow devices as well as video streams.
{ "language": "en", "url": "https://stackoverflow.com/questions/23539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What WCF best practices do you follow in object model design? I've noticed that a handful of WCF applications choose to "break" their objects apart; that is, a project might have a DataObjects assembly that contains DataContracts/Members in addition to a meaningful class library that performs business logic. Is this an unnecessary level of abstraction? Is there any inherent evil associated with going through and tagging existing class libraries with DataContract information? Also, as an aside, how do you handle error conditions? Are thrown exceptions from the service (InvalidOperation, ArgumentException and so on) generally accepted, or is there usually a level around that? A: The key reason for separating internal business objects from the data contracts/message contracts is that you don't want internal changes to your app to necessarily change the service contract. If you're creating versioned web services (with more than 1 version of the implemented interfaces) then you often have a single version of your app's business objects with more than 1 version of the data contract/message contract objects. In addition, in complex Enterprise Integration situations you often have a canonical data format (Data and Message contracts) which is shared by a number of applications, which forces each application to map the canonical data format to its internal object model. If you want a tool to help with the nitty gritty of separating data contract/message contract etc. then check out Microsoft's Web Services Software Factory http://msdn.microsoft.com/en-us/library/cc487895.aspx which has some good recipes for solving the WCF plumbing. In regard to exceptions, WCF automatically wraps all exceptions in FaultExceptions, which are serialized as wire-format faults. It's also possible to throw generic Fault Exceptions which allow you to specify additional details to be included with the serialized fault. Since the faults thrown by a web service operation are part of its contract it's a good idea to declare the faults on the operation declaration:
[FaultContract(typeof(AuthenticationFault))]
[FaultContract(typeof(AuthorizationFault))]
StoreLocationResponse StoreLocation(StoreLocationRequest request);
Both the AuthenticationFault and AuthorizationFault types represent the additional details to be serialized and sent over the wire and can be thrown as follows:
throw new FaultException<AuthenticationFault>(new AuthenticationFault());
If you want more details then shout; I've been living and breathing this stuff for so long I'm almost making a living doing it ;)
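As a hedged illustration of the separation described in that answer (all type and namespace names here are invented), the internal business object stays free of serialization attributes while a separately versioned data contract defines the wire format:

using System.Runtime.Serialization;

// Internal business object - free to change without touching the service contract.
public class StoreLocation
{
    public string Street { get; set; }
    public string City { get; set; }
}

// Wire contract - versioned independently (note the dated namespace).
[DataContract(Name = "StoreLocation", Namespace = "http://schemas.example.com/2008/08")]
public class StoreLocationData
{
    [DataMember(Order = 1)]
    public string Street { get; set; }

    [DataMember(Order = 2)]
    public string City { get; set; }
}

// Explicit mapping keeps internal refactorings off the wire.
public static class StoreLocationMapper
{
    public static StoreLocationData ToContract(StoreLocation source)
    {
        return new StoreLocationData { Street = source.Street, City = source.City };
    }
}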
{ "language": "en", "url": "https://stackoverflow.com/questions/23564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What does 'IISReset' do? On IIS 6, what does an IIS reset do? Please compare to recycling an app pool and stopping and starting an ASP.NET web site. If you replace a DLL or edit/replace the web.config on an ASP.NET web site is that the same as stopping and starting that web site? A: IISReset stops and restarts the entire web server (including non-ASP.NET apps) Recycling an app pool will only affect applications running in that app pool. Editing the web.config in a web application only affects that web application (recycles just that app). Editing the machine.config on the machine will recycle all app pools running. IIS will monitor the /bin directory of your application. Whenever a change is detected in those dlls, it will recycle the app and re-load those new dlls. It also monitors the web.config & machine.config in the same way and performs the same action for the applicable apps. A: Application Pool recycling restarts the w3wp.exe process for that application pool, hence it will only affect web sites running in that application pool. IISReset restarts ALL w3wp.exe processes and any other IIS related service, i.e. the NNTP or FTP Service. I think changing web.config or /bin does not recycle the whole application pool, but I'm not sure on that. A: IISReset restarts the entire webserver (including all associated sites). If you're just looking to reset a single ASP.NET website, you should just recycle that AppDomain. The most common way to reset an ASP.NET website is to edit the web.config file, but you can also create an admin page with the following: public partial class Recycle : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { HttpRuntime.UnloadAppDomain(); } } Here's a blog post I wrote with more info: Avoid IISRESET in ASP.NET Applications A: It operates on the whole IIS process tree, as opposed to just your application pools. C:\>iisreset /? IISRESET.EXE (c) Microsoft Corp. 1998-1999 Usage: iisreset [computername] /RESTART Stop and then restart all Internet services. /START Start all Internet services. /STOP Stop all Internet services. /REBOOT Reboot the computer. /REBOOTONERROR Reboot the computer if an error occurs when starting, stopping, or restarting Internet services. /NOFORCE Do not forcefully terminate Internet services if attempting to stop them gracefully fails. /TIMEOUT:val Specify the timeout value ( in seconds ) to wait for a successful stop of Internet services. On expiration of this timeout the computer can be rebooted if the /REBOOTONERROR parameter is specified. The default value is 20s for restart, 60s for stop, and 0s for reboot. /STATUS Display the status of all Internet services. /ENABLE Enable restarting of Internet Services on the local system. /DISABLE Disable restarting of Internet Services on the local system. A: It stops and starts the services that IIS consists of. You can think of it as closing the relevant program and starting it up again. A: When you change an ASP.NET website's configuration file, it restarts the application to reflect the changes... When you do an IIS reset, that restarts all applications running on that IIS instance. A: Editing the web.config file or updating a DLL in the bin folder just recycles the worker process for that application, not the whole pool. A: IISReset restarts the entire webserver (including all associated sites). If you're just looking to reset a single ASP.NET website, you should just recycle that Application Domain. 
A: Here's what TechNet has to say about iisreset: You might need to restart Internet Information Services (IIS) before certain configuration changes take effect or when applications become unavailable. Restarting IIS is the same as first stopping IIS, and then starting it again, except it is accomplished with a single command.
{ "language": "en", "url": "https://stackoverflow.com/questions/23566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Calculating Distance Between 2 Cities How do you calculate the distance between 2 cities? A: You use the Haversine formula. A: If you need to take the curvature of the earth into account, the Great-Circle distance is what you're looking for. The Wikipedia article probably does a better job of explaining how the formula works than me, and there's also this aviation formulary page that goes into more detail. The formulas are only the first part of the puzzle though; if you need to make this work for arbitrary cities, you'll need a location database to get the lat/long from. Luckily you can get this for free from Geonames.org, although there are commercial db's available (ask google). So, in general, look up the two cities you want, get the lat/long co-ordinates and plug them into the formula as in the Wikipedia Worked Example. Other suggestions: * *For a full commercial solution, there's PC Miler which is used by many trucking companies to calculate shipping rates. *Make calls to the Google Maps (or other) api. If you need to do many requests per day, consider caching the results on the server. *Also very important is to consider building an equivalence database for cities, suburbs, towns etc. if you think you'll ever need to group your data. This gets really complicated though, and you may not find a one-size-fits-all solution for your problem. Last but not least, Joel wrote an article about this problem a while back, so here you go: New Feature: Job Search A: This is very easy to do with the geography type in SQL Server 2008.
SELECT geography::Point(lat1, lon1, 4326).STDistance(geography::Point(lat2, lon2, 4326))
-- computes distance in meters using an elliptical model, accurate to the mm
4326 is the SRID for the WGS84 ellipsoidal Earth model A: You can use the A* algorithm to find the shortest path between those two cities and this way you'll have the distance. A: If you are working in the plane and you want the Euclidean distance "as the crow flies":
// Cities are points x0,y0 and x1,y1 in kilometers or miles or Smoots[1]
dx = x1 - x0;
dy = y1 - y0;
dist = sqrt(dx*dx + dy*y);
No trigonometry needed! Just the Pythagorean theorem and the fact that squares are always positive, so you don't need dx = abs(x1 - x0), etc. to get a positive number to pass to sqrt(). Note that you could probably do this in one line and a compiler would probably reduce it to the equivalent of the above code: dist = sqrt((x1-x0)*(x1-x0) + (y1-y0)*(y1-y0)); [1] http://en.wikipedia.org/wiki/Smoot A: If you're talking about the shortest distance between two real cities on a real spherical planet, like Earth, you want the great circle distance. A: You can get the distance between two cities from google map api. 
Here is an implementation of it in Python
#!/usr/bin/python
import requests
from sys import argv

def get_distance(origin, destination):
    gmap = 'http://maps.googleapis.com/maps/api/distancematrix/json'
    payload = {"origins": origin, "destinations": destination, "sensor": 'false'}
    try:
        a = requests.get(gmap, params=payload)
        data = a.json()
        origin = str(data['origin_addresses'][0])
        destination = str(data['destination_addresses'][0])
        distance = data['rows'][0]['elements'][0]['distance']['text']
        return distance, origin, destination
    except Exception, e:
        print "The %s or %s does not exist :(" % (origin, destination)
        exit()

if __name__ == "__main__":
    if len(argv) < 3:
        print "Sorry, check the format"
    else:
        origin = argv[1]
        destination = argv[2]
        distance, origin, destination = get_distance(origin, destination)
        print "%s ---> %s : %s" % (origin, destination, distance)
Example link: https://gist.github.com/sarathsp06/cf063e47bcc515b51c84 A: You find the Lat/Lon of the city, then use a distance estimation algorithm for Lat/Lon coordinates. A: If you need a code example I think I have one I could dig up at home, but like many of the previous answers, you need a long/lat db to do the calculation A: It is better to use a look-up table for obtaining the distance between two cities. This makes sense because * The formula to calculate the distance is quite computationally intensive. * Distance between cities is unlikely to change. So unless your needs are very specific (like terrain mapping from a satellite or some topography algorithm or something else), you should really just save the list of cities and distances between them into a table and look it up as needed. A: I've been doing a lot of work with this recently. I'm finding SQL2008's new features really make this easy. I can find all the points that are within X km of a 100k record table in sub-second time...not too shabby. The great circle (spherical assumption) method in my testing was about 2.5 miles off when compared to the Vincenty formula (ellipsoidal assumption, which is what the earth is). The real trick is getting the lat and long... for that I'm using Google. A: @Jared - a minor correction to your code example. The last line of the first code example should read: dist = sqrt(dx*dx + dy*dy); A: I agree that once you have the info, if it's not going to change, store it somehow. @Marko Tinto Thanks for the T-SQL sample. For those who don't have access to SQL Server or prefer another method: If you need high accuracy, check out Wikipedia's entry on the Vincenty algorithm for more info. I believe there is a js implementation, which would (if not already) be easily ported to other languages. Also, at the bottom of that page is a link to geographicLib, which purports to be 1000 times more accurate than the Vincenty algorithm (if you have data that good, it might matter). Why would you use something like the Vincenty method? Because the earth is not a perfect sphere and methods like that allow for inputting a more accurate major and minor axis for modeling the earth. A: I use distancy; so simple and clean
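Since several answers name the Haversine formula without showing it, here is a minimal, hedged C# sketch. It assumes a sphere with the mean Earth radius (the ellipsoid-based answers above explain why that is an approximation), and the London-to-Paris call is just an illustration:

using System;

static class GreatCircle
{
    // Haversine distance in kilometers between two lat/lon points (decimal degrees).
    public static double HaversineKm(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371.0; // mean Earth radius in km
        double dLat = ToRadians(lat2 - lat1);
        double dLon = ToRadians(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                 + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                 * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(a));
    }

    static double ToRadians(double degrees)
    {
        return degrees * Math.PI / 180.0;
    }
}

// Example: GreatCircle.HaversineKm(51.5074, -0.1278, 48.8566, 2.3522)
// returns roughly 344 (London to Paris).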
{ "language": "en", "url": "https://stackoverflow.com/questions/23569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What are the advantages and disadvantages of using the GAC? And on top of that, are there cases where one has to use the global assembly cache or where one cannot use it? A: The GAC runs with Full Trust and can be used by applications outside of your Web App. For example, Timer Jobs in Sharepoint HAVE to be in the GAC because the sptimer service is a separate process. The "Full Trust" part is also a possible source of security issues. Sure, you can work with Code Access Security, but I do not see too many Assemblies using CAS unfortunately :( The /bin Folder can be locked down to Medium which is normally fine. Daniel Larson has a post on CAS as well which details the differences a bit more. A: If you're shipping a reusable library consisting of multiple assemblies, but only a few of them form a facade, you can consider installing the assemblies into the GAC, if the package is installed to developers' PCs. Imagine you ship 6 assemblies, and only one of these 6 assemblies contains a facade - i.e. the other 5 are used only by the facade itself. You ship: * *MyProduct.Facade.dll - that's the only component intended to be used by developers *MyProduct.Core.dll - used by MyProduct.Facade.dll, but not intended to be used by developers *MyProduct.Component1.dll - the same *MyProduct.Component2.dll - the same *ThirdParty.Lib1.dll - third-party library used by MyProduct.Component1.dll *ThirdParty.Lib2.dll - the same *etc. Developers using your project would like to reference just MyProduct.Facade.dll in their own projects. But when their project runs, it must be able to load all the assemblies it references - recursively. How can this be achieved? In general, they must be available either in the Bin folder, or in the GAC: * *You may ask the developers to locate your installation folder and add references to all N assemblies you put there. This will ensure they'll be copied into the Bin folder to be available at runtime. *You may install a VS.NET project template already containing these 6 references. A bit complex, since you must inject the actual path to your assemblies into this template before its installation. This can be done only by the installer, since the path depends on the installation path. *You may ask developers to create a special post-build step in the .csproj / .vbproj file copying the necessary dependencies to the Bin folder. The same disadvantages apply. *Finally, you may install all your assemblies into the GAC. In this case developers must add a reference just to MyProduct.Facade.dll from their project. Everything else will be available at runtime anyway. Note: the last option doesn't force you to do the same when shipping the project to production PCs. You can either ship all the assemblies within the Bin folder, or install them into the GAC - whichever you wish. So the solution described shows the advantage of putting third-party assemblies into the GAC during development. It doesn't relate to production. As you may find, installation into the GAC is mainly intended to solve the problem of locating required assemblies (dependencies). If an assembly is installed into the GAC, you may consider it to exist "nearby" any application. It's like adding the path to an .exe to your PATH variable, but in a "managed way" - of course, this is a rather simplified description ;) A: I think one of the biggest advantages of using the GAC is that you can have multiple versions of the same assembly registered and available to your applications. 
Personally, I don't like how it restricts movement from machine to machine (I don't like having to, say, check out source on a new VPC and go through a bunch of steps to get it running because I have to register stuff in the GAC) A: * *Loading assemblies from the GAC means less overhead and the assurance that your application will always load the correct version of a .NET library *You shouldn't ngen assemblies that are outside of the GAC, because there will be almost no performance gain, in many cases even a loss in performance. *You're already using the GAC, because all standard .NET assemblies are actually in the GAC and ngened (during installation). *Using the GAC for your own libraries adds complexity to deployment; I would try to avoid it at all costs. *Your users need to be logged in as administrators during installation if you want to put something into the GAC, quite a problem for many types of applications. So to sum it all up, start simple, and if you later see major performance gains from putting your assemblies into the GAC and NGENing them, go for it; otherwise don't bother. The GAC is more suitable for frameworks where there is an expectation that a library will be shared among multiple applications; in 99% of cases, you don't need it. A: In all my life, I have had maybe one application where I had to put an assembly in the GAC, simply because these assemblies were part of a framework that a number of applications would use, and it seemed right to put them into the GAC. A: Advantage: * *Only one place to update your assemblies *You use a little less hard drive space Disadvantage: * *If you need to update only one website, you can't. You may end up with the other websites on the webserver broken Recommendation: Leave the GAC to MS and friends. The gigabyte is very cheap now. A: The GAC can also be used by assemblies that require elevated permissions to perform privileged operations on behalf of less trusted code (e.g. a partial trust ASP.NET application). For example, say you have a partial trust ASP.NET application which needs to perform a task that would require elevated privileges, i.e. Full Trust. The solution is to put the code that requires elevated privileges into a separate assembly. The assembly is marked with the AllowPartiallyTrustedCallers attribute and the class that contains the privileged logic is marked with the PermissionSet attribute, something like this: [PermissionSet(SecurityAction.Assert, Unrestricted=true)] Our assembly would be given a strong name (signed) and then deployed into the GAC. Now our partially trusted app(s) can utilise the trusted assembly in the GAC to carry out a specific and narrow set of privileged operations without losing the benefits of partial trust.
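For completeness, the assembly-level attribute mentioned in that last answer looks like this - a hedged sketch; it typically goes in AssemblyInfo.cs alongside the strong-name signing:

using System.Security;

// Allows partially trusted callers (e.g. a Medium-trust ASP.NET app)
// to call into this strong-named, GAC-deployed assembly.
[assembly: AllowPartiallyTrustedCallers]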
{ "language": "en", "url": "https://stackoverflow.com/questions/23578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: How do I search content within audio files/streams? I have always wondered how many different search techniques existed, for searching text, for searching images and even for videos. However, I have never come across a solution that searched for content within audio files. For example: Let us assume that I have about 200 podcasts downloaded to my PC in the form of mp3, wav and ogg files. They are all named generically, say podcast1.mp3, podcast2.mp3, etc. So, it is not possible to know what the content is without actually hearing them. Let's say that I am interested in finding out which of the podcasts talk about 'game programming'. I want the results to be shown as: * *Podcast1.mp3 - 3 result(s) at time index(es) - 0:16:21, 0:43:45, 1:12:31 *Podcast21.ogg - 1 result(s) at time index(es) - 0:12:01 So my questions: * *How could one approach this problem? *Are there suitable algorithms developed to do something like this? One idea that cropped up in my mind was that one could use 'speech-to-text' software to get transcripts along with time indexes for each of the audio files, then parse the transcripts to get the output. I was considering this as one of my hobby projects. Thanks! A: If you want to search for text (i.e. what is being said) inside an audio stream you would have to process it with some kind of speech recognition algorithm and store the text as metadata associated with the files. For video you could also do text recognition for text inside the video. Evernote already does this for text inside image files, but has no support for audio as far as I know. Something similar is possible when using audio to search for audio. I don't know the details of these algorithms, but I'm guessing they involve some kind of frequency analysis. Shazam is using this kind of technology to identify songs based on audio clips. Here are some Wikipedia articles that may be useful: * *Speech recognition *Fast Fourier transform *Frequency analysis (frequency spectrum) *Optical character recognition (OCR)
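The question's own speech-to-text idea is straightforward to sketch once transcripts exist. A hedged C# illustration - the TranscriptSegment type and the sample data are invented, and a real speech recognition engine would produce the segments:

using System;
using System.Collections.Generic;

class TranscriptSegment
{
    public TimeSpan Start; // time index where this segment begins
    public string Text;    // recognized speech for the segment
}

class TranscriptSearch
{
    static void Main()
    {
        // In practice these would come from a speech-to-text pass over podcast1.mp3.
        var podcast1 = new List<TranscriptSegment>
        {
            new TranscriptSegment { Start = TimeSpan.Parse("0:16:21"), Text = "today we talk about game programming basics" },
            new TranscriptSegment { Start = TimeSpan.Parse("0:30:00"), Text = "now for some listener mail" }
        };

        string query = "game programming";
        foreach (TranscriptSegment seg in podcast1)
        {
            if (seg.Text.IndexOf(query, StringComparison.OrdinalIgnoreCase) >= 0)
            {
                Console.WriteLine("Podcast1.mp3 - result at time index {0}", seg.Start);
            }
        }
    }
}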
{ "language": "en", "url": "https://stackoverflow.com/questions/23592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Syncing library/project subversion repositories I'm developing a library alongside several projects that use it, and I've found myself frequently modifying the library at the same time as a project (e.g., adding a function to the library and immediately using it in the project). As a result, the project would no longer compile with previous versions of the library. So if I need to roll back a change or test a previous version of the project, I'd like to know what version of the library was used at check-in. I suppose I could do this manually (by just writing the version number in the log file), but it would be great if this could happen automatically. A: An option that might work for you is to use an svn:external reference to the library. When tagging the project, you can do one of two things: * *Update the svn:external to refer to a specific revision of the library; OR *Update the svn:external to refer to a new tag that you make on the library. Since the svn:external metadata will be part of the main project's commit history, you can always get the tag on the main project and it will refer to the correct version of the library. We do it and it works very well. It also comes in handy when you want to freeze the version of the library code that you depend on in preparation for a release. A: I think if I were going to do this, I would use tags. It would be pretty easy to write a script that would tag both repositories with the same ID each time you upgraded the library and used it in the project. Then, if you need to roll back to a previous version, you just see what its most recent tag was, and roll the library back to that version. UPDATE: Sorry, I've been in Mercurial land for a while, and forgot that subversion doesn't directly support tagging. Assuming you use the usual subversion directory structure / /trunk /tags /branches you just need to run svn copy trunk/ tags/TagName on both repos, with the same tag name. Subversion is pretty good about smart copies, so you don't need to worry about disk space. A: You might find piston provides a solution It's primarily used for importing ruby on rails plugins, but I don't see why it shouldn't work for any subversion repositories. Basically what it does is this: * *svn export latest revision of the remote path *commit these files into your local svn as if they were local files *attach metadata in the form of svn properties about the remote path and revision This means you can keep a reference to a particular version of a remote repo without having to have it constantly updated like with an svn external. If you want to update your local copy of the library to the latest remote version, you just do piston update You should also be able to look at the history of updates by simply looking at the metadata - svn properties are versioned just like files and everything else A: One option is to use a single subversion repository and check in changes that affect both library and project at the same time. That way you know that whatever revision of the project you are on requires the same revision of the library.
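A hedged sketch of pinning an svn:externals reference to a specific library revision, as the first answer describes. The URL, directory name, and revision are placeholders, and the "-r" externals form shown here needs Subversion 1.5 or later:

# Pin the "lib" directory to revision 1234 of the library repository:
svn propset svn:externals "-r 1234 http://svn.example.com/library/trunk lib" .
svn commit -m "Pin library external to r1234"

# Later, the project's own history records exactly which library revision was used:
svn propget svn:externals .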
{ "language": "en", "url": "https://stackoverflow.com/questions/23603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Windows / Active Directory - User / Groups I'm looking for a way to find the windows login associated with a specific group. I'm trying to add permissions to a tool that only allows names formatted like: DOMAIN\USER DOMAIN\GROUP I have a list of users in active directory format that I need to add: ou=group1;ou=group2;ou=group3 I have tried adding DOMAIN\Group1, but I get a 'user not found' error. P.S. It should also be noted that I'm not a LAN admin A: Programmatically or manually? Manually, I prefer AdExplorer, which is a nice Active Directory browser. You just connect to your domain controller and then you can look for the user and see all the details. Of course, you need permissions on the Domain Controller, not sure which though. Programmatically, it depends on your language, of course. On .NET, the System.DirectoryServices namespace is your friend. (I don't have any code examples here unfortunately, but see the sketch after these answers.) For Active Directory, I'm not really an expert apart from how to query it, but here are two links I found useful: http://www.computerperformance.co.uk/Logon/LDAP_attributes_active_directory.htm http://en.wikipedia.org/wiki/Active_Directory (General stuff about the Structure of AD) A: You need to go to the Active Directory Users Snap In after logging in as a domain admin on the machine: * *Go to start --> run and type in mmc. *In the MMC console go to File --> *Add/Remove Snap-In Click Add Select *Active Directory Users and Computers and select Add. *Hit Close and then hit OK. From here you can expand the domain tree and search (by right-clicking on the domain name). You may not need special privileges to view the contents of the Active Directory domain, especially if you are logged in on that domain. It is worth a shot to see how far you can get. When you search for someone, you can select the columns from View --> Choose Columns. This should help you search for the person or group you are looking for. A: You do not need domain admin rights to look at the active directory. By default, any (authenticated?) user can read the information that you need from the directory. If that wasn't the case, for example, a computer (which has an associated account as well) could not verify the account and password of its user. You only need admin rights to change the contents of the directory. I think it is possible to set more restricted permissions, but that's not likely the case. A: OU is an Organizational Unit (sort of like a subfolder in Explorer), not a Group. Hence group1, 2 and 3 are not actually groups. You are looking for the DN Attribute, also called "distinguishedName". You can simply use DOMAIN\DN once you have that. Edit: For groups, the CN (Common Name) could also work. The full string from Active Directory normally looks like this: cn=Username,cn=Users,dc=DomainName,dc=com (Can be longer or shorter, but the important bit is that the "ou" part is worthless for what you're trying to achieve.) A: Thanks adeel825 & Michael Stum. My problem is, though, I'm in a big corporation and do not have access to log in as the domain admin nor to view the active directory, so I guess my solution is to try and get that level of access. A: Well, AdExplorer runs on your Local Workstation (which is why I prefer it) and I believe that most users have read access to AD anyway because that's actually required for stuff to work, but I'm not sure about that. A: Install the "Windows Support Tools" that is on the Windows Server CD (CD 1 if it's Windows 2003 R2). 
If your CD/DVD drive is D: then it will be in D:\Support\Tools\SuppTools.msi This gives you a couple of additional tools to "get at" AD: LDP.EXE - good for reading information in AD, but the UI kinda stinks. ADSI Edit - another snap-in for MMC.EXE that you can both browse AD with and get to all those pesky AD attributes you're looking for. You can install these tools on your local workstation and access AD from there without domain admin privileges. If you can log on to the domain, you can at least query/read AD for this information.
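Picking up the System.DirectoryServices pointer above, here is a hedged C# sketch that looks up a group by name and lists the distinguished names of its members. The LDAP path and group name are placeholders for your own domain, and the project needs a reference to System.DirectoryServices.dll:

using System;
using System.DirectoryServices;

class GroupMembers
{
    static void Main()
    {
        using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
        using (DirectorySearcher searcher = new DirectorySearcher(root))
        {
            searcher.Filter = "(&(objectCategory=group)(cn=Group1))";
            SearchResult result = searcher.FindOne();
            if (result == null)
            {
                Console.WriteLine("Group not found");
                return;
            }

            // The multi-valued "member" attribute holds each member's distinguished name.
            foreach (object member in result.Properties["member"])
            {
                Console.WriteLine(member);
            }
        }
    }
}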
{ "language": "en", "url": "https://stackoverflow.com/questions/23610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Reporting Systems for ASP.NET What are the best reporting tools (open source and commercial) for ASP.NET, similar to Crystal Reports for ASP.NET? A: Microsoft Reporting Services, free and included with SQL Server 2005 and 2008. Of course, this is great if you need a separation of report design and application, which for Enterprise applications is a huge plus. However, if what you want is to be able to create "in application" dashboards, where "you" design the reports and have limited parameters you expose to the user, then I suggest looking into "control" based charting vendors like TeeChart. Pros/cons of each strategy: Crystal/Microsoft Reporting Services will give you out-of-the-box handling of things like report scheduling, export to Excel and PDF, and separation between application and report design. The independent charting tools give you better control: they render better at any size you need, are easier to programmatically manipulate, and can handle eye candy such as Flash-based charts (no Flash charts in MS SSRS). A: +1 SSRS and ActiveReports. ryw, use ActiveReports and close the gates of Crystal Hell behind you forever. A: ActiveReports and DevExpress' reporting tools are both pretty good. The ReportViewer control works too (the price is right), but I find it more difficult to use. And SSRS reports can be embedded into your ASP.Net apps as well. A: As much as I despise Crystal Reports (we describe digging deep into it as the seven layers of Crystal hell) -- it seems to be the best/most-flexible tool for the job. I hope someone comes along and knocks them off the block though. Microsoft Reporting Services is an alternative, but didn't have the features we needed. A: I would suggest taking a look at MS SSRS (Microsoft SQL Server Reporting Services). A: I agree that SSRS is generally the right choice. But for flashy and embedded in an HTML page, I like Dundas. Their stuff looks good out of the box, has an easy-to-understand API, and is painless to get up and running.
{ "language": "en", "url": "https://stackoverflow.com/questions/23614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What Javascript rich text editor will not break the browser's spellcheck? I'm using TinyMCE in an ASP.Net project, and I need a spell check. The only TinyMCE plugins I've found use PHP on the server side, and I guess I could just break down and install PHP on my server and do that, but quite frankly, what a pain. I don't want to do that. As it turns out, Firefox's built-in spell check will work fine for me, but it doesn't seem to work on TinyMCE editor boxes. I've enabled the gecko_spellcheck option, which is supposed to fix it, but it doesn't. Does anybody know of a nice rich-text editor that doesn't break the browser's spell check? A: TinyMCE only goes out of its way to disable spell-checking when you don't specify the gecko_spellcheck option (I verified this with their example code). You might want to double-check your tinyMCE.init() call - it should look something like this:
tinyMCE.init({
        mode : "textareas",
        theme : "simple",
        gecko_spellcheck : true
});
A: Most rich text editors let you specify whether or not to disable the browser's spellchecker (as answered by others), with the exception of those running in Safari. There is currently no way to programmatically disable the Safari spellchecker (as there is in FF and IE7+), so most rich text editors choose to let Safari do its own thing by leaving the browser in control of the context menu. A: I know at least Yahoo!'s Rich Text Editor will let you use the included spell checker in Firefox. I also tested FCKeditor, but that requires the users to install additional plugins on their computer.
{ "language": "en", "url": "https://stackoverflow.com/questions/23620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the best way to configure my Ruby compilation in Debian? When compiling from source, I never know which configure flags to use to optimize the compilation for my environment. Assume the following hardware/OS: * *Single Core, 2 GHz Intel *512 MB RAM *Debian 4 I usually just go with ./configure --prefix=/usr/local Should I be doing anything else? A: I always use Debian packages. Compiling from source can break your development environment through library conflicts, and such problems are hard to detect. A: You might want to check these few options out; they may be required by a Ruby on Rails environment, in which case they should be compiled in. Just make sure the directories correspond to your current settings. --with-openssl-dir=/usr --with-readline-dir=/usr --with-zlib-dir=/usr A: I recommend mixing in a few packages from Debian Unstable feeds. They tend to be pretty stable, despite the name. They're also very up to date.
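Putting the flags from these answers together, a full invocation might look like this - a sketch that assumes Debian's OpenSSL, readline, and zlib development packages are installed under /usr:

./configure --prefix=/usr/local \
            --with-openssl-dir=/usr \
            --with-readline-dir=/usr \
            --with-zlib-dir=/usr
make
sudo make install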
{ "language": "en", "url": "https://stackoverflow.com/questions/23623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Free or Open Source Collaboration/eLearning Software I am looking for open source or free data collaboration software. Specifically this is for a non-profit organization that wants to teach remote students a foreign language. The idea is that an instructor would teach a class and there would be up to 10 students in the class at a time. The instructor would be able to post slides or other teaching material and the students would be able to see it on their computers remotely. Video is not required but audio is a must. Any recommendations? Also if there have been any reviews or feature comparisons amongst these products, I would be interested in hearing about them. A: I know a few of the developers on the Carleton University-developed Blindside Project. They are actively developing an open-source web conferencing and presentation tool for e-learning, with the intent of eventually offering university courses online. It's pretty fully featured software, and is meant to be installed as a server that can host many conference rooms at a time. It has voice, video, text, and a whiteboard/slideshow (Edit: supports PDF at the moment) capability. One feature I think is neat is that students can 'raise their hands' in the class to ask the instructor a question, where they can take the floor for a moment. Check out the demo on the site (if it's not working anymore I'll nudge the developers). Another pro is that the clients only need to have Flash installed. I just logged onto the online demo and created a preview. This project is now called BigBlueButton: http://code.google.com/p/bigbluebutton/ Here is the demo: http://demo.bigbluebutton.org/ A: The BlindSide site also listed these other projects: * *ePresence *OpenMeetings *DimDim *WebHuddle All open source as well. A: I have used DimDim a few times as part of an educational project. You can use it as part of a hosted service from DimDim themselves, and it also has an open source version that you can download and run yourself. I have not used it in the last six months or so, but we did find it very useful for collaborating in a multimedia-style classroom, though like all media streaming, it does require a decent broadband connection on both the server and client end. One further issue we discovered with it was that to avoid a lot of messy firewall issues (especially at educational institutions) you need to run it on its own machine on port 80, so if it is running in collaboration with another website (such as Moodle), you need a separate machine for DimDim and for Moodle. So far, to avoid a lot of technical issues that existed a year ago (but have probably been resolved), the project I was working on went with a hosted service at the time, but it is expected that it will end up running its own versions for control and cost reasons. A: I do not have personal experience with this product, but Dokeos is recommended by several people on other sites. A: @adeel - I think this blog entry can give you some details, at least about one user that has tried DimDim. A: We have used DimDim in our company. Pretty easy to install on Windows (as they say, one-click; it is really one-click). We have used it in a LAN environment. We didn't have any issues as long as we were using only audio. With video we faced a lot of issues, so we didn't use it (but it might well have been us). I think DimDim supports only 3 users having a microphone enabled at a time. 
But when we used this feature and switched the audio to different people, it often resulted in problems (like audio being lost completely for everyone, etc.). Hope this is helpful. A: Although it's not what you're looking for, Moodle might be of use to you if you're looking into having online courses. A: You can try Chamilo, an open source LMS with social network features; it's a fork of Dokeos.
{ "language": "en", "url": "https://stackoverflow.com/questions/23640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Natural language date/time parser for .NET? Does anyone know of a .NET date/time parser similar to Chronic for Ruby (handles stuff like "tomorrow" or "3pm next thursday")? Note: I do write Ruby (which is how I know about Chronic) but this project must use .NET. A: A .NET port of Chronic exists. See https://github.com/robertwilczynski/nChronic. I created a fork of it with some improvements and bug fixes, here: https://github.com/dorony/nChronic (sent pull requests, but the author still hasn't responded). A: I don't, but there's a Java port called jchronic. If nothing else, it could provide a good jumping-off point for your own. Or perhaps you could use a semi-automatic Java to C# translator like Octopus to help translate it. (Or something better, if anyone knows of anything.) Okay, another possible avenue: could you use the Chronic code via IronRuby? A: @Blair Conrad - Good ideas! I tried to get Chronic running under IronRuby but had some problems with dependencies - I don't know that it's ready yet. I found a project on CodePlex (DateTimeEnglishParser) that is attempting to do the same thing. It doesn't handle years or time yet, but it's a good start. I've worked on the project a little and contributed a patch to better handle written numbers. It's an interesting problem, and has definitely helped me understand regular expressions better, so I think I'll keep working on it. A: There was a similar thread earlier and it gave a link to a library on CodeProject that seems to do what you want: http://www.codeproject.com/KB/edit/dateparser.aspx but unfortunately the library seems to be written in MFC so you would have to make a DLL out of it and then call it from your .NET program. A: Palmsey, I have just recently had the same requirement so I went ahead and wrote a simple parser. It's not the nicest code but it will handle things like: "Today at 2pm" "Tuesday at 2pm - 15th july 2010 at 2am" "Previous Year at 2am - Tommorow at 14:30" "18th july 2010 at 2:45pm" Stuck it on CodePlex as maybe someone else will find it useful. Check it out: http://timestamper.codeplex.com/ A: I've checked several frameworks and Python's ParseDateTime worked the best. It can be used from .NET using IronPython. If anyone's interested in a full sample project, comment on the answer and I'll try to create one. 
EDIT As requested, here is a simple project that you can use with the library: http://www.assembla.com/code/relativedateparser/subversion/nodes Try the following usage cases, for example: * *August 25th, 2008 *25 Aug 2008 *Aug 25 5pm *5pm August 25 *next saturday *tomorrow *next thursday at 4pm *at 4pm *eod *tomorrow eod *eod tuesday *eoy *eom *in 5 minutes *5 minutes from now *5 hours before now *2 hours before noon *2 days from tomorrow A: I'm not aware of one, but it sounded like a cool problem, so here's my whack at it (VB.NET):
Private Function ConvertDateTimeToStringRelativeToNow(ByVal d As DateTime) As String
    Dim diff As TimeSpan = DateTime.Now().Subtract(d)
    If diff.Duration.TotalMinutes < 1 Then Return "Now"
    Dim str As String
    If diff.Duration.TotalDays > 365 Then
        str = CInt(diff.Duration.TotalDays / 365).ToString() & " years"
    ElseIf diff.Duration.TotalDays > 30 Then
        str = CInt(diff.Duration.TotalDays / 30).ToString() & " months"
    ElseIf diff.Duration.TotalHours > 24 Then
        str = CInt(diff.Duration.TotalHours / 24) & " days"
    ElseIf diff.Duration.TotalMinutes > 60 Then
        str = CInt(diff.Duration.TotalMinutes / 60) & " hours"
    Else
        str = CInt(diff.Duration.TotalMinutes).ToString() & " minutes"
    End If
    'Drop the plural "s" for exactly one unit ("1 day", not "1 days")
    If str.StartsWith("1 ") Then str = str.Substring(0, str.Length - 1)
    If diff.TotalDays > 0 Then
        str &= " ago"
    Else
        str &= " from now"
    End If
    Return str
End Function
It's really not as sophisticated as ones that already exist, but it works alright I guess. Could be a nice extension method. A: We developed exactly what you are looking for on an internal project. We are thinking of making this public if there is sufficient need for it. Take a look at this blog for more details: http://precisionsoftwaredesign.com/blog.php. Feel free to contact me if you are interested: [email protected] This library is now a SourceForge project. The page is at: http://www.SourceForge.net/p/naturaldate The assembly is in the downloads section, and the source is available with Mercurial. A: @ Burton: I think he meant the other way, at least from the example on the linked page: Chronic.parse('tomorrow') #=> Mon Aug 28 12:00:00 PDT 2006 Chronic.parse('monday', :context => :past) #=> Mon Aug 21 12:00:00 PDT 2006 Chronic.parse('this tuesday 5:00') #=> Tue Aug 29 17:00:00 PDT 2006 I thought I would take a stab at it too until I realized! (nice implementation though)
{ "language": "en", "url": "https://stackoverflow.com/questions/23689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: .NET Security Policy change by standard users? The .NET Security Policy can be changed from a script by using CasPol.exe. Say I will be distributing an application to several users on a local network. Most of those users will be unprivileged, standard accounts, so they will not have the necessary permissions for the relevant command. I think I shall be looking into domain logon scripts. Are there any alternative scenarios? Any solutions for networks without a domain? Edit: I'm bound to use Framework version 2.0 A: The latest version of .Net 3.5 SP1 now allows you to run managed executables over a network share without using CasPol. See this post
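For the .NET 2.0 case the question is bound to, the CasPol call being scripted typically looks something like this - a hedged sketch where the share path and group name are placeholders. It needs administrative rights, which is exactly the hurdle the question describes, hence the logon-script idea:

rem Grant FullTrust to assemblies loaded from \\server\apps, machine-wide.
rem 1.2 is the LocalIntranet code group; -q suppresses the confirmation prompt.
caspol -q -m -ag 1.2 -url "file://\\server\apps\*" FullTrust -n "AppsShare"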
{ "language": "en", "url": "https://stackoverflow.com/questions/23713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Running Apache alongside another web server? Has anyone had any success running two different web servers -- such as Apache and CherryPy -- alongside each other on the same machine? I am experimenting with other web servers right now, and I'd like to see if I can do my experiments while keeping my other sites up and running. You could say that this isn't so much a specific-software question as it is a general networking question. * *I know it's possible to run two web servers on different ports; but is there any way to configure them so that they can run on the same port (ie, they both run on port 80)? *The web servers would not be serving files from the same domains. For example, Apache might serve up documents from foo.domain.com, and the other web server would serve from bar.domain.com. I do know that this is not an ideal configuration. I'd just like to see if it can be done before I go sprinting down the rabbit hole. :) A: You can't have two processes bound to the same port on the same IP address. You can add another IP address to the box and have each server listen on one. Another option is to proxy pass one server to the other. With Apache, you could do something like:
NameVirtualHost *
<VirtualHost *>
    ServerName other.site.com
    # assumes CherryPy listens on port 8080
    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
That's a pretty quick example, but you can always check the ProxyPass documentation. Remember though, the application being proxied to will get 127.0.0.1 in its logs instead of the requester's IP address. Some web servers (Apache does with mod_rpaf) can substitute the X-Forwarded-For header in place of the wrong IP address. Possibly CherryPy has this? A: Your best bet would be putting Apache httpd in front on port 80 and relaying requests meant for other servers through Apache by using modules. The most popular scenario would be Tomcat behind Apache, where you'll be able to run both PHP and JSP applications. I'm not familiar with CherryPy, so I can only suggest you look for an Apache module for CherryPy. Edit: This looks promising: http://tools.cherrypy.org/wiki/BehindApache A: As an alternative to Ishmaeel's correct answer, if you have a server with 2 network cards, you could have each server answer requests on different IP addresses.
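A hedged sketch of the two-IP approach from the last answer (the addresses are examples): bind Apache to one address and CherryPy to the other, both on port 80:

# Apache (httpd.conf): listen only on the first address
Listen 192.0.2.10:80

# CherryPy (INI-style config file): listen on the second address
[global]
server.socket_host = "192.0.2.11"
server.socket_port = 80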
{ "language": "en", "url": "https://stackoverflow.com/questions/23715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What is the best way to set up memcached on CentOS to work with Apache and PHP What is the simplest way to install memcached on CentOS for someone new to the world of Linux? What is the best way to enable it for Apache and PHP? A: Unless Apache and PHP have some option to utilize memcached for internal workings (of which I am unaware), you typically don't "enable" it for such apps. Instead, you would get a client library to use memcached from within your application, then start up memcached on whatever servers you want to provide memory with, then just use the client library API to store and retrieve cached data across multiple servers. A: The easiest way is to find a reliable source of the RPMs needed to install memcached and memcached for PHP. There is a blog post which addresses this concern: http://blog.gahooa.com/2009/02/08/update-on-fedora-vs-redhat-enterprise-linux/ We have been using EPEL (Extra Packages for Enterprise Linux) for exactly that on RedHat Enterprise 5.3. I believe it is a stated goal of EPEL to support CentOS. http://fedoraproject.org/wiki/EPEL Essentially, it is a YUM repository which contains lots of extra packages from Fedora that were compiled for RHEL. Super easy to use.
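Turning the EPEL route into concrete commands - a sketch; the exact epel-release file name changes over time, and the package names assume CentOS 5-era repositories:

# Add the EPEL repository, then install the daemon and the PHP client library:
rpm -Uvh http://download.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
yum install memcached php-pecl-memcache
service memcached start
chkconfig memcached on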
{ "language": "en", "url": "https://stackoverflow.com/questions/23726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Do you have any tips to improve ReSharper and/or Visual Studio performance? I'm using Visual Studio 2008 and ReSharper 4 and it's kind of slow. My machine has 2 GB of RAM, a dual-core processor and a 7200 rpm hard disk. I know more RAM and a faster hard disk could improve performance, but do you have any tips to improve ReSharper/Visual Studio performance? A: Turn off the annoying RSS reader * *Tools, Options, Environment, Startup Turn off all the animations * *Tools, Options, Environment, Animate Environment Tools Install the most recent Service Pack Clean out your WebCache * *AppData\Local\Microsoft\WebSiteCache A: Visual Studio optimisations: http://stackoverflow.com/questions/8440/visual-studio-optimizations#8453 Edit: The above SO post has unfortunately been deleted. Microsoft have provided some tips that essentially boil down to turning off features you don't need and reducing solution size by splitting up a solution into smaller self-contained solutions where appropriate. JetBrains has also provided an article that lists a whole range of tweaks you can make to both ReSharper settings and Visual Studio settings to improve performance. A: * *Disable unused extensions under "Tools - Extensions". *Disable "Track changes" in "Tools - Options - Text Editor". *Disable "Rich client visual experience" in the "Options - Environment". *Disable IntelliTrace (Ultimate edition only) - in the "Tools - Options - IntelliTrace - General" *Disable "Track Active items" - "Tools - Options - Projects and Solutions - Track Active Item in Solution Explorer". *Disable ReSharper IntelliSense in the ReSharper options *Disable ReSharper "Analyse errors in whole solution" *Disable editing enhancements in ReSharper - in ReSharper options turn off all the boxes under "Options - Environment - Editor" This is a snippet from this blog post A: Having too many projects within your solution also appears to be a factor when it comes to performance. I have no real evidence of this but from my experience, fewer projects equates to better performance. If consolidating projects is not an option then create an alternate solution file so you can add only the existing projects that are relevant to the work you are doing. A: I'm having the exact same issue, and from the JetBrains site, it looks like they sort-of know about it but aren't admitting anything. Turning off solution-wide analysis does seem to help quite a bit. A: Mainly, open Visual Studio options (Tools | Options) and configure the preferences as follows: Environment | General: disable Automatically adjust visual experience based on client performance, disable Enable rich client visual experience, enable Use hardware graphics acceleration if available. These adjustments will reduce UI lag and improve overall performance. Please read this official tutorial: Speed up ReSharper (and Visual Studio)
{ "language": "en", "url": "https://stackoverflow.com/questions/23737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Why is Peer-to-Peer programming a hard topic to obtain good research for? After reading a bit more about how Gnutella and other P2P networks function, I wanted to start my own peer-to-peer system. I went in thinking that I would find plenty of tutorials and language-agnostic guidelines which could be applied; however, I was met with vague, simplistic overviews. I could only find very small, precise P2P code which didn't do much more than use client/server architecture on all users, which wasn't really what I was looking for. I wanted something like Gnutella, but there don't seem to be any articles out in the open for joining the network. A: RFC 4981, with its huge bibliography, could be a very good starting point. A: You might have better success researching BitTorrent. I believe that the creator has written some papers, and it seems others have as well. BitTyrant Bittorent.org, see the developers section A: I don't know what platform you are trying to use, but here is a decent article on the subject for .NET. A: I had to write a basic Gnutella client in C# using Web Services and I think the class notes on the P2P stuff are still available here and here. A: I've found the TheoryOrg Unofficial BitTorrent Specification to be the best online source for BitTorrent information. Also, the MonoTorrent code is fairly simple and easy to understand. There's also a project called "GCT" which implements JGroups-style P2P for LAN/multicast environments, and its code is similarly easy to understand (if a bit buggy). A: You can try to read Gnutella2 and try to implement messaging. For reading conceptual material you can read Distributed Systems by Andrew Tanenbaum. A: You can have a look at JXTA. Its intention was to be a generic, platform-agnostic P2P framework, in contrast to other P2P implementations which are usually for a very specific purpose (such as Gnutella). Don't be fooled by its Java appearance; there are bindings available for C/C++/C#, but the core protocols are implemented in XML which should translate to any language. You can also download a free book here.
{ "language": "en", "url": "https://stackoverflow.com/questions/23738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you find a needle in a haystack? When implementing a needle search of a haystack in an object-oriented way, you essentially have three alternatives: 1. needle.find(haystack) 2. haystack.find(needle) 3. searcher.find(needle, haystack) Which do you prefer, and why? I know some people prefer the second alternative because it avoids introducing a third object. However, I can't help feeling that the third approach is more conceptually "correct", at least if your goal is to model "the real world". In which cases do you think it is justified to introduce helper objects, such as the searcher in this example, and when should they be avoided? A: If both Needle and Haystack are DAOs, then options 1 and 2 are out of the question. The reason for this is that DAOs should only be responsible for holding properties of the real-world objects they are modeling, and only have getter and setter methods (or just direct property access). This makes serializing the DAOs to a file, or creating methods for a generic compare / generic copy, easier to write, as the code wouldn't contain a whole bunch of "if" statements to skip these helper methods. This just leaves option 3, which most would agree to be correct behaviour. Option 3 has a few advantages, with the biggest advantage being unit testing. This is because both Needle and Haystack objects can be easily mocked up now, whereas if option 1 or 2 were used, the internal state of either Needle or Haystack would have to be modified before a search could be performed. Secondly, with the searcher now in a separate class, all search code can be held in one place, including common search code. Whereas if the search code was put into the DAO, common search code would either be stored in a complicated class hierarchy, or with a Searcher Helper class anyway. A: Of the three, I prefer option #3. The Single Responsibility Principle makes me not want to put searching capabilities on my DTOs or models. Their responsibility is to be data, not to find themselves, nor should needles need to know about haystacks, nor haystacks know about needles. For what it's worth, I think it takes most OO practitioners a LONG time to understand why #3 is the best choice. I did OO for a decade, probably, before I really grokked it. @wilhelmtell, C++ is one of the very few languages with template specialization that make such a system actually work. For most languages, a general-purpose "find" method would be a HORRIBLE idea. A: Usually actions should be applied to what you are doing the action on... in this case the haystack, so I think option 2 is the most appropriate. You also have a fourth alternative that I think would be better than alternative 3: haystack.find(needle, searcher) In this case, it allows you to provide the manner in which you want to search as part of the action, and so you can keep the action with the object that is being operated on. A: This entirely depends on what varies and what stays the same. For example, I am working on a (non-OOP) framework where the find algorithm is different depending on both the type of the needle and the type of the haystack. Apart from the fact that this would require double-dispatch in an object-oriented environment, it also means that it isn't meaningful to write either needle.find(haystack) or haystack.find(needle). On the other hand, your application could happily delegate finding to either of the two classes, or stick with one algorithm altogether, in which case the decision is arbitrary.
In that case, I would prefer the haystack.find(needle) way because it seems more logical to apply the finding to the haystack. A: When implementing a needle search of a haystack in an object-oriented way, you essentially have three alternatives: * *needle.find(haystack) *haystack.find(needle) *searcher.find(needle, haystack) Which do you prefer, and why? Correct me if I'm wrong, but in all three examples you already have a reference to the needle you're looking for, so isn't this kinda like looking for your glasses when they're sitting on your nose? :p Pun aside, I think it really depends on what you consider the responsibility of the haystack to be within the given domain. Do we just care about it in the sense of being a thing which contains needles (a collection, essentially)? Then haystack.find(needlePredicate) is fine. Otherwise, farmBoy.find(predicate, haystack) might be more appropriate. A: To quote the great authors of SICP, Programs must be written for people to read, and only incidentally for machines to execute I prefer to have both methods 1 and 2 at hand. Using ruby as an example, it comes with .include? which is used like this haystack.include? needle => returns true if the haystack includes the needle Sometimes though, purely for readability reasons, I want to flip it round. Ruby doesn't come with an in? method, but it's a one-liner, so I often do this: needle.in? haystack => exactly the same as above If it's "more important" to emphasise the haystack, or the operation of searching, I prefer to write include?. Often though, neither the haystack or the search is really what you care about, just that the object is present - in this case I find in? better conveys the meaning of the program. A: It depends on your requirements. For instance, if you don't care about the searcher's properties (e.g. searcher strength, vision, etc.), then I would say haystack.find(needle) would be the cleanest solution. But, if you do care about the searcher's properties (or any other properties for that matter), I would inject an ISearcher interface into either the haystack constructor or the function to facilitate that. This supports both object-oriented design (a haystack has needles) and inversion of control / dependency injection (makes it easier to unit test the "find" function). A: I can think of situations where either of the first two flavours makes sense: * *If the needle needs pre-processing, like in the Knuth-Morris-Pratt algorithm, needle.findIn(haystack) (or pattern.findIn(text)) makes sense, because the needle object holds the intermediate tables created for the algorithm to work effectively *If the haystack needs pre-processing, like say in a trie, the haystack.find(needle) (or words.hasAWordWithPrefix(prefix)) works better. In both the above cases, one of needle or haystack is aware of the search. Also, they both are aware of each other. If you want the needle and haystack not to be aware of each other or of the search, searcher.find(needle, haystack) would be appropriate. A: Easy: Burn the haystack! Afterward, only the needle will remain. Also, you could try magnets. A harder question: How do you find one particular needle in a pool of needles? Answer: thread each one and attach the other end of each strand of thread to a sorted index (i.e. pointers) A: A mix of 2 and 3, really. Some haystacks don't have a specialized search strategy; an example of this is an array. The only way to find something is to start at the beginning and test each item until you find the one you want.
For this kind of thing, a free function is probably best (like C++). Some haystacks can have a specialized search strategy imposed on them. An array can be sorted, allowing you to use binary searching, for example. A free function (or pair of free functions, e.g. sort and binary_search) is probably the best option. Some haystacks have an integrated search strategy as part of their implementation; associative containers (hashed or ordered sets and maps) all do, for instance. In this case, finding is probably an essential lookup method, so it should probably be a method, i.e. haystack.find(needle). A: The answer to this question should actually depend on the domain the solution is implemented for. If you happen to simulate a physical search in a physical haystack, you might have the classes * *Space *Straw *Needle *Seeker Space knows which objects are located at which coordinates and implements the laws of nature (converts energy, detects collisions, etc.). Needle and Straw are located in Space and react to forces. Seeker interacts with space: moves a hand, applies a magnetic field, burns hay, applies x-rays, looks for the needle... Thus seeker.find(needle, space) or seeker.find(needle, space, strategy) The haystack just happens to be in the space where you are looking for the needle. When you abstract away space as a kind of virtual machine (think of: the matrix) you could get the above with haystack instead of space (solution 3/3b): seeker.find(needle, haystack) or seeker.find(needle, haystack, strategy) But the matrix was the domain, which should only be replaced by haystack if your needle couldn't be anywhere else. And then again, it was just an analogy. Interestingly, this opens the mind to totally new directions: 1. Why did you lose the needle in the first place? Can you change the process, so you wouldn't lose it? 2. Do you have to find the lost needle or can you simply get another one and forget about the first? (Then it would be nice if the needle dissolved after a while) 3. If you lose your needles regularly and you need to find them again then you might want to * *make needles that are able to find themselves, e.g. they regularly ask themselves: Am I lost? If the answer is yes, they send their GPS-calculated position to somebody or start beeping or whatever: needle.find(space) or needle.find(haystack) (solution 1) *install a haystack with a camera on each straw; afterwards you can ask the haystack hive mind if it saw the needle lately: haystack.find(needle) (solution 2) *attach RFID tags to your needles, so you can easily triangulate them That is all just to say that in your implementation you made the needle, the haystack, and most of the time the matrix, on some level. So decide according to your domain: * *Is it the purpose of the haystack to contain needles? Then go for solution 2. *Is it natural that the needle gets lost just anywhere? Then go for solution 1. *Does the needle get lost in the haystack by accident? Then go for solution 3. (or consider another recovery strategy) A: haystack.find(needle), but there should be a searcher field. I know that dependency injection is all the rage, so it doesn't surprise me that @Mike Stone's haystack.find(needle, searcher) has been accepted. But I disagree: the choice of what searcher is used seems to me a decision for the haystack. Consider two ISearchers: MagneticSearcher iterates over the volume, moving and stepping the magnet in a manner consistent with the magnet's strength.
QuickSortSearcher divides the stack in two until the needle is evident in one of the subpiles. The proper choice of searcher may depend upon how large the haystack is (relative to the magnetic field, for instance), how the needle got into the haystack (i.e., is the needle's position truly random or is it biased?), etc. If you have haystack.find(needle, searcher), you're saying "the choice of which is the best search strategy is best done outside the context of the haystack." I don't think that's likely to be correct. I think it's more likely that "haystacks know how best to search themselves." Add a setter and you can still manually inject the searcher if you need to override or for testing. A: There is another alternative, which is the approach utilized by the STL of C++: find(haystack.begin(), haystack.end(), needle) I think it's a great example of C++ shouting "in your face!" to OOP. The idea is that OOP is not a silver bullet of any kind; sometimes things are best described in terms of actions, sometimes in terms of objects, sometimes neither and sometimes both. Bjarne Stroustrup said in TC++PL that when you design a system you should strive to reflect reality under the constraints of effective and efficient code. For me, this means you should never follow anything blindly. Think about the things at hand (haystack, needle) and the context we're in (searching, that's what the expression is about). If the emphasis is on the searching, then use an algorithm (action) that emphasizes searching (i.e. is flexible enough to fit haystacks, oceans, deserts, linked lists). If the emphasis is on the haystack, encapsulate the find method inside the haystack object, and so on. That said, sometimes you're in doubt and have a hard time making a choice. In this case, be object-oriented. If you change your mind later, I think it is easier to extract an action from an object than to split an action into objects and classes. Follow these guidelines, and your code will be clearer and, well, more beautiful. A: I would say that option 1 is completely out. The code should read in a way that tells you what it does. Option 1 makes me think that this needle is going to go find me a haystack. Option 2 looks good if a haystack is meant to contain needles. ListCollections are always going to contain ListItems, so doing collection.find(item) is natural and expressive. I think the introduction of a helper object is appropriate when: * *You don't control the implementation of the objects in question IE: search.find(ObsecureOSObject, file) *There isn't a regular or sensible relationship between the objects IE: nameMatcher.find(houses,trees.name) A: I am with Brad on this one. The more I work on immensely complex systems, the more I see the need to truly decouple objects. He's right. It's obvious that a needle shouldn't know anything about a haystack, so 1 is definitely out. But, a haystack should know nothing about a needle. If I were modeling a haystack, I might implement it as a collection -- but as a collection of hay or straw -- not a collection of needles! However, I would take into consideration that stuff does get lost in a haystack, but I know nothing about what exactly that stuff is. I think it's better to not make the haystack look for items in itself (how smart is a haystack, anyway?). The right approach to me is to have the haystack present a collection of things that are in it, but are not straw or hay or whatever gives a haystack its essence.
class Haystack : ISearchableThingsOnAFarm { ICollection<Hay> myHay; ICollection<IStuffSmallEnoughToBeLostInAHaystack> stuffLostInMe; public ICollection<Hay> Hay { get { return myHay; } } public ICollection<IStuffSmallEnoughToBeLostInAHaystack> LostAndFound { get { return stuffLostInMe; } } } class Needle : IStuffSmallEnoughToBeLostInAHaystack { } class Farmer { void Search(Haystack haystack, IStuffSmallEnoughToBeLostInAHaystack itemToFind) { } } There's actually more I was going to type and abstract into interfaces and then I realized how crazy I was getting. Felt like I was in a CS class in college... :P You get the idea. I think going as loosely coupled as possible is a good thing, but maybe I was getting a bit carried away! :) A: Personally, I like the second method. My reasoning is that the major APIs I have worked with use this approach, and I find it makes the most sense. If you have a list of things (haystack) you would search for (find()) the needle. A: @Peter Meyer You get the idea. I think going as loosely coupled as possible is a good thing, but maybe I was getting a bit carried away! :) Errr... yeah... I think the IStuffSmallEnoughToBeLostInAHaystack kind of is a red flag :-) A: You also have a fourth alternative that I think would be better than alternative 3: haystack.find(needle, searcher) I see your point, but what if searcher implements a searching interface that allows for searching other types of objects than haystacks, and finding other things than needles in them? The interface could also be implemented with different algorithms, for example: binary_searcher.find(needle, haystack) vision_searcher.find(pitchfork, haystack) brute_force_searcher.find(money, wallet) But, as others have already pointed out, I also think this is only helpful if you actually have multiple search algorithms or multiple searchable or findable classes. If not, I agree haystack.find(needle) is better because of its simplicity, so I am willing to sacrifice some "correctness" for it. A: haystack.magnet().filter(needle); A: The haystack shouldn't know about the needle, and the needle shouldn't know about the haystack. The searcher needs to know about both, but whether or not the haystack should know how to search itself is the real point in contention. So I'd go with a mix of 2 and 3; the haystack should be able to tell someone else how to search it, and the searcher should be able to use that information to search the haystack. A: class Haystack {//whatever }; class Needle {//whatever }; class Searcher { virtual void find() = 0; }; class HaystackSearcher : public Searcher { public: HaystackSearcher(Haystack, Needle); virtual void find(); }; Haystack H; Needle N; HaystackSearcher HS(H, N); HS.find(); A: Is the code trying to find a specific needle or just any needle? It sounds like a stupid question, but it changes the problem. Looking for a specific needle, the code in the question makes sense. Looking for any needle, it would be more like needle = haystack.findNeedle() or needle = searcher.findNeedle(haystack) Either way, I prefer having a searcher class. A haystack doesn't know how to search. From a CS perspective it is just a data store with LOTS of crap that you don't want.
A: A haystack can contain stuffs; one type of stuff is a needle. A finder is something that is responsible for searching for stuff. A finder can accept a pile of stuffs as the source of where to find things; a finder can also accept a stuff description of the thing it needs to find. So, preferably, for a flexible solution you would do something like: IStuff interface Haystack = IList<IStuff> Needle : IStuff Finder.Find(IStuff stuffToLookFor, IList<IStuff> stuffsToLookIn) In this case, your solution will not get tied to just needle and haystack but is usable for any type that implements the interface, so if you want to find a Fish in the Ocean, you can. var results = Finder.Find(fish, ocean) A: If you have a reference to the needle object, why do you search for it? :) The problem domain and use-cases tell you that you do not need the exact position of the needle in the haystack (like what you could get from list.indexOf(element)), you just need a needle. And you do not have it yet. So my answer is something like this Needle needle = (Needle)haystack.searchByName("needle"); or Needle needle = (Needle)haystack.searchWithFilter(new Filter(){ public boolean isWhatYouNeed(Object obj) { return obj instanceof Needle; } }); or Needle needle = (Needle)haystack.searchByPattern(Size.SMALL, Sharpness.SHARP, Material.METAL); I agree that there are more possible solutions which are based on different search strategies, so they introduce a searcher. There were enough comments on this, so I do not pay attention to it here. My point is that the solutions above forget about the use-cases - what is the point of searching for something if you already have a reference to it? In the most natural use-case you do not have a needle yet, so you do not use the variable needle. A: Brad Wilson points out that objects should have a single responsibility. In the extreme case, an object has one responsibility and no state. Then it can become... a function. needle = findNeedleIn(haystack); Or you could write it like this: SynchronizedHaystackSearcherProxyFactory proxyFactory = SynchronizedHaystackSearcherProxyFactory.getInstance(); StrategyBasedHaystackSearcher searcher = new BasicStrategyBasedHaystackSearcher( NeedleSeekingStrategies.getMethodicalInstance()); SynchronizedHaystackSearcherProxy proxy = proxyFactory.createSynchronizedHaystackSearcherProxy(searcher); SearchableHaystackAdapter searchableHaystack = new SearchableHaystackAdapter(haystack); FindableSearchResultObject foundObject = null; while (!HaystackSearcherUtil.isNeedleObject(foundObject)) { try { foundObject = proxy.find(searchableHaystack); } catch (GruesomeInjuryException exc) { returnPitchforkToShed(); // sigh, i hate it when this happens HaystackSearcherUtil.cleanUp(hay); // XXX fixme not really thread-safe, // but we can't just leave this mess HaystackSearcherUtil.cleanup(exc.getGruesomeMess()); // bug 510000884 throw exc; // caller will catch this and get us to a hospital, // if it's not already too late } } return (Needle) BarnyardObjectProtocolUtil.createSynchronizedFindableSearchResultObjectProxyAdapterUnwrapperForToolInterfaceName(SimpleToolInterfaces.NEEDLE_INTERFACE_NAME).adapt(foundObject.getAdaptable()); A: Definitely the third, IMHO. The question of a needle in a haystack is an example of an attempt to find one object in a large collection of others, which indicates it will need a complex search algorithm (possibly involving magnets or (more likely) child processes) and it doesn't make much sense for a haystack to be expected to do thread management or implement complex searches.
A searching object, however, is dedicated to searching and can be expected to know how to manage child threads for a fast binary search, or use properties of the searched-for element to narrow the area (ie: a magnet to find ferrous items). A: Another possible way would be to create two interfaces: one for the searchable object (e.g. the haystack) and one for the to-be-searched object (e.g. the needle). So, it could be done this way: public interface IToBeSearched { } public interface ISearchable { void Find(IToBeSearched a); } class Needle : IToBeSearched { } class Haystack : ISearchable { public void Find(IToBeSearched needle) { // Here goes the original coding of the find function } } A: haystack.iterator.findFirst(/* pass here a predicate returning true if its argument is a needle that we want */) The iterator can be an interface to whatever immutable collection, with collections having a common findFirst(fun: T => Boolean) method doing the job. As long as the haystack is immutable, there is no need to hide any useful data from "outside". And, of course, it's not good to tie the implementation of a custom non-trivial collection together with whatever else the haystack happens to do. Divide and conquer, okay? A: In most cases I prefer to be able to perform simple helper operations like this on the core object, but depending on the language, the object in question may not have a sufficient or sensible method available. Even in languages like JavaScript that allow you to augment/extend built-in objects, I find it can be both convenient and problematic (e.g. if a future version of the language introduces a more efficient method that gets overridden by a custom one). This article does a good job of outlining such scenarios.
{ "language": "en", "url": "https://stackoverflow.com/questions/23755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: Colorizing images in Java I'm working on some code to colorize an image in Java. Basically what I'd like to do is something along the lines of GIMP's colorize command, so that if I have a BufferedImage and a Color, I can colorize the Image with the given color. Anyone got any ideas? My current best guess at doing something like this is to get the rgb value of each pixel in the BufferedImage and add the RGB value of the Color to it with some scaling factor. A: Let Y = 0.3*R + 0.59*G + 0.11*B for each pixel in the image, then set them to be ((R1+Y)/2,(G1+Y)/2,(B1+Y)/2) if (R1,G1,B1) is what you are colorizing with. A: I have never used GIMP's colorize command. However, if you're getting the RGB value of each pixel and adding an RGB value to it, you should really use a LookupOp. Here is some code that I wrote to apply a BufferedImageOp to a BufferedImage. Using Nick's example from above, here's how I would do it. Let Y = 0.3*R + 0.59*G + 0.11*B for each pixel, where (R1,G1,B1) is what you are colorizing with protected LookupOp createColorizeOp(short R1, short G1, short B1) { short[] alpha = new short[256]; short[] red = new short[256]; short[] green = new short[256]; short[] blue = new short[256]; for (short i = 0; i < 256; i++) { alpha[i] = i; red[i] = (short) ((R1 + i * 0.3) / 2); green[i] = (short) ((G1 + i * 0.59) / 2); blue[i] = (short) ((B1 + i * 0.11) / 2); } short[][] data = new short[][] { red, green, blue, alpha }; LookupTable lookupTable = new ShortLookupTable(0, data); return new LookupOp(lookupTable, null); } It creates a BufferedImageOp that applies the colorize transform to each channel through a lookup table. It's simple to call, too. BufferedImageOp colorizeFilter = createColorizeOp(R1, G1, B1); BufferedImage targetImage = colorizeFilter.filter(sourceImage, null); If this is not what you're looking for, I suggest you look more into BufferedImageOps. This would also be more efficient, since you would not need to redo the calculations for each image - the same LookupOp can be reused on different BufferedImages as long as the R1,G1,B1 values don't change. A: This works exactly like the Colorize function in GIMP and it preserves the transparency.
I've also added a few things like Contrast and Brightness, Hue, Sat, and Luminosity - 0circle0 Google Me --> ' Sprite Creator 3' import java.awt.Color; import java.awt.image.BufferedImage; public class Colorizer { public static final int MAX_COLOR = 256; public static final float LUMINANCE_RED = 0.2126f; public static final float LUMINANCE_GREEN = 0.7152f; public static final float LUMINANCE_BLUE = 0.0722f; double hue = 180; double saturation = 50; double lightness = 0; int[] lum_red_lookup; int[] lum_green_lookup; int[] lum_blue_lookup; int[] final_red_lookup; int[] final_green_lookup; int[] final_blue_lookup; public Colorizer() { doInit(); } public void doHSB(double t_hue, double t_sat, double t_bri, BufferedImage image) { hue = t_hue; saturation = t_sat; lightness = t_bri; doInit(); doColorize(image); } private void doInit() { lum_red_lookup = new int[MAX_COLOR]; lum_green_lookup = new int[MAX_COLOR]; lum_blue_lookup = new int[MAX_COLOR]; double temp_hue = hue / 360f; double temp_sat = saturation / 100f; final_red_lookup = new int[MAX_COLOR]; final_green_lookup = new int[MAX_COLOR]; final_blue_lookup = new int[MAX_COLOR]; for (int i = 0; i < MAX_COLOR; ++i) { lum_red_lookup[i] = (int) (i * LUMINANCE_RED); lum_green_lookup[i] = (int) (i * LUMINANCE_GREEN); lum_blue_lookup[i] = (int) (i * LUMINANCE_BLUE); double temp_light = (double) i / 255f; Color color = new Color(Color.HSBtoRGB((float) temp_hue, (float) temp_sat, (float) temp_light)); final_red_lookup[i] = (int) (color.getRed()); final_green_lookup[i] = (int) (color.getGreen()); final_blue_lookup[i] = (int) (color.getBlue()); } } public void doColorize(BufferedImage image) { int height = image.getHeight(); int width; while (height-- != 0) { width = image.getWidth(); while (width-- != 0) { Color color = new Color(image.getRGB(width, height), true); int lum = lum_red_lookup[color.getRed()] + lum_green_lookup[color.getGreen()] + lum_blue_lookup[color.getBlue()]; if (lightness > 0) { lum = (int) ((double) lum * (100f - lightness) / 100f); lum += 255f - (100f - lightness) * 255f / 100f; } else if (lightness < 0) { lum = (int) (((double) lum * (lightness + 100f)) / 100f); } Color final_color = new Color(final_red_lookup[lum], final_green_lookup[lum], final_blue_lookup[lum], color.getAlpha()); image.setRGB(width, height, final_color.getRGB()); } } } public BufferedImage changeContrast(BufferedImage inImage, float increasingFactor) { int w = inImage.getWidth(); int h = inImage.getHeight(); BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { Color color = new Color(inImage.getRGB(i, j), true); int r, g, b, a; float fr, fg, fb; r = color.getRed(); fr = (r - 128) * increasingFactor + 128; r = (int) fr; r = keep256(r); g = color.getGreen(); fg = (g - 128) * increasingFactor + 128; g = (int) fg; g = keep256(g); b = color.getBlue(); fb = (b - 128) * increasingFactor + 128; b = (int) fb; b = keep256(b); a = color.getAlpha(); outImage.setRGB(i, j, new Color(r, g, b, a).getRGB()); } } return outImage; } public BufferedImage changeGreen(BufferedImage inImage, int increasingFactor) { int w = inImage.getWidth(); int h = inImage.getHeight(); BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { Color color = new Color(inImage.getRGB(i, j), true); int r, g, b, a; r = color.getRed(); g = keep256(color.getGreen() + increasingFactor); b = color.getBlue(); a = color.getAlpha(); 
outImage.setRGB(i, j, new Color(r, g, b, a).getRGB()); } } return outImage; } public BufferedImage changeBlue(BufferedImage inImage, int increasingFactor) { int w = inImage.getWidth(); int h = inImage.getHeight(); BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { Color color = new Color(inImage.getRGB(i, j), true); int r, g, b, a; r = color.getRed(); g = color.getGreen(); b = keep256(color.getBlue() + increasingFactor); a = color.getAlpha(); outImage.setRGB(i, j, new Color(r, g, b, a).getRGB()); } } return outImage; } public BufferedImage changeRed(BufferedImage inImage, int increasingFactor) { int w = inImage.getWidth(); int h = inImage.getHeight(); BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { Color color = new Color(inImage.getRGB(i, j), true); int r, g, b, a; r = keep256(color.getRed() + increasingFactor); g = color.getGreen(); b = color.getBlue(); a = color.getAlpha(); outImage.setRGB(i, j, new Color(r, g, b, a).getRGB()); } } return outImage; } public BufferedImage changeBrightness(BufferedImage inImage, int increasingFactor) { int w = inImage.getWidth(); int h = inImage.getHeight(); BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB); for (int i = 0; i < w; i++) { for (int j = 0; j < h; j++) { Color color = new Color(inImage.getRGB(i, j), true); int r, g, b, a; r = keep256(color.getRed() + increasingFactor); g = keep256(color.getGreen() + increasingFactor); b = keep256(color.getBlue() + increasingFactor); a = color.getAlpha(); outImage.setRGB(i, j, new Color(r, g, b, a).getRGB()); } } return outImage; } public int keep256(int i) { if (i <= 255 && i >= 0) return i; if (i > 255) return 255; return 0; } } A: I wanted to do the exact same thing as the question poster wanted to do but the above conversion did not remove colors like the GIMP does (ie green with a red overlay made an unpleasant brown color etc). So I downloaded the source code for GIMP and converted the c code over to Java. Posting it in this thread just in case anyone else wants to do the same (since it is the first thread that comes up in Google). The conversion still changes the white color when it should not, it's probably a casting issue from double to int. The class converts a BufferedImage in-place. 
public class Colorize { public static final int MAX_COLOR = 256; public static final float LUMINANCE_RED = 0.2126f; public static final float LUMINANCE_GREEN = 0.7152f; public static final float LUMINANCE_BLUE = 0.0722f; double hue = 180; double saturation = 50; double lightness = 0; int [] lum_red_lookup; int [] lum_green_lookup; int [] lum_blue_lookup; int [] final_red_lookup; int [] final_green_lookup; int [] final_blue_lookup; public Colorize( int red, int green, int blue ) { doInit(); } public Colorize( double t_hue, double t_sat, double t_bri ) { hue = t_hue; saturation = t_sat; lightness = t_bri; doInit(); } public Colorize( double t_hue, double t_sat ) { hue = t_hue; saturation = t_sat; doInit(); } public Colorize( double t_hue ) { hue = t_hue; doInit(); } public Colorize() { doInit(); } private void doInit() { lum_red_lookup = new int [MAX_COLOR]; lum_green_lookup = new int [MAX_COLOR]; lum_blue_lookup = new int [MAX_COLOR]; double temp_hue = hue / 360f; double temp_sat = saturation / 100f; final_red_lookup = new int [MAX_COLOR]; final_green_lookup = new int [MAX_COLOR]; final_blue_lookup = new int [MAX_COLOR]; for( int i = 0; i < MAX_COLOR; ++i ) { lum_red_lookup [i] = ( int )( i * LUMINANCE_RED ); lum_green_lookup[i] = ( int )( i * LUMINANCE_GREEN ); lum_blue_lookup [i] = ( int )( i * LUMINANCE_BLUE ); double temp_light = (double)i / 255f; Color color = new Color( Color.HSBtoRGB( (float)temp_hue, (float)temp_sat, (float)temp_light ) ); final_red_lookup [i] = ( int )( color.getRed() ); final_green_lookup[i] = ( int )( color.getGreen() ); final_blue_lookup [i] = ( int )( color.getBlue() ); } } public void doColorize( BufferedImage image ) { int height = image.getHeight(); int width; while( height-- != 0 ) { width = image.getWidth(); while( width-- != 0 ) { Color color = new Color( image.getRGB( width, height ) ); int lum = lum_red_lookup [color.getRed ()] + lum_green_lookup[color.getGreen()] + lum_blue_lookup [color.getBlue ()]; if( lightness > 0 ) { lum = (int)((double)lum * (100f - lightness) / 100f); lum += 255f - (100f - lightness) * 255f / 100f; } else if( lightness < 0 ) { lum = (int)(((double)lum * (lightness + 100f)) / 100f); } Color final_color = new Color( final_red_lookup[lum], final_green_lookup[lum], final_blue_lookup[lum], color.getAlpha() ); image.setRGB( width, height, final_color.getRGB() ); } } } }
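For anyone who just wants the simple averaging approach from the first answer, without lookup tables, a direct per-pixel version might look like this - a sketch, assuming an image type with an alpha channel such as TYPE_INT_ARGB:

import java.awt.Color;
import java.awt.image.BufferedImage;

public class SimpleColorizer {
    // Tints every pixel toward the given color, weighting by the pixel's luminance
    public static void colorize(BufferedImage img, Color tint) {
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                Color p = new Color(img.getRGB(x, y), true);
                // Y = 0.3*R + 0.59*G + 0.11*B, the luminance of the original pixel
                int lum = (int) (0.3 * p.getRed() + 0.59 * p.getGreen() + 0.11 * p.getBlue());
                int r = (tint.getRed() + lum) / 2;
                int g = (tint.getGreen() + lum) / 2;
                int b = (tint.getBlue() + lum) / 2;
                img.setRGB(x, y, new Color(r, g, b, p.getAlpha()).getRGB());
            }
        }
    }
}

This is slower than the LookupOp version when processing many images, but it is easier to follow.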
{ "language": "en", "url": "https://stackoverflow.com/questions/23763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Effective strategy for leaving an audit trail/change history for DB applications? What are some strategies that people have had success with for maintaining a change history for data in a fairly complex database? One of the applications that I frequently use and develop for could really benefit from a more comprehensive way of tracking how records have changed over time. For instance, right now records can have a number of timestamp and modified user fields, but we currently don't have a scheme for logging multiple changes, for instance if an operation is rolled back. In a perfect world, it would be possible to reconstruct the record as it was after each save, etc. Some info on the DB: * *Needs to have the capacity to grow by thousands of records per week *50-60 Tables *Main revisioned tables may have several million records each *Reasonable amount of foreign keys and indexes set *Using PostgreSQL 8.x A: The only problem with using triggers is that they add to the performance overhead of any insert/update/delete. For higher scalability and performance, you would like to keep the database transaction to a minimum. Auditing via triggers increases the time required to do the transaction and, depending on the volume, may cause performance issues. Another way is to explore whether the database provides any way of mining the "Redo" logs, as is the case in Oracle. Redo logs are what the database uses to recreate the data in case it fails and has to recover. A: Similar to a trigger (or even with one), you can have every transaction fire a logging event asynchronously and have another process (or just thread) actually handle the logging. There would be many ways to implement this depending upon your application. I suggest having the application fire the event so that it does not cause unnecessary load on your first transaction (which sometimes leads to locks from cascading audit logs). In addition, you may be able to improve performance of the primary database by keeping the audit database in a separate location. A: One strategy you could use is MVCC, Multiversion Concurrency Control. In this scheme, you never do updates to any of your tables, you just do inserts, maintaining version numbers for each record. This has the advantage of providing an exact snapshot from any point in time, and it also completely sidesteps the update lock problems that plague many databases. But it makes for a huge database, and all selects require an extra clause to select the current version of a record. A: I use SQL Server, not PostgreSQL, so I'm not sure if this will work for you or not, but Pop Rivett had a great article on creating an audit trail here: Pop Rivett's SQL Server FAQ No.5: Pop on the Audit Trail Build an audit table, then create a trigger for each table you want to audit. Hint: use CodeSmith to build your triggers. A: If you are using Hibernate, take a look at JBoss Envers. From the project homepage: The Envers project aims to enable easy versioning of persistent JPA classes. All that you have to do is annotate your persistent class or some of its properties, that you want to version, with @Versioned. For each versioned entity, a table will be created, which will hold the history of changes made to the entity. You can then retrieve and query historical data without much effort. This is somewhat similar to Eric's approach, but probably much less effort. I don't know what language/technology you use to access the database, though. A: In the past I have used triggers to construct db update/insert/delete logging.
Each time one of the above actions is performed on a specific table, you could insert a record into a logging table that keeps track of the action, which db user did it, the timestamp, the table it was performed on, and the previous value. There is probably a better answer though, as I think this would require you to cache the value before the actual delete or update is performed. But you could use this to do rollbacks.
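To make the trigger idea concrete for PostgreSQL 8.x, here is a rough sketch, assuming plpgsql is enabled; the person table and its columns are made-up names for illustration:

CREATE TABLE person_audit (
    person_id  integer,
    name       varchar(100),
    age        integer,
    operation  varchar(10),
    changed_by text      DEFAULT current_user,
    changed_at timestamp DEFAULT current_timestamp
);

CREATE OR REPLACE FUNCTION log_person_change() RETURNS trigger AS $$
BEGIN
    -- OLD holds the row as it was before the UPDATE or DELETE
    INSERT INTO person_audit (person_id, name, age, operation)
    VALUES (OLD.id, OLD.name, OLD.age, TG_OP);
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_history
    AFTER UPDATE OR DELETE ON person
    FOR EACH ROW EXECUTE PROCEDURE log_person_change();

Reconstructing a record as it was after a given save then becomes a query against person_audit ordered by changed_at.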
{ "language": "en", "url": "https://stackoverflow.com/questions/23770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Cleanest Way to Find a Match In a List What is the best way to find something in a list? I know LINQ has some nice tricks, but let's also get suggestions for C# 2.0. Let's get the best refactorings for this common code pattern. Currently I use code like this: // mObjList is a List<MyObject> MyObject match = null; foreach (MyObject mo in mObjList) { if (Criteria(mo)) { match = mo; break; } } or // mObjList is a List<MyObject> bool foundIt = false; foreach (MyObject mo in mObjList) { if (Criteria(mo)) { foundIt = true; break; } } A: Using a Lambda expression: List<MyObject> list = new List<MyObject>(); // populate the list with objects.. return list.Find(o => o.Id == myCriteria); A: @ Konrad: So how do you use it? Let's say I want to match mo.ID to magicNumber. In C# 2.0 you'd write: result = mObjList.Find(delegate(MyObject mo) { return mo.ID == magicNumber; }); 3.0 knows lambdas: result = mObjList.Find(mo => mo.ID == magicNumber); A: Put the code in a method and you save a temporary and a break (and you recycle code, as a bonus): T Find<T>(IEnumerable<T> items, Predicate<T> p) { foreach (T item in items) if (p(item)) return item; return default(T); } … but of course this method already exists anyway for Lists, even in .NET 2.0. A: Evidently the performance hit of anonymous delegates is pretty significant. Test code: static void Main(string[] args) { for (int kk = 0; kk < 10; kk++) { List<int> tmp = new List<int>(); for (int i = 0; i < 100; i++) tmp.Add(i); int sum = 0; long start = DateTime.Now.Ticks; for (int i = 0; i < 1000000; i++) sum += tmp.Find(delegate(int x) { return x == 3; }); Console.WriteLine("Anonymous delegates: " + (DateTime.Now.Ticks - start)); start = DateTime.Now.Ticks; sum = 0; for (int i = 0; i < 1000000; i++) { int match = 0; for (int j = 0; j < tmp.Count; j++) { if (tmp[j] == 3) { match = tmp[j]; break; } } sum += match; } Console.WriteLine("Classic C++ Style: " + (DateTime.Now.Ticks - start)); Console.WriteLine(); } } Results: Anonymous delegates: 710000 Classic C++ Style: 340000 Anonymous delegates: 630000 Classic C++ Style: 320000 Anonymous delegates: 630000 Classic C++ Style: 330000 Anonymous delegates: 630000 Classic C++ Style: 320000 Anonymous delegates: 610000 Classic C++ Style: 340000 Anonymous delegates: 630000 Classic C++ Style: 330000 Anonymous delegates: 650000 Classic C++ Style: 330000 Anonymous delegates: 620000 Classic C++ Style: 330000 Anonymous delegates: 620000 Classic C++ Style: 340000 Anonymous delegates: 620000 Classic C++ Style: 400000 In every case, using anonymous delegates is about 100% slower than the other way.
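And for completeness, since the question mentions LINQ: in C# 3.0 (with using System.Linq) the whole pattern collapses to a single call - a sketch, assuming Criteria is a predicate method as in the question:

// Returns the first match, or null if nothing satisfies the criteria
MyObject match = mObjList.FirstOrDefault(mo => Criteria(mo));

// Or, if you only need to know whether a match exists:
bool foundIt = mObjList.Any(mo => Criteria(mo));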
{ "language": "en", "url": "https://stackoverflow.com/questions/23787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to handle including needed classes in PHP I'm wondering what the best practice is for handling the problem with having to "include" so many files in my PHP scripts in order to ensure that all the classes I need to use are accessible to my script. Currently, I'm just using include_once to include the classes I access directly. Each of those would include_once the classes that they access. I've looked into using the __autoload function, but that doesn't seem to work well if you plan to have your class files organized in a directory tree. If you did this, it seems like you'd end up walking the directory tree until you found the class you were looking for. Also, I'm not sure how this affects classes with the same name in different namespaces. Is there an easier way to handle this? Or is PHP just not suited to "enterprisey" type applications with lots of different objects all located in separate files that can be in many different directories? A: In my applications I usually have a setup.php file that includes all core classes (i.e. framework and accompanying libraries). My custom classes are loaded using an autoloader aided by a directory layout map. Each time a new class is added I run a command-line builder script that scans the whole directory tree in search of model classes, then builds an associative array with class names as keys and paths as values. Then, the __autoload function looks up the class name in that array and gets the include path. Here's the code: autobuild.php define('MAP', 'var/cache/autoload.map'); error_reporting(E_ALL); require 'setup.php'; print(buildAutoloaderMap() . " classes mapped\n"); function buildAutoloaderMap() { $dirs = array('lib', 'view', 'model'); $cache = array(); $n = 0; foreach ($dirs as $dir) { foreach (new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir)) as $entry) { $fn = $entry->getFilename(); if (!preg_match('/\.class\.php$/', $fn)) continue; $c = str_replace('.class.php', '', $fn); if (!class_exists($c)) { $cache[$c] = ($pn = $entry->getPathname()); ++$n; } } } ksort($cache); file_put_contents(MAP, serialize($cache)); return $n; } autoload.php define('MAP', 'var/cache/autoload.map'); function __autoload($className) { static $map; $map or ($map = unserialize(file_get_contents(MAP))); $fn = array_key_exists($className, $map) ? $map[$className] : null; if ($fn and file_exists($fn)) { include $fn; unset($map[$className]); } } Note that the file naming convention must be [class_name].class.php. Alter the list of directories classes are looked up in by editing autobuild.php. You can also run the autobuilder from the autoload function when a class is not found, but that may get your program into an infinite loop. Serialized arrays are darn fast. @JasonMichael: PHP 4 is dead. Get over it. A: You can define multiple autoloading functions with spl_autoload_register: spl_autoload_register('load_controllers'); spl_autoload_register('load_models'); function load_models($class){ if( !file_exists("models/$class.php") ) return false; include "models/$class.php"; return true; } function load_controllers($class){ if( !file_exists("controllers/$class.php") ) return false; include "controllers/$class.php"; return true; } A: You can also programmatically determine the location of the class file by using structured naming conventions that map to physical directories. This is how Zend does it in Zend Framework. So when you call Zend_Loader::loadClass("Zend_Db_Table"); it explodes the classname into an array of directories by splitting on the underscores, and then the Zend_Loader class goes to load the required file.
Like all the Zend modules, I would expect you can use just the loader on its own with your own classes, but I have only used it as part of a site using Zend's MVC. But there have been concerns about performance under load when you use any sort of dynamic class loading; for example, see this blog post comparing Zend_Loader with hard loading of class files. As well as the performance penalty of having to search the PHP include path, it defeats opcode caching. From a comment on that post: When using ANY Dynamic class loader APC can’t cache those files fully as its not sure which files will load on any single request. By hard loading the files APC can cache them in full. A: __autoload works well if you have a consistent naming convention for your classes that tells the function where they're found inside the directory tree. MVC lends itself particularly well to this kind of thing because you can easily split the classes into models, views and controllers. Alternatively, keep an associative array of names to file locations for your classes and let __autoload query this array. A: __autoload will work, but only in PHP 5. A: Of the suggestions so far, I'm partial to Kevin's, but it doesn't need to be absolute. I see a couple of different options to use with __autoload. * *Put all class files into a single directory. Name the file after the class, ie, classes/User.php or classes/User.class.php. *Kevin's idea of putting models into one directory, controllers into another, etc. Works well if all of your classes fit nicely into the MVC framework, but sometimes, things get messy. *Include the directory in the classname. For example, a class called Model_User would actually be located at classes/Model/User.php. Your __autoload function would know to translate an underscore into a directory separator to find the file. *Just parse the whole directory structure once. Either in the __autoload function, or even just in the same PHP file where it's defined, loop over the contents of the classes directory and cache what files are where. So, if you try to load the User class, it doesn't matter if it's in classes/User.php or classes/Models/User.php or classes/Utility/User.php. Once it finds User.php somewhere in the classes directory, it will know what file to include when the User class needs to be autoloaded. A: @Kevin: I was just trying to point out that spl_autoload_register is a better alternative to __autoload since you can define multiple loaders, and they won't conflict with each other. Handy if you have to include libraries that define an __autoload function as well. Are you sure? The documentation says differently: If your code has an existing __autoload function then this function must be explicitly registered on the __autoload stack. This is because spl_autoload_register() will effectively replace the engine cache for the __autoload function by either spl_autoload() or spl_autoload_call(). => you have to explicitly register any library's __autoload as well. But apart from that you're of course right, this function is the better alternative.
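To make the underscore-to-directory idea above concrete, a minimal __autoload might look like this - a sketch, assuming all classes live under a classes/ root and follow the Model_User naming convention:

function __autoload($className) {
    // Model_User maps to classes/Model/User.php
    $path = 'classes/' . str_replace('_', '/', $className) . '.php';
    if (file_exists($path)) {
        require_once $path;
    }
}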
{ "language": "en", "url": "https://stackoverflow.com/questions/23802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to create a simple install system for VB6 on XP/Vista and newer? Heavy emphasis on simple. I've never made an installer and I'd rather not have to learn much. A system that I could hand a pile of files to and it would make some smart guesses about where to put them would be ideal. Go ahead and answer the general question. However, in my case I'm stuck with some extra constraints. The program to be installed is written in VB6 (or is it 5?) and a few previous versions of VB, so it's not going to be updated any time soon. I have a running install and will have a clean VM to play with. So I'll be doing a loop of: run the install, find where it's broken, fix it, add that to the installer, revert the VM, try again. If anyone has a better approach I'm open to suggestions. I MUST get it working on XP and I'd really like to also have something that will work on newer versions of Windows as well. A: I used InnoSetup several years ago, before Vista, and was very happy with it then. I only had a few files to install and a Start menu icon. It worked great, and was easy to learn. A: Dependency Walker is super useful for finding out which dll is missing from the installer. Once you know the dll, you can find what merge module it is in using the Merge Module Finder. A: InnoSetup or NSIS, whichever seems easier to you. ISTool is a nice GUI tool for InnoSetup which makes creating setup scripts even easier. A: I have worked with NSIS and, once you get past some of its minor complexities, it's a fantastic system. It's free, offers tons of plugin ability, and managed to do everything I needed to do. A: Creating a full setup package for a program is almost a subject area in itself. There are many factors to consider and most of us aren't running Windows 95 anymore. The world is not as simple as it once was. There are a lot of things that need to be addressed, and some of these "setup" issues mean changing the program too. For example, the "protected folders" concept that seemed to be new to people when Vista UAC came on the scene. I guess they were all running as admin or something? In its simplest form it means you don't put writeable files next to the EXE in Programs (aka "Program Files") anymore. Another factor is that the way the registry is used has changed. I'm not talking about registry virtualization, though that's part of it as well. But COM registration can be done both per-machine and per-user and even turning UAC off can muck this up. See Per-User COM Registrations and Elevated Processes with UAC on Windows Vista SP1. The result is that a setup package shouldn't be running regsvr32 (or otherwise calling the self-reg entrypoint of a COM library). See "Remarks" at SelfReg Table. Windows Installer is the way to go forward in most cases. VB6 programmers have Visual Studio Installer 6.0 version 1.1 available as a free download for creating MSI packages. See "COM Servers" at the VFP article Using Microsoft Visual Studio Installer for Distributing Visual FoxPro 6.0 Applications for some valuable information. This isn't the easiest option but there is a VB Setup Wizard in VSI 1.1 to help get the basics right. Doing advanced things like creating a [CommonAppData] subfolder and setting Everyone rights on it has to be done in a post-build step outside the IDE. That's where 3rd party tools can be useful to give you more control without resorting to Orca or post-build Installer scripts. Those guys making scripted "legacy" installers try to keep up, but the scripting gets more and more complicated.
The results are sometimes iffy. Windows 7 introduces a few new wrinkles of its own. While ClickOnce isn't really the best option for VB6, nothing says you can't use reg-free COM for XCopy installs of many programs. Reg-free COM can even be a good option for use in an Installer package for that matter. So in the end the "simplest" way to deploy VB6 programs is probably going to be reg-free COM XCopy packages wrapped in a self-extracting EXE that will fire off a script to create a Start Menu shortcut. If you can live without the shortcut this is even easier: just unzip the package where it needs to go! See Make My Manifest or alternative tools for reg-free COM packaging. This requires that the target systems be running XP (preferably SP2) or later. The only possible glitch here is that XP did not include the VB6 SP6 runtimes until XP SP3, so you'll want to test your program against the VB6 SP5 runtimes first. Well one more glitch: you can't use ActiveX EXEs this way, they still require registration. A: My advice is this. Try to keep the installer as simple as possible. Windows Installer is a very complicated piece of software and when things don't work right it can be hard to figure out what's going on. I'm sure we have all experienced the endless loop of Windows Installer trying to repair a file that you no longer have the source .msi file for. Most of the time using Windows Installer is like using a sledge hammer to crack a nut. I use InnoSetup for my own stuff and InstallShield at work (against my will). Start with a simple script based installer and only use Windows Installer if you have a good reason to. Note that support for installing assemblies to the GAC may be missing for some non Windows Installer setup tools (such as InnoSetup). A: I used to LOVE Inno Setup. Emphasis on "used to". When you run the single file installer (what you'd typically do), it unpacks the real setup program into a folder under the temp folder and then tries to execute it. The problem is... some anti-virus programs don't allow this. The author is aware of this and refuses to do anything about it. The folder name is random, so cannot be added to any exemption list your anti-virus program may use. Again. The author is aware of this and suggests that I tell my users to turn off their anti-virus programs during installation. (Like that's going to happen)
{ "language": "en", "url": "https://stackoverflow.com/questions/23836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I hide empty Velocity variable names? I am using Struts + Velocity in a Java application, but after I submit a form, the confirmation page (Velocity template) shows the variable names instead of an empty label, like the Age in the following example: Name: Fernando Age: {person.age} Sex: Male I would like to know how to hide it! A: You can mark variables as "silent" like this: $!variable If $variable is null, nothing will be rendered. If it is not null, its value will render as it normally would. A: You will also need to be sure to use the proper syntax. Your example is missing the dollar before the variable. It should be $!{person.age}, not just {person.age}.
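Putting the two answers together, the template from the question would become something like this (the name and sex field names are assumed to follow the same pattern):

Name: $!{person.name}
Age: $!{person.age}
Sex: $!{person.sex}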
{ "language": "en", "url": "https://stackoverflow.com/questions/23853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Closing and Disposing a WCF Service The Close method on an ICommunicationObject can throw two types of exceptions as MSDN outlines here. I understand why the Close method can throw those exceptions, but what I don't understand is why the Dispose method on a service proxy calls the Close method without a try around it. Isn't your Dispose method the one place where you want to make sure you don't throw any exceptions? A: It seems to be a common design pattern in .NET code. Here is a citation from the Framework Design Guidelines: Consider providing method Close(), in addition to the Dispose(), if close is standard terminology in the area. When doing so, it is important that you make the Close implementation identical to Dispose ... Here is a blog post in which you can find a workaround for this System.ServiceModel.ClientBase design problem. A: Yes, typically Dispose is one of the places you want to ensure exceptions aren't thrown. However, based on this MSDN forum thread there were some historical reasons for this behavior. As such, the recommended pattern is the try{Close}/catch{Abort} paradigm.
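In code, the try{Close}/catch{Abort} pattern looks roughly like this - a sketch, where MyServiceClient stands in for your generated proxy class:

MyServiceClient client = new MyServiceClient();
try
{
    client.DoWork();
    client.Close();
}
catch (CommunicationException)
{
    // Close can throw if the channel is faulted; Abort tears it down immediately
    client.Abort();
}
catch (TimeoutException)
{
    client.Abort();
}
catch (Exception)
{
    client.Abort();
    throw;
}

This is also why wrapping the proxy in a plain using block is generally discouraged: the implicit Dispose would call Close and could throw on the way out of the block.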
{ "language": "en", "url": "https://stackoverflow.com/questions/23867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Best practices for refactoring classic ASP? I've got to do some significant development in a large, old, spaghetti-ridden ASP system. I've been away from ASP for a long time, focusing my energies on Rails development. One basic step I've taken is to refactor pages into subs and functions with meaningful names, so that at least it's easy to understand at the top of the file what's generally going on. Is there a worthwhile MVC framework for ASP? Or a best practice for how to at least get business logic out of the views? (I remember doing a lot of includes back in the day -- is that still the way to do it?) I'd love to get some unit testing going for business logic too, but maybe I'm asking too much? Update: There are over 200 ASP scripts in the project, some thousands of lines long ;) UGH! We may opt for the "big rewrite" but until then, when I'm in a page making changes, I want to spend a little extra time cleaning up the spaghetti. A: Since a complete rewrite of a working system can be very dangerous I can only give you a small tip: set up Exuberant Ctags on your project. This way you can jump to the definition of a function or sub easily, which I think helps a lot. On separating logic from "views": VBScript supports some kind of OO with classes. I tend to write classes which do the logic, which I include on the asp-page which acts as a "view". Then I hook the view together with the class, like Username: <%= MyAccount.UserName %>. The MyAccount class can also have methods like: MyAccount.Login() and so on. Kind of primitive, but at least you can encapsulate some code and hide it from the HTML. A: My advice would be to carry on refactoring; classic ASP supports classes, so you should be able to move everything but the display code into included ASP files which just contain classes. See this article for details of moving from old-fashioned ASP towards ASP.NET: Refactoring ASP Regarding a future direction, I wouldn't aim for ASP.NET web forms; instead I'd go for Microsoft's new MVC framework (an add-on to ASP.NET). It will be much simpler migrating to this from classic ASP. A: I use ASPUnit for unit testing some of our classic ASP and find it to be helpful. It may be old, but so is ASP. It's simple, but it does work and you can customize or extend it if necessary. I've also found Working Effectively with Legacy Code by Michael Feathers to be a helpful guide for finding ways to get some of that old code under test. Include files can help as long as you keep it simple. At one point I tried creating an include for each class and that didn't work out too well. I like having a couple of main includes with common business logic, and for complicated pages sometimes an include with logic for each of those pages. I suppose you could do MVC with a similar setup. A: Assumptions The documentation for the Classic ASP system is rather light. Management is not looking for a rewrite. Since you have been doing Ruby on Rails, your (VB/C#) ASP.NET is passable at best. My experience I too inherited a classic ASP system that was slapped together willy-nilly by ex-Excel-VBA types. There was a lot of this stuff <font size=3>crap</font> (and sometimes missing closing tags; Argggh!). Over the course of 2.5 years I added a security system, a common library, CSS+XHTML and was able to coerce the thing to validate xhtml1.1 (sans proper mime type, unfortunately) and built a fairly robust and ajaxy reporting system that's being used daily by 80 users.
I used jEdit, with cTags (as mentioned by jamting above), and a bunch of other plugins. My Advice Try to create a master include file from which to import all the stuff that's commonly used. Stuff like login/logout, database access, web services, javascript libs, etc. Do use classes. They are ultra-primitive (no inheritance) but as jamting said, they can be convenient. Indent the scripts properly. Comment. Write an external architecture document. I personally use LyX, because it's brain-dead to produce a nicely formatted pdf, but you can use whatever you like. If you use a wiki, get the graphviz add-in installed and use it. It's super easy to make quick diagrams that can be easily modified. Since I have no idea how substantial the enhancements need to be, I suggest that a good high-level to mid-level architecture document will be quite useful in planning the enhancements. On the business logic unit tests, the only thing I found that works is setting up an xml-rpc listener in ASP that imports the main library and exposes the functions (not subroutines though) in any of the main library's sub-includes, and then building, separately, a unit test system in a language with better support for such things, which calls the ASP functions through xml-rpc. I use Python, but I think Ruby should do the trick. (Does that make sense?) The cool thing is that the person writing the unit-test part of the software does not even need to look at the ASP code, as long as they have decent descriptions of the functions to call, so they can be someone besides you. There is a project called aspunit at sourceforge but the last release was in 2004 and it's marked as inactive. Never used it but it's pure vbscript. A cursory look at the code tells me it looks like the authors knew what they were doing. Finally, if you need help, I have some availability to do contract telecommuting work (maybe 8 hours/week max). Follow the link trail for contact info. Good luck! HTH. A: Is there any chance you could move from ASP to ASP.NET? Or are you looking at keeping it in classic ASP, but just cleaning it up? If at all possible, I would recommend moving as much as possible to .NET. It looks like you may be rewriting/reorganizing a lot of code anyway, so moving to .NET may not be a lot of extra effort. A: Presumably someone else wrote most or all of the system that you're now maintaining. Look for the usual bad habits (repeated code, variables that are too widely scoped, nested if statements, etc.), and refactor as you would any other language. Keep an eye out for recurring things in the same file or different files and abstract them into functions. If the code was written/maintained by various people, there might be some issues with inconsistent coding style. I find that bringing the code back into line makes it easier to see things that can be refactored. "Thousands of lines long" makes me suspicious that there may also be situations where loosely-related things are being displayed on the same page. There again, you want to abstract them into separate subroutines. Eventually you want to be writing objects to help encapsulate stuff like database connectivity, but it will be a while before you get there. A: This is very old, but couldn't resist adding my two cents. If you must rewrite, and must continue to use classic ASP: * *use JScript!
much more powerful, you get inheritance, and there are some good side benefits like using the same methods for server-side validation as you use for client-side *you can absolutely do MVC - I wrote an MVC framework, and it was not that many lines of code *you can also generate your model classes automatically with a bit of work. I have some code for this that worked quite well *make sure you are doing parameterized queries, and always returning disconnected recordsets A: Software development project management practice indicates that software like this is due to be retired. I know how hard it is to do the right thing, even more so when the responsible manager knows next to nothing and is scared of everything other than the worst possible option. But still: it's necessary to start working on a replacement. It's simply impossible to maintain this system forever, and the longer you wait to retire it, the worse it gets. If you don't have proper specification/requirements documentation (I suspect no classic ASP system in the world does, given how amateurishly most of them were coded), you'll need both a group of users who know the software's features and a manager responsible for validating the requirements. You'll need to review every feature and document its requirements; during that process you'll learn more about the software and its business. Once you have enough information, you can start developing the new one.
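To make the class-based separation described in the first answer concrete, here is a minimal, hypothetical VBScript sketch -- the Account class, its members, and the include file name are invented for illustration, not taken from any real codebase:

<%
' account.class.asp -- business logic lives in an include, out of the page
Class Account
    Private m_userName

    Public Property Get UserName
        UserName = m_userName
    End Property

    Public Sub Login(name)
        ' real code would validate the user against the database here
        m_userName = name
    End Sub
End Class
%>

The page acting as the "view" then just consumes the class:

<!--#include file="account.class.asp"-->
<%
Dim MyAccount
Set MyAccount = New Account
MyAccount.Login Request.Form("user")
%>
Username: <%= MyAccount.UserName %>

The point of the split is that the HTML never touches a recordset or a business rule directly, which makes it much easier later to lift the classes into a shared library (or port them to .NET) without rewriting the pages.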
{ "language": "en", "url": "https://stackoverflow.com/questions/23899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How can I graph the Lines of Code history for git repo? Basically I want to get the number of lines-of-code in the repository after each commit. The only (really crappy) ways I have found are to use git filter-branch to run wc -l *, and a script that runs git reset --hard on each commit, then runs wc -l. To make it a bit clearer, when the tool is run, it would output the lines of code of the very first commit, then the second and so on. This is what I want the tool to output (as an example): me@something:~/$ gitsloc --branch master 10 48 153 450 1734 1542 I've played around with the ruby 'git' library, but the closest I found was using the .lines() method on a diff, which seems like it should give the added lines (but does not: it returns 0 when you delete lines for example) require 'rubygems' require 'git' total = 0 g = Git.open(working_dir = '/Users/dbr/Desktop/code_projects/tvdb_api') last = nil g.log.each do |cur| diff = g.diff(last, cur) total = total + diff.lines puts total last = cur end A: The first thing that jumps to mind is the possibility of your repository having a nonlinear history. You might have difficulty determining a sensible sequence of commits. Having said that, it seems like you could keep a log of commit ids and the corresponding lines of code in that commit. In a post-commit hook, starting from the HEAD revision, work backwards (branching to multiple parents if necessary) until all paths reach a commit that you've already seen before. That should give you the total lines of code for each commit id. Does that help any? I have a feeling that I've misunderstood something about your question. A: You might also consider gitstats, which generates this graph as an html file. A: You may get both added and removed lines with git log, like: git log --shortstat --reverse --pretty=oneline From this, you can write a script similar to the one you did, using this info. In python: #!/usr/bin/python """ Display the per-commit size of the current git branch. """ import subprocess import re import sys def main(argv): git = subprocess.Popen(["git", "log", "--shortstat", "--reverse", "--pretty=oneline"], stdout=subprocess.PIPE) out, err = git.communicate() total_files, total_insertions, total_deletions = 0, 0, 0 for line in out.split('\n'): if not line: continue if line[0] != ' ': # This is a description line hash, desc = line.split(" ", 1) else: # This is a stat line data = re.findall( ' (\d+) files changed, (\d+) insertions\(\+\), (\d+) deletions\(-\)', line) files, insertions, deletions = ( int(x) for x in data[0] ) total_files += files total_insertions += insertions total_deletions += deletions print "%s: %d files, %d lines" % (hash, total_files, total_insertions - total_deletions) if __name__ == '__main__': sys.exit(main(sys.argv)) A: http://github.com/ITikhonov/git-loc worked right out of the box for me.
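Apart from the gitstats pointer, none of the answers actually draws the graph, so here is a small companion sketch. It assumes Python 3 and matplotlib -- my additions, not anything the answers require -- and reuses the same git log --shortstat parsing idea:

#!/usr/bin/env python3
"""Plot the net line count of the current branch after each commit."""
import subprocess
import matplotlib.pyplot as plt

out = subprocess.check_output(
    ["git", "log", "--shortstat", "--reverse", "--pretty=oneline"]).decode()

totals, running = [], 0
for line in out.splitlines():
    if not line.startswith(" "):
        continue  # description lines start with the commit hash, skip them
    # stat lines look like " 3 files changed, 10 insertions(+), 2 deletions(-)"
    for field in line.split(","):
        n = int(field.split()[0])
        if "insertion" in field:
            running += n
        elif "deletion" in field:
            running -= n
    totals.append(running)

plt.plot(totals)
plt.xlabel("commit number")
plt.ylabel("net lines in repository")
plt.show()

Bear in mind this counts net lines added minus removed according to git's diff stats, which is only as close to "lines of code" as your repository is to containing only code.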
{ "language": "en", "url": "https://stackoverflow.com/questions/23907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: OpenGL Rotation I'm trying to do a simple rotation in OpenGL but must be missing the point. I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally. At the moment I have code like this: glPushMatrix(); glRotatef(90.0, 0.0, 1.0, 0.0); glBegin(GL_TRIANGLES); glVertex3f( 1.0, 1.0, 0.0 ); glVertex3f( 3.0, 2.0, 0.0 ); glVertex3f( 3.0, 1.0, 0.0 ); glEnd(); glPopMatrix(); But the result is not a triangle rotated 90 degrees. Edit Hmm thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that uses glOrtho. I'm too new to OpenGL to have any idea of what this meant, but disabling this and rotating in the Z-axis produced the desired result. A: Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call: glMatrixMode(GL_MODELVIEW); Otherwise, you may be modifying either the projection or a texture matrix instead. A: Do you get a 1 unit straight line? It seems that a 90deg rotation around Y is going to have you looking at the side of a triangle with no depth. You should try rotating around the Z axis instead and see if you get something that makes more sense. OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" on your 2D monitor. ModelView is used to position multiple objects at their locations in the "world"; Projection is used to position the objects onto the screen. Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow. A: Regarding Projection matrix, you can find a good source to start with here: http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx It explains a bit about how to construct one type of projection matrix. Orthographic projection is the very basic/primitive form of such a matrix and basically what is does is taking 2 of the 3 axes coordinates and project them to the screen (you can still flip axes and scale them but there is no warp or perspective effect). transformation of matrices is most likely one of the most important things when rendering in 3D and basically involves 3 matrix stages: * *Transform1 = Object coordinates system to World (for example - object rotation and scale) *Transform2 = World coordinates system to Camera (placing the object in the right place) *Transform3 = Camera coordinates system to Screen space (projecting to screen) Usually the 3 matrix multiplication result is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms the coordinates from Model space through World, then to Camera and finally to the screen representation. Have fun
{ "language": "en", "url": "https://stackoverflow.com/questions/23918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Factorial Algorithms in different languages I want to see all the different ways you can come up with a factorial subroutine, or program. The hope is that anyone can come here and see if they might want to learn a new language. Ideas: * *Procedural *Functional *Object Oriented *One liners *Obfuscated *Oddball *Bad Code *Polyglot Basically I want to see an example of different ways of writing an algorithm, and what they would look like in different languages. Please limit it to one example per entry. I will allow you to have more than one example per answer, if you are trying to highlight a specific style, language, or just a well thought out idea that lends itself to being in one post. The only real requirement is it must find the factorial of a given argument, in all languages represented. Be Creative! Recommended Guideline: # Language Name: Optional Style type - Optional bullet points Code Goes Here Other informational text goes here I will occasionally go along and edit any answer that does not have decent formatting. A: F#: Functional Straight forward: let rec fact x = if x < 0 then failwith "Invalid value." elif x = 0 then 1 else x * fact (x - 1) Getting fancy: let fact x = [1 .. x] |> List.fold_left ( * ) 1 A: Batch (NT): @echo off set n=%1 set result=1 for /l %%i in (%n%, -1, 1) do ( set /a result=result * %%i ) echo %result% Usage: C:>factorial.bat 15 A: Recursive Prolog fac(0,1). fac(N,X) :- N1 is N -1, fac(N1, T), X is N * T. Tail Recursive Prolog fac(0,N,N). fac(X,N,T) :- A is N * X, X1 is X - 1, fac(X1,A,T). fac(N,T) :- fac(N,1,T). A: ruby recursive (factorial=Hash.new{|h,k|k*h[k-1]})[1]=1 usage: factorial[5] => 120 A: Scheme Here is a simple recursive definition: (define (factorial x) (if (= x 0) 1 (* x (factorial (- x 1))))) In Scheme tail-recursive functions use constant stack space. Here is a version of factorial that is tail-recursive: (define factorial (letrec ((fact (lambda (x accum) (if (= x 0) accum (fact (- x 1) (* accum x)))))) (lambda (x) (fact x 1)))) A: D Templates: Functional template factorial(int n : 1) { const factorial = 1; } template factorial(int n) { const factorial = n * factorial!(n-1); } or template factorial(int n) { static if(n == 1) const factorial = 1; else const factorial = n * factorial!(n-1); } Used like this: factorial!(5) A: Oddball examples? What about using the gamma function, since Gamma(n) = (n-1)!? OCaml: Using Gamma let rec gamma z = let pi = 4.0 *. atan 1.0 in if z < 0.5 then pi /. ((sin (pi*.z)) *. (gamma (1.0 -. z))) else let consts = [| 0.99999999999980993; 676.5203681218851; -1259.1392167224028; 771.32342877765313; -176.61502916214059; 12.507343278686905; -0.13857109526572012; 9.9843695780195716e-6; 1.5056327351493116e-7; |] in let z = z -. 1.0 in let result = Array.fold_right (fun x y -> x +. y) (Array.mapi (fun i x -> if i = 0 then x else x /. (z+.(float i))) consts ) 0.0 in let x = z +. (float (Array.length consts)) -. 1.5 in let final = (sqrt (2.0*.pi)) *. (x ** (z+.0.5)) *. (exp (-.x)) *. 
result in final let factorial_gamma n = int_of_float (gamma (float (n+1))) A: Freshman Haskell programmer fac n = if n == 0 then 1 else n * fac (n-1) Sophomore Haskell programmer, at MIT (studied Scheme as a freshman) fac = (\(n) -> (if ((==) n 0) then 1 else ((*) n (fac ((-) n 1))))) Junior Haskell programmer (beginning Peano player) fac 0 = 1 fac (n+1) = (n+1) * fac n Another junior Haskell programmer (read that n+k patterns are “a disgusting part of Haskell” [1] and joined the “Ban n+k patterns”-movement [2]) fac 0 = 1 fac n = n * fac (n-1) Senior Haskell programmer (voted for Nixon Buchanan Bush — “leans right”) fac n = foldr (*) 1 [1..n] Another senior Haskell programmer (voted for McGovern Biafra Nader — “leans left”) fac n = foldl (*) 1 [1..n] Yet another senior Haskell programmer (leaned so far right he came back left again!) -- using foldr to simulate foldl fac n = foldr (\x g n -> g (x*n)) id [1..n] 1 Memoizing Haskell programmer (takes Ginkgo Biloba daily) facs = scanl (*) 1 [1..] fac n = facs !! n Pointless (ahem) “Points-free” Haskell programmer (studied at Oxford) fac = foldr (*) 1 . enumFromTo 1 Iterative Haskell programmer (former Pascal programmer) fac n = result (for init next done) where init = (0,1) next (i,m) = (i+1, m * (i+1)) done (i,_) = i==n result (_,m) = m for i n d = until d n i Iterative one-liner Haskell programmer (former APL and C programmer) fac n = snd (until ((>n) . fst) (\(i,m) -> (i+1, i*m)) (1,1)) Accumulating Haskell programmer (building up to a quick climax) facAcc a 0 = a facAcc a n = facAcc (n*a) (n-1) fac = facAcc 1 Continuation-passing Haskell programmer (raised RABBITS in early years, then moved to New Jersey) facCps k 0 = k 1 facCps k n = facCps (k . (n *)) (n-1) fac = facCps id Boy Scout Haskell programmer (likes tying knots; always “reverent,” he belongs to the Church of the Least Fixed-Point [8]) y f = f (y f) fac = y (\f n -> if (n==0) then 1 else n * f (n-1)) Combinatory Haskell programmer (eschews variables, if not obfuscation; all this currying’s just a phase, though it seldom hinders) s f g x = f x (g x) k x y = x b f g x = f (g x) c f g x = f x g y f = f (y f) cond p f g x = if p x then f x else g x fac = y (b (cond ((==) 0) (k 1)) (b (s (*)) (c b pred))) List-encoding Haskell programmer (prefers to count in unary) arb = () -- "undefined" is also a good RHS, as is "arb" :) listenc n = replicate n arb listprj f = length . f . 
listenc listprod xs ys = [ i (x,y) | x<-xs, y<-ys ] where i _ = arb facl [] = listenc 1 facl n@(_:pred) = listprod n (facl pred) fac = listprj facl Interpretive Haskell programmer (never “met a language” he didn't like) -- a dynamically-typed term language data Term = Occ Var | Use Prim | Lit Integer | App Term Term | Abs Var Term | Rec Var Term type Var = String type Prim = String -- a domain of values, including functions data Value = Num Integer | Bool Bool | Fun (Value -> Value) instance Show Value where show (Num n) = show n show (Bool b) = show b show (Fun _) = "" prjFun (Fun f) = f prjFun _ = error "bad function value" prjNum (Num n) = n prjNum _ = error "bad numeric value" prjBool (Bool b) = b prjBool _ = error "bad boolean value" binOp inj f = Fun (\i -> (Fun (\j -> inj (f (prjNum i) (prjNum j))))) -- environments mapping variables to values type Env = [(Var, Value)] getval x env = case lookup x env of Just v -> v Nothing -> error ("no value for " ++ x) -- an environment-based evaluation function eval env (Occ x) = getval x env eval env (Use c) = getval c prims eval env (Lit k) = Num k eval env (App m n) = prjFun (eval env m) (eval env n) eval env (Abs x m) = Fun (\v -> eval ((x,v) : env) m) eval env (Rec x m) = f where f = eval ((x,f) : env) m -- a (fixed) "environment" of language primitives times = binOp Num (*) minus = binOp Num (-) equal = binOp Bool (==) cond = Fun (\b -> Fun (\x -> Fun (\y -> if (prjBool b) then x else y))) prims = [ ("*", times), ("-", minus), ("==", equal), ("if", cond) ] -- a term representing factorial and a "wrapper" for evaluation facTerm = Rec "f" (Abs "n" (App (App (App (Use "if") (App (App (Use "==") (Occ "n")) (Lit 0))) (Lit 1)) (App (App (Use "*") (Occ "n")) (App (Occ "f") (App (App (Use "-") (Occ "n")) (Lit 1)))))) fac n = prjNum (eval [] (App facTerm (Lit n))) Static Haskell programmer (he does it with class, he’s got that fundep Jones! After Thomas Hallgren’s “Fun with Functional Dependencies” [7]) -- static Peano constructors and numerals data Zero data Succ n type One = Succ Zero type Two = Succ One type Three = Succ Two type Four = Succ Three -- dynamic representatives for static Peanos zero = undefined :: Zero one = undefined :: One two = undefined :: Two three = undefined :: Three four = undefined :: Four -- addition, a la Prolog class Add a b c | a b -> c where add :: a -> b -> c instance Add Zero b b instance Add a b c => Add (Succ a) b (Succ c) -- multiplication, a la Prolog class Mul a b c | a b -> c where mul :: a -> b -> c instance Mul Zero b Zero instance (Mul a b c, Add b c d) => Mul (Succ a) b d -- factorial, a la Prolog class Fac a b | a -> b where fac :: a -> b instance Fac Zero One instance (Fac n k, Mul (Succ n) k m) => Fac (Succ n) m -- try, for "instance" (sorry): -- -- :t fac four Beginning graduate Haskell programmer (graduate education tends to liberate one from petty concerns about, e.g., the efficiency of hardware-based integers) -- the natural numbers, a la Peano data Nat = Zero | Succ Nat -- iteration and some applications iter z s Zero = z iter z s (Succ n) = s (iter z s n) plus n = iter n Succ mult n = iter Zero (plus n) -- primitive recursion primrec z s Zero = z primrec z s (Succ n) = s n (primrec z s n) -- two versions of factorial fac = snd . iter (one, one) (\(a,b) -> (Succ a, mult a b)) fac' = primrec one (mult . Succ) -- for convenience and testing (try e.g. "fac five") int = iter 0 (1+) instance Show Nat where show = show . 
int (zero : one : two : three : four : five : _) = iterate Succ Zero Origamist Haskell programmer (always starts out with the “basic Bird fold”) -- (curried, list) fold and an application fold c n [] = n fold c n (x:xs) = c x (fold c n xs) prod = fold (*) 1 -- (curried, boolean-based, list) unfold and an application unfold p f g x = if p x then [] else f x : unfold p f g (g x) downfrom = unfold (==0) id pred -- hylomorphisms, as-is or "unfolded" (ouch! sorry ...) refold c n p f g = fold c n . unfold p f g refold' c n p f g x = if p x then n else c (f x) (refold' c n p f g (g x)) -- several versions of factorial, all (extensionally) equivalent fac = prod . downfrom fac' = refold (*) 1 (==0) id pred fac'' = refold' (*) 1 (==0) id pred Cartesianally-inclined Haskell programmer (prefers Greek food, avoids the spicy Indian stuff; inspired by Lex Augusteijn’s “Sorting Morphisms” [3]) -- (product-based, list) catamorphisms and an application cata (n,c) [] = n cata (n,c) (x:xs) = c (x, cata (n,c) xs) mult = uncurry (*) prod = cata (1, mult) -- (co-product-based, list) anamorphisms and an application ana f = either (const []) (cons . pair (id, ana f)) . f cons = uncurry (:) downfrom = ana uncount uncount 0 = Left () uncount n = Right (n, n-1) -- two variations on list hylomorphisms hylo f g = cata g . ana f hylo' f (n,c) = either (const n) (c . pair (id, hylo' f (c,n))) . f pair (f,g) (x,y) = (f x, g y) -- several versions of factorial, all (extensionally) equivalent fac = prod . downfrom fac' = hylo uncount (1, mult) fac'' = hylo' uncount (1, mult) Ph.D. Haskell programmer (ate so many bananas that his eyes bugged out, now he needs new lenses!) -- explicit type recursion based on functors newtype Mu f = Mu (f (Mu f)) deriving Show in x = Mu x out (Mu x) = x -- cata- and ana-morphisms, now for *arbitrary* (regular) base functors cata phi = phi . fmap (cata phi) . out ana psi = in . fmap (ana psi) . psi -- base functor and data type for natural numbers, -- using a curried elimination operator data N b = Zero | Succ b deriving Show instance Functor N where fmap f = nelim Zero (Succ . f) nelim z s Zero = z nelim z s (Succ n) = s n type Nat = Mu N -- conversion to internal numbers, conveniences and applications int = cata (nelim 0 (1+)) instance Show Nat where show = show . int zero = in Zero suck = in . Succ -- pardon my "French" (Prelude conflict) plus n = cata (nelim n suck ) mult n = cata (nelim zero (plus n)) -- base functor and data type for lists data L a b = Nil | Cons a b deriving Show instance Functor (L a) where fmap f = lelim Nil (\a b -> Cons a (f b)) lelim n c Nil = n lelim n c (Cons a b) = c a b type List a = Mu (L a) -- conversion to internal lists, conveniences and applications list = cata (lelim [] (:)) instance Show a => Show (List a) where show = show . list prod = cata (lelim (suck zero) mult) upto = ana (nelim Nil (diag (Cons . suck)) . out) diag f x = f x x fac = prod . upto Post-doc Haskell programmer (from Uustalu, Vene and Pardo’s “Recursion Schemes from Comonads” [4]) -- explicit type recursion with functors and catamorphisms newtype Mu f = In (f (Mu f)) unIn (In x) = x cata phi = phi . fmap (cata phi) . 
unIn -- base functor and data type for natural numbers, -- using locally-defined "eliminators" data N c = Z | S c instance Functor N where fmap g Z = Z fmap g (S x) = S (g x) type Nat = Mu N zero = In Z suck n = In (S n) add m = cata phi where phi Z = m phi (S f) = suck f mult m = cata phi where phi Z = zero phi (S f) = add m f -- explicit products and their functorial action data Prod e c = Pair c e outl (Pair x y) = x outr (Pair x y) = y fork f g x = Pair (f x) (g x) instance Functor (Prod e) where fmap g = fork (g . outl) outr -- comonads, the categorical "opposite" of monads class Functor n => Comonad n where extr :: n a -> a dupl :: n a -> n (n a) instance Comonad (Prod e) where extr = outl dupl = fork id outr -- generalized catamorphisms, zygomorphisms and paramorphisms gcata :: (Functor f, Comonad n) => (forall a. f (n a) -> n (f a)) -> (f (n c) -> c) -> Mu f -> c gcata dist phi = extr . cata (fmap phi . dist . fmap dupl) zygo chi = gcata (fork (fmap outl) (chi . fmap outr)) para :: Functor f => (f (Prod (Mu f) c) -> c) -> Mu f -> c para = zygo In -- factorial, the *hard* way! fac = para phi where phi Z = suck zero phi (S (Pair f n)) = mult f (suck n) -- for convenience and testing int = cata phi where phi Z = 0 phi (S f) = 1 + f instance Show (Mu N) where show = show . int Tenured professor (teaching Haskell to freshmen) fac n = product [1..n] A: PowerShell function factorial( [int] $n ) { $result = 1; if ( $n -gt 1 ) { $result = $n * ( factorial ( $n - 1 ) ) } $result } Here's a one-liner: $n..1 | % {$result = 1}{$result *= $_}{$result} A: Java 1.6: recursive, memoized (for subsequent calls) private static Map<BigInteger, BigInteger> _results = new HashMap() public static BigInteger factorial(BigInteger n){ if (0 >= n.compareTo(BigInteger.ONE)) return BigInteger.ONE.max(n); if (_results.containsKey(n)) return _results.get(n); BigInteger result = factorial(n.subtract(BigInteger.ONE)).multiply(n); _results.put(n, result); return result; } A: Bash: Recursive In bash and recursive, but with the added advantage that it deals with each iteration in a new process. The max it can calculate is !20 before overflowing, but you can still run it for big numbers if you don't care about the answer and want your system to fall over ;) #!/bin/bash echo $(($1 * `( [[ $1 -gt 1 ]] && ./$0 $(($1 - 1)) ) || echo 1`)); A: This is one of the faster algorithms, up to 170!. It fails inexplicably beyond 170!, and it's relatively slow for small factorials, but for factorials between 80 and 170 it's blazingly fast compared to many algorithms. curl http://www.google.com/search?q=170! There's also an online interface, try it out now! Let me know if you find a bug, or faster implementation for large factorials. EDIT: This algorithm is slightly slower, but gives results beyond 170: curl http://www58.wolframalpha.com/input/?i=171! It also simplifies them into various other representations. A: C/C++: Procedural unsigned long factorial(int n) { unsigned long factorial = 1; int i; for (i = 2; i <= n; i++) factorial *= i; return factorial; } PHP: Procedural function factorial($n) { for ($factorial = 1, $i = 2; $i <= $n; $i++) $factorial *= $i; return $factorial; } @Niyaz: You didn't specify return type for the function A: The problem with most of the above is that they will run out of precision at about 25! (12! with 32 bit ints) or just overflow. Here's a c# implementation to break through these limits! 
class Number { public Number () { m_number = "0"; } public Number (string value) { m_number = value; } public int this [int column] { get { return column < m_number.Length ? m_number [m_number.Length - column - 1] - '0' : 0; } } public static implicit operator Number (string rhs) { return new Number (rhs); } public static bool operator == (Number lhs, Number rhs) { return lhs.m_number == rhs.m_number; } public static bool operator != (Number lhs, Number rhs) { return lhs.m_number != rhs.m_number; } public override bool Equals (object obj) { return this == (Number) obj; } public override int GetHashCode () { return m_number.GetHashCode (); } public static Number operator + (Number lhs, Number rhs) { StringBuilder result = new StringBuilder (new string ('0', lhs.m_number.Length + rhs.m_number.Length)); int carry = 0; for (int i = 0 ; i < result.Length ; ++i) { int sum = carry + lhs [i] + rhs [i], units = sum % 10; carry = sum / 10; result [result.Length - i - 1] = (char) ('0' + units); } return TrimLeadingZeros (result); } public static Number operator * (Number lhs, Number rhs) { StringBuilder result = new StringBuilder (new string ('0', lhs.m_number.Length + rhs.m_number.Length)); for (int multiplier_index = rhs.m_number.Length - 1 ; multiplier_index >= 0 ; --multiplier_index) { int multiplier = rhs.m_number [multiplier_index] - '0', column = result.Length - rhs.m_number.Length + multiplier_index; for (int i = lhs.m_number.Length - 1 ; i >= 0 ; --i, --column) { int product = (lhs.m_number [i] - '0') * multiplier, units = product % 10, tens = product / 10, hundreds = 0, unit_sum = result [column] - '0' + units; if (unit_sum > 9) { unit_sum -= 10; ++tens; } result [column] = (char) ('0' + unit_sum); int tens_sum = result [column - 1] - '0' + tens; if (tens_sum > 9) { tens_sum -= 10; ++hundreds; } result [column - 1] = (char) ('0' + tens_sum); if (hundreds > 0) { int hundreds_sum = result [column - 2] - '0' + hundreds; result [column - 2] = (char) ('0' + hundreds_sum); } } } return TrimLeadingZeros (result); } public override string ToString () { return m_number; } static string TrimLeadingZeros (StringBuilder number) { while (number [0] == '0' && number.Length > 1) { number.Remove (0, 1); } return number.ToString (); } string m_number; } static void Main (string [] args) { Number a = new Number ("1"), b = new Number (args [0]), one = new Number ("1"); for (Number c = new Number ("1") ; c != b ; ) { c = c + one; a = a * c; } Console.WriteLine (string.Format ("{0}! = {1}", new object [] { b, a })); } FWIW: 10000! is over 35500 character long. Skizz A: Lambda Calculus Input and output are Church numerals (i.e. natural number k is \f n. f^k n; so 3 = \f n. f (f (f n))) (\x. x x) (\y f. f (y y f)) (\y n. n (\x y z. z) (\x y. x) (\f n. f n) (\f. n (y (\f m. n (\g h. h (g f)) (\x. m) (\x. x)) f))) A: The code below is tongue in cheek, however when you consider that the return value is limited to n < 34 for uint32, <65 uint64 before we run out of space for the return value with a uint, hard coding 33 values isn't that crazy :) public static int Factorial(int n) { switch (n) { case 1: return 1; case 2: return 2; case 3: return 6; case 4: return 24; default: throw new Exception("Sorry, I can only count to 4"); } } A: C++: Template Metaprogramming Uses the classic enum hack. template<unsigned int n> struct factorial { enum { result = n * factorial<n - 1>::result }; }; template<> struct factorial<0> { enum { result = 1 }; }; Usage. 
const unsigned int x = factorial<4>::result; Factorial is calculated completely at compile time based on the template parameter n. Therefore, factorial<4>::result is a constant once the compiler has done its work. A: Ruby: functional def factorial(n) return 1 if n == 1 n * factorial(n -1) end A: Icon Recursive function procedure factorial(n) return (0<n) * factorial(n-1) | 1 end I've cheated a bit allowing negatives to return 1. If you want it to fail given a negative argument it's slightly less concise: return (0<n) * factorial(n-1) | (n=0 & 1) Then write(factorial(3)) write(factorial(-1)) write(factorial(20)) prints 6 2432902008176640000 Iterative generator procedure factorials() local f,n f := 1; n := 0 repeat suspend f *:= (n +:= 1) end Then every write(factorials() \ 5) prints 1 2 6 24 120 To understand this: evaluation is goal-directed and backtracks on failure. There is no boolean type, and binary operators which would return a boolean in other languages, either fail or return their second argument - with the exception of |, which in a single-value context returns its first argument if it succeeds, otherwise tries its second argument. (in a multiple-value context it returns its first argument then its second argument) suspend is like yield in other languages, except that a generator is not explicitly called multiple times to return its results. Instead, every asks its argument for all values but doesn't return anything by default; it's useful with side-effects (in this case I/O). \ limits the number of values returned by a generator, which in the case of factorials would be infinite. A: Clojure Tail-recursive (defn fact ([n] (fact n 1)) ([n acc] (if (= n 0) acc (recur (- n 1) (* acc n))))) Short and simple (defn fact [n] (apply * (range 1 (+ n 1)))) A: Haskell factorial n = product [1..n] A: Nothing is as fast as bash & bc: function fac { seq $1 | paste -sd* | bc; } $ fac 42 1405006117752879898543142606244511569936384000000000 $ A: Whitespace . . . . . . . . . . . . . . . . . . . . . . . . . . It was hard to get it to show here properly, but now I tried copying it from the preview and it works. You need to input the number and press enter. A: I find the following implementations just hilarious: The Evolution of a Haskell Programmer Evolution of a Python programmer Enjoy! A: Mathematica : using pure recursive functions (If[#>1,# #0[#-1],1])& A: Lua function factorial (n) if (n <= 1) then return 1 end return n*factorial(n-1) end And here is a stack overflow caught in the wild: > print (factorial(234132)) stdin:3: stack overflow stack traceback: stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' ... stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:3: in function 'factorial' stdin:1: in main chunk [C]: ? A: Agda 2: Functional, dependently typed. 
data Nat = zero | suc (m::Nat) add (m::Nat) (n::Nat) :: Nat = case m of (zero ) -> n (suc p) -> suc (add p n) mul (m::Nat) (n::Nat)::Nat = case m of (zero ) -> zero (suc p) -> add n (mul p n) factorial (n::Nat)::Nat = case n of (zero ) -> suc zero (suc p) -> mul n (factorial p) A: Delphi facts: array[2..12] of integer; function TForm1.calculate(f: integer): integer; begin if f = 1 then Result := f else if f > High(facts) then Result := High(Integer) else if (facts[f] > 0) then Result := facts[f] else begin facts[f] := f * Calculate(f-1); Result := facts[f]; end; end; initialize for i := Low(facts) to High(facts) do facts[i] := 0; After the first time a factorial higher than or equal to the desired value has been calculated, this algorithm just returns the factorial in constant time O(1). It takes into account that an int32 can only hold up to 12!. A: Nemerle: Functional def fact(n) { | 0 => 1 | x => x * fact(x-1) } A: #Language: T-SQL #Style: Recursive, divide and conquer Just for fun - in T-SQL using a divide and conquer recursive method. Yes, recursive - in SQL without stack overflow. create function factorial(@b int=1, @e int) returns float as begin return case when @b>=@e then @e else convert(float,dbo.factorial(@b,convert(int,@b+(@e-@b)/2))) * convert(float,dbo.factorial(convert(int,@b+1+(@e-@b)/2),@e)) end end call it like this: print dbo.factorial(1,170) -- the 1 being the starting number A: PostScript: Tail Recursive /fact0 { dup 2 lt { pop } { 2 copy mul 3 1 roll 1 sub exch pop fact0 } ifelse } def /fact { 1 exch fact0 } def A: Forth (recursive): : factorial ( n -- n ) dup 1 > if dup 1 - recurse * else drop 1 then ; A: Scala The factorial can be defined functionally as: def fact(n: Int): BigInt = 1 to n reduceLeft(_*_) or more traditionally as def fact(n: Int): BigInt = if (n == 0) 1 else fact(n-1) * n and we can make ! a valid method on Ints: object extendBuiltins extends Application { class Factorizer(n: Int) { def ! = 1 to n reduceLeft(_*_) } implicit def int2fact(n: Int) = new Factorizer(n) println("10! = " + (10!)) } A: Compile time in C++ template<unsigned i> struct factorial { static const unsigned value = i * factorial<i-1>::value; }; template<> struct factorial<0> { static const unsigned value = 1; }; Use in code as: factorial<5>::value A: Java Script: Creative method using "interview question" counting bits fnc. function nu(x) { var r=0 while( x ) { x &= x-1 r++ } return r } function fac(n) { var r= Math.pow(2,n-nu(n)) for ( var i=3 ; i <= n ; i+= 2 ) r *= Math.pow(i,Math.floor(Math.log(n/i)/Math.LN2)+1) return r } Works up to 21!, then Chrome switches to scientific notation. Inspiration thanks to lack of sleep and Knuth, et al's "Concrete Mathematics". A: Brainfuck: with bignum support! Accepts as input a non-negative integer followed by newline, and outputs the corresponding factorial followed by newline. 
>>>>,----------[>>>>,----------]>>>>++<<<<<<<<[>++++++[<---- -->-]<-<<<<]>>>>[[>>+<<-]>>[<<+>+>-]<->+<[>>>>+<<<-<[-]]>[-] >>]>[-<<<<<[<<<<]>>>>[[>>+<<-]>>[<<+>+>-]>>]>>>>[-[>+<-]+>>> >]<<<<[<<<<]<<<<[<<<<]>>>>>[>>>[>>>>]>>>>[>>>>]<<<<[[>>>>+<< <<-]<<<<]>>>>+<<<<<<<[<<<<]>>>>-[>>>[>>>>]>>>>[>>>>]<<<<[>>> +<<<-]>>>[<<<+>>+>-]<-[>>+<<[-]]<<[<<<<]>>>>[>[>+<-]>[<<+>+> -]<<[>>>+<<<-]>>>[<<<+>>+>-]<->+++++++++[-<[-[>>>>+<<<<-]]>> >>[<<<<+>>>>-]<<<]<[>>+<<<<[-]>>[<<+>>-]]>>]<<<<[<<<<]<<<[<< <<]>>>>-]>>>>]>>>[>[-]>>>]<<<<[>>+<<-]>>[<<+>+>-]<->+<[>-<[- ]]>[-<<-<<<<[>>+<<-]>>[<<+>+>-]<->+<[>-<[-]]>]<<[<<<<]<<<<-[ >>+<<-]>>[<<+>+>-]+<[>-<[-]]>[-<<++++++++++<<<<-[>>+<<-]>>[< <+>+>-]+<[>-<[-]]>]<<[<<<<]>>>>[[>>+<<-]>>[<<+>+>-]<->+<[>>> >+<<<-<[-]]>[-]>>]>]>>>[>>>>]<<<<[>+++++++[<+++++++>-]<--.<< <<]++++++++++. Unlike the brainf*ck answer posted earlier, this does not overflow any memory locations. (That implementation put n! in a single memory location, effectively limiting it to n less than 6 under standard bf rules.) This program will output n! for any value of n, limited only by time and memory (or bf implementation). For example, using Urban Muller's compiler on my machine, it takes 12 seconds to compute 1000! I think that's pretty good, considering the program can only move left/right and increment/decrement by one. Believe it or not, this is the first bf program I've written; it took about 10 hours, which were mostly spent debugging. Unfortunately, I later found out that Daniel B Cristofani has written a factorial generator, which just outputs ever-larger factorials, never terminating: >++++++++++>>>+>+[>>>+[-[<<<<<[+<<<<<]>>[[-]>[<<+>+>-]<[>+<- ]<[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>+<-[>[-]>>>>+>+< <<<<<-[>+<-]]]]]]]]]]]>[<+>-]+>>>>>]<<<<<[<<<<<]>>>>>>>[>>>> >]++[-<<<<<]>>>>>>-]+>>>>>]<[>++<-]<<<<[<[>+<-]<<<<]>>[->[-] ++++++[<++++++++>-]>>>>]<<<<<[<[>+>+<<-]>.<<<<<]>.>>>>] His program is much shorter, but he's practically a professional bf golfer. A: Agda2 It is Agda2, using the very nice Agda2 syntax. module fac where data Nat : Set where -- Peano numbers zero : Nat suc : Nat -> Nat {-# BUILTIN NATURAL Nat #-} {-# BUILTIN SUC suc #-} {-# BUILTIN ZERO zero #-} infixl 10 _+_ -- Addition over Peano numbers _+_ : Nat -> Nat -> Nat zero + n = n (suc n) + m = suc (n + m) infixl 20 _*_ -- Multiplication over Peano numbers _*_ : Nat -> Nat -> Nat zero * n = zero n * zero = zero (suc n) * (suc m) = suc n + (suc n * m) _! : Nat -> Nat -- Factorial function, syntax: "x !" zero ! = suc zero (suc n) ! = (suc n) * (n !) A: Python: functional, recursive one-liner using short circuit boolean evaluation. factorial = lambda n: ((n <= 1) and 1) or factorial(n-1) * n A: C# Lookup: Nothing to calculate really, just look it up. To extend it,add another 8 numbers to the table and 64 bit integers are at at their limit. Beyond that, a BigNum class is called for. public static int Factorial(int f) { if (f<0 || f>12) { throw new ArgumentException("Out of range for integer factorial"); } int [] fact={1,1,2,6,24,120,720,5040,40320,362880,3628800, 39916800,479001600}; return fact[f]; } A: Lazy K Your pure functional programming nightmares come true! The only Esoteric Turing-complete Programming Language that has: * *A purely functional foundation, core, and libraries---in fact, here's the complete API: S K I *No lambdas even! 
*No numbers or lists needed or allowed *No explicit recursion but yet, allows recursion *A simple infinite lazy stream-based I/O mechanism Here's the Factorial code in all its parenthetical glory: K(SII(S(K(S(S(KS)(S(K(S(KS)))(S(K(S(KK)))(S(K(S(K(S(K(S(K(S(SI(K(S(K(S(S(KS)K)I)) (S(S(KS)K)(SII(S(S(KS)K)I))))))))K))))))(S(K(S(K(S(SI(K(S(K(S(SI(K(S(K(S(S(KS)K)I)) (S(S(KS)K)(SII(S(S(KS)K)I))(S(S(KS)K))(S(SII)I(S(S(KS)K)I))))))))K))))))) (S(S(KS)K)(K(S(S(KS)K)))))))))(K(S(K(S(S(KS)K)))K))))(SII))II) Features: * *No subtraction or conditionals *Prints all factorials (if you wait long enough) *Uses a second layer of Church numerals to convert the Nth factorial to N! asterisks followed by a newline *Uses the Y combinator for recursion In case you are interested in trying to understand it, here is the Scheme source code to run through the Lazier compiler: (lazy-def '(fac input) '((Y (lambda (f n a) ((lambda (b) ((cons 10) ((b (cons 42)) (f (1+ n) b)))) (* a n)))) 1 1)) (for suitable definitions of Y, cons, 1, 10, 42, 1+, and *). EDIT: Lazy K Factorial in Decimal (10KB of gibberish or else I would paste it). For example, at the Unix prompt: $ echo "4" | ./lazy facdec.lazy 24 $ echo "5" | ./lazy facdec.lazy 120 Rather slow for numbers above, say, 5. The code is sort of bloated because we have to include library code for all of our own primitives (code written in Hazy, a lambda calculus interpreter and LC-to-Lazy K compiler written in Haskell). A: XSLT 1.0 The input file, factorial.xml: <?xml version="1.0"?> <?xml-stylesheet href="factorial.xsl" type="text/xsl" ?> <n> 20 </n> The XSLT file, factorial.xsl: <?xml version="1.0"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" > <xsl:output method="text"/> <!-- 0! = 1 --> <xsl:template match="text()[. = 0]"> 1 </xsl:template> <!-- n! = (n-1)! * n--> <xsl:template match="text()[. > 0]"> <xsl:variable name="x"> <xsl:apply-templates select="msxsl:node-set( . - 1 )/text()"/> </xsl:variable> <xsl:value-of select="$x * ."/> </xsl:template> <!-- Calculate n! --> <xsl:template match="/n"> <xsl:apply-templates select="text()"/> </xsl:template> </xsl:stylesheet> Save both files in the same directory and open factorial.xml in IE. A: Perl 6: Functional multi factorial ( Int $n where { $n <= 0 } ){ return 1; } multi factorial ( Int $n ){ return $n * factorial( $n-1 ); } This will also work: multi factorial(0) { 1 } multi factorial(Int $n) { $n * factorial($n - 1) } Check Jonathan Worthington's journal on use.perl.org, for more information about the last example. A: Perl 6:Procedural sub factorial ( int $n ){ my $result = 1; loop ( ; $n > 0; $n-- ){ $result *= $n; } return $result; } A: C: Edit: Actually C++ I guess, because of the variable declaration in the for loop. int factorial(int x) { int product = 1; for (int i = x; i > 0; i--) { product *= i; } return product; } A: Javascript: factorial = function( n ) { return n > 0 ? n * factorial( n - 1 ) : 1; } I'm not sure what a Factorial is but that does what the other programs do in javascript. A: Python: Recursive def fact(x): return (1 if x==0 else x * fact(x-1)) Using iterator import operator def fact(x): return reduce(operator.mul, xrange(1, x+1)) A: two of many Mathematica solutions (although ! 
is built-in and efficient): (* returns pure function *) (FixedPoint[(If[#[[2]]>1,{#[[1]]*#[[2]],#[[2]]-1},#])&,{1,n}][[1]])& (* not using built-in, returns pure function, don't use: might build 1..n list *) (Times @@ Range[#])& A: Visual Basic: Linq <Extension()> _ Public Function Product(ByVal xs As IEnumerable(Of Integer)) As Integer Return xs.Aggregate(1, Function(a, b) a * b) End Function Public Function Fact(ByVal n As Integer) As Integer Return Aggregate x In Enumerable.Range(1, n) Into Product() End Function This shows how to use the Aggregate keyword in VB. C# can't do this (although C# can of course call the extension method directly). A: Scheme : Functional - Tail Recursive (define (factorial n) (define (fac-times n acc) (if (= n 0) acc (fac-times (- n 1) (* acc n)))) (if (< n 0) (display "Wrong argument!") (fac-times n 1))) A: Ruby: Iterative def factorial(n) (1 .. n).inject{|a, b| a*b} end Ruby: Recursive def factorial(n) n == 1 ? 1 : n * factorial(n-1) end A: #Language: T-SQL #Style: Big Numbers Here's another T-SQL solution -- supports big numbers in a most Rube Goldbergian manner. Lots of set-based ops. Tried to keep it uniquely SQL. Horrible performance (400! took 33 seconds on a Dell Latitude D830) create function bigfact(@x varchar(max)) returns varchar(max) as begin declare @c int declare @n table(n int,e int) declare @f table(n int,e int) set @c=0 while @c<len(@x) begin set @c=@c+1 insert @n(n,e) values(convert(int,substring(@x,@c,1)),len(@x)-@c) end -- our current factorial insert @f(n,e) select 1,0 while 1=1 begin declare @p table(n int,e int) delete @p -- product insert @p(n,e) select sum(f.n*n.n), f.e+n.e from @f f cross join @n n group by f.e+n.e -- normalize while 1=1 begin delete @f insert @f(n,e) select sum(n),e from ( select (n % 10) as n,e from @p union all select (n/10) % 10,e+1 from @p union all select (n/100) %10,e+2 from @p union all select (n/1000)%10,e+3 from @p union all select (n/10000) % 10,e+4 from @p union all select (n/100000)% 10,e+5 from @p union all select (n/1000000)%10,e+6 from @p union all select (n/10000000) % 10,e+7 from @p union all select (n/100000000)% 10,e+8 from @p union all select (n/1000000000)%10,e+9 from @p ) f group by e having sum(n)>0 set @c=0 select @c=count(*) from @f where n>9 if @c=0 break delete @p insert @p(n,e) select n,e from @f end -- decrement update @n set n=n-1 where e=0 -- normalize while 1=1 begin declare @e table(e int) delete @e insert @e(e) select e from @n where n<0 if @@rowcount=0 break update @n set n=n+10 where e in (select e from @e) update @n set n=n-1 where e in (select e+1 from @e) end set @c=0 select @c=count(*) from @n where n>0 if @c=0 break end select @c=max(e) from @f set @x='' declare @l varchar(max) while @c>=0 begin set @l='0' select @l=convert(varchar(max),n) from @f where e=@c set @x=@x+@l set @c=@c-1 end return @x end Example: print dbo.bigfact('69') returns: 171122452428141311372468338881272839092270544893520369393648040923257279754140647424000000000000000 A: Language Name: ChucK Moog moog => dac; 4.0 => moog.gain; for (0 => int i; i < 8; i++) { <<< factorial(i) >>>; } fun int factorial(int n) { 1 => int result; if (n != 0) { n * factorial(n - 1) => result; } Std.mtof(result % 128) => moog.freq; 0.25::second => now; return result; } And it sounds like this. Not terribly interesting, but, hey, it's just a factorial function! 
A: Simple solutions are the best: #include <stdexcept> long long fact(long long f) { static long long fact [] = { 1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800, 39916800, 479001600, 6227020800LL, 87178291200LL, 1307674368000LL, 20922789888000LL }; static long max = sizeof(fact)/sizeof(fact[0]); if ((f < 0) || (f >= max)) { throw std::range_error("Factorial Range Error"); } return fact[f]; } A: Common Lisp: Lisp as God intended it to be used (that is, with LOOP) (defun fact (n) (loop for i from 1 to n for acc = 1 then (* acc i) finally (return acc))) Now, if someone can come up with a version based on FORMAT... A: Common Lisp: FORMAT (obfuscated) Okay, so I'll give it a try myself. (defun format-fact (stream arg colonp atsignp &rest args) (destructuring-bind (n acc) arg (format stream "~[~A~:;~*~/format-fact/~]" (1- n) acc (list (1- n) (* acc n))))) (defun fact (n) (parse-integer (format nil "~/format-fact/" (list n 1)))) There has to be a nicer, even more obscure FORMAT-based implementation. This one is pretty straight-forward and boring, simply using FORMAT as an IF replacement. Obviously, I'm not a FORMAT expert. A: AWK #!/usr/bin/awk -f { result=1; for(i=$1;i>0;i--){ result=result*i; } print result; } A: #Language: T-SQL, C# #Style: Custom Aggregate Another crazy way would be to create a custom aggregate and apply it over a temporary table of the integers 1..n. /* ProductAggregate.cs */ using System; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; [Serializable] [SqlUserDefinedAggregate(Format.Native)] public struct product { private SqlDouble accum; public void Init() { accum = 1; } public void Accumulate(SqlDouble value) { accum *= value; } public void Merge(product value) { Accumulate(value.Terminate()); } public SqlDouble Terminate() { return accum; } } add this to sql create assembly ProductAggregate from 'ProductAggregate.dll' with permission_set=safe -- mod path to point to actual dll location on disk. create aggregate product(@a float) returns float external name ProductAggregate.product create the table (there should be a built-in way to do this in SQL -- hmm. a question for SO?) select 1 as n into #n union select 2 union select 3 union select 4 union select 5 then finally select dbo.product(n) from #n A: Haskell: factorial n = product [1..n] A: Eiffel class APPLICATION inherit ARGUMENTS create make feature -- Initialization make is -- Run application. local l_fact: NATURAL_64 do l_fact := factorial(argument(1).to_natural_64) print("Result is: " + l_fact.out) end factorial(n: NATURAL_64): NATURAL_64 is -- require positive_n: n >= 0 do if n = 0 then Result := 1 else Result := n * factorial(n-1) end end end -- class APPLICATION A: befunge-93 v >v"Please enter a number (1-16) : "0< ,: >$*99g1-:99p#v_.25*,@ ^_&:1-99p>:1-:!|10 < ^ < An esoteric language by Chris Pressey of Cat's Eye Technologies. A: Perl (Y-combinator/Functional) print sub { my $f = shift; sub { my $f1 = shift; $f->( sub { $f1->( $f1 )->( @_ ) } ) }->( sub { my $f2 = shift; $f->( sub { $f2->( $f2 )->( @_ ) } ) } ) }->( sub { my $h = shift; sub { my $n = shift; return 1 if $n <=1; return $n * $h->($n-1); } })->(5); Everything after 'print' and before the '->(5)' represents the subroutine. The factorial part is in the final "sub {...}". Everything else is to implement the Y-combinator. A: J fact=. verb define */ >:@i. y ) A: Smalltalk, using a closure fac := [ :x | x = 0 ifTrue: [ 1 ] ifFalse: [ x * (fac value: x -1) ]]. 
Transcript show: (fac value: 24) "-> 620448401733239439360000" NB does not work in Squeak, requires full closures. A: Smalltalk, memoized Define a method on Dictionary Dictionary >> fac: x ^self at: x ifAbsentPut: [ x * (self fac: x - 1) ] usage d := Dictionary new. d at: 0 put: 1. d fac: 24 A: Smalltalk, 1-Liner (1 to: 24) inject: 1 into: [ :a :b | a * b ] A: Mathematica: non-recursive fact[n_] := Times @@ Range[n] Which is syntactic sugar for Apply[Times, Range[n]]. I think that's the best way to do it, not counting the built-in n!, of course. Note that that automatically uses bignums. A: Common Lisp version: (defun ! (n) (reduce #'* (loop for i from 2 below (+ n 1) collect i))) Seems to be quite fast. * (! 42) 1405006117752879898543142606244511569936384000000000 A: Delphi iterative While recursion can be the only decent solution to a problem, for factorials it is not. To describe it, yes. To program it, no. Iteration is cheapest. This function calculates factorials for somewhat larger arguments. function Factorial(aNumber: Int64): String; var F: Double; begin F := 0; while aNumber > 1 do begin F := F + log10(aNumber); dec(aNumber); end; Result := FloatToStr(Power(10, Frac(F))) + ' * 10^' + IntToStr(Trunc(F)); end; 1000000! = 8.2639327850046 * 10^5565708 A: Logo ? to factorial :n > ifelse :n = 0 [output 1] [output :n * factorial :n - 1] > end And to invoke: ? print factorial 5 120 This is using the UCBLogo dialect of logo. A: Perl, pessimal: # Because there are just so many other ways to get programs wrong... use strict; use warnings; sub factorial { my ($x)=@_; for(my $f=1;;$f++) { my $tmp=$f; foreach my $g (1..$x) { $tmp/=$g; } return $f if $tmp == 1; } } I trust I get extra points for not using the '*' operator... A: *NIX Shell Linux version: seq -s'*' 42 | bc BSD version: jot -s'*' 42 | bc A: FORTH, iterative 1 liner : FACT 1 SWAP 1 + 1 DO I * LOOP ; A: Scheme evolution Regular Scheme program: (define factorial (lambda (n) (if (= n 0) 1 (* n (factorial (- n 1)))))) Should work, but notice that calling this function on large numbers will extend the stack on every recursion, which is bad in languages like C and Java. Continuation-passing style (define factorial (lambda (n) (factorial_cps n (lambda (k) k)))) (define factorial_cps (lambda (n k) (if (zero? n) (k 1) (factorial (- n 1) (lambda (v) (k (* n v))))))) Ah, this way, we don't grow our stack every recursion because we can extend a continuation instead. However, C doesn't have continuations. Representation-independent CPS (define factorial (lambda (n) (factorial_cps n (k_)))) (define factorial_cps (lambda (n k) (if (zero? n) (apply_k 1) (factorial (- n 1) (k_extend n k)))) (define apply_k (lambda (ko v) (ko v))) (define kt_empty (lambda () (lambda (v) v))) (define kt_extend (lambda () (lambda (v) (apply_k k (* n v))))) Notice that responsibility for representation of the continuations used in the original CPS program has been shifted to the kt_ helper procedures. Representation-independent CPS using ParentheC unions Since representation of the continuations is in the helper procedures, we can switch to using ParentheC instead, with kt_ being a type designator. (define factorial (lambda (n) (factorial_cps n (kt_empty)))) (define factorial_cps (lambda (n k) (if (zero? n) (apply_k 1) (factorial (- n 1) (kt_extend n k)))) (define-union kt (empty) (extend n k)) (define apply_k (lambda () (union-case kh kt [(empty) v] [(extend n k) (begin (set! kh k) (set! 
v (* n v)) (apply_k))]))) Trampolined, registerized ParentheC program That's not enough. We now replace all function calls by instead setting global variables and a program counter. Procedures are now labels suitable for GOTO statements. (define-registers n k kh v) (define-program-counter pc) (define-label main (begin (set! n 5) ; what is the factorial of 5?? (set! pc factorial_cps) (mount-trampoline kt_empty k pc) (printf "Factorial of 5: ~d\n" v))) (define-label factorial_cps (if (zero? n) (begin (set! kh k) (set! v 1) (set! pc apply_k)) (begin (set! k (kt_extend n k)) (set! n (- n 1)) (set! pc factorial_cps)))) (define-union kt (empty dismount) ; get off the trampoline! (extend n k)) (define-label apply_k (union-case kh kt [(empty dismount) (dismount-trampoline dismount)] [(extend n k) (begin (set! kh k) (set! v (* n v)) (set! pc apply_k))])) Oh look, we have a main procedure now too. Now all that's left to do is save this file as fact5.pc and run it through ParentheC's pc2c: > (load "pc2c.ss") > (pc2c "fact5.pc" "fact5.c" "fact5.h") Could it be? We got fact5.c and fact5.h. Let's see... $ gcc fact5.c -o fact5 $ ./fact5 Factorial of 5: 120 Success! We have converted a recursive Scheme program into a non-recursive C program! And it only took several hours and many forehead-shaped impressions in the wall to do it! For convenience, fact5.c and and fact5.h. A: Python: Functional, One-liner factorial = lambda n: reduce(lambda x,y: x*y, range(1, n+1), 1) NOTE: * *It supports big integers. Example: print factorial(100) 93326215443944152681699238856266700490715968264381621468592963895217599993229915\ 608941463976156518286253697920827223758251185210916864000000000000000000000000 * *It does not work for n < 0. A: Polyglot: 5 languages, all using bignums So, I wrote a polyglot which works in the three languages I often write in, as well as one from my other answer to this question and one I just learned today. It's a standalone program, which reads a single line containing a nonnegative integer and prints a single line containing its factorial. Bignums are used in all languages, so the maximum computable factorial depends only on your computer's resources. * *Perl: uses built-in bignum package. Run with perl FILENAME. *Haskell: uses built-in bignums. Run with runhugs FILENAME or your favorite compiler's equivalent. *C++: requires GMP for bignum support. To compile with g++, use g++ -lgmpxx -lgmp -x c++ FILENAME to link against the right libraries. After compiling, run ./a.out. Or use your favorite compiler's equivalent. *brainf*ck: I wrote some bignum support in this post. Using Muller's classic distribution, compile with bf < FILENAME > EXECUTABLE. Make the output executable and run it. Or use your favorite distribution. *Whitespace: uses built-in bignum support. Run with wspace FILENAME. Edit: added Whitespace as a fifth language. Incidentally, do not wrap the code with <code> tags; it breaks the Whitespace. Also, the code looks much nicer in fixed-width. char //# b=0+0{- |0*/; #>>>>,----------[>>>>,-------- #define a/*#--]>>>>++<<<<<<<<[>++++++[<------>-]<-<<< #Perl ><><><> <> <> <<]>>>>[[>>+<<-]>>[<<+>+>-]<-> #C++ --><><> <><><>< > < > < +<[>>>>+<<<-<[-]]>[-] #Haskell >>]>[-<<<<<[<<<<]>>>>[[>>+<<-]>>[<<+>+>-]>>] #Whitespace >>>>[-[>+<-]+>>>>]<<<<[<<<<]<<<<[<<<< #brainf*ck > < ]>>>>>[>>>[>>>>]>>>>[>>>>]<<<<[[>>>>*/ exp; ;//;#+<<<<-]<<<<]>>>>+<<<<<<<[<<<<][.POLYGLOT^5. 
#include <gmpxx.h>//]>>>>-[>>>[>>>>]>>>>[>>>>]<<<<[>> #define eval int main()//>+<<<-]>>>[<<<+>>+>-> #include <iostream>//<]<-[>>+<<[-]]<<[<<<<]>>>>[>[>>> #define print std::cout << // > <+<-]>[<<+>+>-]<<[>>> #define z std::cin>>//<< +<<<-]>>>[<<<+>>+>-]<->+++++ #define c/*++++[-<[-[>>>>+<<<<-]]>>>>[<<<<+>>>>-]<<*/ #define abs int $n //>< <]<[>>+<<<<[-]>>[<<+>>-]]>>]< #define uc mpz_class fact(int $n){/*<<<[<<<<]<<<[<< use bignum;sub#<<]>>>>-]>>>>]>>>[>[-]>>>]<<<<[>>+<<-] z{$_[0+0]=readline(*STDIN);}sub fact{my($n)=shift;#>> #[<<+>+>-]<->+<[>-<[-]]>[-<<-<<<<[>>+<<-]>>[<<+>+>+*/ uc;if($n==0){return 1;}return $n*fact($n-1); }//;# eval{abs;z($n);print fact($n);print("\n")/*2;};#-]<-> '+<[>-<[-]]>]<<[<<<<]<<<<-[>>+<<-]>>[<<+>+>-]+<[>-+++ -}-- <[-]]>[-<<++++++++++<<<<-[>>+<<-]>>[<<+>+>-++ fact 0 = 1 -- ><><><>< > <><>< ]+<[>-<[-]]>]<<[<<+ + fact n=n*fact(n-1){-<<]>>>>[[>>+<<-]>>[<<+>+++>+-} main=do{n<-readLn;print(fact n)}-- +>-]<->+<[>>>>+<<+ {-x<-<[-]]>[-]>>]>]>>>[>>>>]<<<<[>+++++++[<+++++++>-] <--.<<<<]+written+by+++A+Rex+++2009+.';#+++x-}--x*/;} A: APL (oddball/one-liner): ×/⍳X * *⍳X expands X into an array of the integers 1..X *×/ multiplies every element in the array Or with the built-in operator: !X Source: http://www.webber-labs.com/mpl/lectures/ppt-slides/01.ppt A: Perl6 sub factorial ($n) { [*] 1..$n } I hardly know about Perl6. But I guess this [*] operator is same as Haskell's product. This code runs on Pugs, and maybe Parrot (I didn't check it.) Edit This code also works. sub postfix:<!> ($n) { [*] 1..$n } # This function(?) call like below ... It looks like mathematical notation. say 10!; A: x86-64 Assembly: Procedural You can call this from C (only tested with GCC on linux amd64). Assembly was assembled with nasm. section .text global factorial ; factorial in x86-64 - n is passed in via RDI register ; takes a 64-bit unsigned integer ; returns a 64-bit unsigned integer in RAX register ; C declaration in GCC: ; extern unsigned long long factorial(unsigned long long n); factorial: enter 0,0 ; n is placed in rdi by caller mov rax, 1 ; factorial = 1 mov rcx, 2 ; i = 2 loopstart: cmp rcx, rdi ja loopend mul rcx ; factorial *= i inc rcx jmp loopstart loopend: leave ret A: Recursively in Inform 7 (it reminds you of COBOL because it's for writing text adventures; proportional font is deliberate): To decide what number is the factorial of (n - a number):     if n is zero, decide on one;     otherwise decide on the factorial of (n minus one) times n. If you want to actually call this function ("phrase") from a game you need to define an action and grammar rule: "The factorial game" [this must be the first line of the source] There is a room. [there has to be at least one!] Factorialing is an action applying to a number. Understand "factorial [a number]" as factorialing. Carry out factorialing:     Let n be the factorial of the number understood;     Say "It's [n]". A: lolcode: sorry I couldn't resist xD HAI CAN HAS STDIO? I HAS A VAR I HAS A INT I HAS A CHEEZBURGER I HAS A FACTORIALNUM IM IN YR LOOP UP VAR!!1 TIEMZD INT!![CHEEZBURGER] UP FACTORIALNUM!!1 IZ VAR BIGGER THAN FACTORIALNUM? GTFO IM OUTTA YR LOOP U SEEZ INT KTHXBYE A: Haskell: ones = 1 : ones integers = head ones : zipWith (+) integers (tail ones) factorials = head integers : zipWith (*) factorials (tail integers) A: C#: LINQ public static int factorial(int n) { return (Enumerable.Range(1, n).Aggregate(1, (previous, value) => previous * value)); } A: Erlang: tail recursive fac(0) -> 1; fac(N) when N > 0 -> fac(N, 1). 
fac(1, R) -> R; fac(N, R) -> fac(N - 1, R * N). A: Brainf*ck +++++ >+<[[->>>>+<<<<]>>>>[-<<<<+>>+>>]<<<<>[->>+<<]<>>>[-<[->>+<<]>>[-<<+<+>>>]<]<[-]><<<-] Written by Michael Reitzenstein. A: BASIC: old school 10 HOME 20 INPUT N 30 LET ANS = 1 40 FOR I = 1 TO N 50 ANS = ANS * I 60 NEXT I 70 PRINT ANS A: C++ int factorial(int n) { int f = 1; for (int i = 1; i <= n; i++) f *= i; return f; } A: Java: functional int factorial(int x) { return x == 0 ? 1 : x * factorial(x-1); } A: Haskell: Functional fact 0 = 1 fact n = n * fact (n-1) A: This one not only calculates n!, it is also O(n!). It may have problems if you want to calculate anything "big" though. long f(long n) { long r = 1; for (long i = 1; i <= n; i++) r = r * i; return r; } long factorial(long n) { // iterative implementation should be efficient long result = 0; for (long i = 0; i < f(n); i++) result = result + 1; return result; } A: Bourne Shell: Functional factorial() { if [ $1 -eq 0 ] then echo 1 return fi a=`expr $1 - 1` expr $1 \* `factorial $a` } Also works for Korn Shell and Bourne Again Shell. :-) A: Lisp recursive: (defun factorial (x) (if (<= x 1) 1 (* x (factorial (- x 1))))) A: JavaScript Using anonymous functions: var f = function(n){ if(n>1){ return arguments.callee(n-1)*n; } return 1; } A: C: One liner, procedural int f(int n) { for (int i = n - 1; i > 0; n *= i, i--); return n ? n : 1; } I used int's for brevity; use other types to support larger numbers. A: Python, C/C++ (weave): Multi-Language, Procedural Four implementations: *[weave] *[python] *[psyco] *[list] Code: #!/usr/bin/env python """ weave_factorial.py """ # [weave] factorial() as extension module in C++ from scipy.weave import ext_tools def build_factorial_ext(): func = ext_tools.ext_function( 'factorial', r""" unsigned long long i = 1; for ( ; n > 1; --n) i *= n; PyObject *o = PyLong_FromUnsignedLongLong(i); return_val = o; Py_XDECREF(o); """, ['n'], {'n': 1}, # effective type declaration {}) mod = ext_tools.ext_module('factorial_ext') mod.add_function(func) mod.compile() try: from factorial_ext import factorial as factorial_weave except ImportError: build_factorial_ext() from factorial_ext import factorial as factorial_weave # [python] pure python procedural factorial() def factorial_python(n): i = 1 while n > 1: i *= n n -= 1 return i # [psyco] factorial() psyco-optimized try: import psyco factorial_psyco = psyco.proxy(factorial_python) except ImportError: pass # [list] list-lookup factorial() factorials = map(factorial_python, range(21)) factorial_list = lambda n: factorials[n] Measure relative performance: $ python -mtimeit \ -s "from weave_factorial import factorial_$label as f" "f($n)" * *n = 12 * *[weave] 0.70 µsec (2) *[python] 3.8 µsec (9) *[psyco] 1.2 µsec (3) *[list] 0.43 µsec (1) *n = 20 * *[weave] 0.85 µsec (2) *[python] 9.2 µsec (21) *[psyco] 4.3 µsec (10) *[list] 0.43 µsec (1) µsec stands for microseconds.
def factorial( value: BigInt ): BigInt = value match { case 0 => 1 case _ => value * factorial( value - 1 ) } A: Occam-pi PROC subprocess(MOBILE CHAN INT parent.out!,parent.in?) INT value: SEQ parent.in ? value IF value = 1 SEQ parent.out ! value OTHERWISE INITIAL MOBILE CHAN INT child.in IS MOBILE CHAN INT: INITIAL MOBILE CHAN INT child.out IS MOBILE CHAN INT: FORKING INT newvalue: SEQ FORK subprocess(child.in!,child.out?) child.out ! (value-1) child.in ? newvalue parent.out ! (newvalue*value) : PROC main(CHAN BYTE in?,src!,kyb?) INITIAL INT value IS 0: INITIAL MOBILE CHAN INT child.out is MOBILE CHAN INT INITIAL MOBILE CHAN INT child.in is MOBILE CHAN INT SEQ WHILE TRUE SEQ subprocess(child.in!,child.out?) child.out ! value child.in ? value src ! value: value := value + 1 : A: OCaml Lest anyone believe OCaml and oddball go hand-in-hand, I thought I would provide a sane implementation of factorial. # let rec factorial n = if n=0 then 1 else n * factorial(n - 1);; I don't think I made my case very well... A: Genuinely functional Java: public final class Factorial { public static void main(String[] args) { final int n = Integer.valueOf(args[0]); System.out.println("Factorial of " + n + " is " + create(n).apply()); } private static Function create(final int n) { return n == 0 ? new ZeroFactorialFunction() : new NFactorialFunction(n); } interface Function { int apply(); } private static class NFactorialFunction implements Function { private final int n; public NFactorialFunction(final int n) { this.n = n; } @Override public int apply() { return n * Factorial.create(n - 1).apply(); } } private static class ZeroFactorialFunction implements Function { @Override public int apply() { return 1; } } } A: C# factorial using recursion in a single line private static int factorial(int n){ if (n == 0)return 1;else return n * factorial(n - 1); } A: dc Note: clobbers the e and f registers: [2++d]se[d1-d_1<fd0>e*]sf To use, put the value you want to take the factorial of on the top of the stack and then execute lfx (load the f register and execute it), which then pops the top of the stack and pushes that value's factorial. Explanation: if the top of the stack is x, then the first part makes the top of the stack look like (x, x-1). If the new top-of-stack is non-negative, it calls factorial recursively, so now the stack is (x, (x-1)!) for x >= 1, or (0, -1) for x = 0. Then, if the new top-of-stack is negative, it executes 2++d, which replaces the (0, -1) with (1, 1). Finally, it multiplies the top two values on the stack. A: R - using S4 methods (recursively) setGeneric( 'fct', function( x ) { standardGeneric( 'fct' ) } ) setMethod( 'fct', 'numeric', function( x ) { lapply( x, function(a) { if( a == 0 ) 1 else a * fct( a - 1 ) } ) } ) Has the advantage that you can pass arrays of numbers in, and it will work them all out... eg: > fct( c( 3, 5, 6 ) ) [[1]] [1] 6 [[2]] [1] 120 [[3]] [1] 720 A: Iswim/Lucid: factorial = 1 fby factorial * (time+1); A: Python, one liner: A bit cleaner than the other python answer. This will fail if the input is less than 2 (and the previous answer fails if it is less than 1). import operator def fact(n): return reduce(operator.mul, xrange(2, n + 1)) A: Common Lisp * *Call it by name: ! *Tail recursive *Common Lisp handles arbitrarily large numbers (defun ! (n) "factorial" (labels ((fac (n prod) (if (zerop n) prod (fac (- n 1) (* prod n))))) (fac n 1))) edit: or with accumulator as optional parameter: (defun ! (n &optional (prod 1)) "factorial" (if (zerop n) prod (!
(- n 1) (* prod n)))) or as a reduce, at the cost of a bigger memory footprint and more consing: (defun range (start end &optional acc) "range from start inclusive to end exclusive" (if (>= start end) (nreverse acc) (range (+ start 1) end (cons start acc)))) (defun ! (n) "factorial" (reduce #'* (range 1 (+ n 1)))) A: Factor USE: math.ranges : factorial ( n -- n! ) 1 [a,b] product ; A: In MUMPS: fact(N) N F,I S F=1 F I=2:1:N S F=F*I QUIT F Or, if you're a fan of indirection: fact(N) N F,I S F=1 F I=2:1:N S F=F_"*"_I QUIT @F A: ActionScript: Procedural/OOP function f(n) { var result = n>1 ? arguments.callee(n-1)*n : 1; return result; } // function call f(3); A: Hmm... no TCL proc factorial {n} { if { $n == 0 } { return 1 } return [expr {$n*[factorial [expr {$n-1}]]}] } puts [factorial 6] But of course that doesn't work worth a damn for large values of n... we can do better with tcllib! package require math::bignum proc factorial {n} { if { $n == 0 } { return 1 } return [ ::math::bignum::tostr [ ::math::bignum::mul [ ::math::bignum::fromstr $n] [ ::math::bignum::fromstr [ factorial [expr {$n-1} ] ]]]] } puts [factorial 60] Look at all those ]'s at the end. This is practically LISP! I'll leave the version for values of n>2^32 as an exercise for the reader. A: Mathematica, Memoized f[n_ /; n < 2] := 1 f[n_] := (f[n] = n*f[n - 1]) Mathematica supports n! natively, but this shows how to make definitions on the fly. When you execute f[2], this code will make a definition f[2]=2 which will subsequently be executed no differently than if you'd hard-coded it; no need for an internal data structure; you just use the language's own function definition machinery. A: Lisp : tail-recursive (defun factorial(x) (labels((f (x acc) (if (> x 1) (f (1- x)(* x acc)) acc))) (f x 1))) A: Another ruby one. class Integer def fact return 1 if self.zero? (1..self).to_a.inject(:*) end end This works if to_proc is supported on symbols. A: REBOL Math is definitely not one of REBOL's strong points, since it lacks arbitrary precision integers. For the sake of completeness, I thought I'd add it anyway. Here's a standard, naïve recursive implementation: fac: func [ [catch] n [integer!] ] [ if n < 0 [ throw make error! "Hey dummy, your argument was less than 0!" ] either n = 0 [ 1 ] [ n * fac (n - 1) ] ] And that's about it. Move along, folks, nothing to see here ... :) A: Here's my proposal. Runs in Mathematica, works fine: gen[f_, n_] := Module[{id = -1, val = Table[Null, {n}], visit}, visit[k_] := Module[{t}, id++; If[k != 0, val[[k]] = id]; If[id == n, f[val]]; Do[If[val[[t]] == Null, visit[t]], {t, 1, n}]; id--; val[[k]] = Null;]; visit[0]; ] Factorial[n_] := Module[{res=0}, gen[res++&, n]; res] Update: OK, here's how it works: the visit function is from Sedgewick's Algorithms book, it "visits" all permutations of length n. Upon the visit, it calls function f with the permutation as an argument. So, Factorial enumerates all permutations of length n, and for each permutation the counter res is increased, thus computing n! in O((n+1)!) time. A: Python: def factorial(n): return reduce(lambda x, y: x * y, range(1, n + 1)) A: PHP - 59 chars function f($n){return array_reduce(range(1,$n),'bcmul',1);} Improved Version - 27 chars array_product(range(1,$n)); A: SETL ...where Haskell and Python borrowed their list comprehensions from. proc factorial(n); return 1 */ {1..n}; end factorial; And the built-in INTEGER type is arbitrary-precision, so this will work for any positive n.
A: Befunge: 0&>:1-:v v *_$.@ ^ _$>\:^ A: CLOS I see Common Lisp solutions abusing recursion, LOOP, and even FORMAT. I guess it's time for somebody to write a solution that abuses CLOS! (defgeneric factorial (n)) (defmethod factorial ((n (eql 0))) 1) (defmethod factorial ((n integer)) (* n (factorial (1- n)))) (Can your favorite language's object system dispatcher do that?) A: Golfscript: designed for golfing, of course ~),1>{*}* * *~ evaluates the input string (to an integer) *) increments the number *, is range (4, becomes [0 1 2 3]) *1> selects values whose index is 1 or bigger *{*}* folds multiplication over the list *Stack contents are printed when the program terminates. To run: echo 5 | ruby gs.rb fact.gs A: Oh fork(), it's another example in Perl. This will make use of your multiple core CPUs... although perhaps not in the most effective manner. The open statement clones the process with fork and opens a pipe from the child process to the parent. The work of multiplying numbers 2 at a time is split among a tree of very short lived processes. Of course, this example is a bit silly. The point is that if you actually had more difficult calculations to do then this example illustrates one way to divide up the work in parallel. #!/usr/bin/perl -w use strict; use bigint; print STDOUT &main::rangeProduct(1,$ARGV[0])."\n"; sub main::rangeProduct { my($l, $h) = @_; return $l if ($l==$h); return $l*$h if ($l==($h-1)); # arghhh - multiplying more than 2 numbers at a time is too much work # find the midpoint and split the work up :-) my $m = int(($h+$l)/2); my $pid = open(my $KID, "-|"); if ($pid){ # parent my $X = &main::rangeProduct($l,$m); my $Y = <$KID>; chomp($Y); close($KID); die "kid failed" unless defined $Y; return $X*$Y; } else { # kid print STDOUT &main::rangeProduct($m+1,$h)."\n"; exit(0); } } A: T-SQL: Recursive CTE Inline table function using a recursive common table expression. SQL Server 2005 and up. CREATE FUNCTION dbo.Factorial(@n int) RETURNS TABLE AS RETURN WITH RecursiveCTE (N, Value) AS ( SELECT 1, CAST(1 AS decimal(38,0)) UNION ALL SELECT N+1, CAST(Value*(N+1) AS decimal(38,0)) FROM RecursiveCTE ) SELECT TOP 1 Value FROM RecursiveCTE WHERE N = @n A: Haskell : Functional - Tail Recursive factorial n = factorial' n 1 factorial' 0 a = a factorial' n a = factorial' (n-1) (n*a) A: FoxPro: function factorial parameters n return iif( n>0, n*factorial(n-1), 1) A: Common Lisp, since no one has committed that yet: (defun factorial (n) (if (<= n 1) 1 (* n (factorial (1- n))))) A: Common Lisp I'm fairly sure this could be more efficient. It is my first lisp function other than "hello, world" and typing in the example code in the third chapter. Practical Common Lisp is a great text. This function does seem to handle large factorials well. (defun factorial (x) (if (< x 2) (return-from factorial (print 1))) (let ((tempx 1) (ans 1)) (loop until (equalp x tempx) do (incf tempx) (setf ans (* tempx ans))) (list ans))) A: In Io: factorial := method(n, if (list(0, 1) contains(n), 1, n * factorial(n - 1) ) ) A: C++ constexpr constexpr uint64_t fact(uint32_t n) { return (n==0) ? 1:n*fact(n-1); } A: VB6: Private Function factCalculation(ByVal Number%) Dim intNum% intNum = 1 For i = 2 To Number intNum = intNum * i Next i factCalculation = intNum End Function Private Sub Form_Load() Dim FactResult% : FactResult = factCalculation(3) 'e.g Print FactResult End Sub
{ "language": "en", "url": "https://stackoverflow.com/questions/23930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Algorithm to compare two images Given two different image files (in whatever format I choose), I need to write a program to predict the chance of one being the illegal copy of another. The author of the copy may do stuff like rotating, making negative, or adding trivial details (as well as changing the dimension of the image). Do you know any algorithm to do this kind of job? A: In the form described by you, the problem is tough. Do you consider copy-and-paste of part of the image into another, larger image a copy? etc. What we loosely refer to as duplicates can be difficult for algorithms to discern. Your duplicates can be either: * *Exact Duplicates *Near-exact Duplicates (minor edits of the image etc) *Perceptual Duplicates (same content, but different view, camera etc) Nos. 1 & 2 are easier to solve. No. 3 is very subjective and still a research topic. I can offer a solution for Nos. 1 & 2. Both solutions use the excellent imagehash hashing library: https://github.com/JohannesBuchner/imagehash * *Exact duplicates Exact duplicates can be found using a perceptual hashing measure. The phash library is quite good at this. I routinely use it to clean training data. Usage (from the github site) is as simple as: from PIL import Image import imagehash # image_fns : List of training image files img_hashes = {} for img_fn in sorted(image_fns): hash = imagehash.average_hash(Image.open(img_fn)) if hash in img_hashes: print( '{} duplicate of {}'.format(img_fn, img_hashes[hash]) ) else: img_hashes[hash] = img_fn *Near-Exact Duplicates In this case you will have to set a threshold and compare the hash values for their distance from each other. This has to be done by trial-and-error for your image content. from PIL import Image import imagehash # image_fns : List of training image files img_hashes = {} epsilon = 50 for img_fn1, img_fn2 in zip(image_fns, image_fns[::-1]): if img_fn1 == img_fn2: continue hash1 = imagehash.average_hash(Image.open(img_fn1)) hash2 = imagehash.average_hash(Image.open(img_fn2)) if hash1 - hash2 < epsilon: print( '{} is near duplicate of {}'.format(img_fn1, img_fn2) ) If you take a step back, this is easier to solve if you watermark the master images. You will need to use a watermarking scheme to embed a code into the image. To take a step back, as opposed to some of the low-level approaches (edge detection etc) suggested by some folks, a watermarking method is superior because: It is resistant to Signal processing attacks ► Signal enhancement – sharpening, contrast, etc. ► Filtering – median, low pass, high pass, etc. ► Additive noise – Gaussian, uniform, etc. ► Lossy compression – JPEG, MPEG, etc. It is resistant to Geometric attacks ► Affine transforms ► Data reduction – cropping, clipping, etc. ► Random local distortions ► Warping Do some research on watermarking algorithms and you will be on the right path to solving your problem. (Note: you can benchmark your method using the STIRMARK dataset. It is an accepted standard for this type of application.) A: This is just a suggestion, it might not work and I'm prepared to be called on this. This will generate false positives, but hopefully not false negatives. * *Resize both of the images so that they are the same size (I assume that the ratios of widths to lengths are the same in both images). *Compress a bitmap of both images with a lossless compression algorithm (e.g. gzip). *Find pairs of files that have similar file sizes.
For instance, you could just sort every pair of files you have by how similar the file sizes are and retrieve the top X. As I said, this will definitely generate false positives, but hopefully not false negatives. You can implement this in five minutes, whereas the Porikli et al. approach would probably require extensive work. A: I believe if you're willing to apply the approach to every possible orientation and to negative versions, a good start to image recognition (with good reliability) is to use eigenfaces: http://en.wikipedia.org/wiki/Eigenface Another idea would be to transform both images into vectors of their components. A good way to do this is to create a vector that operates in x*y dimensions (x being the width of your image and y being the height), with the value for each dimension applying to the (x,y) pixel value. Then run a variant of K-Nearest Neighbours with two categories: match and no match. If it's sufficiently close to the original image it will fit in the match category, if not then it won't. K-Nearest Neighbours (KNN) can be found here, and there are other good explanations of it on the web too: http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm The benefit of KNN is that the more variants you're comparing to the original image, the more accurate the algorithm becomes. The downside is you need a catalogue of images to train the system first. A: Read the paper: Porikli, Fatih, Oncel Tuzel, and Peter Meer. “Covariance Tracking Using Model Update Based on Means on Riemannian Manifolds”. (2006) IEEE Computer Vision and Pattern Recognition. I was successfully able to detect overlapping regions in images captured from adjacent webcams using the technique presented in this paper. My covariance matrix was composed of Sobel, Canny and SUSAN aspect/edge detection outputs, as well as the original greyscale pixels. A: An idea: * *use keypoint detectors to find scale- and transform-invariant descriptors of some points in the image (e.g. SIFT, SURF, GLOH, or LESH). *try to align keypoints with similar descriptors from both images (like in panorama stitching), allow for some image transforms if necessary (e.g. scale & rotate, or elastic stretching). *if many keypoints align well (there exists a transform such that the keypoint alignment error is low; or the transformation "energy" is low, etc.), you likely have similar images. Step 2 is not trivial. In particular, you may need to use a smart algorithm to find the most similar keypoint on the other image. Point descriptors are usually very high-dimensional (like a hundred parameters), and there are many points to look through. kd-trees may be useful here; hash lookups don't work well. Variants: * *Detect edges or other features instead of points. A: These are simply ideas I've had thinking about the problem, never tried it but I like thinking about problems like this! Before you begin Consider normalising the pictures: if one is a higher resolution than the other, consider the option that one of them is a compressed version of the other, so scaling the resolution down might provide more accurate results. Consider scanning various prospective areas of the image that could represent zoomed portions of the image and various positions and rotations. It starts getting tricky if one of the images is a skewed version of another; these are the sort of limitations you should identify and compromise on. Matlab is an excellent tool for testing and evaluating images.
Testing the algorithms You should test (at the minimum) against a large, human-analysed set of test data where matches are known beforehand. If, for example, your test data has 1,000 images where 5% of them match, you now have a reasonably reliable benchmark. An algorithm that finds 10% positives is not as good as one that finds 4% of positives in our test data. However, one algorithm may find all the matches but also have a large 20% false positive rate, so there are several ways to rate your algorithms. The test data should attempt to be designed to cover as many types of dynamics as possible that you would expect to find in the real world. It is important to note that each algorithm, to be useful, must perform better than random guessing, otherwise it is useless to us! You can then apply your software into the real world in a controlled way and start to analyse the results it produces. This is the sort of software project which can go on ad infinitum; there are always tweaks and improvements you can make, and it is important to bear that in mind when designing it, as it is easy to fall into the trap of the never-ending project. Colour Buckets With two pictures, scan each pixel and count the colours. For example you might have the 'buckets': white red blue green black (Obviously you would have a higher resolution of counters). Every time you find a 'red' pixel, you increment the red counter. Each bucket can be representative of a spectrum of colours; the higher the resolution the more accurate, but you should experiment with an acceptable difference rate. Once you have your totals, compare them to the totals for a second image. You might find that each image has a fairly unique footprint, enough to identify matches. Edge detection How about using Edge Detection? (source: wikimedia.org) With two similar pictures edge detection should provide you with a usable and fairly reliable unique footprint. Take both pictures, and apply edge detection. Maybe measure the average thickness of the edges and then calculate the probability the image could be scaled, and rescale if necessary. Below is an example of an applied Gabor Filter (a type of edge detection) in various rotations. Compare the pictures pixel for pixel, and count the matches and the non-matches. If they are within a certain threshold of error, you have a match. Otherwise, you could try reducing the resolution up to a certain point and see if the probability of a match improves. Regions of Interest Some images may have distinctive segments/regions of interest. These regions probably contrast highly with the rest of the image, and are a good item to search for in your other images to find matches. Take this image for example: (source: meetthegimp.org) The construction worker in blue is a region of interest and can be used as a search object. There are probably several ways you could extract properties/data from this region of interest and use them to search your data set. If you have more than 2 regions of interest, you can measure the distances between them. Take this simplified example: (source: per2000.eu) We have 3 clear regions of interest. The distance between regions 1 and 2 may be 200 pixels, between 1 and 3 400 pixels, and between 2 and 3 200 pixels. Search other images for similar regions of interest, normalise the distance values and see if you have potential matches. This technique could work well for rotated and scaled images. The more regions of interest you have, the more the probability of a match increases as each distance measurement matches.
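To make the distance idea concrete, here is a rough Python sketch. It assumes the regions of interest have already been reduced to centroid coordinates by some detection step (which is the hard part and is not shown), so treat the helper names and the tolerance as illustrative placeholders:

from itertools import combinations
from math import dist

def distance_signature(centroids):
    # Pairwise distances between regions, scaled so the largest is 1.0;
    # ratios of distances survive uniform scaling of the image.
    pairs = sorted(dist(a, b) for a, b in combinations(centroids, 2))
    return [d / pairs[-1] for d in pairs]

def signatures_match(sig_a, sig_b, tolerance=0.05):
    if len(sig_a) != len(sig_b):
        return False
    return all(abs(a - b) <= tolerance for a, b in zip(sig_a, sig_b))

# Regions matching the 200/400/200-pixel example above,
# and the same layout at 50% scale:
original = distance_signature([(0, 0), (200, 0), (400, 0)])
candidate = distance_signature([(0, 0), (100, 0), (200, 0)])
print(signatures_match(original, candidate))  # True

Because only distance ratios are compared, a uniformly scaled copy still matches, and rotation is handled for free since distances ignore orientation.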
It is important to think about the context of your data set. If, for example, your data set is modern art, then regions of interest would work quite well, as regions of interest were probably designed to be a fundamental part of the final image. If, however, you are dealing with images of construction sites, regions of interest may be interpreted by the illegal copier as ugly and may be cropped/edited out liberally. Keep in mind common features of your dataset, and attempt to exploit that knowledge. Morphing Morphing two images is the process of turning one image into the other through a set of steps: Note, this is different to fading one image into another! There are many software packages that can morph images. It's traditionally used as a transitional effect; two images don't usually morph into something halfway, one extreme morphs into the other extreme as the final result. Why could this be useful? Depending on the morphing algorithm you use, there may be a relationship between the similarity of images and some parameters of the morphing algorithm. In a grossly oversimplified example, one algorithm might execute faster when there are fewer changes to be made. We then know there is a higher probability that these two images share properties with each other. This technique could work well for rotated, distorted, skewed, zoomed, all types of copied images. Again, this is just an idea I have had; it's not based on any researched academia as far as I am aware (I haven't looked hard though), so it may be a lot of work for you with limited/no results. Zipping Ow's answer in this question is excellent; I remember reading about this sort of technique while studying AI. It is quite effective at comparing corpus lexicons. One interesting optimisation when comparing corpora is that you can remove words considered to be too common, for example 'The', 'A', 'And' etc. These words dilute our result; we want to work out how different the two corpora are, so these can be removed before processing. Perhaps there are similar common signals in images that could be stripped before compression? It might be worth looking into. Compression ratio is a very quick and reasonably effective way of determining how similar two sets of data are. Reading up about how compression works will give you a good idea why this could be so effective. For a fast-to-release algorithm this would probably be a good starting point. Transparency Again I am unsure how transparency data is stored for certain image types (gif, png etc), but this will be extractable and would serve as an effective simplified cut-out to compare with your data set's transparency. Inverting Signals An image is just a signal. If you play a noise from a speaker, and you play the opposite noise in another speaker in perfect sync at the exact same volume, they cancel each other out. (source: themotorreport.com.au) Invert one of the images, and add it onto your other image. Scale it/loop positions repetitively until you find a resulting image where enough of the pixels are white (or black? I'll refer to it as a neutral canvas) to provide you with a positive match, or partial match. However, consider two images that are equal, except one of them has a brighten effect applied to it: (source: mcburrz.com) Inverting one of them, then adding it to the other will not result in a neutral canvas, which is what we are aiming for. However, when comparing the pixels from both original images, we can definitely see a clear relationship between the two.
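For the pixel-level version of this invert-and-add check, here is a minimal sketch with NumPy and Pillow. It assumes both images have already been resized to the same dimensions, and the file names are hypothetical:

import numpy as np
from PIL import Image

def inversion_residue(path_a, path_b):
    # Add image A to the inverse of image B; identical images cancel
    # to the neutral value everywhere, so the residue is 0.
    a = np.asarray(Image.open(path_a).convert("L"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("L"), dtype=np.int16)
    residue = a + (255 - b) - 255   # equals a - b: distance from the neutral canvas
    return np.abs(residue).mean()

# print(inversion_residue("original.png", "suspect.png"))  # low score = likely copy

A score near zero suggests a near-exact copy. A uniform brightness change shows up as a constant offset in the residue, which is exactly the normalisation point discussed next.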
I haven't studied colour for some years now, and am unsure if the colour spectrum is on a linear scale, but if you determined the average factor of colour difference between both pictures, you can use this value to normalise the data before processing with this technique. Tree Data structures At first these don't seem to fit the problem, but I think they could work. You could think about extracting certain properties of an image (for example colour bins) and generating a Huffman tree or similar data structure. You might be able to compare two trees for similarity. This wouldn't work well for photographic data, for example, with a large spectrum of colour, but for cartoons or other reduced-colour-set images this might work. This probably wouldn't work, but it's an idea. The trie data structure is great at storing lexicons, for example a dictionary. It's a prefix tree. Perhaps it's possible to build an image equivalent of a lexicon (again I can only think of colours) to construct a trie. If you reduced say a 300x300 image into 5x5 squares, then decomposed each 5x5 square into a sequence of colours, you could construct a trie from the resulting data. If a 2x2 square contains: FFFFFF|000000|FDFD44|FFFFFF We have a fairly unique trie code that extends 24 levels; increasing/decreasing the levels (i.e. reducing/increasing the size of our sub square) may yield more accurate results. Comparing trie trees should be reasonably easy, and could possibly provide effective results. More ideas I stumbled across an interesting paper brief about classification of satellite imagery, it outlines: Texture measures considered are: cooccurrence matrices, gray-level differences, texture-tone analysis, features derived from the Fourier spectrum, and Gabor filters. Some Fourier features and some Gabor filters were found to be good choices, in particular when a single frequency band was used for classification. It may be worth investigating those measurements in more detail, although some of them may not be relevant to your data set. Other things to consider There are probably a lot of papers on this sort of thing, so reading some of them should help, although they can be very technical. It is an extremely difficult area in computing, with many fruitless hours of work spent by many people attempting to do similar things. Keeping it simple and building upon those ideas would be the best way to go. It should be a reasonably difficult challenge to create an algorithm with a better-than-random match rate, and to start improving on that really does start to get quite hard to achieve. Each method would probably need to be tested and tweaked thoroughly; if you have any information about the type of picture you will be checking as well, this would be useful. For example advertisements, many of them would have text in them, so doing text recognition would be an easy and probably very reliable way of finding matches, especially when combined with other solutions. As mentioned earlier, attempt to exploit common properties of your data set. Combining alternative measurements and techniques that can each have a weighted vote (dependent on their effectiveness) would be one way you could create a system that generates more accurate results.
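The weighted vote itself can be a few lines; in this sketch the constituent scorers and weights are placeholders, and in practice the weights would come from benchmarking each algorithm against the human-analysed test data described earlier:

# Each scorer maps a pair of images to a similarity score in [0, 1].
def combined_score(img_a, img_b, scorers, weights):
    total = sum(w * f(img_a, img_b) for f, w in zip(scorers, weights))
    return total / sum(weights)

def is_probable_copy(img_a, img_b, scorers, weights, threshold=0.8):
    return combined_score(img_a, img_b, scorers, weights) >= threshold

# e.g. scorers = [colour_bucket_score, edge_score, roi_distance_score]
#      weights = [0.2, 0.5, 0.3]  # tuned on the benchmark set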
If employing multiple algorithms, as mentioned at the beginning of this answer, one may find all the positives but have a false positive rate of 20%; it would then be of interest to study the properties/strengths/weaknesses of the other algorithms, as another algorithm may be effective in eliminating the false positives returned by the first. Be careful not to fall into attempting to complete the never-ending project; good luck! A: It is indeed much less simple than it seems :-) Nick's suggestion is a good one. To get started, keep in mind that any worthwhile comparison method will essentially work by converting the images into a different form -- a form which makes it easier to pick similar features out. Usually, this stuff doesn't make for very light reading ... One of the simplest examples I can think of is simply using the color space of each image. If two images have highly similar color distributions, then you can be reasonably sure that they show the same thing. At least, you can have enough certainty to flag it, or do more testing. Comparing images in color space will also resist things such as rotation, scaling, and some cropping. It won't, of course, resist heavy modification of the image or heavy recoloring (and even a simple hue shift will be somewhat tricky). http://en.wikipedia.org/wiki/RGB_color_space http://upvector.com/index.php?section=tutorials&subsection=tutorials/colorspace Another example involves something called the Hough Transform. This transform essentially decomposes an image into a set of lines. You can then take some of the 'strongest' lines in each image and see if they line up. You can do some extra work to try and compensate for rotation and scaling too -- and in this case, since comparing a few lines is MUCH less computational work than doing the same to entire images -- it won't be so bad. http://homepages.inf.ed.ac.uk/amos/hough.html http://rkb.home.cern.ch/rkb/AN16pp/node122.html http://en.wikipedia.org/wiki/Hough_transform A: If you're willing to consider a different approach altogether to detecting illegal copies of your images, you could consider watermarking. (from 1.4) ...inserts copyright information into the digital object without the loss of quality. Whenever the copyright of a digital object is in question, this information is extracted to identify the rightful owner. It is also possible to encode the identity of the original buyer along with the identity of the copyright holder, which allows tracing of any unauthorized copies. While it's also a complex field, there are techniques that allow the watermark information to persist through gross image alteration: (from 1.9) ... any signal transform of reasonable strength cannot remove the watermark. Hence a pirate willing to remove the watermark will not succeed unless they debase the document too much to be of commercial interest. Of course, the FAQ calls implementing this approach "...very challenging", but if you succeed with it, you get high confidence of whether the image is a copy or not, rather than a percentage likelihood. A: If you're running Linux I would suggest two tools: align_image_stack, from the package hugin-tools, a command-line program that can automatically correct rotation, scaling, and other distortions (it's mostly intended for compositing HDR photography, but works for video frames and other documents too).
More information: http://hugin.sourceforge.net/docs/manual/Align_image_stack.html compare, from the package imagemagick, is a program that can find and count the number of differing pixels in two images. Here's a neat tutorial: http://www.imagemagick.org/Usage/compare/ Using the -fuzz N% option you can increase the error tolerance; the higher the N, the higher the error tolerance for still counting two pixels as the same. align_image_stack should correct any offset, so the compare command will actually have a chance of detecting identical pixels.
{ "language": "en", "url": "https://stackoverflow.com/questions/23931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Best method to get objects from a BlockingQueue in a concurrent program? What is the best method to get objects out of a BlockingQueue, in a concurrent program, without hitting a race condition? I'm currently doing the following and I'm not convinced it is the best method: BlockingQueue<Violation> vQueue; /* in the constructor I pass in a BlockingQueue object full of violations that need to be processed - cut out for brevity */ Violation v; while ( ( v = vQueue.poll(500, TimeUnit.MILLISECONDS) ) != null ) { // do stuff with the violation } I have yet to hit a race condition... but I'm none too sure if this is truly safe. A: class Producer implements Runnable { private final BlockingQueue queue; Producer(BlockingQueue q) { queue = q; } public void run() { try { while (true) { queue.put(produce()); } } catch (InterruptedException ex) { ... handle ...} } Object produce() { ... } } class Consumer implements Runnable { private final BlockingQueue queue; Consumer(BlockingQueue q) { queue = q; } public void run() { try { while (true) { consume(queue.take()); } } catch (InterruptedException ex) { ... handle ...} } void consume(Object x) { ... } } class Setup { void main() { BlockingQueue q = new SomeQueueImplementation(); Producer p = new Producer(q); Consumer c1 = new Consumer(q); Consumer c2 = new Consumer(q); new Thread(p).start(); new Thread(c1).start(); new Thread(c2).start(); } } This example was taken from the JDK 1.6 docs of BlockingQueue. So you can see that you are doing it the right way. Here's the quote which tells you that it has to work: Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a BlockingQueue happen-before actions subsequent to the access or removal of that element from the BlockingQueue in another thread.
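As an aside, the same take-versus-timed-poll trade-off exists in other languages; here is the pattern sketched with Python's standard queue module, using a sentinel object to end the loop instead of a timeout (purely illustrative, and the Java version above remains the canonical answer):

import queue
import threading

STOP = object()  # sentinel: signals the consumer that no more work is coming

def consumer(q):
    while True:
        item = q.get()   # blocks like BlockingQueue.take(): no races, no busy-waiting
        if item is STOP:
            break
        print("processing", item)  # stand-in for "do stuff with the violation"

q = queue.Queue()
t = threading.Thread(target=consumer, args=(q,))
t.start()
for v in ("violation-1", "violation-2"):
    q.put(v)
q.put(STOP)
t.join()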
{ "language": "en", "url": "https://stackoverflow.com/questions/23950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What to do about ScanAlert? One of my clients uses McAfee ScanAlert (i.e., HackerSafe). It basically hits the site with about 1500 bad requests a day looking for security holes. Since it demonstrates malicious behavior it is tempting to just block it after a couple bad requests, but maybe I should let it exercise the UI. Is it a true test if I don't let it finish? A: Isn't it a security flaw of the site to let hackers throw everything in their arsenal against the site? Well, you should focus on closing holes, rather than trying to thwart scanners (which is a futile battle). Consider running such tests yourself. A: It's good that you block bad requests after a couple of trials, but you should let it continue. If you block it after 5 bad requests you won't know whether the 6th request would have crashed your site. EDIT: I meant that some attacker might send only one request, but one similar to one of those 1,495 that you didn't test because you blocked the scanner, and that one request might crash your site. A: Preventing security breaches requires different strategies for different attacks. For instance, it would not be unusual to block traffic from certain sources during a denial of service attack. If a user fails to provide proper credentials more than 3 times the IP address is blocked or the account is locked. When ScanAlert issues hundreds of requests which may include SQL injection--to name one--it certainly matches what the site code should consider "malicious behavior". In fact, just putting UrlScan or eEye SecureIIS in place may deny many such requests, but is that a true test of the site code? It's the job of the site code to detect malicious users/requests and deny them. At what layer is the test valid? ScanAlert presents in two different ways: the number of requests which are malformed, and the variety of each individual request as a test. It seems like the 2 pieces of advice that emerge are as follows: * *The site code should not try to detect malicious traffic from a particular source and block that traffic, because that is a futile effort. *If you do attempt such a futile effort, at least make an exception for requests from ScanAlert in order to test lower layers. A: If it's not hurting the performance of the site, I think it's a good thing. If you had 1000 clients to the same site all doing that, yeah, block it. But if the site was built for that client, I think it's fair enough they do that.
{ "language": "en", "url": "https://stackoverflow.com/questions/23961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there some way to speed up recursion by remembering child nodes? For example, look at the code that calculates the n-th Fibonacci number: fib(int n) { if(n==0 || n==1) return 1; return fib(n-1) + fib(n-2); } The problem with this code is that it will generate a stack overflow error for any number greater than 15 (in most computers). Assume that we are calculating fib(10). In this process, say fib(5) is calculated a lot of times. Is there some way to store this in memory for fast retrieval and thereby increase the speed of recursion? I am looking for a generic technique that can be used in almost all problems. A: This is called memoization and there is a very good article about memoization that Matthew Podwysocki posted recently. It uses Fibonacci to exemplify it, and shows the code in C# as well. Read it here. A: If you're using C#, and can use PostSharp, here's a simple memoization aspect for your code: [Serializable] public class MemoizeAttribute : PostSharp.Laos.OnMethodBoundaryAspect, IEqualityComparer<Object[]> { private Dictionary<Object[], Object> _Cache; public MemoizeAttribute() { _Cache = new Dictionary<object[], object>(this); } public override void OnEntry(PostSharp.Laos.MethodExecutionEventArgs eventArgs) { Object[] arguments = eventArgs.GetReadOnlyArgumentArray(); if (_Cache.ContainsKey(arguments)) { eventArgs.ReturnValue = _Cache[arguments]; eventArgs.FlowBehavior = FlowBehavior.Return; } } public override void OnExit(MethodExecutionEventArgs eventArgs) { if (eventArgs.Exception != null) return; _Cache[eventArgs.GetReadOnlyArgumentArray()] = eventArgs.ReturnValue; } #region IEqualityComparer<object[]> Members public bool Equals(object[] x, object[] y) { if (Object.ReferenceEquals(x, y)) return true; if (x == null || y == null) return false; if (x.Length != y.Length) return false; for (Int32 index = 0, len = x.Length; index < len; index++) if (Comparer.Default.Compare(x[index], y[index]) != 0) return false; return true; } public int GetHashCode(object[] obj) { Int32 hash = 23; foreach (Object o in obj) { hash *= 37; if (o != null) hash += o.GetHashCode(); } return hash; } #endregion } Here's a sample Fibonacci implementation using it: [Memoize] private Int32 Fibonacci(Int32 n) { if (n <= 1) return 1; else return Fibonacci(n - 2) + Fibonacci(n - 1); } A: Quick and dirty memoization in C++: Any recursive method type1 foo(type2 bar) { ... } is easily memoized with map<type2, type1> M. // your original method int fib(int n) { if(n==0 || n==1) return 1; return fib(n-1) + fib(n-2); } // with memoization map<int, int> M = map<int, int>(); int fib(int n) { if(n==0 || n==1) return 1; // only compute the value for fib(n) if we haven't before if(M.count(n) == 0) M[n] = fib(n-1) + fib(n-2); return M[n]; } EDIT: @Konrad Rudolph Konrad points out that std::map is not the fastest data structure we could use here. That's true, a vector<something> should be faster than a map<int, something> (though it might require more memory if the inputs to the recursive calls of the function were not consecutive integers like they are in this case), but maps are convenient to use generally. A: According to Wikipedia, Fib(0) should be 0, but it does not matter. Here is a simple C# solution with a for loop: ulong Fib(int n) { ulong fib = 1; // value of fib(i) ulong fib1 = 1; // value of fib(i-1) ulong fib2 = 0; // value of fib(i-2) for (int i = 0; i < n; i++) { fib = fib1 + fib2; fib2 = fib1; fib1 = fib; } return fib; } It is a pretty common trick to convert recursion to tail recursion and then to a loop.
For more detail see for example this lecture (ppt). A: Yes, your insight is correct. This is called dynamic programming. It is usually a common memory/runtime trade-off. In the case of fibo, you don't even need to cache everything: [edit] The author of the question seems to be looking for a general method to cache rather than a method to compute Fibonacci. Search wikipedia or look at the code of the other poster to get this answer. Those answers are linear in time and memory. **Here is a linear-time algorithm O(n), constant in memory** in OCaml: let rec fibo n = let rec aux = function | 0 -> (1, 1) | n -> let (cur, prec) = aux (n - 1) in (cur + prec, cur) in let (cur, prec) = aux n in prec;; in C++: int fibo(int n) { if (n == 0) return 1; if (n == 1) return 1; int p = fibo(0); int c = fibo(1); int buff = 0; for (int i = 1; i < n; ++i) { buff = c; c = p + c; p = buff; }; return c; }; This performs in linear time. But log is actually possible!!! Roo's program is linear too, but way slower, and it uses memory. Here is the log algorithm O(log(n)). Now for the log-time algorithm (way way way faster), here is a method: If you know u(n) and u(n-1), computing u(n+1) and u(n) can be done by applying a matrix:
| u(n+1) |   | 1 1 | | u(n)   |
| u(n)   | = | 1 0 | | u(n-1) |
So that you have:
| u(n)   |   | 1 1 |^(n-1) | u(1) |   | 1 1 |^(n-1) | 1 |
| u(n-1) | = | 1 0 |       | u(0) | = | 1 0 |       | 1 |
Computing the exponential of the matrix has a logarithmic complexity. Just implement recursively the idea: M^(0) = Id M^(2p+1) = (M^2p) * M M^(2p) = (M^p) * (M^p) // of course don't compute M^p twice here. You can also just diagonalize it (not too difficult); you will find the golden ratio and its conjugate among its eigenvalues, and the result will give you an EXACT mathematical formula for u(n). It contains powers of those eigenvalues, so that the complexity will still be logarithmic. Fibo is often taken as an example to illustrate Dynamic Programming, but as you see, it is not really pertinent. @John: I don't think it has anything to do with hashing. @John2: A map is a bit general, don't you think? For the Fibonacci case, all the keys are contiguous so that a vector is appropriate; once again there are much faster ways to compute the fibo sequence, see my code sample over there. A: Try using a map, n is the key and its corresponding Fibonacci number is the value. @Paul Thanks for the info. I didn't know that. From the Wikipedia link you mentioned: This technique of saving values that have already been calculated is called memoization Yeah I already looked at the code (+1). :) A: What language is this? It doesn't overflow anything in C... Also, you can try creating a lookup table on the heap, or use a map A: caching is generally a good idea for this kind of thing. Since fibonacci numbers are constant, you can cache the result once you have calculated it. A quick c/pseudocode example class fibstorage { bool has-result(int n) { return fibresults.contains(n); } int get-result(int n) { return fibresults.find(n).value; } void add-result(int n, int v) { fibresults.add(n,v); } map<int, int> fibresults; } fib(int n) { if(n==0 || n==1) return 1; if (fibstorage.has-result(n)) { return fibstorage.get-result(n); } return ( (fibstorage.has-result(n-1) ?
fibstorage.get-result(n-1) : fib(n-1) ) + (fibstorage.has-result(n-2) ? fibstorage.get-result(n-2) : fib(n-2) ) ); } calcfib(n) { v = fib(n); fibstorage.add-result(n,v); } This would be quite slow, as every recursion results in 3 lookups; however, this should illustrate the general idea A: Is this a deliberately chosen example? (e.g. an extreme case you're wanting to test) As it's currently O(1.6^n) I just want to make sure you're just looking for answers on handling the general case of this problem (caching values, etc) and not just accidentally writing poor code :D Looking at this specific case you could have something along the lines of: var cache = []; function fib(n) { if (n < 2) return 1; if (cache.length > n) return cache[n]; var result = fib(n - 2) + fib(n - 1); cache[n] = result; return result; } Which degenerates to O(n) in the worst case :D [Edit: * does not equal + :D ] [Yet another edit: the Haskell version (because I'm a masochist or something) fibs = 1:1:(zipWith (+) fibs (tail fibs)) fib n = fibs !! n ] A: @ESRogs: std::map lookup is O(log n) which makes it slow here. Better use a vector. vector<unsigned int> fib_cache; fib_cache.push_back(1); fib_cache.push_back(1); unsigned int fib(unsigned int n) { if (fib_cache.size() <= n) fib_cache.push_back(fib(n - 1) + fib(n - 2)); return fib_cache[n]; } A: Others have answered your question well and accurately - you're looking for memoization. Programming languages with tail call optimization (mostly functional languages) can do certain cases of recursion without stack overflow. It doesn't directly apply to your definition of Fibonacci, though there are tricks.. The phrasing of your question made me think of an interesting idea.. Avoiding stack overflow of a pure recursive function by only storing a subset of the stack frames, and rebuilding when necessary.. Only really useful in a few cases. If your algorithm only conditionally relies on the context as opposed to the return, and/or you're optimizing for memory not speed. A: Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax: fib[0] = 1; fib[1] = 1; fib[n_] := fib[n] = fib[n-1] + fib[n-2] That's it. It caches (memoizes) fib[0] and fib[1] off the bat and caches the rest as needed. The rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition. A: One more excellent resource for C# programmers for recursion, partials, currying, memoization, and their ilk, is Wes Dyer's blog, though he hasn't posted in awhile. He explains memoization well, with solid code examples here: http://blogs.msdn.com/wesdyer/archive/2007/01/26/function-memoization.aspx A: If you're using a language with first-class functions like Scheme, you can add memoization without changing the initial algorithm: (define (memoize fn) (letrec ((get (lambda (query) '(#f))) (set (lambda (query value) (let ((old-get get)) (set! get (lambda (q) (if (equal? q query) (cons #t value) (old-get q)))))))) (lambda args (let ((val (get args))) (if (car val) (cdr val) (let ((ret (apply fn args))) (set args ret) ret)))))) (define fib (memoize (lambda (x) (if (< x 2) x (+ (fib (- x 1)) (fib (- x 2))))))) The first block provides a memoization facility and the second block is the fibonacci sequence using that facility. This now has an O(n) runtime (as opposed to O(2^n) for the algorithm without memoization). Note: the memoization facility provided uses a chain of closures to look for previous invocations. At worst case this can be O(n).
In this case, however, the desired values are always at the top of the chain, ensuring O(1) lookup. A: The problem with this code is that it will generate stack overflow error for any number greater than 15 (in most computers). Really? What computer are you using? It's taking a long time at 44, but the stack is not overflowing. In fact, you're going to get a value bigger than an integer can hold (~4 billion unsigned, ~2 billion signed) before the stack is going to overflow (Fibbonacci(46)). This would work for what you want to do though (runs wicked fast) class Program { public static readonly Dictionary<int,int> Items = new Dictionary<int,int>(); static void Main(string[] args) { Console.WriteLine(Fibbonacci(46).ToString()); Console.ReadLine(); } public static int Fibbonacci(int number) { if (number == 1 || number == 0) { return 1; } var minus2 = number - 2; var minus1 = number - 1; if (!Items.ContainsKey(minus2)) { Items.Add(minus2, Fibbonacci(minus2)); } if (!Items.ContainsKey(minus1)) { Items.Add(minus1, Fibbonacci(minus1)); } return (Items[minus2] + Items[minus1]); } } A: As other posters have indicated, memoization is a standard way to trade memory for speed; here is some pseudo code to implement memoization for any function (provided the function has no side effects): Initial function code: function (parameters) body (with recursive calls to calculate result) return result This should be transformed to function (parameters) key = serialized parameters to string if (cache[key] does not exist) { body (with recursive calls to calculate result) cache[key] = result } return cache[key] A: By the way Perl has a Memoize module that does this for any function in your code that you specify. # Compute Fibonacci numbers sub fib { my $n = shift; return $n if $n < 2; fib($n-1) + fib($n-2); } In order to memoize this function all you do is start your program with use Memoize; memoize('fib'); # Rest of the fib function just like the original version. # Now fib is automagically much faster ;-) A: @lassevk: This is awesome, exactly what I had been thinking about in my head after reading about memoization in Higher Order Perl. Two things which I think would be useful additions: * *An optional parameter to specify a static or member method that is used for generating the key to the cache. *An optional way to change the cache object so that you could use a disk or database backed cache. Not sure how to do this sort of thing with Attributes (or if they are even possible with this sort of implementation) but I plan to try and figure it out. (Off topic: I was trying to post this as a comment, but I didn't realize that comments have such a short allowed length so this doesn't really fit as an 'answer')
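For completeness, the language-agnostic pseudo code above maps almost one-to-one onto a Python decorator; a minimal sketch, with the same caveat that the wrapped function must be free of side effects:

import functools

def memoize(fn):
    cache = {}
    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:        # "if cache[key] does not exist"
            cache[args] = fn(*args)  # run the body once, then reuse the result
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly; each subproblem is computed only once

Here the tuple of positional arguments serves directly as the cache key, which sidesteps the string-serialization step in the pseudo code (and means keyword arguments and unhashable arguments are not supported in this sketch).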
{ "language": "en", "url": "https://stackoverflow.com/questions/23962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: RESTful web services and HTTP verbs What is the minimum set of HTTP verbs that a server should allow for a web service to be classed as RESTful? What if my hoster doesn't permit PUT and DELETE? Is this actually important? Can I live happily ever after with just GET and POST? Update: Thanks for the answers folks, Roger's answer was probably best because of the link to the Bill Venners and Elliotte Rusty Harold interview. I now get it. A: You can also use X-Http-Verb-Override:DELETE instead of HTTP DELETE. This is also useful for Silverlight clients, which can't change the HTTP verbs and only support GET and POST... A: Yes, you can live without PUT and DELETE. This article tells you why: http://www.artima.com/lejava/articles/why_put_and_delete.html While to true RESTafarians this may be heresy, in the real world you do what you can, with what you have. Be as rational as you can and as consistent with your own convention as you can, but you can definitely build a good RESTful system without P and D. rp A: If you just use GET and POST, it's still RESTful. Your web service may only do things which require GET or POST, so that's fine. A: REST allows for breaking protocol convention if the implementations of the protocol are broken (so that the only non-standard things you do are to get around the broken parts of the implementation). So it is allowable within REST to use some other method to represent generally unsupported verbs like DELETE or PUT. edit: Here is a quote from Fielding, who is the one that created and defined REST: A REST API should not contain any changes to the communication protocols aside from filling-out or fixing the details of underspecified bits of standard protocols, such as HTTP’s PATCH method or Link header field. Workarounds for broken implementations (such as those browsers stupid enough to believe that HTML defines HTTP’s method set) should be defined separately, or at least in appendices, with an expectation that the workaround will eventually be obsolete. [Failure here implies that the resource interfaces are object-specific, not generic.] A: Today's web browsers only handle GETS + POSTS. In Rails, for example, PUTS + DELETES are "faked" through hidden form fields. Unless your framework has some workaround to "support" PUTS + DELETES, don't worry about them for now.
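To illustrate the verb-override workaround mentioned in the first answer, here is a sketch using Python's requests library. Note that the endpoint is hypothetical, and the exact header spelling the server honors (X-Http-Verb-Override versus the more common X-HTTP-Method-Override) depends on your framework, so verify it against your host:

import requests

# Tunnel a DELETE through POST for hosts that block the DELETE verb.
response = requests.post(
    "https://example.com/api/widgets/42",        # hypothetical resource
    headers={"X-Http-Verb-Override": "DELETE"},  # server must be configured to honor this
)
print(response.status_code)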
{ "language": "en", "url": "https://stackoverflow.com/questions/23963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How do I marshal a lambda (Proc) in Ruby? Joe Van Dyk asked the Ruby mailing list: Hi, In Ruby, I guess you can't marshal a lambda/proc object, right? Is that possible in lisp or other languages? What I was trying to do: l = lambda { ... } Bj.submit "/path/to/ruby/program", :stdin => Marshal.dump(l) So, I'm sending BackgroundJob a lambda object, which contains the context/code for what to do. But, guess that wasn't possible. I ended up marshaling a normal Ruby object that contained instructions for what to do after the program ran. Joe A: If you're interested in getting a string version of Ruby code using Ruby2Ruby, you might like this thread. A: Try ruby2ruby A: You cannot marshal a lambda or Proc. This is because both of them are considered closures, which means they close around the memory in which they were defined and can reference it. (In order to marshal them you'd have to marshal all of the memory they could access at the time they were created.) As Gaius pointed out though, you can use ruby2ruby to get a hold of the string of the program. That is, you can marshal the string that represents the Ruby code and then reevaluate it later. A: You could also just enter your code as a string:

code = %{ lambda {"hello ruby code".split(" ").each{|e| puts e + "!"}} }

then execute it with eval

eval code

which will return a Ruby lambda. Using the %{} format escapes a string, but only closes on an unmatched brace, i.e., you can nest braces like this %{ [] {} } and it's still enclosed. Most text syntax highlighters don't realize this is a string, so they still display regular code highlighting. A: I've found proc_to_ast to do the best job: https://github.com/joker1007/proc_to_ast. Works for sure in ruby 2+, and I've created a PR for ruby 1.9.3+ compatibility (https://github.com/joker1007/proc_to_ast/pull/3) A: Once upon a time, this was possible using the ruby-internal gem (https://github.com/cout/ruby-internal), e.g.:

p = proc { 1 + 1 }    #=> #<Proc>
s = Marshal.dump(p)   #=> #<String>
u = Marshal.load(s)   #=> #<UnboundProc>
p2 = u.bind(binding)  #=> #<Proc>
p2.call()             #=> 2

There are some caveats, but it has been many years and I cannot remember the details. As an example, I'm not sure what happens if a variable is a dynvar in the binding where it is dumped and a local in the binding where it is re-bound. Serializing an AST (on MRI) or bytecode (on YARV) is non-trivial. The above code works on YARV (up to 1.9.3) and MRI (up to 1.8.7). There's no reason why it cannot be made to work on Ruby 2.x, with a small amount of effort. A: If the proc is defined in a file, you can get the file location of the proc and serialize that; after deserializing, use the location to get back to the proc again:

proc_location_array = proc.source_location

After deserializing:

file_name = proc_location_array[0]
line_number = proc_location_array[1]
proc_line_code = IO.readlines(file_name)[line_number - 1]
proc_hash_string = proc_line_code[proc_line_code.index("{")..proc_line_code.length]
proc = eval("lambda #{proc_hash_string}")
{ "language": "en", "url": "https://stackoverflow.com/questions/23970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Why is an s-box input longer than its output? I don't understand where the extra bits are coming from in this article about s-boxes. Why doesn't the s-box take in the same number of bits for input as output? A: It is the way s-boxes work. They can be m * n ==> m-bit input, n-bit output. For example, in the AES S-box the number of bits in the input is equal to the number of bits in the output. In DES, m=6 and n=4. The input is expanded from 32 to 48 bits in the first stages of DES, so it is reduced to 32 bits again by applying one round of S-box substitution. Thus no information is lost here. The Wikipedia article by itself can be a bit confusing; it will make people think that information is lost. You should read the article in conjunction with the implementation details of some encryption algorithm that uses s-boxes. A: What extra bits? They are going from 6 to 4. EDIT: Whoops! I'm an idiot. This is kinda like a 2nd grade multiplication table. They strip the outer bits off of the 6-bit block to be encrypted, and leave the middle 4. Just like a table for an arithmetic operation, they go down one side and find the outer bit sequence, then across the top and find the middle ones. To answer your question, it could input and output the same number of bits, but this s-box is just set up to do it the way it does. It's arbitrary.
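To make the table-lookup mechanics from the second answer concrete, here is an illustrative C# sketch of the DES-style 6-bit-to-4-bit lookup: the two outer bits select the row and the four middle bits select the column. The 4x16 table contents would come from the cipher's specification; this is a sketch of the indexing only:

using System;

static class SBoxDemo
{
    // DES-style lookup: 6 bits in, 4 bits out, via a 4x16 table.
    static int SBoxLookup(int input6, int[,] sbox)
    {
        int row = (((input6 >> 5) & 1) << 1) | (input6 & 1); // outer bits: 1st and 6th
        int col = (input6 >> 1) & 0xF;                       // middle four bits
        return sbox[row, col];                               // 4-bit output (0..15)
    }
}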
{ "language": "en", "url": "https://stackoverflow.com/questions/23988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Experiences Using ASP.NET MVC Framework I am wondering what experiences people are having using the ASP.NET MVC Framework? In particular I am looking for feedback on the type of experience folks are having using the framework. What are people using for their view engine? What about the db layer, NHibernate, LINQ to SQL or something else? I know stackoverflow uses MVC, so please say this site. Thank you. Why the choice of NHibernate over anything else? I am not against NHibernate, just wondering about the rationale. A: I've been building a few sites with the framework since the first preview came out, and it has certainly come a long way already. It feels like a very lightweight and tidy framework. There are a couple of areas where I think it really excels over "vanilla" ASP.NET: * *Enables a much cleaner separation of concerns/loose coupling *Makes test-driven development actually possible. *It's much more friendly towards JavaScript (Ajax) heavy sites. That said, there are some areas where it has some way to go yet: * *Validation *Data binding *Tag soup, as mentioned earlier (although this can be avoided to some extent; user controls, helper methods & codebehind are still allowed!) The framework is still in beta though, so I expect these things to improve over time. Scott Hanselman has hinted that the Dynamic Data framework will be available for ASP.NET MVC at some point too, for example. A: I've been getting into some pretty heavy use of NHibernate with ASP.NET MVC lately, and am really loving it. A: I have used ASP.NET MVC for a few projects recently and it's like a breath of fresh air compared to WebForms. It works with the web rather than against it, and feels like a much more natural way to develop. I use SubSonic rather than NHibernate, and find it fits very nicely within the MVC architecture. The building blocks I commonly use for a website are: ASP.NET MVC, SubSonic, SQL Server, Lucene, and jQuery. A: I used the MVC framework to build a small site, and I found myself frequently frustrated by the tag soup views, and the lack of the server controls I had come to love. I went back to using WebForms. WebForms, once mastered, are great... They just take a very long time to learn all the tricks. A: I've just recently been turned on to MVC and LINQ to SQL for ASP.NET. I'm still learning both, and I'm really enjoying them both. There are quite a few screencasts on http://www.asp.net/learn/. A: Why the choice of NHibernate over anything else? It's a very powerful tool, and is (relatively) easy to learn. It takes away all the monotony and repetitiveness of manually implementing object-relational mapping.
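To illustrate that last point, here is a rough, hedged sketch of what working through NHibernate looks like once mappings are configured — Customer is a hypothetical mapped entity, and the configuration is assumed to live in the usual hibernate.cfg.xml; the point is simply that no hand-written SQL or row-to-object mapping code appears:

using NHibernate;
using NHibernate.Cfg;

class OrmDemo
{
    static void Main()
    {
        // Reads hibernate.cfg.xml and the entity mappings.
        ISessionFactory factory = new Configuration().Configure().BuildSessionFactory();
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Customer c = session.Get<Customer>(42); // hypothetical mapped entity
            c.Name = "Updated name";                // dirty-checking picks this up
            tx.Commit();                            // the change is flushed as an UPDATE
        }
    }
}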
{ "language": "en", "url": "https://stackoverflow.com/questions/23994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Setting Attributes in Webby Layouts I'm working with Webby and am looking for some clarification. Can I define attributes like title or author in my layout? A: Not really. The layout has access to the page attributes rather than the other way around. The easiest way to do what you want is to populate the SITE.page_defaults hash in your site's Rakefile (probably build.rake). Add something like the following:

SITE.page_defaults['title'] = "My awesome title"
SITE.page_defaults['author'] = "Shazbug"
SITE.page_defaults['is_mando_awesome'] = "very yes"

You can now access those hash members in your template: Written by <%= @page.author %> You can find more info about Webby's page default stuff on the Google Group, specifically here: http://groups.google.com/group/webby-forum/browse_thread/thread/f3dc1f4187959634/c30d7883705f6218?lnk=gst&q=SITE#c30d7883705f6218 A: I've never used it, but the tutorial here makes it look like the answer to your question is "yes". Specifically, I'm looking under the "Making Changes" header on that page.
{ "language": "en", "url": "https://stackoverflow.com/questions/23996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: To what extent should a developer learn specifics about database systems? Modern database systems come with loads of features. And you would agree with me that to learn one database you must unlearn the concepts you learned in another database. For example, each database implements locking differently from the others, so carrying the concepts of one database to another would be a recipe for failure. And there could be other examples where two databases would perform very differently. So when developing database-driven systems, do programmers need to know the database in detail so that they can code for performance? I don't think it would be appropriate to call in the DBA for performance work later, as his job is to maintain the database and help out the developer in case of an emergency, not on a regular basis. What do you think should be the extent to which the developer needs to gain insight into the database? A: I think these are the most important things (from most important to least, IMO): * *SQL (obviously) - It helps to know how to at least do basic queries, aggregates (sum(), etc), and inner joins *Normalization - DB design skills are a major requirement *Locking Model/MVCC - It's nice to have at least a basic grasp of how your databases manage row locking (or use MVCC to accomplish similar goals with optimistic locking) *ACID compliance, Txns - Please know how these work and interact *Indexing - While I don't think that you need to be an expert in tablespaces, placing data on separate drives for optimal performance, and other minutiae, it does help to have a high-level knowledge of how index scans work vs. tablescans. It also helps to be able to read a query plan and understand why it might be choosing one over the other. *Basic Tools - You'll probably find yourself wanting to copy production data to a test environment at some point, so knowing the basics of how to back up/restore your database will be important. Fortunately, there are some great FOSS and free commercial databases out there today that can be used to learn quite a bit about db fundamentals. A: I think a developer should have a fairly good grasp of how their database system works, no matter which one it is. When making design and architecture decisions, they need to understand the possible implications when it comes to the database. A: Personally, I think you should know how databases work as well as the relational model and the rhetoric behind it, including all forms of normalization (even though I rarely see a need to go beyond third normal form). The core concepts of the relational model do not change from relational database to relational database - the implementation may, but so what? Developers that don't understand the rationale behind database normalization, indexes, etc. are going to suffer if they ever work on a non-trivial project. A: I think it really depends on your job. If you are a developer in a large company with dedicated DBAs then maybe you don't need to know much, but if you are in a small company then it may be really helpful to know more about databases. In small companies you may wear more than one hat. It cannot hurt to know more in any situation. A: It certainly can't hurt to be familiar with relational database theory, and have a good working knowledge of the standard SQL syntax, as well as knowing what stored procedures, triggers, views, and indexes are. 
Obviously it's not terribly important to learn the database-specific extensions to SQL (T-SQL, PL/SQL, etc) until you start working with that database. I think it's important to have a basic understanding of databases when developing database applications, just like it's important to have an understanding of the hardware your software runs on. You don't have to be an expert, but you shouldn't be totally ignorant of anything your software interacts with. That said, you probably shouldn't need to do much SQL as an application developer. Most of the interaction with the database should be done through stored procedures developed by the DBA; I'm not a big fan of including SQL code in your application code. If your queries are in stored procedures, then the DBA can change the implementation of the stored procedure, or even the database schema, and so long as the result is the same it doesn't require any changes to your application code. A: If you are uncertain about how best to access the database, you should be using tried and tested solutions like the application blocks from Microsoft - http://msdn.microsoft.com/en-us/library/cc309504.aspx. They can also prove helpful to you by examining how that code is implemented. A: Basic knowledge of SQL queries is a must; with that you can develop a simple system. But when you are going to implement complex systems, you should know normalization, stored procedures, functions, etc.
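For readers new to the stored-procedure approach described above, here is a minimal C# sketch (the procedure name, parameter and connection string are hypothetical) of calling a DBA-owned procedure through ADO.NET instead of embedding SQL in application code:

using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcDemo
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
        using (var cmd = new SqlCommand("dbo.GetCustomerOrders", conn)) // hypothetical proc
        {
            // The DBA can change the proc's implementation or the schema;
            // this calling code stays the same as long as the result shape does.
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CustomerId", 42);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader["OrderId"]);
            }
        }
    }
}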
{ "language": "en", "url": "https://stackoverflow.com/questions/24004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Markdown vs markup - are they related? I'm using markdown to edit this question right now. In some wikis I used wiki markup. Are they the same thing? Are they related? Please explain. If I want to implement one or the other in a web project (like stackoverflow) what do I need to use? A: Markup is a general term for content formatting - such as HTML - but Markdown is a library that generates HTML markup. Take a look at Markdown. A: Markdown and the markup used in Mediawiki (the wiki software that powers Wikipedia) are not the same. They're related in the sense that both are less verbose ways of entering HTML (with some added features), but I doubt that they are related to each other in any other sense. If you want to implement Markdown on your site, just Google Markdown + your favourite platform/language and you'll likely find a library that does it for you. If you want to implement Mediawiki's markup, you probably need to look around for better options (like Markdown). A: Markdown is a play on words because it is markup. "Markdown" is a proper noun. Markup is just a way of providing functionality above plain text. For example: formatting, links, images, etc. A: * *Markup is a generic term for a language that describes a document's formatting *Markdown is a specific markup library: http://daringfireball.net/projects/markdown/ These days the term is more commonly used to refer to markup languages that mimic the style of the library. See: https://en.wikipedia.org/wiki/Markdown A: Mark-up is a term from print editing - the editor would go through the text and add annotations (e.g. this in italic, that in bold) for the printers to use when producing the final version. This was called marking up the text. A computer mark-up language is just a standardised short-hand for these sorts of annotations. HTML is basically the web's standard mark-up language, but it's rather verbose. A list in HTML:

<ul>
  <li>Item one</li>
  <li>Item two</li>
</ul>

Markdown is a specific markup language, having its own simple syntax. A list in Markdown:

* Item one
* Item two

Both of these will work in your stackoverflow posts. Markdown can't do everything HTML can, but both are mark-up languages.
{ "language": "en", "url": "https://stackoverflow.com/questions/24041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "271" }
Q: AnkhSVN versus VisualSVN I currently use AnkhSVN to integrate Subversion into Visual Studio. Is there any reason I should switch to VisualSVN? AnkhSVN is free (in more than one sense of the word) while VisualSVN costs $50. So right there, unless I'm missing some great feature of VisualSVN, I don't see any reason to switch. A: I recently tried Ankh but quickly switched back to VisualSVN. Because: * *Better commit dialog (uses the UI of TortoiseSVN) *No refresh problems (which I had using Ankh) IMHO, VisualSVN is easily worth its money. A: For me, VisualSVN is pretty, but useless. AnkhSVN, on the other hand, after it arrived in v2 as an SCC provider, works very well. VisualSVN tries to think for you, which is not a good thing; the user should be the controller, not the software. A: I used VisualSVN until Ankh hit 2.0, and ever since, I've abandoned VisualSVN. Ankh has surpassed VisualSVN in functionality, in my mind, and all the 1.x perf and integration issues are gone. A: The main thing is that VisualSVN uses TortoiseSVN for nearly all of its UI. So you only really have to set up one client (preferred diff viewer, etc), and you can take advantage of things like the same "Previous messages" button on the Commit dialog, whether you're committing from Explorer or Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/24045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "59" }
Q: The Safari Back Button Problem I do some minor programming and web work for a local community college. Work that includes maintaining a very large and soul-destroying website that consists of a hodgepodge of VBScript, JavaScript, Dreamweaver-generated cruft and a collection of add-ons that various conmen have convinced them to buy over the years. A few days ago I got a call: "The website is locking up for people using Safari!" Okay, step one, download Safari (v3.1.2); step two, surf to the site. Everything appears to work fine. Long story short, I finally isolated the problem and it relates to Safari's back button. The website uses a fancy-pants JavaScript menu that works in every browser I've tried, including Safari, the first time around. But in Safari, if you follow a link off the page and then hit the back button, the menu no longer works. I made a pared-down webpage to illustrate the principle.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><title>Safari Back Button Test</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body onload="alert('Hello');">
<a href="http://www.codinghorror.com">Coding Horror</a>
</body>
</html>

Load the page and you see the alert box. Then follow the link off the page and hit the back button. In IE and Firefox you see the alert box again; in Safari you do not. After some vigorous googling I've discovered others with similar problems but no really satisfactory answers. So my question is: how can I make my pages work the same way in Safari after the user hits the back button as they do in other browsers? If this is a stupid question please be gentle, JavaScript is somewhat new to me. A: Please do not follow any of the advice that tells you to ignore the cache. Pages are cached for a reason -- to improve user experience. The methods you're using will make user experience worse, so unless you hate your users, don't do that. The correct solution for Safari (Desktop and iOS) is to use the pageshow event instead of the onload event (see https://developer.mozilla.org/en-US/docs/Using_Firefox_1.5_caching for what these are). The pageshow event will fire at the same time you expect the onload event to fire, but it will also work when pages are served via the cache. This appears to be what you want anyway. A: Here is a good solution for Mobile Safari:

/*! Reloads page on every visit */
function Reload() {
    try {
        var headElement = document.getElementsByTagName("head")[0];
        if (headElement && headElement.innerHTML)
            headElement.innerHTML += " ";
    } catch (e) {}
}

/*! Reloads on every visit in mobile safari */
if ((/iphone|ipod|ipad.*os 5/gi).test(navigator.appVersion)) {
    window.onpageshow = function(evt) {
        if (evt.persisted) {
            document.body.style.display = "none";
            location.reload();
        }
    };
}

(source) I modified it to my needs, but as-is it is okay (there's an annoying white screen on refresh if you don't modify it). A: An iframe solves the problem:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head><title>Safari Back Button Test</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body onload="alert('Hello');">
<a href="http://www.codinghorror.com">Coding Horror</a>
<iframe style="height:0px;width:0px;visibility:hidden" src="about:blank">
this prevents back forward cache
</iframe>
</body>
</html>

more details A:

$(window).bind("pageshow", function(event) {
  if (event.originalEvent.persisted) {
    window.location.reload()
  }
});

Answered more recently here by mshah. 
A: I have no idea what's causing the problem but I know who might be able to help you. Safari is built on WebKit, and short of Apple (who are not so community-minded), the WebKit team might know what the issue is. It's not a stupid question at all. A: I've noticed something very similar. I think it is because Firefox and IE, when going back, are retrieving the page from the server again and Safari is not. Have you tried adding a page expiry/no-cache header? I was going to look into it when I discovered the behaviour but haven't had time yet. A: Stefan's iframe solution works, but if that's not elegant enough, I find the following JavaScript also solves it: window.onunload = function(){}; That is, if your menu is JavaScript, then you might prefer to solve this issue with JavaScript too. The unload event handler definition idea came from this Firefox 1.5 article: https://developer.mozilla.org/en/Using_Firefox_1.5_caching. A: Just put this in the body tag: <body onunload=""> This will force Safari, Chrome and Firefox to reload the page every time you hit the Back button. A: I know this thread is a year old, but I just fixed an identical Safari-only problem using ProjectSeven's Safari back-button fix. Works like a charm. http://www.projectseven.com/extensions/info/safaribbfix/index.htm A: Had the same problem on iPad. Not that beautiful but it works :). How it works: I realised that on iPad Safari, the page was not reloaded when the back button was pressed. I increment a counter every second on the page and I save the current timestamp. When the page is loaded, the counter and the time are synchronized. On back, the counter continues where it stopped and there is a gap between the timestamp and the counter. If the gap is greater than 500ms, force-reload the page. In the file action.js:

var showLoadingBoxSetIntervalVar;
var showLoadingBoxCount = 0;
var showLoadingBoxLoadedTimestamp = 0;

function showLoadingBox(text) {
    // Assign to the global (no 'var'), otherwise the interval handle is lost.
    showLoadingBoxSetIntervalVar = self.setInterval(function(){ showLoadingBoxIpadReload(); }, 1000);
    showLoadingBoxCount = 0;
    showLoadingBoxLoadedTimestamp = new Date().getTime();
    // Here load the spinner
}

function showLoadingBoxIpadReload() {
    // Calculate difference between now and page-loaded time, minus a 500ms threshold
    var diffTime = ((new Date().getTime()) - showLoadingBoxLoadedTimestamp - 500) / 1000;
    showLoadingBoxCount = showLoadingBoxCount + 1;
    var isiPad = navigator.userAgent.match(/iPad/i) != null;
    if (diffTime > showLoadingBoxCount && isiPad) {
        location.reload();
    }
}

A: Try this:

if (performance.navigation.type == 2) {
    alert("Hello");
}

A: In my Angular application I used this condition to fix the problem. First I added an onunload="" event handler to index.html like below:

<body class="mat-typography" onunload="">
  <app-root></app-root>
</body>

Then in my component's ngOnInit method I added this code:

public ngOnInit(): void {
  window.onpageshow = (event) => {
    if (event.persisted) {
      document.body.style.display = "none";
      location.reload();
    }
  };
  ...
}
{ "language": "en", "url": "https://stackoverflow.com/questions/24046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Best way to license Microsoft software as an independent developer I've recently switched from being an employee of a small consulting company to being an independent consultant, and as time goes on I will need to upgrade Windows and Visual Studio. So what is the most affordable way to go about this for a small-time developer? My previous boss suggested I get a TechNet Plus subscription for OS licenses. I've done that and it appears to be what I need, but I'm open to other options for the future. For Visual Studio, I'm having a hard time figuring out exactly what the difference is between Professional and Standard. Also, I'd really like a digital version, but it seems that an expensive MSDN subscription is the only way? Visual Studio 2008 Professional with MSDN Professional listed here appears to be semi-reasonably priced at $1,199. That would make the TechNet Plus subscription unneeded. A: I recommend that if VS Express is not good enough, you use Professional. Standard is missing some really useful features, like a remote debugger. Here is a detailed comparison: http://msdn.microsoft.com/en-us/vs2008/products/cc149003.aspx I'd say cancel TechNet and get one of the bottom two MSDN subscriptions, Visual Studio Professional with either MSDN Professional or MSDN Premium. A: You have the Microsoft Empower for ISV program, see https://partner.microsoft.com/40011351 It gives you a full MSDN Pro subscription for two years. A: For non-developer tools, try the Microsoft Action Pack https://partner.microsoft.com/40016455 Then use Visual Studio Professional (at some exhibitions you will get this for free). For versioning, use SVN and not Team System. A: I realise that this doesn't apply to the asker, but it is relevant to the question. Any student developers out there, try Microsoft's DreamSpark scheme. Visual Studio, Expression Studio, XNA and Server 2003 for free! Office is also available to students for less than 60 bucks in Microsoft's 'Ultimate Steal'. A: I think that Visual Studio Professional with MSDN Subscription doesn't offer much value compared to just purchasing Visual Studio 2010 Pro. You get testing licenses for Windows Server and MSSQL, but that's it. And you can get by just fine without those 90% of the time. But Visual Studio Premium with MSDN is a different story. You get access to most other server products (testing license only, of course), and an Office Professional license. That's a much better value for a one-man shop in my opinion, if you can afford it.
{ "language": "en", "url": "https://stackoverflow.com/questions/24099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: C++ IDE for Linux? I want to expand my programming horizons to Linux. A good, dependable basic toolset is important, and what is more basic than an IDE? I could find these SO topics: * *Lightweight IDE for linux and *What tools do you use to develop C++ applications on Linux? I'm not looking for a lightweight IDE. If an IDE is worth the money, then I will pay for it, so it need not be free. My question, then: What good C++ programming IDE is available for Linux? The minimums are fairly standard: syntax highlighting, code completion (like IntelliSense or its Eclipse counterpart) and integrated debugging (e.g., basic breakpoints). I have searched for it myself, but there are so many that it is almost impossible to separate the good from the bad by hand, especially for someone like me who has little C++ coding experience in Linux. I know that Eclipse supports C++, and I really like that IDE for Java, but is it any good for C++ and is there something better? The second post actually has some good suggestions, but what I am missing is what exactly makes the suggested IDE so good for the user — what are its (dis)advantages? Maybe my question should therefore be: What IDE do you propose (given your experiences), and why? A: I love how people completely miss the request in the original question for an IDE. Linux is NOT an IDE. That's just not what those words mean. I learned C and C++ using vi and gcc and make, and I'm not saying they aren't adequate tools, but they are NOT an IDE. Even if you use more elaborate tools like vim or emacs or whatever fancy editor you want, typing commands on a command line is not an IDE. Also, you all know make exists as part of Visual Studio, right? The idea that an IDE is "limiting" is just silly if you can use the IDE to speed some things up, yet are still able to fall back on command-line stuff when needed. All that said, I'd suggest, as several above have, trying Code::Blocks. It's got decent code highlighting and a pretty effortless way to create a project, code it, run it, etc. — that is the core of a real IDE — and it seems fairly stable. Debugging sucks... I have never seen a decent interactive debugger in any Linux/UNIX variant. gdb ain't it. If you're used to Visual Studio-style debugging, you're pretty much out of luck. Anyway, I'll go pack my things, I know the one-view-only Linux crowd will shout this down and run me out of town in no time. A: My personal favorite is the CodeLite 2.x IDE. See: http://www.codelite.org The decision to use CodeLite was based on research regarding the following C++ IDEs for Linux: * *Eclipse Galileo with CDT Plugin *NetBeans 6.7 (which is also the base for the SunStudio IDE) *KDevelop4 *CodeBlocks 8.02 *CodeLite 2.x In the end, I decided to use CodeLite 2.x. Below I have listed some pros and cons regarding the mentioned C++ IDEs. Please note that this reflects my personal opinion only! EDIT: what a pity that SOF doesn't support tables, so I have to write in paragraphs ... 
Eclipse Galileo with CDT Plugin Pros: * *reasonably fast *also supports Java, Perl (with the E.P.I.C plugin) *commonly used and well maintained *also available for other OS flavours (Windows, MacOS, Solaris, AIX(?)) Cons: * *GUI is very confusing and somewhat inconsistent - not very intuitive at all *heavyweight *only supports CVS (AFAIK) NetBeans 6.7 (note: this is also the base for the SunStudio IDE) Pros: * *one of the most intuitive GUIs I have ever seen *also supports Java, Python, Ruby *integrates CVS, SVN, Mercurial *commonly used and well maintained *also available for other OS flavours (Windows, MacOS, Solaris) Cons: * *extremely slow *heavyweight *uses spaces for indentation, which is not the policy at my work. I'm sure this is configurable, but I couldn't find out how to do that KDevelop4 (note: I did not do much testing on it) Pros: * *commonly used on Linux *integrates CVS, SVN, Mercurial Cons: * *the GUI looks somewhat old-fashioned *heavyweight *very specific to the KDE environment CodeBlocks 8.02 (note: I did not do much testing on it) Pros: * *reasonably fast Cons: * *the GUI looks somewhat old-fashioned (although it has a nice startup screen) *the fonts in the editor are very small *some icons (e.g. the debugger-related icons for starting/stepping) are very small *no source control integration CodeLite 2.x (note: this is my personal favorite) Pros: * *the best, most modern-looking and intuitive GUI I have seen on Linux *lightweight *reasonably fast *integrates SVN *also available on other OS flavours (Windows, MacOS, Solaris(?)) Cons: * *no CVS integration (that's important for me because I have to use it at work) *no support for Java, Perl, Python (would be nice to have) A: make + vim + gdb = one great IDE A: * *Code::Blocks *Eclipse CDT Soon you'll find that IDEs are not enough, and you'll have to learn the GCC toolchain anyway (which isn't hard, at least learning the basic functionality). But there's no harm in reducing the transitional pain with the IDEs, IMO. A: I quite like Ultimate++'s IDE. It has some features that were designed for use with their own library (which, BTW, is quite a nice toolkit if you don't want to buy in to either GTK+ or Qt) but it works perfectly well with general C++ projects. It provides decent code completion, good syntax colouring, integrated debugging, and all the other features most modern IDEs support. A: I really suggest Code::Blocks. It's not as heavy as Eclipse and it's got Visual Studio project support. A: A quick answer, just to add a little more knowledge to this topic: You must definitely check out NetBeans. NetBeans 6.7 has the following features: * *C/C++ Projects and Templates: Supports syntax highlighting, automatic code completion, automatic indentation. *It has a C/C++ Debugger *Supports Compiler Configurations, Configuration Manager and Makefile Support (with a Wizard). *It has a Classes Window, a Usages Window and a File Navigation Window (or panel). *A Macro expansion view, and also tooltips. *Support for Qt development. I think it's a perfect (and far better) Visual Studio substitution, and a very good tool to learn C/C++. Good luck! A: Perhaps the Linux Tools Project for Eclipse could fill your needs? The Linux Tools project aims to bring a full-featured C and C++ IDE to Linux developers. We build on the source editing and debugging features of the CDT and integrate popular native development tools such as the GNU Autotools, Valgrind, OProfile, RPM, SystemTap, GCov, GProf, LTTng, etc. 
Current projects include LTTng trace viewers and analyzers, an RPM .spec editor, Autotools build integration, a Valgrind heap usage analysis tool, and OProfile call profiling tools. A: On Linux there are plenty of IDEs: * *Code::Blocks *CodeLite *KDevelop *Qt Creator *Eclipse with CDT *NetBeans In my experience, the most valuable are Eclipse and Qt Creator. Both provide all the "standard" features (i.e., autocompletion, syntax highlighting, debugger, git integration). It is worth noting that Eclipse also provides refactoring functionality, while Qt Creator provides integration with Valgrind and support for deployment on remote targets. Also, the commercial CLion IDE seems pretty good (but I've not used it extensively). A: At least for Qt-specific projects, Qt Creator (from Nokia/Trolltech/Digia) shows great promise. A: Although I use Vim, some of my co-workers use SlickEdit, which looks pretty good. I'm not certain about integrated debugging because we wouldn't be able to do that on our particular project anyway. SlickEdit does have good support for navigating large code bases, with cross-referencing and tag jumping. Of course it has the basic stuff like syntax highlighting and code completion too. A: I hear Anjuta is pretty slick for GNOME users. I played a bit with KDevelop and it's nice, but sort of lacking feature-wise. Code::Blocks is also very promising, and I like that one best. A: Sun Studio version 12 is a free download (FREE and paid support available) -- http://developers.sun.com/sunstudio/downloads/thankyou.jsp?submit=%A0FREE+Download%A0%BB%A0. I'm sure you have code completion and debugging support, including plugin support, in this IDE. Sun Studio is available for Linux as well as Solaris. Forums: http://developers.sun.com/sunstudio/community/forums/index.jsp. Sun Studio Linux forums: http://forum.sun.com/forum.jspa?forumID=855 I'll be eager to hear your feedback on this tool. BR, ~A A: I've previously used the Ultimate++ IDE and it's rather good. A: I use Eclipse CDT and Qt Creator (for Qt applications). Those are my preferences. 
It's a very suggestive question and there are as many answers as there are developers. :) A: SlickEdit. I have used and loved SlickEdit since 2005, both on Windows and on Linux. I also have experience working in Visual Studio (5, 6, 2003, 2005) and just with Emacs and the command line. I use SlickEdit with external makefiles; some of my teammates use SlickEdit, others use Emacs/vi. I do not use the integrated debugger, integrated version control, or integrated build system: I generally find too much integration to be a real pain. SlickEdit is robust (very few bugs), fast and intuitive. It is like a German car, a driver's car. The newest versions of SlickEdit seem to offer many features that do not interest me; I am a little worried that the product will become bloated and diluted in the future. For now (I use V13.0) it is great. A: could you clarify a little bit more how it was for you, what you had to change. Maybe you could point me in the right direction by providing some links to the information you used. My first sources were actually the tools' man pages. Just type $ man toolname on the command line ($ here is part of the prompt, not the input). Depending on the platform, they're quite well-written and can also be found on the internet. In the case of make, I actually read the complete documentation, which took a few hours. Actually, I don't think this is necessary or helpful in most cases, but I had a few special requirements in my first assignments under Linux that required a sophisticated makefile. After writing the makefile I gave it to an experienced colleague who did some minor tweaks and corrections. After that, I pretty much knew make. I used GVIM because I had some (but not much) prior experience there; I can't say anything at all about Emacs or alternatives. I find it really helps to read other people's .gvimrc config files. Many people put them on the web. Here's mine. Don't try to master all the binutils at once; there are too many functions. But get a general overview so you'll know where to search when needing something in the future. You should, however, know all the important parameters for g++ and ld (the GCC linker tool that's invoked automatically except when explicitly prevented). Also I'm curious, do you have code completion and syntax highlighting when you code? Syntax highlighting: yes, and a much better one than Visual Studio. Code completion: yes-ish. First, I have to admit that I didn't use C++ code completion even in Visual Studio because (compared to VB and C#) it wasn't good enough. I don't use it often now, but nevertheless, GVIM has native code completion support for C++. Combined with the ctags library and a plug-in like taglist this is almost an IDE. Actually, what got me started was an article by Armin Ronacher. Before reading the text, look at the screenshots at the end of it! do you have to compile first before getting (syntax) errors? Yes. But this is the same for Visual Studio, isn't it (I've never used Whole Tomato)? Of course, the syntax highlighting will show you non-matching brackets, but that's about all. and how do you debug (again think breakpoints etc)? I use gdb, which is a command-line tool. There's also a graphical frontend called DDD. gdb is a modern debugging tool and can do everything you can do in an IDE. The only thing that really annoys me is reading a stack trace, because lines aren't indented or formatted, so it's really hard to scan the information when you're using a lot of templates (which I do). But those also clutter the stack trace in IDEs. 
Like I said, I had the 'pleasure' of taking my first steps in the Java programming language using Windows Notepad and the command-line Java compiler in high school, and it was... well, a nightmare! Certainly when I could compare it with other programming courses I had back then where we had decent IDEs. You shouldn't even try to compare a modern, full-featured editor like Emacs or GVIM to Notepad. Notepad is an embellished TextBox control, and this really makes all the difference. Additionally, working on the command line is a very different experience in Linux and Windows. The Windows cmd.exe is severely crippled. PowerShell is much better. /EDIT: I should mention explicitly that GVIM has tabbed editing (as in tabbed browsing, not tabs-vs-spaces)! It took me ages to find this although it's not hidden at all. Just type :tabe instead of plain :e when opening a file or creating a new one, and GVIM will create a new tab. Switching between tabs can be done using the cursor or several different shortcuts (depending on the platform). The key gt (type g, then t in command mode) should work everywhere, and jumps to the next tab, or tab no. n if a number was given. Type :help gt to get more help. A: For me, Ultimate++ seems to be the best solution for writing cross-OS programs. A: If you have been using Vim for a long time, then you should actually make it your IDE. There are a lot of add-ons available. I found several of them pretty useful and compiled them here; have a look: * *C/C++ IDE *Source code browser And a lot more in the vi/vim tips & tricks series over there. A: I used Anjuta for my university projects about 3 years ago. I haven't been using it lately, but it was nice back then, so it should be better with the latest releases. A: For CMake-based projects I use JetBrains CLion. For Autotools-based projects, the already mentioned Qt Creator. For everything else: Vim + YouCompleteMe. A: Initially: confusion When originally writing this answer, I had recently made the switch from Visual Studio (with years of experience) to Linux and the first thing I did was try to find a reasonable IDE. At the time this was impossible: no good IDE existed. Epiphany: UNIX is an IDE. All of it.1 And then I realised that the IDE in Linux is the command line with its tools: * *First you set up your shell * *Bash, in my case, but many people prefer *fish or *(Oh My) Zsh; *and your editor; pick your poison — both are state of the art: * *Neovim2 or *Emacs. Depending on your needs, you will then have to install and configure several plugins to make the editor work nicely (that's the one annoying part). For example, most programmers on Vim will benefit from the YouCompleteMe plugin for smart autocompletion. Once that's done, the shell is your command interface to interact with the various tools — debuggers (gdb), profilers (gprof, valgrind), etc. You set up your project/build environment using Make, CMake, Snakemake or any of the various alternatives. And you manage your code with a version control system (most people use Git). You also use tmux (previously also screen) to multiplex (= think multiple windows/tabs/panels) and persist your terminal session. The point is that, thanks to the shell and a few tool-writing conventions, these all integrate with each other. And that way the Linux shell is a truly integrated development environment, completely on par with other modern IDEs. (This doesn't mean that individual IDEs don't have features that the command line may be lacking, but the inverse is also true.) 
To each their own I cannot overstate how well the above workflow functions once you've gotten into the habit. But some people simply prefer graphical editors, and in the years since this answer was originally written, Linux has gained a suite of excellent graphical IDEs for several different programming languages (but not, as far as I'm aware, for C++). Do give them a try even if — like me — you end up not using them. Here's just a small and biased selection: * *For Python development, there's PyCharm *For R, there's RStudio *For JavaScript and TypeScript, there's Visual Studio Code (which is also a good all-round editor) *And finally, many people love the Sublime Text editor for general code editing. Keep in mind that this list is far from complete. 1 I stole that title from dsm's comment. 2 I used to refer to Vim here. And while plain Vim is still more than capable, Neovim is a promising restart, and it's modernised a few old warts. A: Not to repeat an answer, but I think I can add a bit more. SlickEdit is an excellent IDE. It supports large code bases well without slowing down or spending all its time indexing. (This is a problem I had with Eclipse's CDT.) SlickEdit's speed is probably the nicest thing about it, actually. The code completion works well and there are a large number of options for things like automatic formatting, beautification and refactoring. It does have integrated debugging. It has plug-in support and a fairly active community creating plug-ins. In theory, you should be able to integrate well with people doing the traditional makefile stuff, as it allows you to create a project directly from one, but that didn't work as smoothly as I would have liked when I tried it. In addition to Linux, there are Mac and Windows versions of it, should you need them. A: As an old-time UNIX guy, I always use Emacs. But that has a pretty steep and long learning curve, so I'm not sure I can recommend it to newcomers. There really isn't a "good" IDE for Linux. Eclipse is not very good for C/C++ (CDT is improving, but is not very useful yet). The others are missing all the features you are going to be looking for. It really is important to learn how all the individual tools (gcc, make, gdb, etc.) work. After you do so, you may find the Visual Studio way of doing things to be very limiting. A: I like SciTE as a basic editor for C++/Python on Linux. It has keyboard bindings similar to VC so you do not have to reprogram your cut-and-paste fingers. I use it together with Git for source code control and the very useful 'git grep' command for searching in your code base. I played with Eclipse CDT, but my source codebase was too big for it and I spent too much time waiting on the IDE. If your project is smaller it may be good for you though. A: Use MonoDevelop. It is very similar to Visual Studio. It works cross-platform and is awesome!! A: I prefer using Emacs and Vim for writing C++ code. When I need to use an IDE, I use Code::Blocks. A: Check out NetBeans; it's written in Java so you'll have the same environment regardless of your OS, and it supports a lot more than just C++. I'm not going to try to convince you, because I think IDEs can be a very personal choice. For me it improves my productivity by being fast, supporting the languages I code in and having the standard features you'd expect from an IDE. A: Just a quick follow-up for this question... It's been a month since I started using Vim as my main 'GUI' tool for programming C++ in Linux. 
At first the learning curve was indeed a bit steep, but after a while, with the right options turned on and scripts running, I really got the hang of it! I love the way you can shape Vim to suit your needs; just add/change key mappings and Vim is turned into a highly productive 'IDE'. The toolchain to build and compile a C++ program on Linux is also really intuitive. make and g++ are the tools you'll use. The debugger ddd is however not really that good, but maybe that's because I haven't had the time to master it properly. So to anyone who is, or was, looking for a good C++ IDE in Linux, just like I was: your best bet lies with the standard available tools in Linux itself (Vim, g++, ddd) and you should really at least try to use them before looking for something else... Last but not least, I really want to thank Konrad for his answer here; it really helped me find my way in the Linux development environment. Thank you! I'm also not closing this question, so people can still react or maybe even add new suggestions or additions to the already really nice answers... A: I recommend you read The Art of UNIX Programming. It will frame your mind into using the environment as your IDE. A: The shorter answer is: choose whatever "editor" you like, then use the GDB console or a simple GDB front end to debug your application. The debuggers that come with fancy IDEs such as NetBeans suck for C/C++. I use NetBeans as my editor, and Insight and the GDB console as my debugger. With Insight, you have a nice GUI and the raw power of GDB. As soon as you get used to GDB commands, you will start to love them, since you can do things you will never be able to do using a GUI. You can even use Python as your script language if you are using GDB 7 or a newer version. Most people here paid more attention to the "editors" of the IDEs. However, if you are developing a large project in C/C++, you could easily spend more than 70% of your time on the "debuggers". The debuggers of the fancy IDEs are at least 10 years behind Visual Studio. For instance, NetBeans has a very similar interface to Visual Studio. But its debugger has a number of disadvantages compared to Visual Studio: * *Very slow to display even an array with only a few hundred elements *No highlighting for changed values (by default, Visual Studio shows changed values in the watch window in red) *Very limited ability to show memory. *You cannot modify the source code and then continue to run. If a bug takes a long time to hit, you would like to change the source, apply the changes live and continue to run your application. *You cannot change the "next statement" to run. In Visual Studio, you can use "Set Next Statement" to change how your application runs. Although this feature could crash your application if not used properly, it will save you a lot of time. For instance, if you find the state of your application is not correct but you do not know what caused the problem, you might want to rerun a certain region of your source code without restarting your application. *No built-in support for STL containers such as vector, list, deque, map, etc. *No watch points. You need this feature when you need to stop your application right at the point a variable is changed. Intel-based computers have hardware watch points, so the watch points will not slow down your system. It might take many hours to find some hard-to-find bugs without using watch points. (Visual Studio calls watch points "Data Breakpoints".) The list can be a lot longer. 
I was so frustrated by the disadvantages of NetBeans and other similar IDEs that I started to learn GDB itself. I found GDB itself is very powerful. GDB does not have any of the "disadvantages" mentioned above. Actually, GDB is very powerful; it is even better than Visual Studio in many ways. Here I'll just show you a very simple example. Say you have an array like:

struct IdAndValue
{
  int ID;
  int value;
};
IdAndValue IdAndValues[1000];

When your application stops and you want to examine the data in IdAndValues — for instance, to find the ordinals and values in the array for a particular "ID" — you can create a script like the following (note the increment sits outside the if, so the loop advances on every iteration):

define PrintVal
  set $i=0
  printf "ID = %d\n", $arg0
  while $i<1000
    if IdAndValues[$i].ID == $arg0
      printf "ordinal = %d, value = %d\n", $i, IdAndValues[$i].value
    end
    set $i++
  end
end

You can use all the variables in your application in the current context, your own variables (in our example, $i), arguments passed ($arg0 in our example) and all GDB commands (built-in or user-defined). Use PrintVal 1 from the GDB prompt to print out values for ID "1". By the way, NetBeans does come with a GDB console, but by using the console you could crash NetBeans. And I believe that is why the console is hidden by default in NetBeans. A: I am using "Geany" and have found it good so far; it's a fast and lightweight IDE. Among Geany's features are: * *Code folding *Session saving *Basic IDE features such as syntax highlighting, tabs, automatic indentation and code completion *Simple project management *Build system *Color picker (surprisingly handy during web development) *Embedded terminal emulation *Call tips *Symbol lists *Auto-completion of common constructs (such as if, else, while, etc.) A: If you like Eclipse for Java, I suggest Eclipse CDT. Although the C/C++ support isn't as powerful as it is for Java, it still offers most of the features. It has a nice feature named Managed Project that makes working with C/C++ projects easier if you don't have experience with Makefiles. But you can still use Makefiles. I do C and Java coding and I'm really happy with CDT. I'm developing the firmware for an embedded device in C and an application in Java that talks to this device, and it is really nice to use the same environment for both. I guess it probably makes me more productive. A: Konrad's advice is excellent, and you should become happily productive in a classic vi/cc/ld/db/make environment without too much trouble. Many, many university students have learned this toolchain over the course of a 10-15 week class. That said, the other classic environment is to go the Emacs route. I wouldn't call it an IDE, but it does integrate two important development tools into the editor: the compiler's output, and the debugger. You can have it zip you to the line in the file corresponding to a compiler error, and you can set breakpoints and use the stepper from the editor. A: I'm glad you seem to be working it out with vim. But I have to say, I'm a bit mystified about how you already "really like Eclipse for Java", implying that you're already familiar with how it works. In that case, why wouldn't you also use it for C++? CDT meets every requirement you've mentioned. A: Having been raised on Visual Studio, I've found the relatively young Code::Blocks to be very familiar. A: The vim editor + g++ compiler (GNU C++) + gdb might help you. A: IntelliJ IDEA + the C/C++ plugin at http://plugins.intellij.net/plugin/?id=1373 Prepare to have your mind blown. Cheers! 
A: why wouldn't you also use it for C++? CDT meets every requirement you've mentioned. I didn't use Eclipse at first because I wasn't sure that it was equally good at giving me the means of developing in C++ (efficiently). Besides that, I was also convinced that there had to be better, more specialized tools available for C++ development in Linux: and I really like that [Eclipse] IDE for Java, but is it any good for C++ and won't I miss out on something that is even better? I honestly believe that, although some tools (like Eclipse) are great at many things, it is best to look for other options as well (and I don't mean that for IDEs only, but in general and even in real life)... As in this case: Vim is really great, and I would have missed out on it if I had stuck to something I already knew. A: Code::Blocks is great. A: You can also try to set up Emacs as an IDE. Details are discussed here. A: acme, sam from plan9 — you can use them through Plan 9 from User Space. A: In my eyes, the best IDE for Linux is SlickEdit. It costs some money, but it is fast, has great support for tagging and a great diff tool, and works well with huge projects.
{ "language": "en", "url": "https://stackoverflow.com/questions/24109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "209" }
Q: Outlook Add-in using .NET We have been developing an Outlook Add-in using Visual Studio 2008. However, I am facing a strange behavior while adding a command button to a custom command bar. This behavior shows up when we add the button in the reply, reply all and forward windows. The issue is that the caption of the command button is not visible, though when we debug using VS it shows the caption correctly. But the button is captionless when viewed in Outlook (2003). I have the code snippet below. Any help would be appreciated.

private void AddButtonInNewInspector(Microsoft.Office.Interop.Outlook.Inspector inspector)
{
    try
    {
        if (inspector.CurrentItem is Microsoft.Office.Interop.Outlook.MailItem)
        {
            try
            {
                foreach (CommandBar c in inspector.CommandBars)
                {
                    if (c.Name == "custom")
                    {
                        c.Delete();
                    }
                }
            }
            catch
            {
            }
            finally
            {
                // Add custom command bar and command button.
                CommandBar myCommandBar = inspector.CommandBars.Add("custom", MsoBarPosition.msoBarTop, false, true);
                myCommandBar.Visible = true;
                CommandBarControl myCommandbarButton = myCommandBar.Controls.Add(MsoControlType.msoControlButton, 1, "Add", System.Reflection.Missing.Value, true);
                myCommandbarButton.Caption = "Add Email";
                myCommandbarButton.Width = 900;
                myCommandbarButton.Visible = true;
                myCommandbarButton.DescriptionText = "This is Add Email Button";
                CommandBarButton btnclickhandler = (CommandBarButton)myCommandbarButton;
                btnclickhandler.Click += new Microsoft.Office.Core._CommandBarButtonEvents_ClickEventHandler(this.OnAddEmailButtonClick);
            }
        }
    }
    catch (System.Exception ex)
    {
        MessageBox.Show(ex.Message.ToString(), "AddButtonInNewInspector");
    }
}

A: I don't know the answer to your question, but I would highly recommend Add-In Express for doing the add-in. See http://www.add-in-express.com/add-in-net/. I've used this in many projects, including some commercial software, and it is completely awesome. It does all the Outlook (and Office) integration for you, so you just work with it like any toolbar and just focus on the specifics of what you need it to do. You won't ever have to worry about the Outlook extensibility at all. Highly recommended. Anyway, just wanted to mention it as something to look into. It will definitely save some headaches if you're comfortable with using a third-party component in the project. A: I don't know, but your code raises two questions: * *Why are you declaring "CommandBarControl myCommandbarButton" instead of "CommandBarButton myCommandbarButton"? *Why are you setting the width to 900 pixels? That's huge. I never bother with this setting in Excel since it autosizes, and I'm guessing Outlook would behave the same. A: You aren't setting the command bar button's style property (from what I can tell). This results in the button having an MsoButtonStyle of msoButtonAutomatic. I have seen the caption fail to appear if the style is left at this. Try setting the Style property to msoButtonCaption.
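Applying the last answer's suggestion to the code above would mean setting the style explicitly right after the cast (MsoButtonStyle lives in Microsoft.Office.Core):

CommandBarButton btnclickhandler = (CommandBarButton)myCommandbarButton;
// Force the caption to render; the default msoButtonAutomatic style can hide it.
btnclickhandler.Style = MsoButtonStyle.msoButtonCaption; // or msoButtonIconAndCaption
btnclickhandler.Click += new Microsoft.Office.Core._CommandBarButtonEvents_ClickEventHandler(this.OnAddEmailButtonClick);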
{ "language": "en", "url": "https://stackoverflow.com/questions/24113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Classes vs 2D arrays Which is better to use in PHP, a 2D array or a class? I've included an example of what I mean by this.

// Using a class
class someClass
{
    public $name;
    public $height;
    public $weight;

    function __construct($name, $height, $weight)
    {
        $this->name = $name;
        $this->height = $height;
        $this->weight = $weight;
    }
}

$classArray[1] = new someClass('Bob', 10, 20);
$classArray[2] = new someClass('Fred', 15, 10);
$classArray[3] = new someClass('Ned', 25, 30);

// Using a 2D array
$normalArray[1]['name'] = 'Bob';
$normalArray[1]['height'] = 10;
$normalArray[1]['weight'] = 20;

$normalArray[2]['name'] = 'Fred';
$normalArray[2]['height'] = 15;
$normalArray[2]['weight'] = 10;

$normalArray[3]['name'] = 'Ned';
$normalArray[3]['height'] = 25;
$normalArray[3]['weight'] = 30;

Assuming that nobody comes out and shows that classes are too slow, it looks like the class wins. I have no idea which answer I should accept, so I've just upvoted all of them. I have now written two nearly identical pages, one using the 2D array (written before this question was posted) and one using a class, and I must say that the class produces much nicer code. I have no idea how much overhead is going to be generated, but I doubt it will rival the improvement to the code itself. Thank you for helping to make me a better programmer.
A: The "class" that you've constructed above is what most people would use a struct for in other languages. I'm not sure what the performance implications are in PHP, though I suspect instantiating the objects is probably more costly here, if only by a little bit. That being said, if the cost is relatively low, it IS a bit easier to manage the objects, in my opinion. I'm only saying the following based on the title and your question, but: bear in mind that classes provide the advantage of methods and access control as well. So if you wanted to ensure that people weren't changing weights to negative numbers, you could make the weight field private and provide some accessor methods, like getWeight() and setWeight(). Inside setWeight(), you could do some value checking, like so (a fuller sketch appears at the end of this thread):

public function setWeight($weight)
{
    if ($weight >= 0) {
        $this->weight = $weight;
    } else {
        // Handle this scenario however you like
    }
}

A: It depends exactly what you mean by 'better'. I'd go for the object-oriented way (using classes) because I find it makes for cleaner code (at least in my opinion). However, I'm not sure what the speed penalties might be for that option.
A: Generally, I follow this rule: 1) Make it a class if multiple parts of your application use the data structure. 2) Make it a 2D array if you're using it for quick processing of data in one part of your application.
A: "It's the speed that I am thinking of mostly; for anything more complex than what I have here I'd probably go with classes, but the question is, what is the cost of a class?" This would seem to be premature optimisation. Your application isn't going to take any real-world performance hit either way, but using a class lets you use getter and setter methods and is generally going to be better for code encapsulation and code reuse. With the arrays you're incurring a cost in harder-to-read, harder-to-maintain code, you can't unit test the code as easily, and with a good class structure other developers should find it easier to understand if they need to take it on. And when later on you need to add other methods to manipulate these, you won't have an architecture to extend.
A: The class that you have is not a real class in OO terms; it's just been constructed to hold the instance variables. That said, there probably isn't much issue with speed; it's just a style thing in your example. The interesting bit is that if you constructed the object to be a real "person" class, thinking about the other attributes and actions that you may want of a person, then you would notice a benefit not only in style while writing code, but also in performance.
A: If your code uses a lot of functions that operate on those attributes (name/height/weight), then using a class could be a good option.
A: Teifion, if you use classes as a mere replacement for arrays, you are nowhere near OOP. The essence of OOP is that objects have knowledge and responsibility: they can actually do things and cooperate with other classes. Your objects have knowledge only and can't do anything other than idly exist; however, they seem to be good candidates for persistence providers (objects that know how to store/retrieve themselves into/from a database). Don't worry about performance, either. Objects in PHP are fast and lightweight, and performance in general is much overrated. It's cheaper to save your time as a programmer by using the right approach than to save microseconds in your program with some obscure, hard-to-debug-and-fix piece of code.
A: Most tests that time arrays vs classes only test instantiating them; once you actually start to do something with them, the picture changes. I was a "purist" who used only arrays because the performance was SO much better. I wrote the following code to justify to myself the extra hassle of not using classes (even though they are easier on the programmer). Let's just say I was VERY surprised at the results!

<?php
$rx = "";
$rt = "";
$rf = "";
$ta = 0; // total array time
$tc = 0; // total class time

// flip these to test different attributes
$test_globals = true;
$test_functions = true;
$test_assignments = true;
$test_reads = true;

// define class
class TestObject
{
    public $a;
    public $b;
    public $c;
    public $d;
    public $e;
    public $f;

    public function __construct($a, $b, $c, $d, $e, $f)
    {
        $this->a = $a;
        $this->b = $b;
        $this->c = $c;
        $this->d = $d;
        $this->e = $e;
        $this->f = $f;
    }

    public function setAtoB()
    {
        $this->a = $this->b;
    }
}

// begin test
echo "<br>test reads: " . $test_reads;
echo "<br>test assignments: " . $test_assignments;
echo "<br>test globals: " . $test_globals;
echo "<br>test functions: " . $test_functions;
echo "<br>";

for ($z = 0; $z < 10; $z++) {
    // time the array version
    $starta = microtime(true);
    for ($x = 0; $x < 100000; $x++) {
        $xr = getArray('aaa', 'bbb', 'ccccccccc', 'ddddddddd', 'eeeeeeee', 'fffffffffff');
        if ($test_assignments) {
            $xr['e'] = "e";
            $xr['c'] = "sea biscut";
        }
        if ($test_reads) {
            $rt = $xr['b'];
            $rx = $xr['f'];
        }
        if ($test_functions) {
            setArrAtoB($xr);
        }
        if ($test_globals) {
            $rf = glb_arr();
        }
    }
    $ta = $ta + (microtime(true) - $starta);
    echo "<br/>Array time = " . (microtime(true) - $starta) . "\n\n";

    // time the class version
    $startc = microtime(true);
    for ($x = 0; $x < 100000; $x++) {
        $xo = new TestObject('aaa', 'bbb', 'ccccccccc', 'ddddddddd', 'eeeeeeee', 'fffffffffff');
        if ($test_assignments) {
            $xo->e = "e";
            $xo->c = "sea biscut";
        }
        if ($test_reads) {
            $rt = $xo->b;
            $rx = $xo->f;
        }
        if ($test_functions) {
            $xo->setAtoB();
        }
        if ($test_globals) {
            $rf = glb_cls();
        }
    }
    $tc = $tc + (microtime(true) - $startc);
    echo "<br>Class time = " . (microtime(true) - $startc) . "\n\n";
    echo "<br>";
    echo "<br>Total Array time (so far) = " . $ta . " (100,000 iterations) \n\n";
    echo "<br>Total Class time (so far) = " . $tc . " (100,000 iterations) \n\n";
    echo "<br>";
}

echo "TOTAL TIMES:";
echo "<br>";
echo "<br>Total Array time = " . $ta . " (1,000,000 iterations) \n\n";
echo "<br>Total Class time = " . $tc . " (1,000,000 iterations)\n\n";

// test functions
function getArray($a, $b, $c, $d, $e, $f)
{
    $arr = array();
    $arr['a'] = $a;
    $arr['b'] = $b;
    $arr['c'] = $c;
    $arr['d'] = $d;
    $arr['e'] = $e;
    $arr['f'] = $f;
    return $arr;
}

function setArrAtoB($r)
{
    $r['a'] = $r['b'];
}

function glb_cls()
{
    global $xo;
    $xo->d = "ddxxdd";
    return $xo->f;
}

function glb_arr()
{
    global $xr;
    $xr['d'] = "ddxxdd";
    return $xr['f'];
}
?>

test reads: 1
test assignments: 1
test globals: 1
test functions: 1

Array time = 1.58905816078
Class time = 1.11980104446
Total Array time (so far) = 1.58903813362 (100,000 iterations)
Total Class time (so far) = 1.11979603767 (100,000 iterations)

Array time = 1.02581000328
Class time = 1.22492313385
Total Array time (so far) = 2.61484408379 (100,000 iterations)
Total Class time (so far) = 2.34471416473 (100,000 iterations)

Array time = 1.29942297935
Class time = 1.18844485283
Total Array time (so far) = 3.91425895691 (100,000 iterations)
Total Class time (so far) = 3.5331492424 (100,000 iterations)

Array time = 1.28776097298
Class time = 1.02383089066
Total Array time (so far) = 5.2020149231 (100,000 iterations)
Total Class time (so far) = 4.55697512627 (100,000 iterations)

Array time = 1.31235599518
Class time = 1.38880181313
Total Array time (so far) = 6.51436591148 (100,000 iterations)
Total Class time (so far) = 5.94577097893 (100,000 iterations)

Array time = 1.3007349968
Class time = 1.07644081116
Total Array time (so far) = 7.81509685516 (100,000 iterations)
Total Class time (so far) = 7.02220678329 (100,000 iterations)

Array time = 1.12752890587
Class time = 1.07106018066
Total Array time (so far) = 8.94262075424 (100,000 iterations)
Total Class time (so far) = 8.09326195717 (100,000 iterations)

Array time = 1.08890199661
Class time = 1.09139609337
Total Array time (so far) = 10.0315177441 (100,000 iterations)
Total Class time (so far) = 9.18465089798 (100,000 iterations)

Array time = 1.6172170639
Class time = 1.14714384079
Total Array time (so far) = 11.6487307549 (100,000 iterations)
Total Class time (so far) = 10.3317887783 (100,000 iterations)

Array time = 1.53738498688
Class time = 1.28127002716
Total Array time (so far) = 13.1861097813 (100,000 iterations)
Total Class time (so far) = 11.6130547523 (100,000 iterations)

TOTAL TIMES:
Total Array time = 13.1861097813 (1,000,000 iterations)
Total Class time = 11.6130547523 (1,000,000 iterations)

So, either way, the difference is pretty negligible. I was very surprised to find that once you start accessing things globally, classes actually become a little faster. But don't trust me, run it for yourself. I personally now feel completely guilt-free about using classes in my high-performance applications. :D
A: @Richard Varno I ran your exact code (after fixing the small bugs), and got much different results than you. Classes ran much slower on my PHP 5.3.17 install.
Array time = 0.69054913520813
Class time = 1.1762700080872
Total Array time (so far) = 0.69054508209229 (100,000 iterations)
Total Class time (so far) = 1.1762590408325 (100,000 iterations)

Array time = 0.99001502990723
Class time = 1.22034907341
Total Array time (so far) = 1.6805560588837 (100,000 iterations)
Total Class time (so far) = 2.3966031074524 (100,000 iterations)

Array time = 0.99191808700562
Class time = 1.2245700359344
Total Array time (so far) = 2.6724660396576 (100,000 iterations)
Total Class time (so far) = 3.6211669445038 (100,000 iterations)

Array time = 0.9890251159668
Class time = 1.2246470451355
Total Array time (so far) = 3.661484003067 (100,000 iterations)
Total Class time (so far) = 4.8458080291748 (100,000 iterations)

Array time = 0.99573588371277
Class time = 1.1242771148682
Total Array time (so far) = 4.6572148799896 (100,000 iterations)
Total Class time (so far) = 5.9700801372528 (100,000 iterations)

Array time = 0.88518786430359
Class time = 1.1427340507507
Total Array time (so far) = 5.5423986911774 (100,000 iterations)
Total Class time (so far) = 7.1128082275391 (100,000 iterations)

Array time = 0.87605404853821
Class time = 0.95899105072021
Total Array time (so far) = 6.4184486865997 (100,000 iterations)
Total Class time (so far) = 8.0717933177948 (100,000 iterations)

Array time = 0.73414516448975
Class time = 1.0223190784454
Total Array time (so far) = 7.1525888442993 (100,000 iterations)
Total Class time (so far) = 9.0941033363342 (100,000 iterations)

Array time = 0.95230412483215
Class time = 1.059828042984
Total Array time (so far) = 8.1048839092255 (100,000 iterations)
Total Class time (so far) = 10.153927326202 (100,000 iterations)

Array time = 0.75814390182495
Class time = 0.84455919265747
Total Array time (so far) = 8.8630249500275 (100,000 iterations)
Total Class time (so far) = 10.998482465744 (100,000 iterations)

TOTAL TIMES:
Total Array time = 8.8630249500275 (1,000,000 iterations)
Total Class time = 10.998482465744 (1,000,000 iterations)
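Going back to the accessor answer near the top of this thread: once the data lives in a class, behaviour can move there too, which is the real argument for classes beyond the raw timings above. A small illustrative sketch (the Person name, the units, and the bmi() method are invented for the example, not taken from any answer here):

<?php
class Person
{
    private $name;
    private $height; // metres, for this example
    private $weight; // kilograms

    public function __construct($name, $height, $weight)
    {
        $this->name = $name;
        $this->height = $height;
        $this->setWeight($weight);
    }

    public function setWeight($weight)
    {
        // Reject nonsense values instead of silently storing them
        if ($weight >= 0) {
            $this->weight = $weight;
        }
    }

    // Behaviour a plain 2D array cannot carry around with it
    public function bmi()
    {
        return $this->weight / ($this->height * $this->height);
    }
}

$bob = new Person('Bob', 1.8, 80);
echo $bob->bmi(); // roughly 24.7
?>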
{ "language": "en", "url": "https://stackoverflow.com/questions/24130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: From Monorail to ASP.Net MVC The last time I took on a non-trivial .Net/C# application I used Castle MonoRail and, on the whole, enjoyed the experience. Early-access/preview releases of ASP.NET MVC were not yet available. Many "Microsoft shops" will now find the "official" solution more appealing. Has anyone gone from MonoRail to ASP.NET MVC? How did you find the switch? What are the biggest differences, presently?
A: I have made the switch, since, as you pointed out, it will be the preferred way for Microsoft shops. The switch was pretty trivial, and as Mike pointed out, it ships with the WebForms view engine as the default, but like Mike also said, you can still take advantage of the views you wrote in Brail and NVelocity via the MvcContrib project. ASP.NET MVC doesn't tie you to one view engine; you can use any view engine you want, so I don't necessarily think this is a difference. The biggest difference I found was grouping my controllers and views. In MonoRail you could do this easily with the ControllerDetails attribute. I was able to get around this limitation easily by coding my own, but I wish the functionality was built in. I did it by creating my own ViewLocator and an ActionFilterAttribute (see the sketch at the end of this thread).
A: I am a MonoRail user, and so far I still feel more comfortable on MonoRail + ActiveRecord, largely due to the convenience built into ActiveRecord's ARSmartDispatchController. However, I have to say that MonoRail does not have a good documentation base so far (I am one of those who should be blamed, as a community participant who didn't help enough to write the docs). As I saw in the comments here, ASP.NET MVC uses the WebForms view engine. I think MonoRail has that too, but it was said to be quite problematic, so I wonder what the experience with the ASP.NET MVC WebForms engine is like: can you use the WebForms components mostly as they are, or do you have to basically abandon most of them and stick to a more template-style approach (like <%= or <%#)?
A: Luckily I am not working for an organization where using products shipped by Microsoft is a requirement, so I might not directly answer your question. However, in terms of using MonoRail, I enjoy every part of the framework; despite the lack of documentation, the test suites are there to guide me through. In short, I do not want to invest time in learning a new framework, even though the two closely match (each has its own conventions), because ASP.NET MVC still lacks some features that I am already familiar with, such as those mentioned by Dale Ragon: ControllerDetails, ActiveRecord and so on.
A: The ASP.NET MVC team is still making changes before v1.0, so now's a good time to provide feedback. Also, be aware that there are more frequent releases on CodePlex, while the home page on www.asp.net still links to Preview 3.
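For what it's worth, here is a rough sketch of the view-location half of the grouping idea from the first answer. It is written against the API as it eventually shipped in ASP.NET MVC 1.0 (the preview builds changed frequently, so member names may differ there), and the "Admin" group folder is an invented example, so treat it as a starting point rather than a drop-in solution:

// Register in Global.asax Application_Start:
//   ViewEngines.Engines.Clear();
//   ViewEngines.Engines.Add(new GroupedViewEngine());
public class GroupedViewEngine : WebFormViewEngine
{
    public GroupedViewEngine()
    {
        // Probe a per-group folder before the conventional locations, so views
        // can be organized as ~/Views/<group>/<controller>/<action>.aspx.
        ViewLocationFormats = new[]
        {
            "~/Views/Admin/{1}/{0}.aspx", // hypothetical "Admin" group
            "~/Views/{1}/{0}.aspx",
            "~/Views/Shared/{0}.aspx"
        };
    }
}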
{ "language": "en", "url": "https://stackoverflow.com/questions/24165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why are relational set-based queries better than cursors? When writing database queries in something like TSQL or PLSQL, we often have a choice of iterating over rows with a cursor to accomplish the task, or crafting a single SQL statement that does the same job all at once. Also, we have the choice of simply pulling a large set of data back into our application and then processing it row by row, with C# or Java or PHP or whatever. Why is it better to use set-based queries? What is the theory behind this choice? What is a good example of a cursor-based solution and its relational equivalent?
A: You wanted some real-life examples. My company had a cursor that took over 40 minutes to process 30,000 records (and there were times when I needed to update over 200,000 records). It took 45 seconds to do the same task without the cursor. In another case I removed a cursor and cut the processing time from over 24 hours to less than a minute. One was an insert using the VALUES clause instead of a select, and the other was an update that used variables instead of a join. A good rule of thumb is that if it is an insert, update, or delete, you should look for a set-based way to perform the task. Cursors have their uses (or the code wouldn't be there in the first place), but they should be extremely rare when querying a relational database (except in Oracle, which is optimized to use them). One place where they can be faster is when doing calculations based on the value of the preceding record (running totals); but even that should be tested (a sketch appears at the end of this thread). Another limited case for using a cursor is some batch processing. If you are trying to do too much at once in set-based fashion, it can lock the table against other users. If you have a truly large set, it may be best to break it up into smaller set-based inserts, updates or deletes that will not hold the lock too long, and then run through the sets using a cursor. A third use of a cursor is to run system stored procs over a group of input values. Since this is limited to a generally small set and no one should mess with the system procs, this is an acceptable thing for an administrator to do. I do not recommend doing the same thing with a user-created stored proc in order to process a large batch and to reuse code. It is better to write a set-based version that will be a better performer, as performance should trump code reuse in most cases.
A: I think the real answer is, like all approaches in programming, that it depends on which one is better. Generally, a set-based language is going to be more efficient, because that is what it was designed to do. There are two places where a cursor is at an advantage:
* You are updating a large data set in a database where locking rows is not acceptable (during production hours, maybe). A set-based update has a possibility of locking a table for several seconds (or minutes), whereas a cursor (if written correctly) does not. The cursor can meander through the rows, updating one at a time, and you don't have to worry about affecting anything else.
* The advantage of using SQL is that the bulk of the optimization work is handled by the database engine in most circumstances; with the enterprise-class DB engines, the designers have gone to painstaking lengths to make sure the system is efficient at handling data. The drawback is that SQL is a set-based language: you have to be able to define a set of data to use it. Although this sounds easy, in some circumstances it is not.
  A query can be so complex that the internal optimizers in the engine can't effectively create an execution path, and guess what happens... your super-powerful box with 32 processors uses a single thread to execute the query, because it doesn't know how to do anything else. So you waste processor time on the database server, of which there is generally only one, as opposed to multiple application servers (so, back to reason 1, you run into resource contention with other things that need to run on the database server). With a row-based language (C#, PHP, Java etc.), you have more control over what happens. You can retrieve a data set and force it to execute the way you want it to (separate the data set out to run on multiple threads, etc.). Most of the time it still isn't going to be as efficient as running it on the database engine, because it will still have to access the engine to update each row; but when you have to do 1000+ calculations to update a row (and let's say you have a million rows), a database server can start to have problems.
A: The main reason that I'm aware of is that set-based operations can be optimised by the engine by running them across multiple threads. For example, think of a quicksort: you can separate the list you're sorting into multiple "chunks" and sort each separately in its own thread. SQL engines can do similar things with huge amounts of data in one set-based query. When you perform cursor-based operations, the engine can only run sequentially, and the operation has to be single-threaded.
A: Set-based queries are (usually) faster because:
* They give more information to the query optimizer to optimize
* They can batch reads from disk
* There's less logging involved for rollbacks, transaction logs, etc.
* Fewer locks are taken, which decreases overhead
* Set-based logic is the focus of RDBMSs, so they've been heavily optimized for it (often at the expense of procedural performance)
Pulling data out to the middle tier to process it can be useful, though, because it takes the processing overhead off the DB server (which is the hardest thing to scale, and is normally doing other things as well). Also, you normally don't have the same overheads (or benefits) in the middle tier: things like transactional logging, built-in locking and blocking, etc. Sometimes these are necessary and useful; other times they're just a waste of resources. A simple cursor-with-procedural-logic vs. set-based example (T-SQL) that will assign an area code based on the telephone exchange:

--Cursor
DECLARE @phoneNumber char(7)
DECLARE c CURSOR LOCAL FOR
    SELECT PhoneNumber FROM Customer WHERE AreaCode IS NULL
OPEN c
FETCH NEXT FROM c INTO @phoneNumber
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @exchange char(3), @areaCode char(3)
    SELECT @exchange = LEFT(@phoneNumber, 3)

    SELECT @areaCode = AreaCode
    FROM AreaCode_Exchange
    WHERE Exchange = @exchange

    IF @areaCode IS NOT NULL
    BEGIN
        UPDATE Customer SET AreaCode = @areaCode
        WHERE CURRENT OF c
    END

    FETCH NEXT FROM c INTO @phoneNumber
END
CLOSE c
DEALLOCATE c

--Set
UPDATE Customer
SET AreaCode = AreaCode_Exchange.AreaCode
FROM Customer
JOIN AreaCode_Exchange
    ON LEFT(Customer.PhoneNumber, 3) = AreaCode_Exchange.Exchange
WHERE Customer.AreaCode IS NULL

A: In addition to the above "let the DBMS do the work" (which is a great solution), there are a couple of other good reasons to leave the query in the DBMS:
* It's (subjectively) easier to read. When looking at the code later, would you rather try to parse a complex stored procedure (or client-side code) with loops and things, or would you rather look at a concise SQL statement?
* It avoids network round trips. Why shove all that data to the client and then shove more back? Why thrash the network if you don't need to?
* It's wasteful. Your DBMS and app server(s) will need to buffer some or all of that data to work on it. If you don't have infinite memory, you'll likely page out other data; why kick possibly important things out of memory to buffer a result set that is mostly useless?
* Why wouldn't you? You bought (or are otherwise using) a highly reliable, very fast DBMS. Why wouldn't you use it?
A: I think it comes down to using the database the way it was designed to be used. Relational database servers are specifically developed and optimized to respond best to questions expressed in set logic. Functionally, the penalty for cursors will vary hugely from product to product. Some (most?) RDBMSs are built at least partially on top of ISAM engines. If the question is appropriate, and the veneer thin enough, it might in fact be as efficient to use a cursor. But that's one of the things you should become intimately familiar with, in terms of your brand of DBMS, before trying it.
A: As has been said, the database is optimized for set operations. Engineers literally sat down and debugged/tuned that database for long periods of time; the chances of you out-optimizing them are pretty slim. There are all sorts of fun tricks you can play if you have a set of data to work with, like batching disk reads/writes together, caching, and multi-threading. Also, some operations have a high overhead cost, but if you apply them to a bunch of data at once, the cost per piece of data is low. If you are only working one row at a time, a lot of these methods and operations just can't happen. For example, just look at the way the database joins. By looking at explain plans you can see several ways of doing joins. Most likely, with a cursor you go row by row in one table and then select the values you need from another table; basically it's like a nested loop, only without the tightness of the loop (which is most likely compiled into machine language and super-optimized). SQL Server on its own has a whole bunch of ways of joining. If the rows are sorted, it will use some type of merge algorithm; if one table is small, it may turn that table into a hash lookup table and do the join by performing O(1) lookups from the other table into the lookup table. There are a number of join strategies that many DBMSs have that will beat looking up values from one table in a cursor yourself. Just look at the example of creating a hash lookup table. If you are joining two tables, one of length n and one of length m, where m is the smaller table, building the table is roughly m operations. Each lookup should be constant time, so that is n operations. So, basically, the efficiency of a hash join is around m (setup) + n (lookups). If you do it yourself, and assuming no lookups/indexes, then for each of the n rows you will have to search m records (on average that equates to m/2 searches). So, basically, the level of operations goes from m + n (joining a bunch of records at once) to m * n / 2 (doing lookups through a cursor). Also, these operation counts are simplifications: depending upon the cursor type, fetching each row of a cursor may be the same as doing another select from the first table. Locks also kill you.
If you have cursors on a table, you are locking up rows (in SQL Server this is less severe for static and forward-only cursors... but the majority of cursor code I see just opens a cursor without specifying any of these options). If you do the operation in a set, the rows will still be locked up, but for a shorter amount of time. Also, the optimizer can see what you are doing, and it may decide it is more efficient to lock the whole table instead of a bunch of rows or pages; but if you go line by line, the optimizer has no idea. The other thing is that I have heard that in Oracle's case cursor operations are super-optimized, so the penalty for cursors versus set-based operations is nowhere near as large in Oracle as it is in SQL Server. I'm not an Oracle expert, so I can't say for sure, but more than one Oracle person has told me that cursors are way more efficient in Oracle. So if you sacrificed your firstborn son for Oracle, you may not have to worry about cursors; consult your local highly paid Oracle DBA. :)
A: The idea behind preferring to do the work in queries is that the database engine can optimize by reformulating it. That's also why you'd want to run EXPLAIN on your query, to see what the DB is actually doing (e.g. taking advantage of indices, table sizes and sometimes even knowledge about the distributions of values in columns). That said, to get good performance in your actual concrete case, you may have to bend or break rules. Oh, another reason might be constraints: incrementing a unique column by one might be okay if constraints are checked after all the updates, but it generates a collision if done one by one.
A: Set-based is done in one operation; a cursor is as many operations as there are rows in the cursor's rowset.
A: The REAL answer is: go get one of E. F. Codd's books and brush up on relational algebra. Then get a good book on Big O notation. After nearly two decades in IT, this is, IMHO, one of the big tragedies of the modern MIS or CS degree: very few actually study computation. You know... the "compute" part of "computer"? Structured Query Language (and all its supersets) is merely a practical application of relational algebra. Yes, RDBMSs have optimized memory management and reads/writes, but the same could be said for procedural languages. As I read it, the original question is not about the IDE or the software, but rather about the efficiency of one method of computation vs. another. Even a quick familiarization with Big O notation will begin to shed light on why, when dealing with sets of data, iteration is more expensive than a declarative statement.
A: Simply put, in most cases it's faster/easier to let the database do it for you. The database's purpose in life is to store/retrieve/manipulate data in set formats and to be really fast. Your VB.NET/ASP.NET code is likely nowhere near as fast as a dedicated database engine. Leveraging this is a wise use of resources.
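To make the running-totals exception from the first answer concrete, here is a minimal illustrative T-SQL sketch (the Orders table and its columns are invented; on SQL Server 2005 there is no ordered SUM() OVER(), which is exactly why a cursor was a common workaround for this one case):

-- Assumes Orders(OrderID int PRIMARY KEY, Amount money, RunningTotal money)
DECLARE @id int, @amount money, @running money
SET @running = 0

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID, Amount FROM Orders ORDER BY OrderID

OPEN c
FETCH NEXT FROM c INTO @id, @amount
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @running = @running + @amount -- carry the previous row's total forward
    UPDATE Orders SET RunningTotal = @running WHERE OrderID = @id
    FETCH NEXT FROM c INTO @id, @amount
END
CLOSE c
DEALLOCATE c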
{ "language": "en", "url": "https://stackoverflow.com/questions/24168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How does Hive compare to HBase? I'm interested in finding out how the recently-released (http://mirror.facebook.com/facebook/hive/hadoop-0.17/) Hive compares to HBase in terms of performance. The SQL-like interface used by Hive is very much preferable to the HBase API we have implemented.
A: Hive is an analytics tool. Just like Pig, it was designed for ad hoc batch processing of potentially enormous amounts of data by leveraging MapReduce. Think terabytes. Imagine trying to do that in a relational database... HBase is a column-based key-value store based on BigTable. You can't do queries per se, though you can run MapReduce jobs over HBase. Its primary use case is fetching rows by key, or scanning ranges of rows. A major feature is being able to have data locality when scanning across ranges of row keys for a 'family' of columns.
A: It's hard to find much about Hive, but I found this snippet on the Hive site that leans heavily in favor of HBase (bold added): "Hive is based on Hadoop, which is a batch processing system. Accordingly, this system does not and cannot promise low latencies on queries. The paradigm here is strictly one of submitting jobs and being notified when the jobs are completed, as opposed to real-time queries. As a result it should not be compared with systems like Oracle, where analysis is done on a significantly smaller amount of data but the analysis proceeds much more iteratively, with the response times between iterations being less than a few minutes. For Hive queries, response times for even the smallest jobs can be of the order of 5-10 minutes, and for larger jobs this may even run into hours." Since HBase and Hypertable are all about performance (being modeled on Google's BigTable), they sound like they would certainly be much faster than Hive, at the cost of functionality and a steeper learning curve (e.g., they don't have joins or SQL-like syntax).
A: To my humble knowledge, Hive is more comparable to Pig. Hive is SQL-like and Pig is script-based. Hive seems to be more complicated, with query optimization and execution engines, and it requires the end user to specify schema parameters (partitions etc.). Both are intended to process text files or SequenceFiles. HBase is a key-value store for storing and retrieving data; you can scan or filter those key-value pairs (rows), but you cannot run SQL-style queries over (key, value) rows.
A: Hive and HBase are used for different purposes.
Hive:
Pros:
* Apache Hive is a data warehouse infrastructure built on top of Hadoop.
* It allows querying data stored on HDFS for analysis via HQL, an SQL-like language, which is converted into a series of MapReduce jobs.
* It only runs batch processes on Hadoop.
* It's JDBC compliant, and it also integrates with existing SQL-based tools.
* Hive supports partitions.
* It supports analytical querying of data collected over a period of time.
Cons:
* It does not currently support update statements.
* It must be provided with a predefined schema to map files and directories into columns.
HBase:
Pros:
* A scalable, distributed database that supports structured data storage for large tables.
* It provides random, real-time read/write access to your big data. HBase operations run in real time on its database rather than as MapReduce jobs.
* It supports partitioning of tables, and tables are further split into column families.
* It scales horizontally over huge amounts of data by using Hadoop.
* It provides key-based access to data when storing or retrieving, and supports adding or updating rows.
* It supports versioning of data.
Cons:
* HBase queries are written in a custom language that needs to be learned.
* HBase isn't fully ACID compliant.
* It can't be used with complicated access patterns (such as joins).
* It is also not a complete substitute for HDFS when doing large batch MapReduce.
Summary: Hive can be used for analytical queries, while HBase is for real-time querying. Data can even be read and written from Hive to HBase and back again.
A: As of the most recent Hive releases, a lot has changed that requires a small update, as Hive and HBase are now integrated. What this means is that Hive can be used as a query layer over an HBase datastore (an example mapping appears below). Now, if people are looking for alternative HBase interfaces, Pig also offers a really nice way of loading and storing HBase data. Additionally, it looks like Cloudera Impala may offer a substantial performance boost for Hive-style queries on top of HBase; they claim up to 45x faster queries over traditional Hive setups.
A: From one perspective, Hive consists of five main components: a SQL-like grammar and parser, a query planner, a query execution engine, a metadata repository, and a columnar storage layout. Its primary focus is data-warehouse-style analytical workloads, so low-latency retrieval of values by key is not necessary. HBase has its own metadata repository and columnar storage layout. It is possible to author HiveQL queries over HBase tables, allowing HBase to take advantage of Hive's grammar and parser, query planner, and query execution engine. See http://wiki.apache.org/hadoop/Hive/HBaseIntegration for more details.
A: To compare Hive with HBase, I'd like to recall the definition below: "A database designed to handle transactions isn't designed to handle analytics. It isn't structured to do analytics well. A data warehouse, on the other hand, is structured to make analytics fast and easy." Hive is a data warehouse infrastructure built on top of Hadoop, suitable for long-running ETL jobs. HBase is a database designed to handle real-time transactions.
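To make the integration answers above concrete: the wiki page linked above shows how an HBase table can be mapped into Hive so that ordinary HiveQL runs against it. A minimal example along the lines of what that page documents (the table and column names here are illustrative):

CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");

-- Once mapped, the HBase data is queryable like any Hive table:
SELECT value FROM hbase_table_1 WHERE key = 42;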
{ "language": "en", "url": "https://stackoverflow.com/questions/24179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57" }
Q: Python code generator for Visual Studio? I had an idea: if I add a Python .py file to my C# project, and tag the file with a custom generator that would execute the Python file and treat the output as the result of the code generation (i.e. put it into a C# file), that would allow me to do quite a lot of code generation as part of the build process. Does anyone know if such a custom generator for Visual Studio 2008 exists?
A: I think Cog does what you want (see the example at the end of this thread).
A: I recall that in previous versions of VS there was a way to add custom build steps to the build process. I used that a lot to do exactly the kind of automated code generation you describe. I imagine the custom build step feature is still there in 2008.
A: OK, I see. Well, as far as I know there isn't any such code generator for Python. There is a good introduction on how to roll your own here. Actually, that's quite an under-used part of the environment, I suppose because it requires you to use the IDE to compile the project; it would seem that only the IDE knows about these "generators", while MSBuild ignores them.
A: I don't understand what you are trying to do here. Are you trying to execute a Python script that generates a C# file and then compile that with the project? Or are you trying to compile a Python script to C#?
A: I dug through my old bookmarks (I love Del.icio.us!) and found this article: Code Generation with Python, Cog, and NAnt. Keep in mind that anything you can do in NAnt can probably be done in MSBuild as well. This should be enough to get you started.
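For anyone unfamiliar with Cog (suggested in the first answer): it looks for generator code embedded in comments of the target file, runs that code, and splices the output back into the file, so the generated C# lives right next to the Python that produced it. A small illustrative sketch (the constant names are invented); running something like python cog.py -r Constants.cs rewrites the file in place:

// Constants.cs - the Python between the cog markers regenerates the C# below them.
// [[[cog
// import cog
// for name in ["Red", "Green", "Blue"]:
//     cog.outl('public const string %s = "%s";' % (name, name.lower()))
// ]]]
public const string Red = "red";
public const string Green = "green";
public const string Blue = "blue";
// [[[end]]]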
{ "language": "en", "url": "https://stackoverflow.com/questions/24193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }