Q: What's a 'null defined macro'? I'm learning Objective-C and Cocoa. In the Apple tutorial I'm working through there's a side note that says:

IBOutlet is a null-defined macro, which the C preprocessor removes at compile time.

I'm curious - what's a null-defined macro?

A: #define IBOutlet

Whenever IBOutlet is used in program text, it will be replaced with nothing at all.

A: FYI, in this particular case, the reason IBOutlet even exists is simply so that Interface Builder can parse the source file and glean bits of understanding from it. It's a clue (well, a bit stronger than a clue) that the variable preceded by IBOutlet should show up as an Outlet in Interface Builder when designing your UIs.

A: A null-defined macro is a macro which will be replaced by nothing (will be removed) by the preprocessor. Its role is to give a hint about something in code, such as:

    #define IN
    #define OUT
    #define INOUT

    int myFunction(IN char *name, INOUT char *address, OUT char *phone);

This declaration suggests that name is an input variable for the function, address is both input and output, and phone is an output variable.

A: Also - if you're unsure how anything is defined - command-double-click it and Xcode will open the definition in the original source file.

A: Oh, and while I'm at it: option-double-click will (attempt to) open up the documentation for the double-clicked symbol.
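To see the removal happen for yourself, you can run the preprocessor on its own. A minimal sketch in C (the IN/OUT/INOUT names are just the illustrative annotations from the answer above, not a real API):

    /* null_macros.c */
    #define IN
    #define OUT
    #define INOUT

    /* Before preprocessing, the parameters carry human-readable hints. */
    int myFunction(IN char *name, INOUT char *address, OUT char *phone);

    /* Running "gcc -E null_macros.c" (preprocess only) shows what the
       compiler actually sees:
           int myFunction(char *name, char *address, char *phone);
       The annotations have been replaced with nothing at all. */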
{ "language": "en", "url": "https://stackoverflow.com/questions/22001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Switching to ORMs I'm toying with the idea of phasing an ORM into an application I support. The app is not very structured, with no unit tests, so any change will be risky. I'm obviously concerned that I've got a good enough reason to change. The idea is that there will be less boilerplate code for data access and therefore greater productivity.

Does this ring true with your experiences? Is it possible, or even a good idea, to phase it in? What are the downsides of an ORM?

A: I would strongly recommend getting a copy of Michael Feathers' book Working Effectively With Legacy Code (by "legacy code" Feathers means any system that isn't adequately covered by unit tests). It is full of good ideas which should help you with your refactoring and phasing in of best practices.

Sure, you could phase in the introduction of an ORM, initially using it for accessing some subset of your domain model. And yes, I have found that use of an ORM speeds up development time - this is one of the key benefits and I certainly don't miss the days when I used to laboriously hand-craft data access layers.

Downsides of ORM - from experience, there is inevitably a bit of a learning curve in getting to grips with the concepts, configuration and idiosyncrasies of the chosen ORM solution.

Edit: corrected author's name

A: The "Robert C Martin" book, which was actually written by Michael Feathers ("Uncle Bob" is, it seems, a brand name these days!) is a must.

It's near-impossible - not to mention insanely time-consuming - to put unit tests into an application not developed with them. The code just won't be amenable. But that's not a problem. Refactoring is about changing design without changing function (I hope I haven't corrupted the meaning too badly there), so you can work in a much broader fashion.

Start out with big chunks. Set up a repeatable execution, and capture what happens as the expected result for subsequent executions. Now you have your app, or part of it at least, under test. Not a very good or comprehensive test, sure, but it's a start and things can only get better from there.

Now you can start to refactor. You want to start extracting your data access code so that it can be replaced with ORM functionality without disturbing too much (see the sketch at the end of this thread). Test often: with legacy apps you'll be surprised what breaks; cohesion and coupling are seldom what they might be.

I'd also consider looking at Martin Fowler's Refactoring, which is, obviously enough, the definitive work on the process.

A: I work on a large ASP.NET application where we recently started to use NHibernate. We moved a large number of domain objects that we had been persisting manually to SQL Server over to NHibernate instead. It simplified things quite a bit and made it much easier to change things over time. We're glad we made the changes and are using NHibernate where appropriate for a lot of our new work.

A: The first rule of refactoring is: have unit tests. So maybe you should first put some unit tests in place, at least for the core/major things. An ORM should be designed to decrease boilerplate code. The time/trouble vs. ROI of going that enterprisey is up to you to estimate :)

A: I heard that TypeMock is often used to refactor legacy code.

A: I seriously think introducing an ORM into a legacy application is asking for trouble (and might be the same amount of trouble as a complete rewrite). Other than that, an ORM is a great way to go, and should definitely be considered.

A: Unless your code is already architected to allow for "hot swapping" of your model layer backend, changing it in any way will always be extremely risky. Trying to build a safety net of unit tests on poorly architected code isn't going to guarantee success, only make you feel safer about changing it. So, unless you have a strong business case for taking on the risks involved, it's probably best to leave well enough alone.
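To make the "extract your data access code" advice concrete, here is a minimal sketch in C# (all names are hypothetical, and NHibernate is assumed as the ORM only because one answer above mentions it): the legacy hand-rolled code and the new ORM code sit behind the same interface, so callers never notice the swap.

    using NHibernate;

    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    // The seam: extract the legacy data access behind this interface first,
    // get it under (characterization) tests, then swap implementations.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    // ORM-backed implementation, phased in one aggregate at a time while the
    // old hand-rolled ADO.NET class keeps serving the untouched code paths.
    public class NHibernateCustomerRepository : ICustomerRepository
    {
        private readonly ISession _session;

        public NHibernateCustomerRepository(ISession session)
        {
            _session = session;
        }

        public Customer GetById(int id)
        {
            // ISession.Get returns null when no matching row exists.
            return _session.Get<Customer>(id);
        }
    }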
{ "language": "en", "url": "https://stackoverflow.com/questions/22011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Loading assemblies and their dependencies My application dynamically loads assemblies at runtime from specific subfolders. These assemblies are compiled with dependencies on other assemblies. The runtime tries to load these from the application directory, but I want to put them into the modules directory. Is there a way to tell the runtime that the DLLs are in a separate subfolder?

A: You can use the <probing> element in a manifest file to tell the runtime to look in different directories for its assembly files. http://msdn.microsoft.com/en-us/library/823z9h8w.aspx

e.g.:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="bin;bin2\subbin;bin3"/>
        </assemblyBinding>
      </runtime>
    </configuration>

A: One nice approach I've used lately is to add an event handler for the AppDomain's AssemblyResolve event:

    AppDomain currentDomain = AppDomain.CurrentDomain;
    currentDomain.AssemblyResolve += new ResolveEventHandler(MyResolveEventHandler);

Then in the event handler method you can load the assembly that was attempted to be resolved using one of the Assembly.Load or Assembly.LoadFrom overloads and return it from the method.

EDIT: Based on your additional information, I think using the technique above, specifically resolving the references to an assembly yourself, is the only real approach that is going to work without restructuring your app. What it gives you is that the location of each and every assembly that the CLR fails to resolve can be determined and loaded by your code at runtime. I've used this in similar situations for both pluggable architectures and for an assembly reference integrity scanning tool.

A: You can use the <codeBase> element found in the application configuration file. More information on "Locating the Assembly through Codebases or Probing".

Well, the loaded assembly doesn't have an application configuration file.

Well, if you know the specific folders at runtime you can use Assembly.LoadFrom.
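A minimal sketch of that AssemblyResolve approach in C# (the "modules" subfolder name is hypothetical - substitute your own layout):

    using System;
    using System.IO;
    using System.Reflection;

    static class ModuleLoader
    {
        public static void Install()
        {
            AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
            {
                // args.Name is the full assembly name; keep only the simple name.
                string file = new AssemblyName(args.Name).Name + ".dll";
                string path = Path.Combine(
                    AppDomain.CurrentDomain.BaseDirectory, "modules", file);

                // Returning null lets the CLR's normal failure handling proceed.
                return File.Exists(path) ? Assembly.LoadFrom(path) : null;
            };
        }
    }

Call ModuleLoader.Install() once at startup, before anything touches types from the module assemblies.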
{ "language": "en", "url": "https://stackoverflow.com/questions/22012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: OpenID as a Single Sign On option? I'm just looking for different opinions. Do you consider OpenID a good "Single Sign On" solution?

The way it works seems to be a little bit confusing for an average user, and there could be problems related to "putting all your eggs in the same basket".

Anyway, has anyone tried to implement their own OpenID solution within the context of an Intranet where there are many different applications (WordPress, Elgg, MediaWiki, ...)? I consider it could be a great solution to solve the "Digital Identity" problem, but I don't know if it will work with the "login once and surf the Intranet" problem. Opinions?

A: Also, SSO (as you mentioned) usually implies that I only have to log in once (presumably to my workstation) and then from there on, I don't need to sign in anywhere. OpenID of course doesn't solve that problem. For example, if I use OpenID to sign in to StackOverflow, it doesn't mean I don't need to sign in to another website again using the same OpenID.

A: I have to say that I absolutely agree with the statements on it being too difficult for the "average" Internet user. I think that OpenID could still be considered "new", even though the original proposal was back in 2005. More high-traffic sites are taking it up as just an option for creating an account, rather than requiring users to have an OpenID present. In my opinion, as long as normal username/password account creation is offered alongside OpenID, average Internet users will naturally begin to try and eventually stick with using OpenID.

The authentication issues apply just as much to OpenID as to registering on any website. You put your trust in the website with your password (assuming you do not use a password storage program), so that shouldn't be used against OpenID.

All that aside, the standardization of account creation is absolutely cream gravy to a web developer. I'd just love to not even have to worry about the normal creation process, and rather just drop in an OpenID library and reference it to the database.

A: It took me a while to understand OpenID (so many providers!) but I really like the concept. Tie it in with Gravatar and rewriting your profile is much more painless - perhaps one or two fields. The only issue is that you have to trust your OpenID provider - but that's not really what I'd call a problem, more like common sense.

Edit: People having problems with OpenID providers should consider setting up a new one. My provider is myopenid.com and I've had no problems. You can set up multiple personas (like profiles), so I have one for blog comments and one for technology sites like this. As for having a new SO profile, Jeff said something about being able to change your OpenID without losing your profile stats in the future.

A: There is one tiny problem with OpenID. Seamlessly logging in with OpenID requires automatic (unverified) redirection between domains. That makes the OpenID server a third party. This can cause cookies for the OpenID server to be rejected if you turn off third-party cookies and your browser strictly follows the Unverifiable Transactions rule in section 3.3.6 of RFC 2965. An example of this is Opera. If you turn off third-party cookies (by setting the global to "Accept only cookies from the site I visit"), you can't log in with OpenID, because the server script you submit to automatically (without your interaction to approve it) redirects you to the OpenID server, and the OpenID server does the same to get you back.

But you get lucky in Firefox, IE and Safari with their corresponding blocking of third-party cookies, because they violate RFC 2965 in multiple situations. Having to use OpenID in this case does a disservice to more compliant clients.

As a workaround in Opera, besides accepting all cookies, you can go to Tools -> Preferences -> Advanced -> Network and turn off Automatic Redirection. Then you'll be able to verify and click each link you're redirected to, and the cookies won't be rejected because the transactions are verified. It should also work if you keep Automatic Redirection on and both servers generate a page with a link for you to click on so you can verify the transaction. But there can't be any automatic redirects anywhere. Logging in with just a username and password, where you're only dealing with first-party cookies, would be much better in this case. OpenID is still cool though, and I guess Opera just needs an option to allow unverifiable transactions between SO and your OpenID server so that you can use "Accept only cookies from the site I visit" here.

A: The best answer on "can someone briefly explain Single sign on? i want to use openid as SSO" explains well how OpenID and SSO are different:

Single-sign-on is about logging on in one place and having that authenticate you at other locations automatically. OpenID is about delegating authentication to an OpenID provider so you can effectively log on to multiple sites with the one set of credentials.

The same post also gives an excellent answer to the original question:

You could use OpenID as your authentication scheme for SSO but that's incidental.

A: I'm pretty ambivalent on OpenID. On the one hand, it addresses the 'identity provider discovery problem' (how the relying-party site figures out where to send the user to authenticate). On the other hand, URLs are tremendously clunky to the average user. I see OpenID as it currently stands as being a useful stop on the road to a solution for Web identity, but certainly not the ultimate destination.

Specifically addressing your intranet question, OpenID is probably not the right answer. As I mentioned above, OpenID buys you the ability to locate the identity provider, at the cost of typing in that URL at every relying party. If you're going to be authenticating all your users at some internal identity provider, and only accepting users from that identity provider, OpenID really doesn't gain you much. I would look at a system such as CAS or OpenSSO, either of which will redirect users to the login page without any need to enter a URL. I recently blogged about a company that rolled out OpenSSO to 40 intranet applications for 3000 users in just 4 months, with apps on IIS 6.0, Apache, JBoss and Tomcat.

A: I think OpenID is far too confusing and clunky to force on any user, and I'm not even convinced it's solving an authentic problem. Having to register on each site I use has never struck me as a major issue. Particularly as it doesn't especially solve that problem; when I linked my OpenID to StackOverflow I had to fill out extra details anyway. It might as well have had a regular registration process for all the difference it makes.

A: Well, I'd have liked a simple login/password combo (that I'd breeze through with Passwordmaker.org). However, being a developer, I can understand that they didn't want to reinvent the login wheel again... OpenID: I enter my blog URL => Google sign-in => I'm in. It's an extra level... but it's OK.

A: Actually, in the case of StackOverflow, a separate account would have saved me a lot of trouble. I decided to use my WordPress.com OpenID, since that's where I'm hosting my blog, but it turned out that WordPress.com has serious problems with their OpenID service, and most of the time I am not able to log on to StackOverflow at all. Of course, I can use a different OpenID provider to log on with, but then I will have a different identity on the site.

I guess you could say WordPress.com is to blame for this, but the problem remains the same: by using OpenID you are depending on another site's service to function. Any problems on the third party's site will in effect also disable your site. As an alternative solution I tried signing in with my Yahoo OpenID, but then I got some random string as the user name, and as DrPizza already pointed out, I would have to edit my personal details anyway. OpenID is a nice idea, but it's still not something I would rely on with the current state of things.

A: At least in the intranet scenario, I think Active Directory (or similar) is still one of the best options.

A: "At least in the intranet scenario, I think Active Directory (or similar) is still one of the best options." Yep; in any case, Active Directory is behind the curtains of the OpenID server provider. In order to develop an SSO solution within an Intranet, there are commercial options such as Access Manager (formerly iChain) + Active Directory, but I don't know if there's an open solution apart from "own OpenID server" + "something cool yet to develop" + LDAP.

A: "OpenID of course doesn't solve that problem. For example, if I use OpenID to sign in to StackOverflow, it doesn't mean I don't need to sign in to another website again using the same OpenID." -- tj9991

It can mean that, though. If your sign-on on the OpenID site is remembered (for example, through cookies), you would effectively only need to sign on once per browser session (or once per week, once per month...) for all OpenID sites you visit. Browser support and an API could even do away with the password prompt and the page redirect. Great idea!

A: It's not such a usability problem on Stack Overflow, since all the users are programmers anyway, but I can't think of many other sites that could get away with it. I think OpenID will improve over time, though, and once all the sites using it start implementing all the features (like auto-filling the "about me" stuff) it'll be more worthwhile.

A: OpenID implementations require a lot of effort and thought to be successful, and even then you can be thwarted by bad identity providers (for example Yahoo). OpenID can work very well if you've worked out the user-experience issues, but a bad implementation is just horribly difficult for most users to work out.

In my opinion, the biggest problem with OpenID is that people tried to solve the issue with user awareness. They would have been better off simply giving a list of OpenID providers and having the users click the one they wanted to use. This sometimes requires knowledge of how a provider has implemented OpenID if they don't support version 2.0 of the spec, but it gives a much better overall experience for the end user.
{ "language": "en", "url": "https://stackoverflow.com/questions/22015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How do content discovery engines, like Zemanta and Open Calais, work? I was wondering how a semantic service like Open Calais figures out the names of companies, people, tech concepts, keywords, etc. from a piece of text. Is it because they have a large database that they match the text against? How would a service like Zemanta know what images to suggest for a piece of text, for instance?

A: Michal Finkelstein from OpenCalais here.

First, thanks for your interest. I'll reply here, but I also encourage you to read more on the OpenCalais forums; there's a lot of information there including - but not limited to:
http://opencalais.com/tagging-information
http://opencalais.com/how-does-calais-learn
Also feel free to follow us on Twitter (@OpenCalais) or to email us at [email protected]

Now to the answer:

OpenCalais is based on a decade of research and development in the fields of Natural Language Processing and Text Analytics. We support the full "NLP Stack" (as we like to call it): from text tokenization, morphological analysis and POS tagging, to shallow parsing and identifying nominal and verbal phrases.

Semantics come into play when we look for Entities (a.k.a. Entity Extraction, Named Entity Recognition). For that purpose we have a sophisticated rule-based system that combines discovery rules as well as lexicons/dictionaries. This combination allows us to identify names of companies/persons/films, etc., even if they don't exist in any available list.

For the most prominent entities (such as people, companies) we also perform anaphora resolution, cross-reference and name canonization/normalization at the article level, so we'll know that 'John Smith' and 'Mr. Smith', for example, are likely referring to the same person.

So the short answer to your question is - no, it's not just about matching against large databases.

Events/Facts are really interesting because they take our discovery rules one level deeper; we find relations between entities and label them with the appropriate type, for example M&As (relations between two or more companies), Employment Changes (relations between companies and people), and so on. Needless to say, Event/Fact extraction is not possible for systems that are based solely on lexicons.

For the most part, our system is tuned to be precision-oriented, but we always try to keep a reasonable balance between accuracy and entirety.

By the way, there are some cool new metadata capabilities coming out later this month, so stay tuned.

Regards,
Michal

A: I'm not familiar with the specific services listed, but the field of natural language processing has developed a number of techniques that enable this sort of information extraction from general text. As Sean stated, once you have candidate terms, it's not too difficult to search for those terms with some of the other entities in context and then use the results of that search to determine how confident you are that the term extracted is an actual entity of interest.

OpenNLP is a great project if you'd like to play around with natural language processing. The capabilities you've named would probably be best accomplished with Named Entity Recognizers (NER) (algorithms that locate proper nouns, generally, and sometimes dates as well) and/or Word Sense Disambiguation (WSD) (e.g.: the word 'bank' has different meanings depending on its context, and that can be very important when extracting information from text. Given the sentences "the plane banked left", "the snow bank was high", and "they robbed the bank", you can see how disambiguation can play an important part in language understanding).

Techniques generally build on each other, and NER is one of the more complex tasks, so to do NER successfully you will generally need accurate tokenizers (natural language tokenizers, mind you - statistical approaches tend to fare the best), string stemmers (algorithms that conflate similar words to common roots: so words like informant and informer are treated equally), sentence detection ('Mr. Jones was tall.' is only one sentence, so you can't just check for punctuation), part-of-speech taggers (POS taggers), and WSD.

There is a Python port of (parts of) OpenNLP called NLTK (http://nltk.sourceforge.net), but I don't have much experience with it yet. Most of my work has been with the Java and C# ports, which work well.

All of these algorithms are language-specific, of course, and they can take significant time to run (although it is generally faster than reading the material you are processing). Since the state of the art is largely based on statistical techniques, there is also a considerable error rate to take into account. Furthermore, because the error rate impacts all the stages, and something like NER requires numerous stages of processing (tokenize -> sentence detect -> POS tag -> WSD -> NER), the error rates compound.

A: Open Calais probably uses language-parsing technology and language statistics to guess which words or phrases are Names, Places, Companies, etc. Then it is just another step to do some kind of search for those entities and return metadata. Zemanta probably does something similar, but matches the phrases against metadata attached to images in order to acquire related results. It certainly isn't easy.
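As a toy illustration of that tokenize -> POS tag -> NER pipeline, here is a minimal sketch using a modern version of the NLTK mentioned above (the sentence is made up; the corpora named in the comment must be downloaded once):

    import nltk

    # One-time setup:
    #   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
    #   nltk.download('maxent_ne_chunker'); nltk.download('words')

    sentence = "Mr. Jones robbed the First National Bank in Boston."
    tokens = nltk.word_tokenize(sentence)   # tokenization
    tagged = nltk.pos_tag(tokens)           # part-of-speech tagging
    tree = nltk.ne_chunk(tagged)            # named entity recognition
    print(tree)                             # PERSON/ORGANIZATION/GPE subtrees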
{ "language": "en", "url": "https://stackoverflow.com/questions/22059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: MS Project Gantt chart control usage in C# Has anybody used the MS Project Gantt chart control in C#? If yes, can you share some resources regarding this?

A: You could also check Gantt Chart Library for WPF or Windows Forms; they do not require Microsoft Project to be installed on the client computer, but provide a similar UI for projects and related Gantt charts.

A: Try these links for a start.
http://www.ilog.com/products/ganttnet/
http://www.netronic.com/products-for-developers/gantt-charts.html?gclid=COLdutasoZUCFQunQwodoWOPkw

A: My company decided to buy Infragistics NetAdvantage for .NET. We will be using their Gantt control. Thanks for your answers.

A: If you are looking for a simple Gantt chart control in ASP.NET, I recommend jsGantt. It's written purely in JavaScript, HTML and CSS, and is very fast. It's also easy to integrate with any web language. There is a good tutorial on using jsGantt in ASP.NET at CodeGlobe.

A: If you are using the Microsoft Gantt control and want help developing a Gantt chart application for Microsoft Office Project, the link below may help:
http://blog.functionalfun.net/2008/09/how-to-create-gantt-control-in-wpf.html
{ "language": "en", "url": "https://stackoverflow.com/questions/22067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Controls versus standard HTML I'm getting into ASP.NET (C# - I know it doesn't matter for this particular question, but full disclosure and all that), and while I love that the asp:-style controls save me a lot of tedious HTML-crafting, I am often frustrated with certain behaviors. I encountered one last night when working with Master Pages: my <asp:BulletedList ID="nav">, when converted into HTML, became <ul id="ctl00_nav">.

There are other issues - I noticed that when you auto-populate a DataGrid, it adds attributes to the resulting table that I don't necessarily want there.

I know that there is a certain amount of "convention over configuration" that you have to accept when you rely on a framework to take over some of your tedious duties, but the "conventions" in these cases aren't so much established conventions as unnecessary extras. I know why the ID adds the prefix, but I should be able to tweak and turn things like this off, especially since, as a bit of a web standards evangelist, I don't duplicate HTML IDs in a single page anyway.

So the question here is for those ASP.NET devs more seasoned than I: in your experiences in developing and deploying apps, how do you leverage these controls? Do you find yourself resorting back to hard-coded HTML? Do you use a blend? I don't want to design my HTML around idiosyncratic quirks in these controls, but, if possible, I'd like to leverage them when possible. What's a boy to do?

A: The short answer is that you should never use an asp:... version of a standard HTML control unless you have a really good reason.

Junior developers often get suckered into using those controls because they're covered in most ASP.NET books, so it's assumed that they must be better. They're not. At this point, after 8 years of daily ASP.NET development, I can only think of 2 or 3 cases where it actually makes sense to use an asp:... INPUT control over a standard HTML one.

A: As for the IDs on server controls: you can find the actual ID that is going to be written to the browser by accessing ClientID (a sketch of this idiom appears at the end of this thread). That way you can combine server-side and client-side scripting and still don't have to hardcode id="ctl00_nav".

I always try to use the included controls instead of "hacking" HTML, because if there is an update or some improvement later on, all my code will still work by just replacing the framework and I don't have to change any HTML. Hope this helps.

A: @Brian, yup! You can pretty much control all the behaviour. Consider looking into creating custom controls (there are three types). I recently gave an overview of them in my question here. I would strongly recommend checking them out; it has helped me no end :)

A: Personally, I think the standard ASP.NET controls are fine for in-house stuff - quick and dirty is good in that scenario. But I once worked with a web developer who was also a designer, and he refused to use the ASP.NET controls and would only code in HTML, adding runat="server" tags when needed. This was more because he wanted to know exactly how his HTML was going to be rendered, and at the time, anyway, some of the ASP.NET controls wouldn't render to standards compliance. I sit somewhere in the middle - use HTML where appropriate and not when not. You can sort of get the best of both worlds with the CSS Control Adapters.

A: I'm actually quite relieved to see some opinions here agreeing with my own: ASP.NET as a template language is very poor. I'd just like to rebut a couple of the pro points made here (flamesuit on!):

Dave Ward mentions ID collisions - this is true, but my how badly handled. I would have preferred to see nodes referenced by XPath or deep CSS selectors than by making the ID effectively useless except by deferring to ASP.NET internals like ClientID - it just makes writing CSS and JS that much harder, pointlessly.

Rob Cooper talks about how the controls are a replacement for HTML so it's all fine (paraphrasing, forgive me Rob) - well, it's not fine, because they took an existing and well-understood language and said "no, you have to do things our way now", and their way is very poorly implemented. E.g. asp:panel renders a table in one browser and a div in another! Without documentation or execution, the markup for a login control (and many others) isn't predictable. How are you going to get a designer to write CSS against that?

Espo writes about how controls give you the benefits of abstraction if the platform changes the HTML - well, this is clearly circular (it's only changing because the platform is changing, and wouldn't need to if I just had my own HTML there instead) and actually creates a problem. If the control is going to change with updates, again, how is my CSS supposed to cope with that?

Apologists will say "yes, but you can change this in the config" or talk about overriding controls and custom controls. Well, why should I have to? The CSS-friendly controls package meant to fix some of these problems is anything but, with its unsemantic markup, and it doesn't address the ID issue.

It's impossible to implement MVC (the abstract concept, not the 3.5 implementation) out of the box with webform apps, because these controls so tightly bind the view and control. There's a barrier of entry for the traditional web designer now, because he has to get involved with server-side code to implement what used to be the separate domains of CSS and JS. I sympathise with these people.

I do strongly agree with Kiwi's point that controls allow for some very rapid development for apps of a certain profile, and I accept that for whatever reason some programmers find HTML unpleasant, and further that the advantages the other parts of ASP.NET give you, which require these controls, may be worth the price.

However, I resent the loss of control, I feel the model of dealing with things like classes, styles and scripting in the codebehind is a wrongheaded step backwards, and I further feel that there are better models for templating (implementations of microformats and XSLT for this platform), although replacing controls with these is non-trivial. I think ASP.NET could learn a lot from related tech in the LAMP and Rails worlds; until then I hope to work with 3.5 MVC where I can. (Sorry that was so long </rant>)

A: I too am on my adventure into ASP.NET and have also had similar frustrations. However, you soon get used to it. You just need to remember that the reason you don't have the tedious HTML crafting is because the ASP.NET controls do it all for you.

To some extent you can control/tweak these things, even if it means inheriting the control and tweaking the HTML output from there. I have had to do that in the past, where certain controls were not passing W3C validation by default, by putting some extra markup here and there (a fix that took literally a couple of minutes). I would say learn about how the controls system works, then knock a few together yourself; this has really helped me grok what's going on under the hood, so if I ever get any problems, I have an idea where to go.

A: The HTML renders with those sorts of IDs because it's ASP.NET's way of preventing ID collisions. Each container control, such as a Master page or Wizard control, will prepend an "ID_" on its children's IDs.

In the case of your bullet list, the ListView provides a nice middle ground. You can still bind it to a datasource, but it gives you much tighter control over the rendered HTML. Scott Gu has a nice intro to the ListView here: http://weblogs.asp.net/scottgu/archive/2007/08/10/the-asp-listview-control-part-1-building-a-product-listing-page-with-clean-css-ui.aspx

A: If the ID prefix added by ASP.NET is an issue for you when accessing elements later using JS or something, you have the .ClientID property server-side. If the overhead added by ASP.NET bothers you, you should consider ASP.NET MVC (still a preview), where you have full control over the emitted HTML. I'm moving to MVC because I don't like all that added stuff either.

A: I think most of the answers here take a designer's point of view. On a small-to-medium project it might seem like an overhead to synchronize code and CSS/HTML and make them standards-compliant and clean. A designer's way to do that is to have full control over the rendered HTML. But there are many ways to have that full control in ASP.NET. And for me, having the required HTML in the aspx/ascx file is the most non-scalable and dirty way to do it.

If you want to style controls through CSS, you can always set a class server-side through the CssClass property. If you want to access them through JS, you can emit the JS with the right IDs server-side again. The only disadvantage this has is that a dev and a designer have to work together closely. On any large project this is unavoidable anyway. But the advantages ASP.NET provides far outnumber these difficulties. Still, if you want standards-compliant HTML, skinning support and other goodies to control rendered markup, you can always use third-party controls.

A: As Dave Ward has already mentioned, "it's ASP.NET's way of preventing ID collisions." A very good example of this is if you're attempting to put a control inside of a custom control, then use that custom control in a repeater, so that custom control's HTML would be output multiple times for the page.

As others have mentioned, if you need to access the controls for JavaScript, use the ClientScript property, which will give you access to a ClientScriptManager, and register your scripts with the page this way. Be sure when writing your scripts to use the ClientID property on the control you're trying to reference instead of just typing in the control's ID.

A: If you want that much control over the rendered HTML, look into ASP.NET MVC instead.
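To illustrate the ClientID idiom that several answers above mention, here is a minimal sketch (the markup is hypothetical, based on the bulleted list from the question):

    <asp:BulletedList ID="nav" runat="server" />

    <script type="text/javascript">
        // ClientID expands to the mangled runtime id (e.g. "ctl00_nav"),
        // so this keeps working inside master pages and naming containers
        // without hardcoding the prefix.
        var nav = document.getElementById('<%= nav.ClientID %>');
    </script>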
{ "language": "en", "url": "https://stackoverflow.com/questions/22084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Difference between /dev/ttyS0 and /dev/ttys0? In Linux, what is the difference between /dev/ttyS0 and /dev/ttys0? I know that the first is a serial port, but what about the second, with the small s?

A: See this:

For a pseudo terminal pair such as ptyp3 and ttyp3, the pty... is the master or controlling terminal and the tty... is the slave. There are only 16 ttyp's: ttyp0-ttypf (f is a hexadecimal digit). To get more pairs, the 3 letters q, r, s may be used instead of p. For example, the pair ttys8, ptys8 is a pseudo terminal pair. The master and slave are really the same "port", but the slave is used by the application program and the master is used by a network program (or the like) which supplies (and gets) data to/from the slave port.

A: And this: http://lists.opensuse.org/archive/opensuse/2003-12/msg02404.html

A: In the Linux devices.txt file in the kernel docs it says:

    3 char    Pseudo-TTY slaves
              0 = /dev/ttyp0    First PTY slave
              1 = /dev/ttyp1    Second PTY slave
              ...
            255 = /dev/ttyef    256th PTY slave

            These are the old-style (BSD) PTY devices; Unix98
            devices are on major 136 and above.

and goes on to say

    4 char    TTY devices
              0 = /dev/tty0     Current virtual console
              1 = /dev/tty1     First virtual console
              ...
             63 = /dev/tty63    63rd virtual console
             64 = /dev/ttyS0    First UART serial port
              ...
            255 = /dev/ttyS191  192nd UART serial port

            UART serial ports refer to 8250/16450/16550 series devices.

            Older versions of the Linux kernel used this major number
            for BSD PTY devices. As of Linux 2.1.115, this is no longer
            supported. Use major numbers 2 and 3.

I don't know how much this helps you, but it should get you started in the right direction.
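To tie the two answers together: on a hypothetical box that still has the old BSD-style pty nodes, ls -l shows the major/minor pair (in the column where a file size would normally be), matching the devices.txt numbers above. The listing below is mocked up from those numbers, not captured from a real system:

    $ ls -l /dev/ttyS0 /dev/ttys0
    crw-rw---- 1 root dialout 4, 64 ... /dev/ttyS0    # major 4: UART serial port
    crw-rw---- 1 root tty     3, 48 ... /dev/ttys0    # major 3: BSD PTY slave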
{ "language": "en", "url": "https://stackoverflow.com/questions/22106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using Regex to generate Strings rather than match them I am writing a Java utility that helps me to generate loads of data for performance testing. It would be really cool to be able to specify a regex for Strings so that my generator spits out things that match this. Is something out there already baked that I can use to do this? Or is there a library that gets me most of the way there?

A: Edit: Complete list of suggested libraries on this question:

* Xeger* - Java
* Generex* - Java
* Rgxgen - Java
* rxrdg - C#

(* depends on dk.brics.automaton)

Edit: As mentioned in the comments, there is a library available at Google Code to achieve this: https://code.google.com/archive/p/xeger/

See also https://github.com/mifmif/Generex as suggested by Mifmif

Original message:

Firstly, with a complex enough regexp, I believe this can be impossible. But you should be able to put something together for simple regexps.

If you take a look at the source code of the class java.util.regex.Pattern, you'll see that it uses an internal representation of Node instances. Each of the different pattern components has its own implementation of a Node subclass. These Nodes are organised into a tree.

By producing a visitor that traverses this tree, you should be able to call an overloaded generator method or some kind of Builder that cobbles something together.

A: I've gone the route of rolling my own library for that (in C#, but it should be easy to understand for a Java developer). Rxrdg started as a solution to the problem of creating test data for a real-life project. The basic idea is to leverage the existing (regular expression) validation patterns to create random data that conforms to such patterns. This way valid random data is created. It is not that difficult to write a parser for simple regex patterns. Using an abstract syntax tree to generate strings should be even easier.

A: On Stack Overflow podcast 11:

Spolsky: Yep. There's a new product also, if you don't want to use the Team System there our friends at Redgate have a product called SQL Data Generator [http://www.red-gate.com/products/sql_data_generator/index.htm]. It's $295, and it just generates some realistic test data. And it does things like actually generate real cities in the city column that actually exist, and then when it generates those it'll get the state right, instead of getting the state wrong, or putting states into German cities and stuff like... you know, it generates pretty realistic looking data. I'm not really sure what all the features are.

This is probably not what you are looking for, but it might be a good starting-off point, instead of creating your own.

I can't seem to find anything in Google, so I would suggest tackling the problem by parsing a given regular expression into the smallest units of work (\w, [x-x], \d, etc.) and writing some basic methods to support those regular expression phrases. So for \w you would have a method getRandomLetter() which returns any random letter, and you would also have getRandomLetter(char startLetter, char endLetter) which gives you a random letter between the two values.

A: I am on a flight and just saw the question: I have written the easiest but an inefficient and incomplete solution. I hope it may help you to start writing your own parser:

    import java.util.HashSet;
    import java.util.Set;
    import java.util.StringTokenizer;

    public class SimpleRegexGenerator {
        public static void main(String[] args) {
            String line = "[A-Z0-9]{16}";
            String[] tokens = line.split(line);
            char[] pattern = new char[100];
            int i = 0;
            int len = tokens.length;
            String sep1 = "[{";
            StringTokenizer st = new StringTokenizer(line, sep1);
            while (st.hasMoreTokens()) {
                String token = st.nextToken();
                System.out.println(token);
                if (token.contains("]")) {
                    char[] endStr = null;
                    if (!token.endsWith("]")) {
                        String[] subTokens = token.split("]");
                        token = subTokens[0];
                        if (!subTokens[1].equalsIgnoreCase("*")) {
                            endStr = subTokens[1].toCharArray();
                        }
                    }
                    if (token.startsWith("^")) {
                        String subStr = token.substring(1, token.length() - 1);
                        char[] subChar = subStr.toCharArray();
                        Set set = new HashSet<Character>();
                        for (int p = 0; p < subChar.length; p++) {
                            set.add(subChar[p]);
                        }
                        int asci = 1;
                        while (true) {
                            char newChar = (char) (subChar[0] + (asci++));
                            if (!set.contains(newChar)) {
                                pattern[i++] = newChar;
                                break;
                            }
                        }
                        if (endStr != null) {
                            for (int r = 0; r < endStr.length; r++) {
                                pattern[i++] = endStr[r];
                            }
                        }
                    } else {
                        pattern[i++] = token.charAt(0);
                    }
                } else if (token.contains("}")) {
                    char[] endStr = null;
                    if (!token.endsWith("}")) {
                        String[] subTokens = token.split("}");
                        token = subTokens[0];
                        if (!subTokens[1].equalsIgnoreCase("*")) {
                            endStr = subTokens[1].toCharArray();
                        }
                    }
                    int length = Integer.parseInt((new StringTokenizer(token, (",}"))).nextToken());
                    char element = pattern[i - 1];
                    for (int j = 0; j < length - 1; j++) {
                        pattern[i++] = element;
                    }
                    if (endStr != null) {
                        for (int r = 0; r < endStr.length; r++) {
                            pattern[i++] = endStr[r];
                        }
                    }
                } else {
                    char[] temp = token.toCharArray();
                    for (int q = 0; q < temp.length; q++) {
                        pattern[i++] = temp[q];
                    }
                }
            }
            String result = "";
            for (int j = 0; j < i; j++) {
                result += pattern[j];
            }
            System.out.print(result);
        }
    }

A: It's too late to help the original poster, but it could help a newcomer. Generex is a useful Java library that provides many features for using regexes to generate strings (random generation, generating a string based on its index, generating all strings...).

Example:

    Generex generex = new Generex("[0-3]([a-c]|[e-g]{1,2})");

    // Generate the second String in lexicographical order that matches the given regex.
    String secondString = generex.getMatchedString(2);
    System.out.println(secondString); // it prints '0b'

    // Generate all Strings that match the given regex.
    List<String> matchedStrs = generex.getAllMatchedStrings();

    // Using the Generex iterator
    Iterator iterator = generex.iterator();
    while (iterator.hasNext()) {
        System.out.print(iterator.next() + " ");
    }
    // it prints 0a 0b 0c 0e 0ee 0e 0e 0f 0fe 0f 0f 0g 0ge 0g 0g 1a 1b 1c 1e
    // 1ee 1e 1e 1f 1fe 1f 1f 1g 1ge 1g 1g 2a 2b 2c 2e 2ee 2e 2e 2f 2fe 2f 2f 2g
    // 2ge 2g 2g 3a 3b 3c 3e 3ee 3e 3e 3f 3fe 3f 3f 3g 3ge 3g 3g 1ee

    // Generate a random String
    String randomStr = generex.random();
    System.out.println(randomStr); // a random value from the previous String list

Disclosure: The project mentioned in this post belongs to the user answering the question (Mifmif). As per the rules, this needs to be brought up.

A: Xeger (Java) is capable of doing it as well:

    String regex = "[ab]{4,6}c";
    Xeger generator = new Xeger(regex);
    String result = generator.generate();
    assert result.matches(regex);

A: You'll have to write your own parser, like the author of String::Random (Perl) did. In fact, he doesn't use regexes anywhere in that module; it's just what Perl coders are used to. On the other hand, maybe you can have a look at the source to get some pointers.

EDIT: Damn, blair beat me to the punch by 15 seconds.

A: I know there's already an accepted answer, but I've been using RedGate's Data Generator (the one mentioned in Craig's answer) and it works REALLY well for everything I've thrown at it. It's quick, and that leaves me wanting to use the same regex to generate the real data for things like the registration codes that this thing spits out.

It takes a regex like:

    [A-Z0-9]{3,3}-[A-Z0-9]{3,3}

and it generates tons of unique codes like:

    LLK-32U

Is this some big secret algorithm that RedGate figured out and we're all out of luck, or is it something that us mere mortals could actually do?

A: This question is really old, though the problem was actual for me. I've tried Xeger and Generex and they don't seem to meet my requirements. They actually fail to process some regex patterns (like a{60000}), or for others (e.g. (A|B|C|D|E|F)) they just don't produce all possible values. Since I didn't find any other appropriate solution, I've created my own library: https://github.com/curious-odd-man/RgxGen

This library can be used to generate both matching and non-matching strings. There is also an artifact available on Maven Central.

Usage example:

    RgxGen rgxGen = new RgxGen(aRegex); // Create generator
    String s = rgxGen.generate();       // Generate new random value

A: It's far from supporting a full PCRE regexp, but I wrote the following Ruby method that takes a regexp-like string and produces a variation on it. (For language-based CAPTCHA.)

    # q = "(How (much|many)|What) is (the (value|result) of)? :num1 :op :num2?"
    # values = { :num1=>42, :op=>"plus", :num2=>17 }
    # 4.times{ puts q.variation( values ) }
    # => What is 42 plus 17?
    # => How many is the result of 42 plus 17?
    # => What is the result of 42 plus 17?
    # => How much is the value of 42 plus 17?

    class String
      def variation( values={} )
        out = self.dup
        while out.gsub!( /\(([^())?]+)\)(\?)?/ ){ ( $2 && ( rand > 0.5 ) ) ? '' : $1.split( '|' ).random }; end
        out.gsub!( /:(#{values.keys.join('|')})\b/ ){ values[$1.intern] }
        out.gsub!( /\s{2,}/, ' ' )
        out
      end
    end

    class Array
      def random
        self[ rand( self.length ) ]
      end
    end

A: This question is very old, but I stumbled across it in my own search, so I will include a couple of links for others who might be searching for the same functionality in other languages.

* There is a Node.js library here: https://github.com/fent/randexp.js
* There is a PHP library here: https://github.com/icomefromthenet/ReverseRegex
* The PHP faker package includes a "regexify" method that accomplishes this: https://packagist.org/packages/fzaninotto/faker

A: If you want to generate "critical" strings, you may want to consider:

EGRET http://elarson.pythonanywhere.com/ which generates "evil" strings covering your regular expressions

MUTREX http://cs.unibg.it/mutrex/ which generates fault-detecting strings by regex mutation

Both are academic tools (I am one of the authors of the latter) and work reasonably well.
{ "language": "en", "url": "https://stackoverflow.com/questions/22115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: How do I use NTLM authentication with Active Directory? I am trying to implement NTLM authentication on one of our internal sites and everything is working. The one piece of the puzzle I do not have is how to take the information from NTLM and authenticate with Active Directory.

There is a good description of NTLM and the encryption used for the passwords, which I used to implement this, but I am not sure how to verify if the user's password is valid.

I am using ColdFusion, but a solution to this problem can be in any language (Java, Python, PHP, etc).

Edit: I am using ColdFusion on Red Hat Enterprise Linux. Unfortunately we cannot use IIS to manage this and instead have to write or use a third-party tool for this.

Update - I got this working and here is what I did: I went with the JCIFS library from samba.org. Note that the method below will only work with NTLMv1 and DOES NOT work with NTLMv2. If you are unable to use NTLMv1 you can try Jespa, which supports NTLMv2 but is not open source, or you can use Kerberos/SPNEGO.

Here is my web.xml:

    <web-app>
      <display-name>Ntlm</display-name>
      <filter>
        <filter-name>NtlmHttpFilter</filter-name>
        <filter-class>jcifs.http.NtlmHttpFilter</filter-class>
        <init-param>
          <param-name>jcifs.http.domainController</param-name>
          <param-value>dc01.corp.example.com</param-value>
        </init-param>
        <init-param>
          <param-name>jcifs.smb.client.domain</param-name>
          <param-value>CORP.EXAMPLE.COM</param-value>
        </init-param>
      </filter>
      <filter-mapping>
        <filter-name>NtlmHttpFilter</filter-name>
        <url-pattern>/admin/*</url-pattern>
      </filter-mapping>
    </web-app>

Now all URLs matching /admin/* will require NTLM authentication.

A: As I understand it, NTLM is one of IIS's built-in authentication methods. If the host is registered on the domain of said Active Directory, it should be automatic. One thing to watch out for is that the username should be in one of two formats:

* domain\username
* [email protected]

If you are trying to go against a different Active Directory, you should be using forms-style authentication and some LDAP code. If you are trying to do the Intranet zero-login thing with IIS integrated authentication:

* the domain needs to be listed as a trusted site in IE
* or use a URL that uses the NetBIOS name instead of the DNS name
* for it to work in Firefox, read here

A: The ModNTLM source for Apache may provide you with the right pointers. If possible, you should consider using Kerberos instead. It lets you authenticate Apache against AD, and it's a more active project space than NTLM.

A: What you're really asking is: is there any way to validate the "WWW-Authenticate: NTLM" tokens submitted by IE and other HTTP clients when doing Single Sign-On (SSO)?

SSO is when the user enters their password a "single" time when they do Ctrl-Alt-Del, and the workstation remembers and uses it as necessary to transparently access other resources without prompting the user for a password again.

Note that Kerberos, like NTLM, can also be used to implement SSO authentication. When presented with a "WWW-Authenticate: Negotiate" header, IE and other browsers will send SPNEGO-wrapped Kerberos and/or NTLM tokens. More on this later, but first I will answer the question as asked.

The only way to validate an NTLMSSP password "response" (like the ones encoded in "WWW-Authenticate: NTLM" headers submitted by IE and other browsers) is with a NetrLogonSamLogon(Ex) DCERPC call to the NETLOGON service of an Active Directory domain controller that is an authority for, or has a "trust" with an authority for, the target account. Additionally, to properly secure the NETLOGON communication, Secure Channel encryption should be used, and it is required as of Windows Server 2008.

Needless to say, there are very few packages that implement the necessary NETLOGON service calls. The only ones I'm aware of are:

* Windows (of course)
* Samba - Samba is a set of software programs for UNIX that implements a number of Windows protocols, including the necessary NETLOGON service calls. In fact, Samba 3 has a special daemon for this called "winbind" that other programs like PAM and Apache modules can (and do) interface with. On a Red Hat system you can do a yum install samba-winbind and yum install mod_auth_ntlm_winbind. But that's the easy part - setting these things up is another story.
* Jespa - Jespa (http://www.ioplex.com/jespa.html) is a 100% Java library that implements all of the necessary NETLOGON service calls. It also provides implementations of standard Java interfaces for authenticating clients in various ways, such as with an HTTP servlet filter, SASL server, JAAS LoginModule, etc.

Beware that there are a number of NTLM authentication acceptors that do not implement the necessary NETLOGON service calls but instead do something else that ultimately leads to failure in one scenario or another. For example, for years the way to do this in Java was with the NTLM HTTP authentication servlet filter from a project called JCIFS. But that filter uses a man-in-the-middle technique that has been responsible for a long-standing "hiccup bug" and, more important, it does not support NTLMv2. For these reasons and others it is scheduled to be removed from JCIFS. There are several projects that have been unintentionally inspired by that package that are now equally doomed. There are also a lot of code fragments posted in Java forums that decode the header token and pluck out the domain and username but do absolutely nothing to actually validate the password responses. Suffice it to say, if you use one of those code fragments, you might as well walk around with your pants down.

As I alluded to earlier, NTLM is only one of several Windows Security Support Providers (SSPs). There's also a Digest SSP, Kerberos SSP, etc. But the Negotiate SSP, which is also known as SPNEGO, is usually the provider that MS uses in their own protocol clients. The Negotiate SSP actually just negotiates either the NTLM SSP or the Kerberos SSP. Note that Kerberos can only be used if both the server and client have accounts in the target domain and the client can communicate with the domain controller sufficiently to acquire a Kerberos ticket. If these conditions are not satisfied, the NTLM SSP is used directly. So NTLM is by no means obsolete.

Finally, some people have mentioned using an LDAP "simple bind" as a makeshift password validation service. LDAP is not really designed as an authentication service, and for this reason it is not efficient. It is also not possible to implement SSO using LDAP. SSO requires NTLM or SPNEGO. If you can find a NETLOGON or SPNEGO acceptor, you should use that instead.

Mike

A: Check out Waffle. It implements SSO for Java servers using the Win32 API. There are servlet filters, a Tomcat valve, Spring Security integration and other filters.

A: You can resolve the Firefox authentication popup by performing the following steps in Firefox:

* Open Mozilla Firefox
* Type about:config in the address bar
* Enter network.automatic-ntlm-auth.trusted-uris in the search text field
* Double-click the preference name and key in your server name as the string value
* Close the tab
* Restart Firefox

A: Hm, I'm not sure what you're trying to accomplish. Usually implementing NTLM on an internal site is as simple as unchecking "Enable Anonymous Access" in "Authentication and Access Control" in the "Directory Security" tab of website properties in IIS. If that is cleared, then your web application users will see a pop-up NTLM dialog. There's no need for you to write any code that interfaces with Active Directory. IIS takes care of the authentication for you. Can you be more specific about what you're trying to do?

A: I assume that you want to get at some of the attributes that are set against the LDAP account (role, department, etc.). For ColdFusion, check this out: http://www.adobe.com/devnet/server_archive/articles/integrating_cf_apps_w_ms_active_directory.html and the cfldap tag: http://livedocs.adobe.com/coldfusion/6.1/htmldocs/tags-p69.htm#wp1100581 As for other languages - others will do it with their respective APIs.
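For completeness, here is a minimal Java sketch of the LDAP "simple bind" check that Mike mentions (and warns about): it validates a password, but it gives you no SSO. The host and UPN are hypothetical, reusing the names from the question's web.xml:

    import java.util.Hashtable;
    import javax.naming.AuthenticationException;
    import javax.naming.Context;
    import javax.naming.NamingException;
    import javax.naming.directory.InitialDirContext;

    public class LdapBindCheck {
        public static boolean isValid(String upn, String password) {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://dc01.corp.example.com:389");
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, upn);   // e.g. someuser@CORP.EXAMPLE.COM
            env.put(Context.SECURITY_CREDENTIALS, password);
            try {
                // A successful bind proves the credentials; close it immediately.
                new InitialDirContext(env).close();
                return true;
            } catch (AuthenticationException e) {
                return false;                            // bad username/password
            } catch (NamingException e) {
                throw new RuntimeException(e);           // server unreachable etc.
            }
        }
    }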
{ "language": "en", "url": "https://stackoverflow.com/questions/22135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Can you compile Apache HTTP Server and redeploy its binaries to a different location? As part of our product release we ship Apache HTTP Server binaries that we have compiled on our (UNIX) development machine. We tell our clients to install the binaries (on their UNIX servers) under the same directory structure that we compiled it under. For some clients this is not appropriate, e.g. where there are restrictions on where they can install software on their servers and they don't want to compile Apache themselves.

Is there a way of compiling Apache HTTP Server so its installation location(s) can be specified dynamically using environment variables? I spent a few days trying to sort this out and couldn't find a way to do it. It led me to believe that the Apache binaries were hard-coding some directory paths at compilation, preventing the portability we require. Has anyone managed to do this?

A: I think the way to get around this problem is to develop a "./configure && make" script that your client uses to install, specify and compile the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share.

A: If you are compiling Apache2 for a particular location but want your clients to be able to install it somewhere else (and I'm assuming they have the same architecture and OS as your build machine), then you can do it, but the apachectl script will need some after-market hacking. I just tested these steps:

1. Unpacked the Apache2 source (this should work with Apache 1.3 as well, though) and ran ./configure --prefix=/opt/apache2
2. Ran make, then sudo make install to install on the build machine.
3. Switched to the directory above the install directory (/opt/apache2) and tarred and gzipped up the binaries and config files. I used cd /opt; sudo tar cf - apache2 | gzip -c > ~/apache2.tar.gz
4. Moved the tar file to the target machine. I decided to install in /opt/mynewdir/dan/apache2 to test. So basically, your clients can't use rpm or anything like that - unless you know how to make that relocatable (I don't :-) ).
5. Anyway, your client's conf/httpd.conf file will be full of hard-coded absolute paths - they can just change these to whatever they need. The apachectl script also has hard-coded paths. It's just a shell script, so you can hack it or give them a sed script to convert the old paths from your build machine to the new paths on your clients.
6. I skipped all that hackery and just ran ./bin/httpd -f /opt/mynewdir/dan/apache2/conf/httpd.conf :-)

Hope that helps. Let us know any error messages you get if it's not working for you.

A: "I think the way to get around this problem is to develop a './configure && make' script that your client uses to install, specify and compile the binaries. That would of course require that the client has all the source code installed on their server, or you can make it available on an NFS share."

Not to mention a complete build toolchain. These days, GCC doesn't come by default with most major distributions. Wouldn't it be sane to force the client to install it to /opt/my_apache2/ or something like that?

A: @Hissohathair I suggest one change to @Hissohathair's answer, at step 6:

./bin/httpd -d <server path> (although it can be overridden in the config file)

In apachectl there is a variable for HTTPD which you could override to use it.
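Putting the pieces above together, a minimal shell sketch of the relocation on the target machine (paths are the hypothetical ones from the answer; sed rewrites the hard-coded build paths, and -d overrides the compiled-in ServerRoot):

    # unpack the tarball that was built on the dev machine
    cd /opt/mynewdir/dan && tar xzf ~/apache2.tar.gz

    # rewrite the hard-coded build-machine paths in the config and control script
    sed -i 's|/opt/apache2|/opt/mynewdir/dan/apache2|g' \
        /opt/mynewdir/dan/apache2/conf/httpd.conf \
        /opt/mynewdir/dan/apache2/bin/apachectl

    # point httpd at the new ServerRoot explicitly
    /opt/mynewdir/dan/apache2/bin/httpd -d /opt/mynewdir/dan/apache2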
{ "language": "en", "url": "https://stackoverflow.com/questions/22140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Calculating a cutting list with the least amount of off cut waste I am working on a project where I produce an aluminium extrusion cutting list. The aluminium extrusions come in lengths of 5m. I have a list of smaller lengths that need to be cut from the 5m lengths of aluminium extrusions. The smaller lengths need to be cut in the order that produces the least amount of off cut waste from the 5m lengths of aluminium extrusions. Currently I order the cutting list in such a way that generally the longest of the smaller lengths gets cut first and the shortest of the smaller lengths gets cut last. The exception to this rule is whenever a shorter length will not fit in what is left of the 5m length of aluminium extrusion, I use the longest shorter length that will fit. This seems to produce a very efficient (very little off cut waste) cutting list and doesn't take long to calculate. I imagine, however, that even though the cutting list is very efficient, it is not necessarily the most efficient. Does anyone know of a way to calculate the most efficient cutting list which can be calculated in a reasonable amount of time? EDIT: Thanks for the answers, I'll continue to use the "greedy" approach as it seems to be doing a very good job (out performs any human attempts to create an efficient cutting list) and is very fast.
A: No specific ideas on this problem, I'm afraid - but you could look into a 'genetic algorithm' (which would go something like this)... Place the lengths to cut in a random order and give that order a score based on how good a match it is to your ideal solution (0% waste, presumably). Then, iteratively make random alterations to the order and re-score it. If the score is higher, ditch the result. If the score is lower, keep it and use it as the basis for your next calculation. Keep going until you get your score within acceptable limits.
A: What you described is indeed classified as a Cutting Stock problem, as Wheelie mentioned, and not a Bin Packing problem because you try to minimize the waste (sum of leftovers) rather than the number of extrusions used. Both of those problems can be very hard to solve, but the 'best fit' algorithm you mentioned (using the longest 'small length' that fits the current extrusion) is likely to give you very good answers with a very low complexity.
A: Actually, since the size of material is fixed, but the requests are not, it's a bin packing problem. Again, wikipedia to the rescue! (Something I might have to look into for work too, so yay!)
A: This is a classic, difficult problem to solve efficiently. The algorithm you describe sounds like a Greedy Algorithm. Take a look at this Wikipedia article for more information: The Cutting Stock Problem
A: That's an interesting problem because I suppose it depends on the quantity of each length you're producing. If they are all the same quantity and you can get each different length onto one 5m extrusion then you have the optimum solution. However if they don't all fit onto one extrusion then you have a greater problem. To keep the same number of cuts for each length you need to calculate how many lengths (not necessarily in order) can fit on one extrusion and then go in order through each extrusion.
A: I've been struggling with this exact problem here too (the length in my case is 6 m). The solution I'm working on is a bit ugly, but I wouldn't settle for your solution.
Let me explain:
Stock size: 5 m. Needs to be cut into sizes (1 of each): **3,5 1 1,5**
Your solution:
3,5 | 1 with a waste of 0,5
1,5 with a leftover of 3,5
See the problem? The solution I'm working on -> brute force:
1 - Test every possible solution
2 - Order the solutions by their waste
3 - Choose the best solution
4 - Remove the items in the solution from the "Universe"
5 - Goto 1
I know it's time consuming (but I take 1h30m for lunch... so... :) ) I really need the optimum solution (I do an almost optimum solution by hand (+-) in Excel), not just because I'm obsessive but also because the product isn't cheap. If anyone has an easier, better solution I'd love it.
A: The Column generation algorithm will quickly find a solution with the minimum possible waste. To summarize, it works well because it doesn't generate all possible combinations of cuts that can fit on a raw material length. Instead, it iteratively solves for combinations that would improve the overall solution, until it reaches an optimum solution. If anyone needs a working version of this, I've implemented it with python and posted it on GitHub: LengthNestPro
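For reference, a minimal Python sketch of the greedy heuristic described in the question (the piece lengths below are just illustrative values):

# Greedy "longest piece that still fits" heuristic: fill one extrusion at a
# time, always taking the longest remaining piece that fits the leftover.
def greedy_cut(lengths, stock=5.0):
    remaining = sorted(lengths, reverse=True)
    bars = []  # each bar is the list of cuts taken from one 5 m extrusion
    while remaining:
        bar, left = [], stock
        for piece in remaining[:]:       # iterate over a copy while removing
            if piece <= left:
                bar.append(piece)
                left -= piece
                remaining.remove(piece)
        bars.append(bar)
    return bars

print(greedy_cut([3.5, 1.0, 1.5]))  # [[3.5, 1.5], [1.0]]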
{ "language": "en", "url": "https://stackoverflow.com/questions/22145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Unicode vs UTF-8 confusion in Python / Django? I stumbled over this passage in the Django tutorial: Django models have a default str() method that calls unicode() and converts the result to a UTF-8 bytestring. This means that unicode(p) will return a Unicode string, and str(p) will return a normal string, with characters encoded as UTF-8. Now, I'm confused because afaik Unicode is not any particular representation, so what is a "Unicode string" in Python? Does that mean UCS-2? Googling turned up this "Python Unicode Tutorial" which boldly states "Unicode is a two-byte encoding which covers all of the world's common writing systems," which is plain wrong, or is it? I have been confused many times by character set and encoding issues, but here I'm quite sure that the documentation I'm reading is confused. Does anybody know what's going on in Python when it gives me a "Unicode string"?
A: Meanwhile, I did some further research to verify what the internal representation in Python is, and also what its limits are. "The Truth About Unicode In Python" is a very good article which cites directly from the Python developers. Apparently, the internal representation is either UCS-2 or UCS-4 depending on a compile-time switch. So Jon, it's not UTF-16, but your answer put me on the right track anyway, thanks.
A: what is a "Unicode string" in Python? Does that mean UCS-2? Unicode strings in Python are stored internally either as UCS-2 (fixed-length 16-bit representation, almost the same as UTF-16) or UCS-4/UTF-32 (fixed-length 32-bit representation). It's a compile-time option; on Windows it's always UTF-16 whilst many Linux distributions set UTF-32 (‘wide mode’) for their versions of Python. You are generally not supposed to care: you will see Unicode code-points as single elements in your strings and you won't know whether they're stored as two or four bytes. If you're in a UTF-16 build and you need to handle characters outside the Basic Multilingual Plane you'll be Doing It Wrong, but that's still very rare, and users who really need the extra characters should be compiling wide builds. plain wrong, or is it? Yes, it's quite wrong. To be fair I think that tutorial is rather old; it probably pre-dates wide Unicode strings, if not Unicode 3.1 (the version that introduced characters outside the Basic Multilingual Plane). There is an additional source of confusion stemming from Windows's habit of using the term “Unicode” to mean, specifically, the UTF-16LE encoding that NT uses internally. People from Microsoftland may often copy this somewhat misleading habit.
A: Python stores Unicode as UTF-16. str() will return the UTF-8 representation of the UTF-16 string.
A: so what is a "Unicode string" in Python? Python 'knows' that your string is Unicode. Hence if you do regex on it, it will know which is a character and which is not, etc., which is really helpful. If you did a strlen it will also give the correct result. As an example, if you did a string count on Hello, you will get 5 (even if it's Unicode). But if you did a string count of a foreign word and that string was not a Unicode string then you will get a much larger result. Python uses the information from the Unicode Character Database to identify each character in the Unicode string. Hope that helps.
A: From Wikipedia on UTF-8: UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode.
It is able to represent any character in the Unicode standard, yet the initial encoding of byte codes and character assignments for UTF-8 is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages[1], and other places where characters are stored or streamed. So, it's anywhere between one and four bytes depending on which character you wish to represent within the realm of Unicode. From Wikipedia on Unicode: In computing, Unicode is an industry standard allowing computers to consistently represent and manipulate text expressed in most of the world's writing systems. So it's able to represent most (but not all) of the world's writing systems. I hope this helps :)
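To make the unicode/str distinction discussed above concrete, a small Python 2 sketch (Python 2 being what Django ran on at the time):

# -*- coding: utf-8 -*-
u = u'héllo'                  # a unicode object: a sequence of code points
s = u.encode('utf-8')         # a str (byte string) in the UTF-8 encoding
print type(u), len(u)         # <type 'unicode'> 5 -- five code points
print type(s), len(s)         # <type 'str'> 6 -- six bytes; 'é' takes two in UTF-8
print s.decode('utf-8') == u  # True -- decoding restores the unicode object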
{ "language": "en", "url": "https://stackoverflow.com/questions/22149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: What is the difference between HttpHandler and a Web User Control and when to use each one? I've been using user controls extensively but never used an HttpHandler, and was wondering if I am doing something suboptimal or wrong
A: Unfortunately your question is a little like "Should I use a sandwich or a cement mixer". HttpHandlers and User controls are completely different things. HttpHandlers are used to process HTTP requests. For example, if you wanted to dynamically create an RSS feed, you could write an HTTP handler that handles all requests for ".rss" files, creates the output and sends it back to the user. User controls are used within ASPX pages to encapsulate units of functionality that you want to re-use across many pages. Chances are, if you're using user controls successfully, you don't want to use HttpHandlers!
A: Basically a user control is a piece of server logic and UI. An HTTP Handler is only a piece of logic that is executed when a resource on your server is requested. For example you may decide to handle requests for images sent to your server through your own handler and serve images from a database instead of the file system. However, in this case there's no interface that the user sees and when he visits a URL on your server he would get the response you constructed in your own handler. Handlers are usually done for specific extensions and HTTP request types (POST, GET). Here's some more info on MSDN: http://msdn.microsoft.com/en-us/library/ms227675(VS.80).aspx
A: Even an Asp.Net page is an HttpHandler. public class Page : TemplateControl, IHttpHandler A user control actually resides within the asp.net aspx page.
A: Expect a better answer (probably before I finish typing this) but as a quick summary. A user control is something that can be added to a page. A HttpHandler can be used instead of a page.
A: Just to clarify the question. I was reading the Hanselman post http://www.hanselman.com/blog/CompositingTwoImagesIntoOneFromTheASPNETServerSide.aspx and thinking that I would never have solved the problem with an HttpHandler, maybe with a simple page returning binary content. This led me to think that I should add HttpHandler to my developer tool belt.
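For illustration, a minimal sketch of the kind of handler described above (an RSS feed served for ".rss" requests); the class name and feed body are made up:

// A bare-bones IHttpHandler: IIS/ASP.NET hands it the request, it writes
// the response directly -- no page, no controls.
using System.Web;

public class RssHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "application/rss+xml";
        context.Response.Write("<rss version=\"2.0\"><channel></channel></rss>");
    }
}

It would then be mapped to the *.rss path under the <httpHandlers> section of web.config.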
{ "language": "en", "url": "https://stackoverflow.com/questions/22156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Transactional Design Pattern I have a need to create a "transactional" process using an external API that does not support COM+ or .NET transactions (SharePoint to be exact). What I need to do is to be able to perform a number of processes in a sequence, but any failure in that sequence means that I will have to manually undo all of the previous steps. In my case there are only 2 types of step, both of which are fairly easy to undo/roll back. Does anyone have any suggestions for design patterns or structures that could be useful for this?
A: The GoF Command Pattern supports undoable operations. I think the same pattern can be used for sequential operations (sequential commands).
A: If your changes are done to the SharePoint object model, you can use the fact that changes are not committed until you call the Update() method of the modified object, such as SPList.Update() or SPWeb.Update(). Otherwise, I would use the Command Design Pattern. Chapter 6 in Head First Design Patterns even has an example that implements the undo functionality.
A: Another good way for rollback/undo is the Memento Pattern. It's usually used to take a snapshot of the object at a given time and let the object state be reverted to the memento.
A: Next to the GoF Command Pattern you might also want to have a look at the Transaction Script pattern from P of EAA. You should probably create a Composite Command (or Transaction Script) that executes in sequence.
A: You might want to have a look at the Compensating Resource Manager: http://msdn.microsoft.com/en-us/library/8xkdw05k(VS.80).aspx
A: If you're using C++ (or any other language with deterministic destructor execution when scopes end) you can take a look at Scope Guards. This technique can probably also be adapted to .NET by making ScopeGuard implement IDisposable and sprinkling "using" statements as needed.
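A minimal C# sketch of the composite-command idea suggested above (the ICommand shape here is illustrative, not a SharePoint API): run the steps in order, and on any failure undo the completed ones in reverse.

using System;
using System.Collections.Generic;

public interface ICommand
{
    void Execute();
    void Undo();
}

public class CompositeCommand
{
    private readonly List<ICommand> steps = new List<ICommand>();
    public void Add(ICommand step) { steps.Add(step); }

    public void Execute()
    {
        var done = new Stack<ICommand>();
        try
        {
            foreach (var step in steps) { step.Execute(); done.Push(step); }
        }
        catch
        {
            // roll back completed steps in reverse order, then rethrow
            while (done.Count > 0) done.Pop().Undo();
            throw;
        }
    }
}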
{ "language": "en", "url": "https://stackoverflow.com/questions/22165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ADO.NET Mapping From SQLDataReader to Domain Object? I have a very simple mapping function called "BuildEntity" that does the usual boring "left/right" coding required to dump my reader data into my domain object. (shown below) My question is this - If I don't bring back every column in this mapping as is, I get the "System.IndexOutOfRangeException" exception and wanted to know if ado.net had anything to correct this so I don't need to bring back every column with each call into SQL ... What I'm really looking for is something like "IsValidColumn" so I can keep this 1 mapping function throughout my DataAccess class with all the left/right mappings defined - and have it work even when a sproc doesn't return every column listed ...

Using reader As SqlDataReader = cmd.ExecuteReader()
    Dim product As Product
    While reader.Read()
        product = New Product()
        product.ID = Convert.ToInt32(reader("ProductID"))
        product.SupplierID = Convert.ToInt32(reader("SupplierID"))
        product.CategoryID = Convert.ToInt32(reader("CategoryID"))
        product.ProductName = Convert.ToString(reader("ProductName"))
        product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
        product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
        product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
        product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
        product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
        productList.Add(product)
    End While

A: Also check out this extension method I wrote for use on data commands:

public static void Fill<T>(this IDbCommand cmd, IList<T> list, Func<IDataReader, T> rowConverter)
{
    using (var rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            list.Add(rowConverter(rdr));
        }
    }
}

You can use it like this: cmd.Fill(products, r => r.GetProduct()); Where "products" is the IList<Product> you want to populate, and "GetProduct" contains the logic to create a Product instance from a data reader. It won't help with this specific problem of not having all the fields present, but if you're doing a lot of old-fashioned ADO.NET like this it can be quite handy.
A: Use the GetSchemaTable() method to retrieve the metadata of the DataReader. The DataTable that is returned can be used to check if a specific column is present or not.
A: Why not just have each sproc return the complete column set, using null, -1, or acceptable values where you don't have the data. Avoids having to catch IndexOutOfRangeException or re-writing everything in LinqToSql.
A: Although connection.GetSchema("Tables") does return meta data about the tables in your database, it won't return everything in your sproc if you define any custom columns. For example, if you throw in some random ad-hoc column like "SELECT ProductName, 'Testing' As ProductTestName FROM dbo.Products" you won't see 'ProductTestName' as a column because it's not in the Schema of the Products table. To solve this, and ask for every column available in the returned data, leverage a method on the SqlDataReader object "GetSchemaTable()" If I add this to the existing code sample you listed in your original question, you will notice just after the reader is declared I add a data table to capture the meta data from the reader itself. Next I loop through this meta data and add each column to another table that I use in the left-right code to check if each column exists.
Updated Source Code

Using reader As SqlDataReader = cmd.ExecuteReader()
    Dim table As DataTable = reader.GetSchemaTable()
    Dim colNames As New DataTable()
    For Each row As DataRow In table.Rows
        colNames.Columns.Add(row.ItemArray(0))
    Next
    Dim product As Product
    While reader.Read()
        product = New Product()
        If Not colNames.Columns("ProductID") Is Nothing Then
            product.ID = Convert.ToInt32(reader("ProductID"))
        End If
        product.SupplierID = Convert.ToInt32(reader("SupplierID"))
        product.CategoryID = Convert.ToInt32(reader("CategoryID"))
        product.ProductName = Convert.ToString(reader("ProductName"))
        product.QuantityPerUnit = Convert.ToString(reader("QuantityPerUnit"))
        product.UnitPrice = Convert.ToDouble(reader("UnitPrice"))
        product.UnitsInStock = Convert.ToInt32(reader("UnitsInStock"))
        product.UnitsOnOrder = Convert.ToInt32(reader("UnitsOnOrder"))
        product.ReorderLevel = Convert.ToInt32(reader("ReorderLevel"))
        productList.Add(product)
    End While

This is a hack to be honest, as you should return every column to hydrate your object correctly. But I thought to include this reader method as it would actually grab all the columns, even if they are not defined in your table schema. This approach to mapping your relational data into your domain model might cause some issues when you get into a lazy loading scenario.
A: Why don't you use LinqToSql - everything you need is done automatically. For the sake of being general you can use any other ORM tool for .NET
A: I would call reader.GetOrdinal for each field name before starting the while loop. Unfortunately GetOrdinal throws an IndexOutOfRangeException if the field doesn't exist, so it won't be very performant. You could probably store the results in a Dictionary<string, int> and use its ContainsKey method to determine if the field was supplied.
A: If you don't want to use an ORM you can also use reflection for things like this (though in this case because ProductID is not named the same on both sides, you couldn't do it in the simplistic fashion demonstrated here): List Provider in C#
A: I ended up writing my own, but this mapper is pretty good (and simple): https://code.google.com/p/dapper-dot-net/
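A hedged C# sketch of the Dictionary<string, int> idea mentioned above - build a column lookup once per reader, then map only the columns that are actually present (naming here is illustrative):

using System;
using System.Collections.Generic;
using System.Data;

public static class ReaderColumnMap
{
    // Build a case-insensitive name -> ordinal map once per reader.
    public static Dictionary<string, int> Build(IDataReader reader)
    {
        var map = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);
        for (int i = 0; i < reader.FieldCount; i++)
            map[reader.GetName(i)] = i;
        return map;
    }
}

// Inside the read loop:
// var cols = ReaderColumnMap.Build(reader);
// if (cols.ContainsKey("ProductID"))
//     product.ID = Convert.ToInt32(reader[cols["ProductID"]]);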
{ "language": "en", "url": "https://stackoverflow.com/questions/22181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Erlang-style Concurrency for Other Languages What libraries exist for other programming languages to provide an Erlang-style concurrency model (processes, mailboxes, pattern-matching receive, etc.)? Note: I am specifically interested in things that are intended to be similar to Erlang, not just any threading or queueing library.
A: Message Passing Interface (MPI) (http://www-unix.mcs.anl.gov/mpi/) is a highly scalable and robust library for parallel programming, geared originally towards C but now available in several flavors: http://en.wikipedia.org/wiki/Message_Passing_Interface#Implementations. While the library doesn't introduce new syntax, it provides a communication protocol to orchestrate the sharing of data between routines which are parallelizable. Traditionally, it is used in large cluster computing rather than on a single system for concurrency, although multi-core systems can certainly take advantage of this library. Another interesting solution to the problem of parallel programming is OpenMP, which is an attempt to provide a portable extension on various platforms to provide hints to the compiler about what sections of code are easily parallelizable. For example (http://en.wikipedia.org/wiki/OpenMP#Work-sharing_constructs):

#define N 100000
int main(int argc, char *argv[])
{
    int i, a[N];
#pragma omp parallel for
    for (i=0;i<N;i++)
        a[i] = 2*i;
    return 0;
}

There are advantages and disadvantages to both, of course, but the former has proven to be extremely successful in academia and other heavy scientific computing applications. YMMV.
A: Microsoft Concurrency and Coordination Runtime for .NET. The CCR is appropriate for an application model that separates components into pieces that can interact only through messages. Components in this model need means to coordinate between messages, deal with complex failure scenarios, and effectively deal with asynchronous programming.
A: Scala supports actors. But I would not call Scala intentionally similar to Erlang. Nonetheless Scala is absolutely worth taking a look!
A: Also Kilim is a library for Java that brings Erlang-style message passing/actors to the Java language.
A: Mike Rettig created a .NET library called Retlang and a Java port called Jetlang that is inspired by Erlang's concurrency model.
A: Termite for Gambit Scheme.
A: If you are using Ruby, take a look at Revactor. Revactor is an Actor model implementation for Ruby 1.9 built on top of the Rev high performance event library. Revactor is primarily designed for writing Erlang-like network services and tools. Take a look at this code sample:

myactor = Actor.spawn do
  Actor.receive do |filter|
    filter.when(:dog) { puts "I got a dog!" }
  end
end

Revactor only runs on Ruby 1.9. I believe the author of the library has discontinued maintaining it but the documentation on their site is very good. You might also want to take a look at Reia: a Ruby-like scripting language built on top of the Erlang VM. Reia is the new project of the creator of Revactor: Tony Arcieri.
A: Microsoft's Not-Production-Ready Answer to Erlang: Microsoft Axum
A: For Python you can try using the processing module.
A: Warning: shameless plug! I developed a library for this kind of message passing in Haskell: Erlang-style Distributed Haskell. Volker
A: JoCaml extends OCaml with the join calculus for concurrent and distributed programming.
A: Akka (http://akka.io) is heavily influenced by Erlang's OTP. It is built on Scala's actors and is great for concurrency on the JVM.
A: Ulf Wiger had a great post recently on this topic - here are the properties he defines as required before you can call something "Erlang Style Concurrency":

*Fast process creation/destruction
*Ability to support >> 10 000 concurrent processes with largely unchanged characteristics.
*Fast asynchronous message passing.
*Copying message-passing semantics (share-nothing concurrency).
*Process monitoring.
*Selective message reception.

Number 2 above is the hardest to support in VMs and language implementations that weren't initially designed for concurrency. This is not to knock Erlang-ish concurrency implementations in other languages, but a lot of Erlang's value comes from being able to create millions of processes, which is pretty damn hard if the process abstraction has a 1-1 relationship with an OS-level thread or process. Ulf has a lot more on this in the link above.
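As a toy illustration of the mailbox idea these libraries provide (this is a sketch in modern Python, not one of the libraries named above, and it uses one OS thread per actor - exactly the limitation Wiger's point 2 warns about):

import threading
import queue

class Actor(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:      # poison pill: stop the actor
                break
            self.receive(msg)

    def receive(self, msg):
        print("got:", msg)

a = Actor()
a.start()
a.send(("dog",))
a.send(None)
a.join()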
{ "language": "en", "url": "https://stackoverflow.com/questions/22187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Web framework programming mindset I am just starting to play with Django/Python and am trying to shift into the MTV mode of programming that Django asks for (insists on). Deciding on what functions should be methods of a model vs simply being a function in a view has so far been confusing. Does anyone know of a book, website, blog, slideshow, whatever that discusses Web Framework programming in more general, abstract terms? I imagine just a book on object oriented programming would do it, but I feel like that would be overkill - I was looking for something web framework specific.
A: My basic rule in Django is: if you could conceivably need the functionality from somewhere other than the view itself, it doesn't belong in the view function. I'd also recommend downloading some of the plethora of apps on Django Pluggables and seeing how they do it.
A: Once you do find some good guide, here's something to remember: Django is a bit special with its terminology. It uses "MTV" for Model, Template and View (and may also mention a URL dispatcher somewhere along the way), whereas a more standard set of terms is "MVC" for Model, View and Controller. Model is the same in both meanings - a model of a data entity, often linked to a database table, if the framework implements Object/Relational Mapping (which Django does). But the two remaining terms might be confusing; where Django talks about Views, the 'rest of the world' talks about Controllers. The basic idea is that this is where the presentation logic is done. Calculations are calculated, arrays are sorted, data is retrieved, etc. I'd say that Django's URL dispatcher is also a part of the conventional Controller concept. Django's Templates are comparable to Views elsewhere - here you have your presentation, nothing else. Where Django forces you to a very small set of logical commands, other frameworks often just recommend that you not do anything other than present HTML, with some presentation-logic elements (like loops, branches, etc), but don't stop you from doing other stuff. So, to recap:

*Model: Data objects
*Controller (View in Django): Data process
*View (Template in Django): Presentation

Oh, btw: For a Django-specific guide, consider reading The Django Book
A: I've not really used Django in anger before, but in Rails and CakePHP (and by extension, any MVC web-framework) the Fat Model, Skinny Controller approach to organising your methods has been a real eye-opener for me.
A: If you aren't absolutely set on diving into Django and don't mind trying something else as a start, you might want to give WSGI a shot, which allows you to template your application your own way using a third party engine, rather than having to go exactly by Django's rules. This also allows you to peek at a lower level of handling requests, so you get a bit better understanding of what Django is doing under the hood.
A: Here are a few links that might be helpful as an overview. From my own experience, when I first started using MVC based web-frameworks the biggest issue I had was with the Models. Prying SQL out of my fingers and making me use Objects just felt strange. Once I started thinking of my data as Objects instead of SELECT statements it started getting easier.

*MVC In laymen's terms
*MVC: The Most Vexing Conundrum
*How to use Model-View-Controller

A: View function should only contain display helpers or display logic. View functions should never access the model itself, but should take parameters of model data.
It is important to separate the model from the view. So if the function handles accessing the database or database objects, it belongs in the model. If the function handles formatting display, it belongs in the view.
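A small Django-flavored sketch of that rule of thumb (the model and field names are made up for illustration): data logic lives on the model, and the view only fetches and hands off.

from django.db import models
from django.http import HttpResponse

class Order(models.Model):
    quantity = models.IntegerField()
    unit_price = models.DecimalField(max_digits=8, decimal_places=2)

    def total(self):
        # reusable data logic belongs on the model
        return self.quantity * self.unit_price

def order_detail(request, order_id):
    # the view only fetches the object and hands it to the presentation layer
    order = Order.objects.get(pk=order_id)
    return HttpResponse("Order total: %s" % order.total())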
{ "language": "en", "url": "https://stackoverflow.com/questions/22211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: TClientDataSet Aggregates specification isn't added automatically when creating an Aggregate field I need to create an Aggregate Field in a TClientDataSet, but as said in the docs: Choose OK. The newly defined aggregate field is automatically added to the client dataset and its Aggregates property is automatically updated to include the appropriate aggregate specification. When I add a new aggregate field, the aggregate specification isn't added to the TClientDataSet's Aggregates, and therefore I can't use its OnUpdate method. I also tried handling the OnChange event of my new Aggregate Field, but it isn't fired at all. Am I doing something wrong? I just want to have an aggregated field and fire an event every time its value changes. Is this broken in Delphi? Because what is in the documentation doesn't reflect the actual behavior. edit: @Michal Sznajder I'm using Delphi 2007
A: I think you may be getting confused between TAggregate and TAggregateField objects, and the Delphi documentation probably isn't helping. AFAICT, TAggregateField objects are automatically 'recalculated' and can be bound to data-aware controls like TDBText, but don't have any OnUpdate event. "TAggregate" objects, on the other hand, do have an OnUpdate event, but can't be bound to data-aware controls. This may be enlightening: http://dn.codegear.com/article/29272
A: Which version of Delphi? I just tried a clean D7 application and the TAggregateField was added.
{ "language": "en", "url": "https://stackoverflow.com/questions/22212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why does int main() {} compile? (I'm using Visual C++ 2008) I've always heard that main() is required to return an integer, but here I didn't put in return 0; and it compiled with 0 errors and 0 warnings! In the debug window it says the program has exited with code 0. If this function is named anything other than main(), the compiler complains saying 'blah' must return a value. Sticking a return; also causes the error to appear. But leaving it out completely, it compiles just fine.

#include <iostream>
using namespace std;

int main()
{
    cout << "Hey look I'm supposed to return an int but I'm not gonna!\n";
}

Could this be a bug in VC++?
A: 3.6.1 Main function
2 An implementation shall not predefine the main function. This function shall not be overloaded. It shall have a return type of type int, but otherwise its type is implementation-defined. All implementations shall allow both of the following definitions of main:

int main() { /* ... */ }

and

int main(int argc, char* argv[]) { /* ... */ }

and it continues to add ...
5 A return statement in main has the effect of leaving the main function (destroying any objects with automatic storage duration) and calling exit with the return value as the argument. If control reaches the end of main without encountering a return statement, the effect is that of executing return 0;
While attempting to find an online copy of the C++ standard so I could quote this passage, I found a blog post that quotes all the right bits better than I could.
A: I'm pretty sure VC++ just inserts a return 0 if you don't include one in main functions. The same thing can happen with functions too, but in those cases at least you'll get a warning.
A: Section 6.6.3/2 states: "Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function." An example is the code below, which at best gives a warning on VS 2010/g++:

int f(){
    if(0){
        if(1)
            return true;
    }
}

int main(){
    f();
}

So the whole point is that 'main' is special, as the previous responses have pointed out.
A: This is part of the C++ language standard. An implicit return 0 is generated for you if there's no explicit return statement in main.
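A quick way to see the quoted rule in action - the program below exits with status 0 even though main has no return statement:

#include <iostream>

int main()
{
    std::cout << "no explicit return\n";
}   // behaves as if 'return 0;' were here

// After running, 'echo %ERRORLEVEL%' (Windows) or 'echo $?' (Unix) prints 0.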
{ "language": "en", "url": "https://stackoverflow.com/questions/22239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: How do I change my workspace in Team Foundation Server 2005 and 2008? I have multiple projects in a couple of different workspaces. However, it seems like I can never figure out how to change my current workspace. The result is that files that I have checked out on my machine are shown to be checked out by somebody else and are not accessible.
A: I'm going to assume you mean "workspace", not "workstation", as your question doesn't quite make sense to me otherwise. In Visual Studio, go to the Source Control Explorer (View->Other Windows->Source Control Explorer). At the top of the source control explorer window you should have a toolbar with a few buttons. Somewhere on that toolbar (for me it's at the right) there should be a Workspace dropdown. Just select the workspace you want to use from that dropdown.
A: Are you wanting to change the location of the files on the workstation? If so, here's how I do it:

*Open Visual Studio
*Open the Source Control Explorer window.
*From the Workspace dropdown select "Workspaces..."
*The Manage Workspaces dialog should show up.
*Select the workspace you want to modify, and click Edit...
*You should be able to adjust the folders from here.

A: First, you should activate your workspace window.

*Choose the Window menu
*Click Source Control Explorer.
*Click the Active button.
*The Workspace window appears.
*Click the workspace name in the Workspace window.
*From the popup list, choose the workspace name you want.

A: In Visual Studio 2013 If you just regret which local folder you chose for a project under version control, do as follows: In the Source Control Explorer, in the Folders pane, select the project whose local folder destination/mapping you are not pleased with. Right click. --> Advanced --> Remove mapping. A window opens: Press the browse button, choose another local folder for the project and then click "Change".
A: Click on: File -> Source Control -> Advanced -> Workspace and then you can edit or remove the existing mapped locations
A: I don't entirely understand your question. Are you saying that files you check out on one machine seem to be inaccessible on another of your machines? I'd say that would be entirely by design, as now you have a file that has local modifications done on one machine, which may or may not be available on your other machines. When you say checked out by somebody else, what does that mean exactly? How are you verifying this, what are you looking at? Or do you mean something else? In that case, please elaborate.
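If you prefer the command line, the tf.exe client that ships with Team Explorer can do roughly the same thing; the server URL and workspace name below are placeholders:

rem List the workspaces known to the server
tf workspaces /server:http://tfsserver:8080
rem Open a workspace for editing (name, comment, folder mappings)
tf workspace /server:http://tfsserver:8080 MyWorkspace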
{ "language": "en", "url": "https://stackoverflow.com/questions/22245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How do I resize and convert an uploaded image to a PNG using GD? I want to allow users to upload avatar-type images in a variety of formats (GIF, JPEG, and PNG at least), but to save them all as PNG database BLOBs. If the images are oversized, pixelwise, I want to resize them before DB-insertion. What is the best way to use GD to do the resizing and PNG conversion? Edit: Sadly, only GD is available on the server I need to use, no ImageMagick.
A: Your process steps should look like this:

*Verify the filetype
*Load the image, if it is a supported filetype, into GD using imagecreatefrom*
*Resize using imagecopyresize or imagecopyresampled
*Save the image using imagepng($handle, 'filename.png', $quality, $filters)

ImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for. @ceejayoz Just wait for the new GD - it's OOP like MySQLi and it's actually not bad :)
A: If you want to use gdlib, use gdlib 2 or higher. It has a function called imagecopyresampled(), which will interpolate pixels while resizing and look much better. Also, I've always heard it noted around the net that storing images in the database is bad form:

*It's slower to access than the disk
*Your server will need to run a script to get to the image instead of simply serving a file
*Your script now is responsible for a lot of stuff the web server used to handle:
    *Setting the proper Content-Type header
    *Setting the proper caching/timeout/E-tag headers, so clients can properly cache the image. If you do not do this properly, the image serving script will be hit on every request, increasing the load on the server even more.

The only advantage I can see is that you don't need to keep your database and image files synchronized. I would still recommend against it though.
A: Are you sure you have no ImageMagick on the server? I guess you use PHP (the question is tagged with PHP). The hosting company which I use has no ImageMagick extension turned on according to phpinfo(). But when I asked them about it, they said here is the list of ImageMagick programs available from PHP code. So simply -- there is no IM interface in PHP, but I can call IM programs directly from PHP. I hope you have the same option. And I strongly agree -- storing images in the database is not a good idea.
A: Something like this, perhaps:

<?php
//Input file
$file = "myImage.png";
$img = ImageCreateFromPNG($file);

//Dimensions
$width = imagesx($img);
$height = imagesy($img);
$max_width = 300;
$max_height = 300;
$percentage = 1;

//Image scaling calculations
if ( $width > $max_width ) {
    $percentage = ($height / ($width / $max_width)) > $max_height ? $height / $max_height : $width / $max_width;
} elseif ( $height > $max_height) {
    $percentage = ($width / ($height / $max_height)) > $max_width ? $width / $max_width : $height / $max_height;
}

$new_width = $width / $percentage;
$new_height = $height / $percentage;

//scaled image
$out = imagecreatetruecolor($new_width, $new_height);
imagecopyresampled($out, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height);

//output image
imagepng($out);
?>

I haven't tested the code so there might be some syntax errors, however it should give you a fair presentation of how it could be done. Also, I assumed a PNG file. You might want to have some kind of switch statement to determine the file type.
A:

<?php
/* Resizes an image and converts it to PNG returning the PNG data as a string */
function imageToPng($srcFile, $maxSize = 100) {
    list($width_orig, $height_orig, $type) = getimagesize($srcFile);

    // Get the aspect ratio
    $ratio_orig = $width_orig / $height_orig;

    $width = $maxSize;
    $height = $maxSize;

    // resize to height (orig is portrait)
    if ($ratio_orig < 1) {
        $width = $height * $ratio_orig;
    }
    // resize to width (orig is landscape)
    else {
        $height = $width / $ratio_orig;
    }

    // Temporarily increase the memory limit to allow for larger images
    ini_set('memory_limit', '32M');

    switch ($type) {
        case IMAGETYPE_GIF:
            $image = imagecreatefromgif($srcFile);
            break;
        case IMAGETYPE_JPEG:
            $image = imagecreatefromjpeg($srcFile);
            break;
        case IMAGETYPE_PNG:
            $image = imagecreatefrompng($srcFile);
            break;
        default:
            throw new Exception('Unrecognized image type ' . $type);
    }

    // create a new blank image
    $newImage = imagecreatetruecolor($width, $height);

    // Copy the old image to the new image
    imagecopyresampled($newImage, $image, 0, 0, 0, 0, $width, $height, $width_orig, $height_orig);

    // Output to a temp file (tempnam() requires a directory and a prefix)
    $destFile = tempnam(sys_get_temp_dir(), 'png');
    imagepng($newImage, $destFile);

    // Free memory
    imagedestroy($newImage);

    if ( is_file($destFile) ) {
        $f = fopen($destFile, 'rb');
        $data = fread($f, filesize($destFile)); // fread() needs a length argument
        fclose($f);

        // Remove the tempfile
        unlink($destFile);
        return $data;
    }

    throw new Exception('Image conversion failed.');
}

A: Is GD absolutely required? ImageMagick is faster, generates better images, is more configurable, and finally is (IMO) much easier to code for.
A: This article seems like it would fit what you want. You'll need to change the saving imagejpeg() function to imagepng() and have it save the file to a string rather than output it to the page, but other than that it should be easy copy/paste into your existing code.
A: I think this page is a good starting point. It uses imagecreatefrom(jpeg/gif/png), resizes and converts the image and then outputs it to the browser. Instead of outputting to the browser you could output to a BLOB in a DB without many minutes of code rewrite.
A: phpThumb is a high-level abstraction that may be worth looking at.
{ "language": "en", "url": "https://stackoverflow.com/questions/22259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Get a list of available domains (NT4 and Active Directory) Does anyone know (in c#) a way of getting the available NT4 domains (a bit like the WinXP login box dropdown)? I know that this is fairly easy for Active Directory using the DirectoryServices namespace, but I can't find anything for the old NT4 domains. I'd rather not use API calls if at all possible (that might be asking a bit much however). Also, for bonus points (!), we are finally switching to Active Directory later on this autumn, so how would I construct a way of my domain list automatically switching over from NT4 to AD, when we migrate (so I don't need to recompile and re-release) A: Unfortunately I think your only option is to use the ADSI API. You can switch between NT4 and Active Directory by changing providers in your code. NT4 uses the WinNT provider and Active Directory uses the LDAP provider. If you query the RootDSE node of whichever provider you are using, that should return naming contexts to which you can bind, including domains. RootDSE is an LDAP schema specific identifier. For WinNT you can query the root object as "WinNT:" to get available domains. ADSI is available through VB script BTW.
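To illustrate the WinNT-provider approach described above, a hedged C# sketch: bind to the "WinNT:" root and enumerate its children, which include the visible domains (switching the provider string is what changes between NT4 and AD).

using System;
using System.DirectoryServices;

class DomainList
{
    static void Main()
    {
        // "WinNT:" with no path binds to the namespace root; its children
        // are the domains/workgroups visible to this machine.
        using (var root = new DirectoryEntry("WinNT:"))
        {
            foreach (DirectoryEntry child in root.Children)
            {
                Console.WriteLine("{0} ({1})", child.Name, child.SchemaClassName);
            }
        }
    }
}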
{ "language": "en", "url": "https://stackoverflow.com/questions/22265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Authenticate on an ASP.Net Forms Authorization website from a console app I'm trying to build a C# console application to automate grabbing certain files from our website, mostly to save myself clicks and - frankly - just to have done it. But I've hit a snag for which I've been unable to find a working solution. The website to which I'm trying to connect uses ASP.NET forms authentication, and I cannot figure out how to authenticate myself with it. This application is a complete hack so I can hard code my username and password or any other needed auth info, and the solution itself doesn't need to be something that is viable enough to release to general users. In other words, if the only possible solution is a hack, I'm fine with that. Basically, I'm trying to use HttpWebRequest to pull the site that has the list of files, iterating through that list and then downloading what I need. So the actual work on the site is fairly trivial once I can get the website to consider me authorized.
A: This page should get you started. You need to first make a request to the page, and then save the cookie to a container that you include in all later requests. That should keep you logged in, and able to retrieve the files.
A: I have dealt with something similar, and the hardest part is figuring out exactly what you needed to "fake" to get authorized. In my case it was authorizing into some Lotus Notes webservice, but the details are unimportant, the method is the same. Essentially, we need to record a regular user session. I would recommend Fiddler http://www.fiddler2.com but if you're on linux or something, then you'll need to use wireshark to figure some of the things out. Not sure if there is a firefox plugin that could be used. Anyway, start up IE, then start up Fiddler. Complete the login process. Stop what you're doing. Switch to the fiddler pane, and examine the recorded sessions in detail. It should give you exactly what you need to fake using WebRequests.
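A minimal C# sketch of the cookie-container approach described above: POST the login form once, then reuse the same container for later requests. The URLs and form field names are assumptions - check the real login page's HTML (forms auth pages often also require __VIEWSTATE and similar hidden fields in the POST body).

using System;
using System.IO;
using System.Net;
using System.Text;

class Grabber
{
    static void Main()
    {
        var cookies = new CookieContainer();

        var login = (HttpWebRequest)WebRequest.Create("http://example.com/Login.aspx");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.UTF8.GetBytes("user=me&password=secret");
        using (Stream s = login.GetRequestStream()) s.Write(body, 0, body.Length);
        login.GetResponse().Close();  // the forms-auth cookie is now in the container

        var page = (HttpWebRequest)WebRequest.Create("http://example.com/Files.aspx");
        page.CookieContainer = cookies;  // same container -> still logged in
        using (var reader = new StreamReader(page.GetResponse().GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}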
{ "language": "en", "url": "https://stackoverflow.com/questions/22269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's a good way to encapsulate data access with PHP/MySQL? Most of my experience is on the MSFT stack, but I am now working on a side project, helping someone with a personal site with cheap hosting that is built on the LAMP stack. My options for installing extras are limited, so I'm wondering about how to write my data access code without embedding raw queries in the .php files. I like to keep things simple, even with .NET. I generally write stored procedures for everything, and I have a helper class that wraps all calls to execute procedures and return data sets. I'm not looking for a full-blown ORM, but it might be the way to go and others who view this question might be looking for that. Remember that I'm on a $7/month GoDaddy account, so I'm limited to what's already installed in their basic package. Edit: Thanks rix0rr, Alan, Anders, dragon, I will check all of those out. I edited the question to be more open to ORM solutions, since they are so popular. A: ActiveRecord seems to be the state of the art at the moment. I can't recommend any good PHP frameworks for that though. I tried Propel which, while nice, is not easy to set up (especially on a host that you can't install anything on). Ultimately, I rolled my own ORM/ActiveRecord framework, which is not too much work and very instructive. I'm sure other people can recommend good PHP frameworks. A: Take a look at the Zend Framework, specifically Zend_Db. It has a Database Abstraction layer that doesn't require anything other than the MySQLi extension to be installed and isn't a full-blown ORM model. A: Maybe Doctrine would do the job? It seems to be inspired by Hibernate. A: rix0rrr hit on it a bit, in that many tools are a pain to set up. Of course, I have my own solution to this problem that has been working quite well for the past few years. It's a project called dbFacile I also wrote a bit of a usage comparison of the tools I found a few years ago. It's incomplete, but might give you a good starting point. You mentioned that you don't want to embed raw queries but you don't want ORM, so I'm a bit confused about the middle ground you're hoping to find. I also have an ORM project that aims to require minimal setup and great ease of use. The only requirement for my projects is PHP5. A: I would try a framework. Zend Framework has been cited. Symfony seems interesting. It's based on ideas from Ruby on Rails. A: You could also take a look at Prado. http://www.pradosoft.com/ It uses Active Record and DAO. Also if you use .Net then some of the formatting and conventions are similar.
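In the spirit of the helper-class approach described in the question, a hedged PHP sketch using only the bundled mysqli extension (credentials and table names are placeholders); it keeps all query plumbing in one place without a full ORM:

<?php
class Db
{
    private $conn;

    public function __construct($host, $user, $pass, $name)
    {
        $this->conn = new mysqli($host, $user, $pass, $name);
    }

    // Quote and escape a value for safe inclusion in a query
    public function quote($value)
    {
        return "'" . $this->conn->real_escape_string($value) . "'";
    }

    // Run a SELECT and return all rows as associative arrays
    public function fetchAll($sql)
    {
        $result = $this->conn->query($sql);
        $rows = array();
        while ($row = $result->fetch_assoc()) {
            $rows[] = $row;
        }
        return $rows;
    }
}

$db = new Db('localhost', 'user', 'pass', 'mydb');
$id = 3;
$rows = $db->fetchAll('SELECT * FROM products WHERE id = ' . $db->quote($id));
?>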
{ "language": "en", "url": "https://stackoverflow.com/questions/22278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Where do I begin to learn about SQL Server alerts or notifications? Just recently started having issues with an SQL Server Agent Job that contains an SSIS package to extract production data and summarize it into a separate reporting database. I think that some of the Alerts/Notifications settings I tried playing with caused the problem as the job had been running to completion unattended for the previous two weeks. So... Where's a good place to start reading up on SQL Agent Alerts and Notifications? I want to enable some sort of alert/notification so that I'm always informed:

*That the job completes successfully (as a check to ensure that it's always executed), or
*That the job ran into some sort of error, which should include enough info (such as error number) that I can diagnose the cause of the error

As always, any help will be greatly appreciated!
A: Books Online is probably a good place to start (or at least I like it and generally find it useful). SQLMenace and bofe made some good points. Here's my additional two cents: I'd recommend configuring Database Mail rather than SQL Mail (i.e. SMTP vs. MAPI, which I think is deprecated anyway). Once you get the mail profile configured, you'll have to also configure the SQL agent to use that mail profile (which is just a page of settings for the agent properties), or else your SSIS job notifications won't actually get sent, even though you can successfully send a test email from Management Studio. I don't use alerts as often as job notifications, so the only tricky thing I can recall about them is that if you're raising an error and you want the alert to email you when that happens, you have to make sure that the raised error gets written to the log. I think that just boils down to "RAISERROR ... WITH LOG"; here's the BOL link for the syntax details.
A: You'll want to have "When the job completes" marked in your notifications page on the job's properties. Just go to that dropdown and switch it to job completion instead of failure. You'll also want to make sure that your server has e-mail configured. I think it's under SQL Surface Area Configuration for Features.
A: In each step of the job, click on Advanced; from there you can log to a file or to a table. This will have all error codes and other details about why the job failed. You should be able to see this also from the job history. Right click on the job-->view history, click on the + sign to expand, then click on each step and it will be in the lower panel. To set up notifications you need to set up an operator, and then in the job, on the Notifications tab, you pick it from the email dropdown.
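For illustration, an error raised like this is written to the SQL Server log, so an alert defined on that severity can pick it up and notify an operator (the message text and argument are examples):

RAISERROR ('Nightly summarize job failed in step %s', 16, 1, 'extract') WITH LOG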
{ "language": "en", "url": "https://stackoverflow.com/questions/22306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: ARMV4i (Windows Mobile 6) Native Code disassembler Does anyone know of a disassembler for ARMV4i executables and DLLs? I've got a plug-in DLL I'm writing with a very rare data abort (<5% of the time) that I have narrowed down to a specific function (via dumpbin and the address output by the data abort). However, it is a fairly large function and I would like to narrow it down a little. I know it's happening in a memset() call, but that particular function has about 35 of them, so I was hoping that by looking at the disassembly I could figure out where about the problem actually is. A: I believe that IDA Pro will do what you want. It was mentioned in the O'Reilly Security Warrior book and I've seen it recommended on Windows Mobile developer forums. A: IDA Pro will definitely do ARM disassembly. And they (Datarescue) once arranged me a licence at about 11PM local time, so I like to recommend them... I see from http://www.datarescue.com/idabase/ that there's been some rearrangement of the company, but I guess it's still a good product. Here's the link to the new publisher: http://www.hex-rays.com/idapro/ A: ChARMeD is a Windows Mobile / Pocket PC / Win CE (for ARM CPUs) Disassembler and Assembler You might also look at BDASM, a shareware disassembler - later versions have ARM plugins. The website seems to be down, but if you search for it you'll find the shareware distribution. The source code for the simple ARM disassembler, DISARM, is available as well. The binutils (linux compiler tools) objdump can be used to produce disassembly, "objdump -b binary -m arm7tdmi -D file_name" -Adam A: A couple of years ago I found an ARM disassembler I used while doing some embedded work. However, I don't remember its name - though I think it was part of a larger package like an emulator or something. In your case, could you ask your compiler to generate an assembly listing of the compiled code? That might help give you some scope. Failing that, you could break up your function into one or more new functions, if all you can get is the stack trace. Then break up the new function into one or more again. This is the tried-and-true "divide and conquer" method. And if you have 35 calls to memset() in one function, it might be a good idea from a design standpoint too! Update: I found the package I used: ARMphetamine. It worked for the ARM9 code I was developing, but it looks like it hasn't been updated in quite some time.
{ "language": "en", "url": "https://stackoverflow.com/questions/22309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: IE Security Zone Issues I'm developing a website which will be used in the corporate intranet which uses JCIFS and NTLM to automatically authenticate the users without asking them to log on. Everything seems to be working in IE 6, but some users are being prompted for their passwords in IE 7. We've had some success by getting the users to change their Logon option to "Automatic logon using current username and password" (Tools > Internet Options > Security Tab > Select Local Intranet > Custom Level > All the way at the bottom User Authentication > Logon), but we are still having a few users who are getting username/password prompts. Also, we've had some users report they can just click cancel when the prompt comes up and the page will come up correctly. If anyone has any other suggestions, I'd greatly appreciate them.
A: If you access an intranet Web site by using an IP address or a fully qualified domain name, or a URL with a dot in it, the Web site may be identified as in the Internet zone instead of in the Local intranet zone. http://support.microsoft.com/kb/303650
A: You may also want to try having your users add your domain to their trusted sites list. I know that I had to do that to get our sites working with NTLM.
A: Turned out that the new security settings on the laptops required NTLMv2, which is not well supported by the JCIFS NTLM library. After some research, found out that the JCIFS implementation of NTLM is very hacky (as described by the JCIFS devs) and they're removing support in the next major version of JCIFS. We've moved to using the Tomcat IIS Connector (http://tomcat.apache.org/connectors-doc/webserver_howto/iis.html), which works much better. Thanks everyone for your responses.
{ "language": "en", "url": "https://stackoverflow.com/questions/22318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to send out email at a user's local time in .NET / Sql Server? I am writing a program that needs to send out an email every hour on the hour, but at a time local to the user. Say I have 2 users in different time zones. John is in New York and Fred is in Los Angeles. The server is in Chicago. If I want to send an email at 6 PM local to each user, I'd have to send the email to John at 7 PM Server time and Fred at 4 PM Server time. What's a good approach to this in .NET / Sql Server? I have found an xml file with all of the time zone information, so I am considering writing a script to import it into the database, then querying off of it. Edit: I used “t4znet.dll” and did all comparisons on the .NET side.
A: I'm a PHP developer so I'll share what I know from PHP. I'm sure .NET will include something similar. In PHP you can get timezone differences for the server time - as you've suggested you'd send the emails at different times on the server. Every time you add a user, save their time offset from the server time (or their timezone in case the server timezone changes). Then when you specify an update, have an automated task (Cron for LAMP people) that runs each hour checking to see if an email needs to be sent. Do this until there are no emails left to send.
A: You have two options:

1. Store the adjusted time for the mail action into the database for each user. Then just compare server time with stored time. To avoid confusion and portability issues, I would store all times in UTC. So, send mail when SERVER_UTC_TIME() == storedUtcTime.
2. Store the local time for each mail action into the database, then convert on-the-fly. Send mail when SERVER_UTC_TIME() == TO_UTC_TIME(storedLocalTime, userTimeZone).

You should decide what makes most sense for your application. For example if the mailing time is always the same for all users, it makes more sense to go with option (2). If the event times can change between users and even per user, it may make development and debugging easier if you choose option (1). Either way you will need to know the user's time zone.
*These function calls are obviously pseudo, since I don't know their invocations in T-SQL, but they should exist.
A: You can complement your solution with this excellent article "World Clock and the TimeZoneInformation class". I made a webservice that sent a file with information that included the local and receiver time; what I did was to modify this class so I could handle that issue, and it worked perfectly, exactly as I needed.
I think you could take this class, obtain each user's time zone from the "Users" table and "calculate" the appropriate time; my code went like this:

//Get correct destination time
DateTime thedate = DateTime.Now;
string destinationtimezone = null;

//Load the time zone where the file is going
TimeZoneInformation tzi = TimeZoneInformation.FromName(this.m_destinationtimezone);

//Calculate
destinationtimezone = tzi.FromUniversalTime(thedate.ToUniversalTime()).ToString();

This class has an issue in Windows Vista that crashes the "FromIndex(int index)" function, but you can modify the code. Instead of using the function:

public static TimeZoneInformation FromIndex(int index)
{
    TimeZoneInformation[] zones = EnumZones();
    for (int i = 0; i < zones.Length; ++i)
    {
        if (zones[i].Index == index)
            return zones[i];
    }
    throw new ArgumentOutOfRangeException("index", index, "Unknown time zone index");
}

You can change it to:

public static TimeZoneInformation FromName(string name)
{
    TimeZoneInformation[] zones = EnumZones();
    foreach (TimeZoneInformation tzi in zones)
    {
        if (tzi.DisplayName.Equals(name))
            return tzi;
    }
    throw new ArgumentOutOfRangeException("name", name, "Unknown time zone name");
}
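As a side note, on .NET 3.5 and later the built-in TimeZoneInfo class can replace the custom TimeZoneInformation approach above; a hedged sketch of the hourly check (the zone id and send hour are examples):

using System;

class Scheduler
{
    static bool ShouldSendNow(string windowsZoneId, int sendHourLocal)
    {
        // Convert the current UTC instant into the user's local time
        TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById(windowsZoneId);
        DateTime userLocal = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, tz);
        return userLocal.Hour == sendHourLocal;
    }

    static void Main()
    {
        // run this check once per hour, per user
        Console.WriteLine(ShouldSendNow("Eastern Standard Time", 18));
    }
}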
{ "language": "en", "url": "https://stackoverflow.com/questions/22319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Remoting server auto-discovery. Broadcast or not? I have a client/server application that communicates with .Net remoting. I need my clients to be able to find the server(s) on the network without requiring client-side configuration. As far as I know, there is no support for discovery in remoting. I am prepared to implement a UDP solution where the clients will be able to find the servers via broadcast messages. I want to hear the collective SO wisdom before I go ahead. Is this the best way? Any other suggestions? A: I've looked at both SSDP and UPnP for this type of functionality, but I'd recommend going with a custom UDP multicast solution. Basically, multicast is very similar to a broadcast, but only machines that have joined the multicast group (i.e. requested the broadcast) are contacted. IMHO, SSDP and UPnP are bloated and overly complicated for resource discovery... but hey, it's a standard. ;) A: Seems like what you need is the Simple Service Discovery Protocol or SSDP. This is implemented in Windows as part of Microsoft's support for Universal Plug and Play. Since this is an industry standard protocol, it seems like a good bet. For instance, if you want to deal with firewalls or other issues, this will have been figured out by others instead of you having to roll your own solution. Since you are talking .NET I'll assume you are on Windows. There's a somewhat old document (2001) describing a C-style API and a COM API for Windows entitled Universal Plug and Play (UPnP) Client Support. The COM APIs are exposed by UPNP.DLL and the C-style APIs for SSDP are exposed by SSDPAPI.DLL. The COM-style APIs for UPNP are probably your best bet, since C# can wrap up COM objects for you and handle the interop. I could not find any place where this API has been ported to C# or the .NET Framework natively. A: You might also consider Apple's Bonjour, which is their Zeroconf implementation. It's available for Mac, PCs, and Linux/BSD. A: The best solution I have found in my remoting work was to keep the server list in a config file on the client systems and make it updateable. Not the easiest to maintain but was fast and no broadcasting. A: My multicast UDP solution seems to be unreliable due to a recent MS update.
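For illustration, a minimal sketch of the custom UDP multicast approach suggested above (the group address, port and payload format are placeholders, not any standard):

using System.Net;
using System.Net.Sockets;
using System.Text;

// Client side: send a discovery probe to the multicast group.
UdpClient probeSender = new UdpClient();
byte[] probe = Encoding.ASCII.GetBytes("WHO_IS_SERVER");
probeSender.Send(probe, probe.Length, new IPEndPoint(IPAddress.Parse("239.255.0.1"), 9999));

// Server side: join the group and answer each probe with the remoting URL.
UdpClient listener = new UdpClient(9999);
listener.JoinMulticastGroup(IPAddress.Parse("239.255.0.1"));
IPEndPoint source = new IPEndPoint(IPAddress.Any, 0);
byte[] request = listener.Receive(ref source);                          // blocks until a probe arrives
byte[] reply = Encoding.ASCII.GetBytes("tcp://myhost:8080/MyService");  // placeholder URL
listener.Send(reply, reply.Length, source);                             // unicast reply back to the client

The client then reads the reply with probeSender.Receive() and connects to whatever URL was advertised.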
{ "language": "en", "url": "https://stackoverflow.com/questions/22321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to late bind 32bit/64 bit libs at runtime I've got a problem similar to, but subtly different from, that described here (Loading assemblies and their dependencies). I have a C++ DLL for 3D rendering that is what we sell to customers. For .NET users we will have a CLR wrapper around it. The C++ DLL can be built in both 32 and 64bit versions, but I think this means we need to have two CLR wrappers since the CLR binds to a specific DLL? Say now our customer has a .NET app that can be either 32 or 64bit, and that it being a pure .NET app it leaves the CLR to work it out from a single set of assemblies. The question is how can the app code dynamically choose between our 32 and 64bit CLR/DLL combinations at run-time? Even more specifically, is the suggested answer to the aforementioned question applicable here too (i.e. create a ResolveEvent handler)? A: I finally have an answer for this that appears to work. Compile both 32 & 64 bit versions - both managed & unmanaged - into separate folders. Then have the .NET app choose at run time which directory to load the assemblies from. The problem with using the ResolveEvent is that it only gets called if assemblies aren't found, so it is all too easy to accidentally end up with 32 bit versions. Instead use a second AppDomain object where we can change the ApplicationBase property to point at the right folder. So you end up with code like:

static void Main(String[] argv)
{
    // Create a new AppDomain, but with the base directory set to either the 32-bit or 64-bit
    // sub-directories.
    AppDomainSetup objADS = new AppDomainSetup();
    System.String assemblyDir = System.IO.Path.GetDirectoryName(Application.ExecutablePath);
    switch (System.IntPtr.Size)
    {
        case (4):
            assemblyDir += "\\win32\\";
            break;
        case (8):
            assemblyDir += "\\x64\\";
            break;
    }
    objADS.ApplicationBase = assemblyDir;

    // We set the PrivateBinPath to the application directory, so that we can still
    // load the platform neutral assemblies from the app directory.
    objADS.PrivateBinPath = System.IO.Path.GetDirectoryName(Application.ExecutablePath);
    AppDomain objAD = AppDomain.CreateDomain("", null, objADS);

    if (argv.Length > 0)
        objAD.ExecuteAssembly(argv[0]);
    else
        objAD.ExecuteAssembly("MyApplication.exe");
    AppDomain.Unload(objAD);
}

You end up with 2 exes - your normal app and a second switching app that chooses which bits to load. Note - I can't take credit for the details of this myself. One of my colleagues sussed that out given my initial pointer. If and when he signs up to StackOverflow I'll assign the answer to him. A: I was able to do this about a year ago, but I no longer remember all of the details. Basically, you can use IntPtr.Size to determine which DLL to load, then perform the actual LoadLibrary through p/Invoke. At that point, you've got the module in memory and you ought to be able to just p/Invoke functions from inside of it -- the same module name shouldn't get reloaded again. I think, though, that in my application I actually had the C++ DLL register itself as a COM server and then accessed its functionality through a generated .NET wrapper -- so I don't know if I ever tested p/Invoking directly. A: I encountered a similar scenario a while back. A toolkit I was using did not behave well in a 64-bit environment and I wasn't able to find a way to dynamically force the assemblies to bind as 32 bit.
It is possible to force your assemblies to work in 32 bit mode, but this requires patching the CLR header (there is a tool in the Framework that does this), and if your assemblies are strongly-named, this does not work out. I'm afraid you'll need to build and publish two sets of binaries for 32 and 64 bit platforms.
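If memory serves, the Framework tool alluded to above is corflags.exe - treat that as an assumption and check your SDK. A sketch of its use (assembly and key file names are placeholders):

REM Force the assembly to load as a 32-bit process (patches the CLR header; /Force is required on signed assemblies)
corflags MyAssembly.exe /32BIT+ /Force
REM /Force invalidates a strong-name signature, so re-sign afterwards
sn -R MyAssembly.exe MyKeyPair.snk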
{ "language": "en", "url": "https://stackoverflow.com/questions/22322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Word Automation: Write RTF text without going through clipboard I am trying to replace the current selection in Word (2003/2007) with some RTF string stored in a variable. Here is the current code:

Clipboard.SetText(strRTFString, TextDataFormat.Rtf)
oWord.ActiveDocument.ActiveWindow.Selection.PasteAndFormat(0)

Is there any way to do the same thing without going through the clipboard? Or is there any way to push the clipboard data to a safe place and restore it after? A: Put the RTF in a file instead of the clipboard, then insert from the file, e.g.

Selection.InsertFile FileName:="myfile.rtf", Range:="", _
    ConfirmConversions:=False, Link:=False, Attachment:=False

A: You can use a RichTextBox to convert RTF to text or vice versa.

RichTextBox r = new RichTextBox();
r.Rtf = strRTFString;
Console.WriteLine(r.Text);
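Putting the two answers together for a C# automation client, a rough sketch (the exact InsertFile interop signature can vary between Word PIA versions, so treat this as an assumption; the temp file name is arbitrary):

string tempPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "fragment.rtf");
System.IO.File.WriteAllText(tempPath, strRTFString);      // write the RTF out to disk
object missing = System.Reflection.Missing.Value;
// InsertFile replaces the clipboard round-trip entirely.
oWord.ActiveDocument.ActiveWindow.Selection.InsertFile(tempPath, ref missing, ref missing, ref missing, ref missing);
System.IO.File.Delete(tempPath);                          // clean up the temp file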
{ "language": "en", "url": "https://stackoverflow.com/questions/22326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do people choose product names? I flatter myself that I'm a good programmer, and can get away with graphic design. But something I'm incapable of doing is coming up with good names - and it seems neither are the people I work with. We're now in the slightly ludicrous situation that the product we've been working on for a couple of years is being installed to customers, is well received and is making money - but doesn't yet have a name. We're too small a company to have anything like a proper marketing division to do this thing. So how have people tended to choose names, logos and branding? A: for a product, first read Positioning, the Battle for Your Mind and think really hard about what mental position you want to occupy then find a word or two that conveys that position, and make up an acronym for it for a (self-serving) example: my most recent product is a fine-grained application monitor for .NET applications. I want to convey the feeling of peace that you have when you know that your apps are behaving because they are continuously monitored, so 'no news' really is 'good news'. I chose CALM after a lot of false starts, and decided that it stood for Common Application Lightweight Monitor - which just also happens to be a very technically accurate description of the basic implementation also, you might be amazed at how much 'better' users perceive an application to be when it has a name and a logo attached to it. A: You should try BustaName. It basically combines words to create available domain names. You are able to choose similar words for the words that you previously entered. Also try these links out: * *Naming a company *77 ways to come up with an idea *Igor Naming Guide (PDF) A: When it's for something that "matters", I plop down the $50 and have the folks at PickyDomains.com help out. That also results in a name that's available as a .com. For guidelines, here's an extract from my own guide on naming open source projects: * *If the name you're thinking of is directly pulled from a scifi or fantasy source, don't bother. These sources are WAY overrepresented as naming sources in software. Not only are your chances of coming up with something original pretty small, most of the names of characters and places in scifi are trademarked and you run the risk of being sued. *If the name you're thinking of comes straight from Greek, Roman or Norse mythology, try again. We've got more than enough mail related software called variations of "Mercury". *Run your proposed name through Google. The fewer results you get the better. If you get down to no results, you're there. *Don't try to get a unique name by just slightly misspelling something. Calling your new Windows filesystem program Phat32 is just going to end up with users getting frustrated looking at the results of "fat32" in a search engine. *If your name couldn't be said on TV in the 50s or 60s, you're probably on the wrong track. This is particularly true if you would like anyone to use your product in a work environment. No one is going to recommend a product to their co-workers if they can get sued for sexual harassment just for uttering its name. *If your product name can't be pronounced at all, you'll get no word of mouth benefit at all. Similarly, if no one knows how to pronounce it, they will not be very likely to try to say it out loud to ask questions about it, etc. How do YOU say MySQL? PostgreSQL? GNU? Almost all spoken languages on Earth are based on consonant/vowel syllables of some sort. 
Alternating between consonants and vowels is a pretty good way to ensure that someone can pronounce it. *The shorter the better. *See if the .com domain is available. If it's not, it's a pretty good indicator that someone has already thought of it and is using it or closer to using it than you are. Do this even if you don't intend to use the domain. *Don't build inherent limitations on your product into the name. Calling your product LinProduct or WinProduct precludes you from ever releasing any sort of cross-platform edition. *Don't use your own name for open source products. If the project lives on beyond your involvement, the project will either have to be renamed or your name may be used in ways you didn't intend. A: Names -- you can try yourselves, or ask friends/customers what they think of when they hear about or use your product (I don't know the correct English word for that -- if two things have something in common they are associated?). Or, depending on what kind of product it is, ask someone with unlimited imagination -- kids are very good at it. Logos and branding -- you need professionals. And of course you need a lawyer :). A: I second the recommendation of the Igor naming guide. Stay away from meaningless strings of alternating vowels and consonants: altana, obito, temora, even if it seems easy and the domains are readily available. Pick something with soul and meaning. Best example: "Plan B" (also known as the morning-after pill).
{ "language": "en", "url": "https://stackoverflow.com/questions/22338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: WCF push to client through firewall? See also How does a WCF server inform a WCF client about changes? (Better solution than simple polling, e.g. Comet or long polling) I need to use push-technology with WCF through client firewalls. This must be a common problem, and I know for a fact it works in theory (see links below), but I have failed to get it working, and I haven't been able to find a code sample that demonstrates it. Requirements: * *WCF *Clients connect to server through tcp port 80 (netTcpBinding). *Server pushes back information at irregular intervals (1 min to several hours). *Users should not have to configure their firewalls, server pushes must pass through firewalls that have all inbound ports closed. TCP duplex on the same connection is needed for this, a dual binding does not work since a port has to be opened on the client firewall. *Clients send heartbeats to server at regular intervals (perhaps every 15 mins) so server knows client is still alive. *Server is IIS7 with WAS. The solution seems to be duplex netTcpBinding. Based on this information: WCF through firewalls and NATs Keeping connections open in IIS But I have yet to find a code sample that works. I've tried combining the "Duplex" and "TcpActivation" samples from Microsoft's WCF Samples without any luck. Please can someone point me to example code that works, or build a small sample app. Thanks a lot! A: I've found a couple of solutions: ZeroC Ice GPL with a commercial option. Have only tested quickly. Looks more powerful than .NET Remoting and is very actively developed. RemObjects Commercial, active development, supports everything but does not seem to have all the more advanced features that GenuineChannels use. GenuineChannels. It uses remoting with a lot of nice added features, the most important one being it works through NATs without the need to open the client firewall. Unfortunately seems to be very dead. Another solution is to use streaming with IIS, according to this article: Keeping connections open in IIS The client makes the first connection (http with IIS6, tcp with IIS7) to the server at port 80, the connection is then kept open with a streaming response that never ends. I haven't had the time to experiment with this, and I haven't found a sample that says it specifically solves the firewall problem, but here's an excellent sample that probably works: Streaming XML. A: Have you tried looking at: http://www.codeproject.com/KB/WCF/WCF_Duplex_UI_Threads.aspx Can you provide examples of what you have already attempted? With details of firewalls etc, error messages? If both client and server can be addressed directly and firewalls are not an issue, have you considered allowing clients to register a URL providing a supported contract. The server can then call this service whenever it needs to, without the need to establish a long running (but mostly idle) connection, avoids the need for heart beating and can be made resilient across sessions/connections. A: In most firewall setups, the TCP connection will be torn down by the firewall if it is idle to conserve resources. The idle timeout is probably not something you can control. Some will tear them down if they are idle and a resource limit is being hit. Most corp environments won't allow any machines to make an outbound TCP connection anyway. Also, using this mechanism means you are going to have scaling problems. I think a more reliable solution is to queue up information and have your clients poll for them regularly.
Utilize caching if possible such that a subsequent client poll will get the cached data from the customer's proxy cache, if they are using one. If you have to push data in a timely manner, in sub-second land (e.g. financial services), then consider some messaging infrastructure such as an NServiceBus distributor on the client side, but that will require a customer install... So have you tried using Teredo? Having read about it, it would appear to be probably too complicated for a user to set up. A: I have not tried the scenario you speak of so I can't be too much help, sorry. If all you need to bypass is the client firewall you might want to check out this post. Good luck. A: Have you tried this one? DuplexHttpBinding. It uses a smart polling technique encapsulated as a custom WCF binding, so it should work out of the box. A: You can make the following change on the client for accessing a duplex web service on a firewall-enabled client. * *Set the WebHttp option checked in Firewall -> Advanced -> Settings (of Network Connection Setting) -> Web Server (Http)
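For reference, a minimal sketch of the duplex contract shape that the netTcpBinding approach relies on (interface and method names here are placeholders, not taken from the samples above):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IAlertCallback))]
public interface IAlertService
{
    [OperationContract]
    void Subscribe(); // called by the client over its own outbound connection
}

public interface IAlertCallback
{
    [OperationContract(IsOneWay = true)]
    void OnAlert(string message); // pushed by the server over the same TCP connection
}

// Server side, inside Subscribe(): capture the callback channel for later pushes.
IAlertCallback callback = OperationContext.Current.GetCallbackChannel<IAlertCallback>();

Because the callback travels over the connection the client opened, no inbound port needs to be open on the client's firewall.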
{ "language": "en", "url": "https://stackoverflow.com/questions/22340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do I extract/insert text into RTF string in C# In a C# console app I have the need to extract the text from an RTF string, add some more text to it, and then convert it back into RTF. I have been able to do this using the System.Windows.Forms.RichTextBox class, but I find it a bit odd to use a Forms control in a non-Forms app. Any better way to do this? A: Doing anything with RTF is pretty difficult unless you're using the Windows Forms controls. As stated above, using forms is the easiest way to go. You could write something yourself, but the RTF spec is pretty complicated. http://www.biblioscape.com/rtf15_spec.htm Or you could use a conversion DLL / ActiveX object of which there is a large number available. http://www.sautinsoft.com/ Or - If you're doing this from Linux, there are also tools available. A cursory glance throws up UnRTF http://www.gnu.org/software/unrtf/unrtf.html I haven't included stuff to turn text back to RTF because I think the RTF specification treats and formats text correctly. A: I think you should just shake this feeling of "odd". There's nothing odd about it. A: It depends on what you mean by 'better'. You are already using the simplest and easiest way of doing it. A: There is nothing wrong with using a user-interface control in a console application or even in a web application. The Windows controls are part of the .NET Framework, might as well use them. These controls do not need to be hosted in "forms" in order to work. Reinventing the wheel, using DLL/ActiveX/OCX, and using Linux are simply not practical answers to your question. The better way is...do what you know. There is actually a performance and maintenance benefit to using existing framework methods rather than the suggested alternatives.
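For what it's worth, the RichTextBox round-trip the asker describes needs no visible form at all. A short sketch (requires a reference to System.Windows.Forms; the appended text is a placeholder):

System.Windows.Forms.RichTextBox box = new System.Windows.Forms.RichTextBox();
box.Rtf = rtfString;               // parse the incoming RTF
box.AppendText(" some more text"); // edit it as plain text
string newRtf = box.Rtf;           // serialize back to RTF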
{ "language": "en", "url": "https://stackoverflow.com/questions/22346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Sharepoint COMException 0x81020037 I am working on a SharePoint application that supports importing multiple documents in a single operation. I also have an ItemAdded event handler that performs some basic maintenance of the item metadata. This event fires for both imported documents and manually created ones. The final piece of the puzzle is a batch operation feature that I implemented to kick off a workflow and update another metadata field. I am able to cause a COMException 0x81020037 by extracting the file data of a SPListItem. This file is just an InfoPath form/XML document. I am able to modify the XML and successfully push it back into the SPListItem. When I fire off the custom feature immediately afterwards and modify metadata, it occasionally causes the COM error. The error message basically indicates that the file was modified by another thread. It would seem that the ItemAdded event is still writing the file back to the database while the custom feature is changing metadata. I have tried putting in delays and error catching loops to try to detect that the SPListItem is safe to modify, with little success. Is there a way to tell if another thread has a lock on a document? A: Sometimes I see the ItemAdded or ItemUpdated firing twice for a single operation. You can try to put a breakpoint in the ItemAdded() method to confirm that. The solution in my case was to single-thread the ItemAdded() method:

private static object myLock = new object();

public override void ItemAdded(SPItemEventProperties properties)
{
    if (System.Threading.Monitor.TryEnter(myLock, TimeSpan.FromSeconds(30)))
    {
        //do your stuff here.
        System.Threading.Monitor.Exit(myLock);
    }
}

A: I'll have to look into that and get back to you. The problem on my end seems to be that there is code running in a different class, in a different feature, being controlled by a different thread, all of which are trying to access the same record. I am trying to avoid using a fixed delay. With any threading issue, there is the pathological possibility that one thread can delay or block beyond what we expect. With deployments on different server hardware with different loads, this is a very real possibility. On the other end of the spectrum, even if I were to go with a delay, I don't want it to be very high, especially not 30 seconds. My client will be importing tens of thousands of documents, and a delay of any significant length will cause the import to take literally all day.
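If the conflict can't be avoided entirely, one pragmatic pattern is to catch the conflict and retry against a freshly fetched item. A rough sketch (the field name, retry count and delay are placeholders, not from the original post):

for (int attempt = 0; attempt < 5; attempt++)
{
    try
    {
        SPListItem fresh = list.GetItemById(itemId); // re-fetch to pick up the other thread's write
        fresh["Status"] = "Processed";               // hypothetical metadata field
        fresh.Update();
        break;                                       // success
    }
    catch (System.Runtime.InteropServices.COMException)
    {
        System.Threading.Thread.Sleep(500);          // back off, then retry
    }
}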
{ "language": "en", "url": "https://stackoverflow.com/questions/22354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Cleanest Way to Invoke Cross-Thread Events I find that the .NET event model is such that I'll often be raising an event on one thread and listening for it on another thread. I was wondering what the cleanest way to marshal an event from a background thread onto my UI thread is. Based on the community suggestions, I've used this:

// earlier in the code
mCoolObject.CoolEvent += new CoolObjectEventHandler(mCoolObject_CoolEvent);

// then
private void mCoolObject_CoolEvent(object sender, CoolObjectEventArgs args)
{
    if (InvokeRequired)
    {
        CoolObjectEventHandler cb = new CoolObjectEventHandler(mCoolObject_CoolEvent);
        Invoke(cb, new object[] { sender, args });
        return;
    }
    // do the dirty work of my method here
}

A: I made the following 'universal' cross-thread call class for my own purpose, but I think it's worth sharing:

using System;
using System.Collections.Generic;
using System.Text;
using System.Windows.Forms;

namespace CrossThreadCalls
{
    public static class clsCrossThreadCalls
    {
        private delegate void SetAnyPropertyCallBack(Control c, string Property, object Value);
        public static void SetAnyProperty(Control c, string Property, object Value)
        {
            if (c.GetType().GetProperty(Property) != null)
            {
                //The given property exists
                if (c.InvokeRequired)
                {
                    SetAnyPropertyCallBack d = new SetAnyPropertyCallBack(SetAnyProperty);
                    c.BeginInvoke(d, c, Property, Value);
                }
                else
                {
                    c.GetType().GetProperty(Property).SetValue(c, Value, null);
                }
            }
        }

        private delegate void SetTextPropertyCallBack(Control c, string Value);
        public static void SetTextProperty(Control c, string Value)
        {
            if (c.InvokeRequired)
            {
                SetTextPropertyCallBack d = new SetTextPropertyCallBack(SetTextProperty);
                c.BeginInvoke(d, c, Value);
            }
            else
            {
                c.Text = Value;
            }
        }
    }
}

And you can simply use SetAnyProperty() from another thread:

CrossThreadCalls.clsCrossThreadCalls.SetAnyProperty(lb_Speed, "Text", KvaserCanReader.GetSpeed.ToString());

In this example the above KvaserCanReader class runs its own thread and makes a call to set the text property of the lb_Speed label on the main form. A: I have some code for this online. It's much nicer than the other suggestions; definitely check it out. Sample usage:

private void mCoolObject_CoolEvent(object sender, CoolObjectEventArgs args)
{
    // You could use "() =>" in place of "delegate"; it's a style choice.
    this.Invoke(delegate
    {
        // Do the dirty work of my method here.
    });
}

A: I think the cleanest way is definitely to go the AOP route. Make a few aspects, add the necessary attributes, and you never have to check thread affinity again. A: Use the synchronisation context if you want to send a result to the UI thread. I needed to change the thread priority, so I changed from using thread pool threads (commented out code) and created a new thread of my own. I was still able to use the synchronisation context to return whether the database cancel succeeded or not.

#region SyncContextCancel

private SynchronizationContext _syncContextCancel;

/// <summary>
/// Gets the synchronization context used for UI-related operations.
/// </summary>
/// <value>The synchronization context.</value>
protected SynchronizationContext SyncContextCancel
{
    get { return _syncContextCancel; }
}

#endregion //SyncContextCancel

public void CancelCurrentDbCommand()
{
    _syncContextCancel = SynchronizationContext.Current;
    //ThreadPool.QueueUserWorkItem(CancelWork, null);

    Thread worker = new Thread(new ThreadStart(CancelWork));
    worker.Priority = ThreadPriority.Highest;
    worker.Start();
}

SQLiteConnection _connection;

private void CancelWork() //object state
{
    bool success = false;
    try
    {
        if (_connection != null)
        {
            log.Debug("call cancel");
            _connection.Cancel();
            log.Debug("cancel complete");
            _connection.Close();
            log.Debug("close complete");
            success = true;
            log.Debug("long running query cancelled" + DateTime.Now.ToLongTimeString());
        }
    }
    catch (Exception ex)
    {
        log.Error(ex.Message, ex);
    }
    SyncContextCancel.Send(CancelCompleted, new object[] { success });
}

public void CancelCompleted(object state)
{
    object[] args = (object[])state;
    bool success = (bool)args[0];
    if (success)
    {
        log.Debug("long running query cancelled" + DateTime.Now.ToLongTimeString());
    }
}

A: A couple of observations: * *Don't create simple delegates explicitly in code like that unless you're pre-2.0, so you could use:

BeginInvoke(new EventHandler<CoolObjectEventArgs>(mCoolObject_CoolEvent), sender, args);

* *Also you don't need to create and populate the object array, because the args parameter is a "params" type so you can just pass in the list. *I would probably favor Invoke over BeginInvoke as the latter will result in the code being called asynchronously, which may or may not be what you're after, but would make handling subsequent exceptions difficult to propagate without a call to EndInvoke. What would happen is that your app would end up getting a TargetInvocationException instead. A: I've always wondered how costly it is to always assume that invoke is required...

private void OnCoolEvent(CoolObjectEventArgs e)
{
    BeginInvoke((o, e) => /*do work here*/, this, e);
}

A: As an interesting side note, WPF's binding handles marshaling automatically so you can bind the UI to object properties that are modified on background threads without having to do anything special. This has proven to be a great timesaver for me. In XAML:

<TextBox Text="{Binding Path=Name}"/>

A: I shun redundant delegate declarations.

private void mCoolObject_CoolEvent(object sender, CoolObjectEventArgs args)
{
    if (InvokeRequired)
    {
        Invoke(new Action<object, CoolObjectEventArgs>(mCoolObject_CoolEvent), sender, args);
        return;
    }
    // do the dirty work of my method here
}

For non-events, you can use the System.Windows.Forms.MethodInvoker delegate or System.Action. EDIT: Additionally, every event has a corresponding EventHandler delegate so there's no need at all to redeclare one. A: You can try to develop some sort of a generic component that accepts a SynchronizationContext as input and uses it to invoke the events. A: I am using something like

Invoke((Action)(() =>
{
    //your code
}));
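A sketch of that generic-component idea: capture the UI thread's SynchronizationContext when the object is constructed (the assumption being that construction happens on the UI thread), then Post events through it. Class and event names here are illustrative:

using System.Threading;

public class CoolNotifier
{
    // Captured once; Post() marshals calls back to this context.
    private readonly SynchronizationContext _uiContext = SynchronizationContext.Current;

    public event EventHandler<CoolObjectEventArgs> CoolEvent;

    protected void OnCoolEvent(CoolObjectEventArgs e)
    {
        EventHandler<CoolObjectEventArgs> handler = CoolEvent;
        if (handler != null)
            _uiContext.Post(state => handler(this, e), null); // asynchronous, like BeginInvoke
    }
}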
{ "language": "en", "url": "https://stackoverflow.com/questions/22356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: How can I develop for iPhone using a Windows development machine? Is there any way to tinker with the iPhone SDK on a Windows machine? Are there plans for an iPhone SDK version for Windows? The only other way I can think of doing this is to run a Mac VM image on a VMWare server running on Windows, although I'm not too sure how legal this is. A: Check out this: Overview It is a project that attempts to be able to cross-compile programs written in a variety of source languages to a variety of target languages. One of the initial test cases was to write programs in Java and run them on an iPhone. Watching the video on the site is worthwhile. With that said, I haven't tried it. The project seems quite beta, and there isn't a lot of activity on their SourceForge site. A: You can use Intel XDK; with it you can develop and publish apps for iOS without a Mac. Click here for details. A: Most of the "so called Windows solutions for iOS development without Mac" require a Mac at the end just to sign and send to the App Store. I checked a few, not all though (who has the time?). At the end it's just too much trouble to learn "their super special easy way to program iOS without Objective-C", they have lots of bugs. Really the goal they are setting is unachievable in my view. Also, a lot of the time they make you use Objective-C equivalent statements simply in another language. They kind of look the same but there are always subtle differences that you have to learn on top of obj-c. Which also makes even less sense, because now instead of learning less you have to learn more. So where is the gain? Also they cost a lot, because they are very hard to develop. Many lack any debugging abilities whatsoever. In my honest opinion, if you are a hard-core iOS developer then for sure buy the best Mac and learn objective-c. It's expensive and takes time, but if it's your path, it's worth it. For occasional use, it's just easier to rent a remote Mac service, like XCodeClub.com A: Oracle VirtualBox allows users to install Mac OS X in a virtual machine. If you are comfortable with it, you could just use that way to use Xcode. This is legal if you "dual boot" your mac into windows, then install the VirtualBox within windows (or linux). Other possibilities are cross-compilers such as Appcelerator Titanium (HTML, CSS and JavaScript) or MonoTouch (.NET). A: Interesting that no one has mentioned the cross-platform wxWidgets option. It's less than an optimal solution, though. IMHO, the business-wisest way to go is to invest the money in Apple's endorsed framework. That way, if you find yourself stuck with some mind-boggling problem, you have a much larger community of developers to consult with. A: YOU CAN DEVELOP IPHONE APPS ON WINDOWS PC. I've done it, with complex apps. And it works perfectly. You can develop iphone apps without ever seeing a mac or iphone. You can develop on windows an HTML (or better: HTML5) app, using tools like Sencha or JQTouch, or mobi1. (They used to all be free for a while) Then you use openSSL to sign the app. And Adobe PhoneGAP Build service to build IPhone App. But you need the iphone developer licence to install it on an iphone. But you don't need a mac or iphone at any minute to compile, build or test it - all that is done ON THE PC. I've done it, and it works perfectly. (But with Android type responsiveness - not as fast as a native iPhone app) You could also use a program from the Babylonian era (circa 300 bc) running C and C++ called dragonfly.
If your app has one or two screens with limited interactivity, and many calculations, go for it. It includes an emulator. You compile to the iPhone at the press of a button. (Not sure, but I think you do need a developer's license in any case.) And then there is Xamarin. You develop in C# with special calls to native code. You'll have to learn the environment. A: You can use Sentenza to make applications for iPhone, on Windows. Tested with success. It's not a solution but a good alternative! A: Two other options *Titanium Developer - free community edition - write in HTML/JavaScript - compile with Xcode (requires a Mac or VM) *OpenPlus ELIPS Studio - write in Flex, compile on Xcode (requires a Mac or VM) - they just started charging for their product however. I think there may be 'toolchain' options for these and some of the others mentioned, which allow you to compile to binary on Windows, and I have seen that you can upload a zip file and have a toolchain style compile done for you online, but this goes against the Apple licensing. If I am not mistaken, a product such as Titanium that outputs/works with Xcode and does not use any 3rd party / alternative / restricted libraries should be in compliance, because you are ultimately compiling in Xcode - normal Objective-C code and libraries. A: The SDK is only available on OS X, forcing you to use a mac. If you don't want to purchase a mac you can either run OS X on a virtual machine on your windows box, or you can install OS X on your PC. In my experience the virtual machine solution is unusably slow (on a core2 duo laptop with 2G ram). If you feel like trying it search for the torrent. It's probably not worthwhile. The other option is to install OS X on your PC, commonly referred to as a hackintosh. Hackintoshes work quite well - my friend just sold his mac because his Dell quad core hackintosh was actually much faster than the apple hardware (and cost about 1/3). Of course both of these options are likely counter to some licensing scheme, so proceed at your own risk. A: As has been pointed out, you can attempt to use the WinChain, but if you are a newbie coder it won't be easy. The iPhone SDK will work on Hackintoshes (a normal PC with OS X installed on it). I know as I have one and it does. So after you go buy an OSX license you could TRY to install it on your PC on a different drive using Boot-132 or one of the other installers like iDeneb. The issue is you will have to do a lot of tinkering and things still won't work quite right. A: Using Xamarin, we can now develop iPhone applications on a Windows machine itself with the help of Xamarin Live Player. With Xamarin Live Player the dev/deploy/debug cycle can now be done without an Apple system. But to sign and release the app an Apple system is required. Find the reference here. I checked the reference; nothing dodgy. A: You can use WinChain Quoting the project page: It's the easiest way to build the iPhone toolchain on a Windows XP/Vista computer, which in turn, can take Objective-C source code that you write using their UIKit Headers (included with winChain) and compile it into an application that you can use on your iPhone. A: It's certainly possible to develop on a Windows machine, in fact, my first application was exclusively developed on the old Dell Precision I had at the time :) There are three routes: *Install OSx86 (aka iATKOS / Kalyway) on a second partition/disk and dual boot.
*Use a framework and/or toolset, which allows developing on Windows, like Delphi XE4 with the mac-in-cloud service, which can build without MacOS device need. This is a commercial toolset, but the component and lib support is growing. Other honorable mentions are Flutter, Xamarin and similar; which may at end need actual MacOS device for final build (but you can test on Android till then, as they're cross-platform). The first route requires modifying (or using a pre-modified) image of Leopard that can be installed on a regular PC. This is not as hard as you would think, although your success/effort ratio will depend upon how closely the hardware in your PC matches that in Mac hardware - e.g. if you're running a Core 2 Duo on an Intel Motherboard, with an NVidia graphics card you are laughing. If you're running an AMD machine or something without SSE3 it gets a little more involved. If you purchase (or already own) a version of Leopard then this is a gray area since the Leopard EULA states you may only run it on an "Apple Labeled" machine. As many point out if you stick an Apple sticker on your PC you're probably covered. The second option is more costly. The EULA for the workstation version of Leopard prevents it from being run under emulation and as a result, there's no support in VMWare for this. Leopard server, however, CAN be run under emulation and can be used for desktop purposes. Leopard server and VMWare are expensive, however. If you're interested in option 1) I would suggest starting at Insanelymac and reading the OSx86 sections. I do think you should consider whether the time you will invest is going to be worth the money you will save though. It was for me because I enjoy tinkering with this type of stuff and I started during the early iPhone betas, months before their App Store became available. Alternatively, you could pick up a low-spec Mac Mini from eBay. You don't need much horsepower to run the SDK and you can always sell it on later if you decide to stop development or buy a better Mac. Update: You cannot create a Mac OS X Client virtual machine for OS X 10.6 and earlier. Apple does not allow these Client OSes to be virtualized. With Mac OS X 10.7 (Lion) onwards, Apple has changed its licensing agreement in regards to virtualization. Source: VMWare KnowledgeBase A: Yes and you don't need to learn Objective-C and buying Apple software and hardware. Adobe have created compilator from ActionScript 3 to program for iOS. And later Apple approved this method of application creation. This is best way to create Apple applications under Windows or Linux/BSD (and another one for MacOS-X) A: If you want to develop an application on Windows environment then there is an option, you can install MAC OS in your windows Platform name is : "Niresh'MAC OS" , you can search that text on Google then you can download the whole MAC OS Source and easily installed MAC OS in your Windows PC, Niresh is able to Hack the whole OS. Hope this will help you. A: You can install OSX on PC but experience wont be great and it needs lot of work. Alternate is to use a framework/SDK Codename one: which is based on JAVA and can be used to code in WP8, Android, iOS on Windows (eclipse) with all extensive features Features Overview: * *Full Android environment with super fast android simulator *An iPhone/iPad simulator with easy to take iPhone apps to large screen iPad in minutes. *Full support for standard java debugging, profiling for apps on any platform. 
*Easy theming / styling – only a click away More at Develop Android, iOS iPhone, WP8 apps using Java Disclaimer: This is my review of the product A: If you have ssh access to a Mac, then you can use a VNC (like Vine VNC, which allows multiple users at once - a thin client) to control XCode. This could be useful if you wanted to access a Mac Mini from a laptop, or your S.O. is hogging your MacBook. A: Develop iOS Apps on Windows With Cross-Platform Tools Cross-platform tools are awesome: you code your app once, and export it to iOS and Android. That could potentially cut your app development time and cost in half. Several cross-platform tools allow you to develop iOS apps on a Windows PC, or allow you to compile the app if there’s a Mac in your local network. Well, not so fast… The cross-platform tool ecosystem is very large. On the one side you have complete Integrated Development Environments (IDEs) like Xamarin, that allow you to build cross-platform apps with C#. The middle ground is covered by tools like PhoneGap, Cordova, Ionic and Appcelerator, that let you build native apps with HTML5 components. The far end includes smaller platforms like React Native that allow you to write native apps with a JavaScript wrapper. The one thing that stands out for all cross-platform tools is this: they’re not beginner friendly! It’s much easier to get access to a Mac, learn Swift, and build a simple app, than it is to get started with Xamarin. Most of the cross-platform tools require you to have a basic understanding of programming, compilation options, and the iOS and Android ecosystems. That’s something you don’t really have as a beginner developer! Having said that, let’s look at a couple of options: If you’re familiar with Windows-based development tools and IDEs, and if you already know how to code, it’s worthwhile to check out Xamarin. With Xamarin you code apps in C#, for multiple platforms, using the Mono and MonoTouch frameworks. If you’re familiar with web-based development, check out PhoneGap or Ionic. You’ll feel right at home with HTML 5, CSS and JavaScript. Don’t forget: a native app works differently than a website… If you’re familiar with JavaScript, or if you’d rather learn to code JavaScript than Swift, check out React Native. With React Native you can code native apps for iOS and Android using a “wrapper”. Always choose cross-platform tools deliberately, because they're a smart option, not because you think a native platform language is bad. The fact that one option isn’t right doesn’t immediately make another option smarter! If you don’t want to join the proprietary closed Apple universe, don’t forget that many cross-platform tools are operated by equally evil companies like Google, Facebook, Microsoft, Adobe and Amazon. An often heard argument against cross-platform tools is that they offer limited access to and support for smartphone hardware, and are less “snappy” than their native counterparts. Keep in mind that any cross-platform tool will require you to write platform-specific code at one point, especially if you want to code custom features. A: You don't need to own a Mac nor do you need to learn Objective-C. You can develop in different environments and compile into Objective-C later on.
developing for the iPhone and iPad by running OS X 10.6 (Snow Leopard) This article one of our developers wrote gives a pretty comprehensive walk-through on installing OS X Snow Leopard on Windows using iBoot, then installing Vmware (with instructions), then getting your iPhone dev environment going... and a few extra juicy things. Super helpful for me. Hope that helps. It uses Phonegap so you can develop on multiple smartphone platforms at once. A: You can use Tersus (free, open source). A: A devkit that allows one to develop iPhone apps in Objective-C, C++ or just plain C with Visual Studio: Check it out at iOS build env You can build iPhone apps directly within Visual Studio (2008, 2010, Express). Pretty neat, it even builds IPA files for your app after a successful compilation. The code works as is on jailbroken devices; for the rest of the planet I believe the final compilation & submission to the App Store has to be done on a Mac. But still, it enables you to develop using a well-known IDE. A: You may try to develop web apps for iPhone using HTML, JavaScript, CSS. Check the getting started info at Apple's site. A: As many people have already answered, the iPhone SDK is available only for OS X, and I believe Apple will never release it for Windows. But there are several alternative environments/frameworks that allow you to develop iOS applications, and even package and submit them to the AppStore, using a Windows machine as well as a Mac. Here are the most popular and relatively better options. PhoneGap, allows you to create web-based apps using HTML/CSS/JavaScript Xamarin, cross-platform apps in C# Adobe AIR, air applications with Flash / ActionScript Unity3D, cross-platform game engine Note: Unity requires Xcode, and therefore OS X, to build iOS projects. A: (accurate as of late 2014) For access to the native tools (Xcode etc) there are a few main options: 1. Virtual machine Look around for the mavericks (10.9) vmware image that works with a modified Vmware Workstation/Player. Once the machine is able to boot, it can be updated to 10.9.5 with no apparent issues. The good: relatively low learning curve (if you are somewhat familiar with vms) The bad: reduced performance due to virtualized environment, no 3d acceleration (QE/CL) 2. Hackintosh This is the sensible option if you are planning to procure new hardware (or at least partly), instead of retrofitting existing equipment (but you might be lucky to have one of the common OEM models (like Dell) that already have recipes written for it) The good: no penalty on hardware performance, which might even surpass that of a real mac. The same hardware can also be used for other OSes if you are open to multibooting The bad: higher learning curve, more hardware limitations (no drivers for certain Intel wifi etc) which may translate into higher investment if you had no intention to get new hardware originally Needless to say, both options above are frowned upon by the fruit company, so licensing compliance is not part of the discussion. 3. An actual mac (added in 2016) This option is perfect for people who already have a mac and use it as a Windows development machine via Bootcamp etc. This also has the least support issues (apart from complications that may result from multi-booting), so it is recommended for those looking for a long-term solution (hardware that works not just for the current OSX version but future versions as well) A: Visual Studio + Xamarin will do the job. Yet, I'd recommend you get a Mac and develop iOS apps in Xcode. When in Rome, live like the Romans do.
A: Xamarin is a solid choice. It was purchased by Microsoft and is now built directly into Visual Studio. You code in C#. With all the updates and features they are adding, you can do everything but submit to the App Store from Windows, even compile, build and deploy to an iOS device. For games, Unity 3D is a great option. The editor is free to use for development, and even for distribution (if you have less than 100K USD in annual revenue). Unity supports iOS, Android and most other platforms. It may be possible to use Unity's "Cloud Build" feature to avoid having to use a Mac for deployment, although by default Unity actually spits out an Xcode project when building for iOS. Other options: PhoneGap (html/javascript) also works. It isn't quite as nice for gaming, but it's pretty decent for regular GUI applications. Flutter (dart) is a free cross-platform mobile app development framework from Google. Write your code in Dart. React Native (javascript) is another popular cross-platform framework created by Facebook. Note that for all of these options, all or most of the development can be done on Windows, but a MacOS device is still required to build a binary for submission to the App Store. One option is to get a cheap Mac Mini to do your final build. A: Of course, you can write Objective-C code in notepad or other programs and then move it to a Mac to compile. But seriously, it depends on whether you are developing official applications to put in the App Store or developing applications for a jailbroken iPhone. To write official applications, the Apple iPhone SDK, which requires an Intel Mac, seems to be the only practical way. However, there is an unofficial toolchain to write applications for jailbroken iPhones. You can run it on Linux and Windows (using Cygwin). A: You will soon be able to use Adobe Flash CS 5 to create Apps for the iPhone on Windows: flashcs 5 flashcs5 apps for iphone A: Try macincloud.com. It allows you to rent a mac and access it through RDP remote control. You can then use your PC to access a mac and then develop your apps. A: If you have a jailbroken iPhone, you can install the iphone-gcc toolchain onto the iPhone through Cydia and that way you can just compile the apps on the iPhone. Apps that are developed this way can still be submitted to the App Store. And although Mr Valdez said it is a grey area (which it is), jailbreaking is incredibly easy and pretty much risk free. Yes, it voids your warranty but you can just do a restore and they will never know. A: Hooray! You can now more easily accomplish this with the latest Xamarin.iOS, using a network-linked mac providing the build and deployment capabilities. See here for more details: introduction to xamarin ios for visual studio A: If you want it to be legitimate, you have two options: cloud based Mac solutions or cross-platform development tools. You may consider the hackintosh approach or virtual machines if you don't care about legal stuff. If you have a decent PC, running a virtual machine would be the easiest way to go. You may never know which hardware will have driver issues on a hackintosh. I've tried all these approaches and they all have pros and cons, but for the second group, I feel kind of guilty. I develop apps to make a living and I wouldn't want to rip off someone else for it. If you are making a small project, cloud based Macs may prove useful. Rent it for a short time, develop your project and off you go. Don't bother learning anything new.
However, if your project is getting big, cross-platform frameworks seem to be the only alternative. The critical thing is that you need to choose wisely. There are so many hybrid frameworks, but what they do can be summarized in one sentence as "displaying web pages in an app wrapper", and developers' negative experience with hybrid frameworks also affects native frameworks. I tried three of these (Titanium, Smartface and Xamarin) and they all claim to produce "real native output" and in my opinion their claims are correct. You need to test and see it yourself; it's not easy to describe the native feeling. In a previous comment, it was indicated that it takes some effort to learn these platforms, but once you get to know them, you can develop not just iOS applications but Android applications as well, all with the common code base. And of course, they are much cheaper than a cloud Mac. Some of them are even free. You would need a Mac only for store submission. If you know JavaScript, try Titanium and Smartface, and if you know C#, try Xamarin. Just note that for the device simulator, Titanium is dependent on a Mac, but Smartface has a simulator app for Windows development and it works better than I expected. On the other hand, Xamarin requires a Mac in your network. A: If you want to create iPhone apps but have no Mac, then you should try http://www.pmbaty.com/iosbuildenv/ It allows you to easily develop native iOS apps, like with XCode, deployable on any iPhone, iPod or iPad (jailbroken or not). Use your favourite IDE to code in Objective-C, C++, C or ARM assembly, like in XCode. ARC and blocks are supported. Compile your iPhone apps directly inside Visual Studio. It works on all Windows versions (XP, 7, 8), FreeBSD and Linux. Now with iOS8 support. A: There's also Sencha Architect and Sencha Touch from the company that makes Ext JS. A: This is a new tool: oxygene which you can use to build apps for iOS/Mac, Windows RT/8 or Android. It uses a specific language derived from Object Pascal and Visual Studio (and uses .NET or Java). It seems to be really powerful, but is not free. A: So the bad news is that XCode is needed for its iOS Simulator as well as its Application Loader facility for actually uploading the programs to iOS devices for "real" testing. You'll need XCode for signing your apps before submitting to the App Store. Unfortunately, XCode is only available for OS X. However, the good news is that you may be able to purchase OS X and run it in a virtual machine such as VMWare Workstation. I don't know how straightforward this is, as it is rather difficult to get OS X to run on non-Apple hardware, but a quick Google search shows that it is possible. This method would (likely) be cheaper than purchasing a new Mac, although the Mac Mini retails in the US for only $599. Some posts I've seen indicate that this may or may not be legal, others say you need OS X Server for virtualization. I'll leave the research up to you. There are also services such as MacInCloud that allow you to rent a Mac server that you can access from Windows via remote desktop, or through your browser. Unfortunately, I don't think you'd be able to use Application Loader, as you have to physically connect the device to your computer, but it would work for development and simulation, at least. Good luck! A: Check out NS Basic/App Studio. It's Visual Basic for the iPhone. (though you can also code in JavaScript). It produces WebApps which can be distributed without going through the App Store.
Apps will also run on other platforms, like Android. NS Basic/App Studio A: Please take a look at Xamarin. They have an extension for Visual Studio (http://xamarin.com/visual-studio). Taken from their site: Xamarin provides Visual Studio add-ins so that you can develop your iOS, Android and Windows apps all in a single solution. The Xamarin extensions support building, deploying, and debugging on simulator or device. A: B4i is a new development tool that creates native iOS apps. It runs on Windows. When a full compilation is required (in most cases it is not required) the project is compiled on a hosted builder. Each compilation takes about 5 seconds. B4i A: I use Flutter. I develop on my Windows machine and test on an Android emulator. When I'm ready to test on iOS, I run the code on macOS. From my experience, a Flutter app runs exactly the same on Android and iOS. If you don't have a Mac, you can download VirtualBox and install macOS on it. On VirtualBox, macOS runs exactly as it does on a real Mac, and you can build your app on it. A: Just make a web app. iOS web apps are: * *Reliable. The amount of time it takes to make an update is far shorter than with XCode. *Fast. JavaScript on iOS is, by nature, far faster than native apps. *Free. You never have to pay Apple a cent, as Apple doesn't have any control over you. *Safer. If a crash occurs with a native app, it can take down even the newest iOS devices for hours or even days, until the user can fix them. If a web app crashes, the user just has to close and reopen it, worst case scenario. *Offline. You can easily create a service worker to handle your app. *Secure. Because you do not have access to native APIs, a security breach will be significantly less dangerous. *Easy to program. A basic iOS web app can be programmed from the Safari URL bar on an iPhone. Literally. I've done it myself. A: You can make apps for iPhone using Java, and Java can connect to the web.
{ "language": "en", "url": "https://stackoverflow.com/questions/22358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1257" }
Q: Implementing a log watcher I'm wondering how you can implement a program similar to tail -f in C/C++, a program that watches for and processes new lines added to a log file? A: You can use fseek() to clear the eof condition on the stream. Essentially, read to the end of the file, sleep for a while, fseek() (without changing your position) to clear eof, then read to end of file again. Wash, rinse, repeat. man fseek(3) for details. Here's what it looks like in perl. perl's seek() is essentially a wrapper for fseek(3), so the logic is the same:

wembley 0 /home/jj33/swap >#> cat p
my $f = shift;
open(I, "<$f") || die "Couldn't open $f: $!\n";

while (1) {
  seek(I, 0, 1);
  while (defined(my $l = <I>)) {
    print "Got: $l";
  }
  print "Hit EOF, sleeping\n";
  sleep(10);
}
wembley 0 /home/jj33/swap >#> cat tfile
This is
some
text
in
a file
wembley 0 /home/jj33/swap >#> perl p tfile
Got: This is
Got: some
Got: text
Got: in
Got: a file
Hit EOF, sleeping

Then, in another session:

wembley 0 /home/jj33/swap > echo "another line of text" >> tfile

And back to the original program output:

Hit EOF, sleeping
Got: another line of text
Hit EOF, sleeping

A: See here You could either call out to tail and retrieve the stream back into your app, or as it's open source, maybe try to pull it into your own code. Also, it is possible in C++ iostream to open a file for viewing only and just read to the end, while buffering the last 10-20 lines, then output that. A: I think what you're looking for is the select() call in c/c++. I found a copy of the man page here: http://www.opengroup.org/onlinepubs/007908775/xsh/select.html. Select takes file descriptors as arguments and tells you when one of them has changed and is ready for reading. A: The tail program is open source, so you could reference that. I wondered the same thing and looked at the code a while back, thinking it would be pretty simple, but I was surprised at how complex it was. There are lots of gotchas that have to be taken into account.
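To make the fseek()/EOF-clearing loop concrete in C, a minimal sketch (file name and poll interval are arbitrary):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("app.log", "r");
    if (f == NULL) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);                 /* start at the current end, like tail -f */
    char line[4096];
    for (;;) {
        while (fgets(line, sizeof line, f) != NULL)
            fputs(line, stdout);           /* process each newly appended line */
        clearerr(f);                       /* clear EOF; fseek(f, 0, SEEK_CUR) works too */
        sleep(1);                          /* poll interval */
    }
}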
{ "language": "en", "url": "https://stackoverflow.com/questions/22379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Does PHP have built-in data structures? I'm looking at the PHP Manual, and I'm not seeing a section on data structures that most languages have, such as lists and sets. Am I just blind or does PHP not have anything like this built in? A: PHP has arrays, which are actually associative arrays and can also be used as sets. Like many interpreted languages, PHP offers all this under one hood instead of providing different explicit data types. E.g.

$lst = array(1, 2, 3);
$hsh = array(1 => "This", 2 => "is a", 3 => "test");

Also, take a look in the manual. A: PHP's array doubles as both a list and a dictionary.

$myArray = array("Apples", "Oranges", "Pears");
$myScalar = $myArray[0]; // == "Apples"

Or to use it as an associative array:

$myArray = array("a"=>"Apples", "b"=>"Oranges", "c"=>"Pears");
$myScalar = $myArray["a"]; // == "Apples"

A: The only native data structure in PHP is the array. Fortunately, arrays are quite flexible and can be used as hash tables as well. http://www.php.net/array However, there is SPL which is sort of a clone of C++ STL. http://www.php.net/manual/en/book.spl.php A: I think you might want to be a bit more specific; when you say data structures my mind goes in a few directions... Arrays - They are certainly well documented and available. (http://us.php.net/manual/en/book.array.php) SQL Data - Depends on the database you are using, but most are available. (http://us.php.net/manual/en/book.mysql.php) OOP - Depending on the version, objects can be designed and implemented. (http://us.php.net/manual/en/language.oop.php) I had to search for OOP to find this on the php site. A: PHP offers data structures through the Standard PHP Library (SPL) basic extension, which is available and compiled by default in PHP 5.0.0. The data structures offered are available with PHP 5 >= 5.3.0, and include: Doubly Linked Lists A Doubly Linked List (DLL) is a list of nodes linked in both directions to each other. Iterator operations, access to both ends, addition or removal of nodes have a cost of O(1) when the underlying structure is a DLL. It hence provides a decent implementation for stacks and queues. * *SplDoublyLinkedList class * *SplStack class *SplQueue class Heaps Heaps are tree-like structures that follow the heap-property: each node is greater than or equal to its children, when compared using the implemented compare method which is global to the heap. * *SplHeap class * *SplMaxHeap class *SplMinHeap class *SplPriorityQueue class Arrays Arrays are structures that store the data in a continuous way, accessible via indexes. Don't confuse them with PHP arrays: PHP arrays are in fact implemented as ordered hashtables. * *SplFixedArray class Map A map is a datastructure holding key-value pairs. PHP arrays can be seen as maps from integers/strings to values. SPL provides a map from objects to data. This map can also be used as an object set. * *SplObjectStorage class Source: http://php.net/manual/en/spl.datastructures.php A: Of course PHP has data structures. The array in php is incredibly flexible. Some examples:

$foo = array(
  'bar' => array(1,'two',3),
  'baz' => explode(" ", "Some nice words")
);

Then you have an absolute plethora of array functions available to map/filter/walk/etc the structures, or convert, flip, reverse, etc. A: You can always create your own if you don't feel PHP includes a specific type of data structure. For example, here is a simple Set data structure backed by an array.
class ArraySet
{
    /** Elements in this set */
    private $elements;

    /** The number of elements in this set */
    private $size = 0;

    /** Constructs this set. */
    public function ArraySet()
    {
        $this->elements = array();
    }

    /**
     * Adds the specified element to this set if it is not already present.
     * @param any $element
     * @returns true if the specified element was added to this set.
     */
    public function add($element)
    {
        if (!in_array($element, $this->elements)) {
            $this->elements[] = $element;
            $this->size++;
            return true;
        }
        return false;
    }

    /**
     * Adds all of the elements in the specified collection to this set
     * if they're not already present.
     * @param array $collection
     * @returns true if any of the elements in the specified collection
     * were added to this set.
     */
    public function addAll($collection)
    {
        $changed = false;
        foreach ($collection as $element) {
            if ($this->add($element)) {
                $changed = true;
            }
        }
        return $changed;
    }

    /** Removes all the elements from this set. */
    public function clear()
    {
        $this->elements = array();
        $this->size = 0;
    }

    /**
     * Checks if this set contains the specified element.
     * @param any $element
     * @returns true if this set contains the specified element.
     */
    public function contains($element)
    {
        return in_array($element, $this->elements);
    }

    /**
     * Checks if this set contains all the specified elements.
     * @param array $collection
     * @returns true if this set contains all the specified elements.
     */
    public function containsAll($collection)
    {
        foreach ($collection as $element) {
            if (!in_array($element, $this->elements)) {
                return false;
            }
        }
        return true;
    }

    /**
     * Checks if this set contains elements.
     * @returns true if this set contains no elements.
     */
    public function isEmpty()
    {
        return count($this->elements) <= 0;
    }

    /**
     * Gets an iterator over the elements in this set.
     * @returns an iterator over the elements in this set.
     */
    public function iterator()
    {
        return new SimpleIterator($this->elements);
    }

    /**
     * Removes the specified element from this set.
     * @param any $element
     * @returns true if the specified element is removed.
     */
    public function remove($element)
    {
        if (!in_array($element, $this->elements)) return false;

        foreach ($this->elements as $k => $v) {
            if ($element == $v) {
                unset($this->elements[$k]);
                $this->size--;
                return true;
            }
        }
    }

    /**
     * Removes all the specified elements from this set.
     * @param array $collection
     * @returns true if all the specified elements are removed from this set.
     */
    public function removeAll($collection)
    {
        $changed = false;
        foreach ($collection as $element) {
            if ($this->remove($element)) {
                $changed = true;
            }
        }
        return $changed;
    }

    /**
     * Retains the elements in this set that are in the specified collection.
     * If the specified collection is also a set, this method effectively
     * modifies this set into the intersection of this set and the specified
     * collection.
     * @param array $collection
     * @returns true if this set changed as a result of the specified collection.
     */
    public function retainAll($collection)
    {
        $changed = false;
        foreach ($this->elements as $k => $v) {
            if (!in_array($v, $collection)) {
                unset($this->elements[$k]);
                $this->size--;
                $changed = true;
            }
        }
        return $changed;
    }

    /**
     * Returns the number of elements in this set.
     * @returns the number of elements in this set.
     */
    public function size()
    {
        return $this->size;
    }

    /**
     * Returns an array that contains all the elements in this set.
     * @returns an array that contains all the elements in this set.
     */
    public function toArray()
    {
        $elements = $this->elements;
        return $elements;
    }
}

A: PHP 7 introduced an extension called ds providing specialized data structures as an alternative to the array. The ds, * *uses the Ds\ namespace. *has 3 interfaces, namely Collection, Sequence, and Hashable *has 8 classes, namely Vector, Deque, Queue, PriorityQueue, Map, Set, Stack, and Pair For more information check out the Manual, and also this blog post has some awesome information, including benchmarks.

A: The associative array can be used for most basic data structures: hashtable, queue, stack. But if you want something like a tree or heap, I don't think they exist by default, though I'm sure there are free libraries out there. To have an array emulate a stack, use array_push() to add and array_pop() to take off. To have an array emulate a queue, use array_push() to enqueue and array_shift() to dequeue. An associative array is a hash by default. In PHP they are allowed to have strings as indexes, so this works as expected: $array['key'] = 'value'; Finally, you can kind of emulate a binary tree with an array, with the potential to have wasted space. It's useful if you know you're going to have a small tree. Using a linear array, you say for any index (i) you put its left child at index (2i+1) and right child at index (2i+2). All of these methods are covered nicely in this article on how to make JavaScript arrays emulate higher level data structures.

A: PHP can also have an array of arrays, which is called a "multidimensional array" or "matrix". You can have 2-dimensional arrays, 3-dimensional arrays, etc.
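To make the stack and queue emulation described above concrete, here is a minimal runnable sketch - plain arrays plus the SPL equivalent (SplStack requires PHP 5.3+):

<?php
// Stack emulation with a plain array (LIFO)
$stack = array();
array_push($stack, 'a');
array_push($stack, 'b');
echo array_pop($stack);    // prints "b"

// Queue emulation with a plain array (FIFO)
$queue = array();
array_push($queue, 'first');
array_push($queue, 'second');
echo array_shift($queue);  // prints "first"

// The SPL equivalent, if you prefer an explicit type
$splStack = new SplStack();
$splStack->push('a');
$splStack->push('b');
echo $splStack->pop();     // prints "b"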
{ "language": "en", "url": "https://stackoverflow.com/questions/22401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: How do I convert images between CMYK and RGB in ColdFusion (Java)? I have a need to convert images from CMYK to RGB - not necessarily back again, but hey, if it can be done... With the release of ColdFusion 8, we got the CFImage tag, but it doesn't support this conversion; nor does Image.cfc, or Alagad's Image Component. However, it should be possible in Java, which we can leverage through CF. For example, here's how you might create a Java thread to sleep a process:

<cfset jthread = createObject("java", "java.lang.Thread")/>
<cfset jthread.sleep(5000)/>

I would guess a similar method could be used to leverage Java to do this image conversion, but not being a Java developer, I don't have a clue where to start. Can anyone lend a hand here?

A: I use the Java ImageIO libraries (https://jai-imageio.dev.java.net). They aren't perfect, but can be simple and get the job done. As far as converting from CMYK to RGB, here is the best I have been able to come up with. Download and install the ImageIO JARs and native libraries for your platform. The native libraries are essential. Without them the ImageIO JAR files will not be able to detect the CMYK images. Originally, I was under the impression that the native libraries would improve performance but were not required for any functionality. I was wrong. The only other thing that I noticed is that the converted RGB images are sometimes much lighter than the CMYK images. If anyone knows how to solve that problem, I would be appreciative. Below is some code to convert a CMYK image into an RGB image of any supported format. Thank you, Randy Stegbauer

package cmyk;

import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorConvertOp;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.apache.commons.lang.StringUtils;

public class Main {

    /**
     * Creates new RGB images from all the CMYK images passed
     * in on the command line.
     * The new filename generated is, for example, "GIF_original_filename.gif".
     */
    public static void main(String[] args) {
        for (int ii = 0; ii < args.length; ii++) {
            String filename = args[ii];
            boolean cmyk = isCMYK(filename);
            System.out.println(cmyk + ": " + filename);
            if (cmyk) {
                try {
                    String rgbFile = cmyk2rgb(filename);
                    System.out.println(isCMYK(rgbFile) + ": " + rgbFile);
                } catch (IOException e) {
                    System.out.println(e.getMessage());
                }
            }
        }
    }

    /**
     * If 'filename' is a CMYK file, then convert the image into RGB,
     * store it into a JPEG file, and return the new filename.
     *
     * @param filename
     */
    private static String cmyk2rgb(String filename) throws IOException {
        // Change this format into any ImageIO supported format.
        String format = "gif";
        File imageFile = new File(filename);
        String rgbFilename = filename;
        BufferedImage image = ImageIO.read(imageFile);
        if (image != null) {
            int colorSpaceType = image.getColorModel().getColorSpace().getType();
            if (colorSpaceType == ColorSpace.TYPE_CMYK) {
                BufferedImage rgbImage = new BufferedImage(
                        image.getWidth(), image.getHeight(),
                        BufferedImage.TYPE_3BYTE_BGR);
                ColorConvertOp op = new ColorConvertOp(null);
                op.filter(image, rgbImage);
                rgbFilename = changeExtension(imageFile.getName(), format);
                rgbFilename = new File(imageFile.getParent(), format + "_" + rgbFilename).getPath();
                ImageIO.write(rgbImage, format, new File(rgbFilename));
            }
        }
        return rgbFilename;
    }

    /**
     * Change the extension of 'filename' to 'newExtension'.
     *
     * @param filename
     * @param newExtension
     * @return filename with new extension
     */
    private static String changeExtension(String filename, String newExtension) {
        String result = filename;
        // Note: the original code had a stray ';' after this 'if', which made
        // the condition a no-op; it is removed here.
        if (filename != null && newExtension != null && newExtension.length() != 0) {
            int dot = filename.lastIndexOf('.');
            if (dot != -1) {
                result = filename.substring(0, dot) + '.' + newExtension;
            }
        }
        return result;
    }

    private static boolean isCMYK(String filename) {
        boolean result = false;
        BufferedImage img = null;
        try {
            img = ImageIO.read(new File(filename));
        } catch (IOException e) {
            System.out.println(e.getMessage() + ": " + filename);
        }
        if (img != null) {
            int colorSpaceType = img.getColorModel().getColorSpace().getType();
            result = colorSpaceType == ColorSpace.TYPE_CMYK;
        }
        return result;
    }
}

A: A very simple formula for converting from CMYK to RGB, ignoring all color profiles, is:

R = ((255-C)*(255-K)) / 255;
G = ((255-M)*(255-K)) / 255;
B = ((255-Y)*(255-K)) / 255;

This code requires CMYK values to be in the range of 0-255. If you have 0 to 100 or 0.0 to 1.0, you'll have to convert the values. Hope this will get you started. As for the Java and ColdFusion interfacing, I'm sorry, but I have no idea how to do that.

A: The tag cfx_image may be of use to you. I haven't used it in a while, but I remember it had a ton of features. Alternatively, you might be able to script a Windows app such as IrfanView (via command line using cfexecute) to process images. Hope that helps

A: I know that this question is old, but I still encounter problems with CMYK images & ColdFusion. However, I just read a CMYK JPEG image using ColdFusion 10 and resaved it. The saved image was able to be read using ColdFusion 9 (which is only capable of reading RGB JPEGs). I'm not sure if this conversion is intentional or not, and I don't currently have any way of identifying whether the source image's color profile is CMYK or not, as the saved color profile still appears to be the same.

<cfset imgData = ImageRead(expandPath("./CMYK_image.jpg"))>
<cfset ImageWrite(imgData, expandPath("./Saved_image.jpg"))>
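For completeness, the naive formula above could be wrapped in a small Java helper along these lines - a sketch only, ignoring ICC color profiles, with all names hypothetical:

// Naive per-channel CMYK -> RGB conversion, assuming all values are 0-255.
public final class NaiveCmyk {
    public static int[] cmykToRgb(int c, int m, int y, int k) {
        int r = (255 - c) * (255 - k) / 255;
        int g = (255 - m) * (255 - k) / 255;
        int b = (255 - y) * (255 - k) / 255;
        return new int[] { r, g, b };
    }
}

From ColdFusion, you could reach a class like this the same way the question reaches java.lang.Thread, via createObject("java", ...).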
{ "language": "en", "url": "https://stackoverflow.com/questions/22409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: SQL Query Help - Scoring Multiple Choice Tests Say I have a Student table; it's got an int ID. I have a fixed set of 10 multiple choice questions with 5 possible answers. I have a normalized answer table that has the question id, the Student.answer (1-5) and the Student.ID. I'm trying to write a single query that will return all scores over a certain percentage. To this end I wrote a simple UDF that accepts the Student.answers and the correct answers, so it has 20 parameters. I'm starting to wonder if it's better to denormalize the answer table, bring it into my application and let my application do the scoring. Anyone ever tackle something like this and have insight?

A: If I understand your schema and question correctly, how about something like this:

select student_name, score
from students
join (select student_answers.student_id, count(*) as score
      from student_answers, answer_key
      where student_answers.question_id = answer_key.question_id
        and student_answers.answer = answer_key.answer
      group by student_id) as student_scores
  on students.student_id = student_scores.student_id
where score >= 7
order by score, student_name

That should select the students with a score of 7 or more, for example. Just adjust the where clause for your purposes.

A: I would probably leave it up to your application to perform the scoring. Check out Maybe Normalizing Isn't Normal by Jeff Atwood.

A: The architecture you are talking about could become very cumbersome in the long run, and if you need to change the questions it means more changes to the UDF you are using. I would think you could probably do your analysis in code without necessarily denormalizing your database. Denormalization could also lend to inflexibility, or at least added expense to update, down the road.

A: No way, you definitely want to keep it normalized. It's not even that hard of a query. Basically, you want to join the students' correct answers with the total answers for each question, and do a count. This will give you the percent correct. Do that for each student, and put the minimum percent correct in a where clause.

A: Denormalization is generally considered a last resort. The problem seems very similar to survey applications, which are very common. Without seeing your data model, it's difficult to propose a solution, but I will say that it is definitely possible. I'm wondering why you need 20 parameters to that function? A relational set-based solution will be simpler and faster in most cases.

A: This query should be quite easy... assuming you have the correct answer stored in the question table. You do have the correct answer stored in the question table, right?
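To make the set-based approach above concrete, here is a hedged sketch - the table and column names are assumptions based on the first answer, and the fixed 10-question count comes from the question:

-- Hypothetical schema, following the names used in the first answer
CREATE TABLE answer_key (question_id INT PRIMARY KEY, answer INT);
CREATE TABLE student_answers (student_id INT, question_id INT, answer INT);

-- Percentage score per student (10 questions), filtered at 70%;
-- the inner join only matches rows where the given answer is correct.
SELECT sa.student_id,
       COUNT(ak.question_id) * 100.0 / 10 AS pct_correct
FROM student_answers sa
JOIN answer_key ak
  ON ak.question_id = sa.question_id
 AND ak.answer = sa.answer
GROUP BY sa.student_id
HAVING COUNT(ak.question_id) * 100.0 / 10 >= 70;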
{ "language": "en", "url": "https://stackoverflow.com/questions/22417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Adding Inline Search function to web page Is it possible to embed an inline search box into a web page which provides similar functionality to the IE7Pro Inline Search or similar plugins for Firefox/Safari?

A: The jQuery inline search plugin provides this functionality.

A: If I understand your question, you are asking if it is possible to allow a user to type in a query that will search the text of the page they are on? You can certainly do that. I would suggest looking into one of the JavaScript libraries for your functionality; jQuery is my library of choice. It has a rich selector syntax that allows you to search various parts of the page easily, without worrying about cross-browser coding yourself.
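A rough sketch of how such an inline page search might start with jQuery, as the answers suggest - the #find input id and the choice to highlight matching paragraphs are assumptions, not part of any plugin:

// Find-as-you-type over the page text; highlights <p> elements
// whose text contains the typed term.
$('#find').keyup(function () {
    var term = $(this).val().toLowerCase();
    $('p').each(function () {
        var hit = term.length > 0 &&
                  $(this).text().toLowerCase().indexOf(term) !== -1;
        $(this).css('background-color', hit ? '#ffff99' : '');
    });
});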
{ "language": "en", "url": "https://stackoverflow.com/questions/22429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Search strategies in ORMs I am looking for information on handling search in different ORMs. Currently I am redeveloping an old application in PHP, and one of the requirements is: make everything or almost everything searchable, so the user just types "punkrock live" and the app finds video clips, music tracks, reviews, upcoming events or even user comments labeled that way. In an environment where everything is searchable, the ORM needs to support this feature in two ways: * *providing some indexing API on the "O" side of the ORM *providing means for bulk database retrieval on the "R" side The ideal solution would return ready-made objects based on the searched string. Do you know any good end-to-end solutions that do the job, not necessarily in PHP? If you have dealt with a similar problem, it would be nice to hear what your experience is. Something more than "use Lucene" or "semantic web is the way" one-liners, though ;-)

A: I have recently integrated the Compass search engine into a Java EE 5 application. It is based on Lucene Java and supports different ORM frameworks as well as other types of models like XML or no real model at all ;) In the case of an object model managed by an ORM framework, you can annotate your classes with special annotations (e.g. @Searchable), register your classes, and let Compass index them on application startup and listen to changes to the model automatically. When it comes to searching, you have the power of Lucene at hand. Compass then gives you instances of your model objects as search results. It's not PHP, but you said it didn't have to be PHP necessarily ;) Don't know if this helps, though...

A: In a Propel 1.3 schema.xml file, you can specify that you'd like all your models to extend a "BaseModel" class that YOU create. In that BaseModel, you're going to re-define the save() method to be something like this:

public function save(PropelPDO $con = null)
{
    if ($this->getIsSearchable()) {
        // update your search index here. Lucene, Sphinx, or otherwise
    }
    return parent::save($con);
}

That takes care of keeping everything indexed. As for searching, I'd suggest creating a Search class with a few methods.

class Search
{
    protected $_searchableTypes = array('music', 'video', 'blog');

    public function findAll($search_term)
    {
        $results = array();
        foreach ($this->_searchableTypes as $type) {
            // findType($type, $search_term) would be implemented to query
            // the index for a single content type
            $results[] = $this->findType($type, $search_term);
        }
        return $results;
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/22431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: My regex is matching too much. How do I make it stop? I have this gigantic ugly string:

J0000000: Transaction A0001401 started on 8/22/2008 9:49:29 AM
J0000010: Project name: E:\foo.pf
J0000011: Job name: MBiek Direct Mail Test
J0000020: Document 1 - Completed successfully

I'm trying to extract pieces from it using regex. In this case, I want to grab everything after Project Name up to the part where it says J0000011: (the 11 is going to be a different number every time). Here's the regex I've been playing with:

Project name:\s+(.*)\s+J[0-9]{7}:

The problem is that it doesn't stop until it hits the J0000020: at the end. How do I make the regex stop at the first occurrence of J[0-9]{7}?

A: Well, ".*" is a greedy selector. You make it non-greedy by using ".*?". When using the latter construct, the regex engine will, at every step it matches text into the ".", attempt to match whatever may come after the ".*?". This means that if, for instance, nothing comes after the ".*?", then it matches nothing. Here's what I used. s contains your original string. This code is .NET specific, but most flavors of regex will have something similar.

string m = Regex.Match(s, @"Project name: (?<name>.*?) J\d+").Groups["name"].Value;

A: Make .* non-greedy by adding '?' after it:

Project name:\s+(.*?)\s+J[0-9]{7}:

A: Using non-greedy quantifiers here is probably the best solution, also because it is more efficient than the greedy alternative: greedy matches generally go as far as they can (here, until the end of the text!) and then backtrack character after character to try and match the part coming afterwards. However, consider using a negative character class instead:

Project name:\s+(\S*)\s+J[0-9]{7}:

\S means "everything except a whitespace", and this is exactly what you want.

A: I would also recommend you experiment with regular expressions using "Expresso" - a great (and free) utility for regex editing and testing. One of its upsides is that its UI exposes a lot of regex functionality that people inexperienced with regex might not be familiar with, in a way that makes it easy for them to learn these new concepts. For example, when building your regex using the UI and choosing "*", you have the ability to check the checkbox "As few as possible" and see the resulting regex, as well as test its behavior, even if you were unfamiliar with non-greedy expressions before. Available for download at their site: http://www.ultrapico.com/Expresso.htm Expresso download: http://www.ultrapico.com/ExpressoDownload.htm

A: (Project name:\s+[A-Z]:(?:\\w+)+.[a-zA-Z]+\s+J[0-9]{7})(?=:) This will work for you. Using (?:\\w+)+.[a-zA-Z]+ instead of .* makes the pattern more restrictive.
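To see the greedy/lazy difference in action, a small self-contained .NET example (C#), using the question's own input string:

using System;
using System.Text.RegularExpressions;

class GreedyDemo
{
    static void Main()
    {
        string s = "J0000000: Transaction A0001401 started on 8/22/2008 9:49:29 AM "
                 + "J0000010: Project name: E:\\foo.pf "
                 + "J0000011: Job name: MBiek Direct Mail Test "
                 + "J0000020: Document 1 - Completed successfully";

        // Greedy: .* runs to the last J#######: it can find.
        // Prints: E:\foo.pf J0000011: Job name: MBiek Direct Mail Test
        Console.WriteLine(Regex.Match(s, @"Project name:\s+(.*)\s+J[0-9]{7}:").Groups[1].Value);

        // Lazy: .*? stops at the first J#######:.
        // Prints: E:\foo.pf
        Console.WriteLine(Regex.Match(s, @"Project name:\s+(.*?)\s+J[0-9]{7}:").Groups[1].Value);
    }
}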
{ "language": "en", "url": "https://stackoverflow.com/questions/22444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: memset() causing data abort I'm getting some strange, intermittent data aborts (< 5% of the time) in some of my code, when calling memset(). The problem is that it usually doesn't happen unless the code is running for a couple of days, so it's hard to catch it in the act. I'm using the following code:

char *msg = (char*)malloc(sizeof(char)*2048);
char *temp = (char*)malloc(sizeof(char)*1024);
memset(msg, 0, 2048);
memset(temp, 0, 1024);
char *tempstr = (char*)malloc(sizeof(char)*128);

sprintf(temp, "%s %s/%s %s%s", EZMPPOST, EZMPTAG, EZMPVER, TYPETXT, EOL);
strcat(msg, temp);

//Add Data
memset(tempstr, '\0', 128);
wcstombs(tempstr, gdevID, wcslen(gdevID));
sprintf(temp, "%s: %s%s", "DeviceID", tempstr, EOL);
strcat(msg, temp);

As you can see, I'm not trying to use memset with a size larger than what's originally allocated with malloc(). Anyone see what might be wrong with this?

A: There's a couple of things. You're using sprintf, which is inherently unsafe; unless you're 100% positive that you're not going to exceed the size of the buffer, you should almost always prefer snprintf. The same applies to strcat; prefer the safer alternative strncat. Obviously this may not fix anything, but it goes a long way in helping you spot bugs that might otherwise be very annoying to track down.

A: malloc can return NULL if no memory is available. You're not checking for that. Right you are... I didn't think about that, as I was monitoring the memory and there was enough free. Is there any way for there to be available memory on the system but for malloc to fail? Yes, if memory is fragmented. Also, when you say "monitoring memory," there may be something on the system which occasionally consumes a lot of memory and then releases it before you notice. If your call to malloc occurs then, there won't be any memory available. -- Joel Either way... I will add that check :)

A: malloc can return NULL if no memory is available. You're not checking for that.

A: wcstombs doesn't get the size of the destination, so it can, in theory, buffer overflow. And why are you using sprintf with what I assume are constants? Just use:

EZMPPOST " " EZMPTAG "/" EZMPVER " " TYPETXT EOL

C and C++ combine adjacent string literals into a single string.

A: Have you tried using Valgrind? That is usually the fastest and easiest way to debug these sorts of errors. If you are reading or writing outside the bounds of allocated memory, it will flag it for you.

A: You're using sprintf which is inherently unsafe; unless you're 100% positive that you're not going to exceed the size of the buffer, you should almost always prefer snprintf. The same applies to strcat; prefer the safer alternative strncat. Yeah..... I mostly do .NET lately and old habits die hard. I likely pulled that code out of something else that was written before my time... But I'll try not to use those in the future ;)

A: You know, it might not even be your code... Are there any other programs running that could have a memory leak?

A: It could be your processor. Some CPUs can't address single bytes, and require you to work in words or chunk sizes, or have instructions that can only be used on word or chunk aligned data. Usually the compiler is made aware of these and works around them, but sometimes you can malloc a region as bytes, and then try to address it as a structure or wider-than-a-byte field, and the compiler won't catch it, but the processor will throw a data exception later. It wouldn't happen unless you're using an unusual CPU. ARM9 will do that, for example, but i686 won't. I see it's tagged windows mobile, so maybe you do have this CPU issue.

A: Instead of doing malloc followed by memset, you should be using calloc, which will clear the newly allocated memory for you. Other than that, do what Joel said.

A: NB: borrowed some comments from other answers and integrated into a whole. The code is all mine...

* *Check your error codes. E.g. malloc can return NULL if no memory is available. This could be causing your data abort. *sizeof(char) is 1 by definition *Use snprintf, not sprintf, to avoid buffer overruns * *If EZMPPOST etc. are constants, then you don't need a format string; you can just combine several string literals as STRING1 " " STRING2 " " STRING3 and strcat the whole lot. *You are using much more memory than you need to. *With one minor change, you don't need to call memset in the first place. Nothing really requires zero initialisation here.

This code does the same thing, safely, runs faster, and uses less memory.

// sizeof(char) is 1 by definition. This memory does not require zero
// initialisation. If it did, I'd use calloc.
const int max_msg = 2048;
char *msg = (char*)malloc(max_msg);
if (!msg) {
    // Allocation failure
    return;
}

// Use snprintf instead of sprintf to avoid buffer overruns.
// We write directly to msg, instead of using a temporary buffer and then
// calling strcat. This saves CPU time, saves the temporary buffer, and
// removes the need to zero initialise msg.
snprintf(msg, max_msg, "%s %s/%s %s%s", EZMPPOST, EZMPTAG, EZMPVER, TYPETXT, EOL);

//Add Data
size_t len = wcslen(gdevID);

// No need to zero init this. One extra byte is allocated for the
// terminating NUL, which wcstombs will not write when the output is
// exactly len bytes.
char* temp = (char*)malloc(len + 1);
if (!temp) {
    free(msg);
    return;
}
wcstombs(temp, gdevID, len);
temp[len] = '\0';

// No need to use a temporary buffer - just append directly to the msg,
// protecting against buffer overruns.
snprintf(msg + strlen(msg), max_msg - strlen(msg), "%s: %s%s", "DeviceID", temp, EOL);
free(temp);
{ "language": "en", "url": "https://stackoverflow.com/questions/22459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: I'm looking for a Windows hosting provider that supports custom OS images (like AMZN EC2) I've come to love Amazon's EC2 service and I'm looking for something similar that supports the ability to save a running Windows server image and start new instances from it. I contacted GoGrid (the feature is planned in the future) and Mosso (no joy). Anyone know of any hosting/cloud providers that can do this?

A: I have just received a message from Amazon to the effect that they will be supporting Windows Server on EC2 this fall. Wahaay!!

A: Seems like dealing with licensing issues would be nightmarish for the host.

A: I can't see how licensing would be any different than for a co-lo provider.

A: AT&T's Synaptic Hosting allows this, though I'm sure you'll pay through the nose, as it's "enterprise".

A: GoGrid has announced it will offer the feature this fall. Look at http://www.flexihost.co.uk and http://www.elastichosts.com (both in the UK) as well as http://www.appnexus.com. I have heard that www.rightscale.com (it's kind of a nice interface to EC2 hosting) will support Windows sometime in the future. With a lot of manual tweaking you can already run Windows on EC2 using virtualization inside a Linux virtual machine: http://www.howtoforge.com/amazon_elastic_compute_cloud_qemu I have a short list of cloud hosting companies on my website: http://hotware.wordpress.com/2008/07/25/a-short-cloud-hosting-link-list/
{ "language": "en", "url": "https://stackoverflow.com/questions/22465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: jQuery AJAX vs. UpdatePanel We've got a page with a ton of jQuery (approximately 2000 lines) that we want to trim down because it is a maintenance nightmare, and it might be easier to maintain on the server. We've thought about using UpdatePanel for this. However, we don't like the fact that the UpdatePanel is sending the whole page back to the server.

A: I don't know if there is a way to optimize UpdatePanels, but my company has found their performance to be pretty poor. jQuery is much, much faster at doing pretty much anything. There can be a lot of lag between the time when an UpdatePanel triggers an update and when the UpdatePanel actually updates the page. The only reason we use UpdatePanels is because of the ease of development. Almost nothing needs to be done to make them work.

A: Don't move to UpdatePanels. After coming from jQuery, the drop in performance would be untenable, especially on a page as complex as yours sounds. If you have 2,000 lines of JavaScript code, the solution is to refactor that code. If you put 2,000 lines of C# code in one file, it would be difficult to maintain too. That would be difficult to manage effectively with any language or tool. If you're using 3.5 SP1, you can use the ScriptManager's new script combining to separate your JavaScript into multiple files with no penalty. That way, you can logically partition your code just as you would with server side code.

A: Using UpdatePanel forces you to use ScriptManager, which adds tons of scripts to your webpages. UpdatePanel gives you partial postbacks, not real AJAX. If your app will run only on a LAN and not the internet, that's OK, but if your target is the internet, try refactoring your code and compressing it with some tools before publishing the website.

A: Please don't put yourself in that world of pain. Instead use UFRAME, which is a lot faster and is implemented in jQuery. Now, to manage those 2000 lines of JavaScript code, I recommend splitting the code into different files and setting up your build process to join them into chunks using JSMin or Yahoo Compressor.
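As a concrete illustration of the script-combining suggestion above, here is a sketch of the 3.5 SP1 CompositeScript feature - the script paths are hypothetical:

<%-- Combines several script files into one request at runtime --%>
<asp:ScriptManager ID="sm" runat="server">
    <CompositeScript>
        <Scripts>
            <asp:ScriptReference Path="~/js/grid.js" />
            <asp:ScriptReference Path="~/js/validation.js" />
            <asp:ScriptReference Path="~/js/helpers.js" />
        </Scripts>
    </CompositeScript>
</asp:ScriptManager>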
{ "language": "en", "url": "https://stackoverflow.com/questions/22466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: What's a good machine for iPhone development? I'm going to be working on some iPhone apps so I've been given a golden ticket to buy a Mac. However the golden ticket isn't worth that much, and I'm thinking iMac. Now, Macs are great, I love 'em and use 'em at home, but I know that the iMac is geared more towards the average consumer than the professional. Is an iMac going to be powerful enough to do iPhone development on? If it helps any, the only thing I envision doing on the Mac is running XCode and maybe a web browser. Is there anybody out there doing iPhone development and having trouble running the required tools on their machine? If so, what do you have? A: The iMac is a very fast machine and will be more than suitable for iPhone development. In most cases, a Mac Mini with maxed out ram is good enough. Keep in mind that out of the box, the Mac Mini can only accommodate one monitor. A: My main Mac is a MacBook, juiced with 4GB of RAM, and I find that perfectly good for development (in either Windows or OSX). I could have got an iMac for a bit more money, but I already had a 20" LCD monitor laying around, and I wanted the portability. The MacBooks are as powerful as the iMacs (excluding the graphics card, which may or may not be important to you; it wasn't to me), and are perfectly suitable for development. A: I'd say that any of the current iMac models are more that good enough for development with their dual core, 6MB cache, 1066MHz FSB cpus. You might consider going with more than 1GB of ram, but compare aftermarket prices at places like NewEgg to what Apple wants (for example, I upgraded my MacBook Pro to 4GB for hundreds less than getting installed from Apple). Which model you picked would be more about HD and LCD size and how much you have to spend. A: I run XCode for Mac development on a 20" current-gen iMac and it works perfectly with plenty of other processes running. You can definitely use the iMac to develop software. A: An iMac is easily powerful enough to use for development work. A: I run XCode on a current-gen Macbook with only ONE GB of RAM and it runs fine, so long as I limit the amount of total applications running. A: You aren't gonna have a problem running Xcode on an iMac. Any iMac. Any development project can be done on an iMac. They're fast and modern machines. The cheapest iMac has a Dual Core Duo 2 chip with 1 gig RAM. Boost the RAM to 2 if you can (a cheap option - cheaper if you buy 3rd party RAM). Makes a huge difference running OSX. A: In terms of power, any current Mac is fine for iPhone development. You might want to consider other factors that depend on how you like to work. Do you like to sometimes grab the machine and just get in a different work environment (or show your stuff to people)? The MacBooks are comparable power-wise, but give you that freedom. Can you work with glossy screens, or do they irritate you? In the latter case, an iMac or MacBook may be suboptimal and you should make sure that you get a larger, non-glossy display as main screen. A second display is generally very helpful for development, so you might want to have one anyway. And you will indeed want to push RAM to at least 2GB (4GB are nice of course, but not absolutely necessary). A: I would suggest going for a maxed out Mac Mini and the best monitor you can fit in your budget. Bear in mind that both the iMac and the Mac Mini are essentially laptops (in terms of their internal components). Admittedly, the iMac has a large screen (as laptops go) and a proper hard drive. 
A: I run XCode on a 17" iMac (2 yrs old) with 2GB of RAM and haven't had any trouble.

A: I'm managing just fine on a Mac mini. It only has the stock 1GB RAM at the moment, so that's the current bottleneck.

A: Developing for the iPhone isn't particularly intensive work; the only way to go up from an iMac is the Mac Pro, which I assume you can't afford. The only reason to go up to a Mac Pro is if you're doing video or image work where you really need the horsepower. I saw a chart in MacFormat this month that suggested the base iMac was faster than the base Mac Pro anyway, although obviously there's more room for expansion in the Pro. Buy more RAM, though; up it to 4Gb, you won't regret it.

A: Any modern Mac will be fine. I work on a two year old MacBook (2GHz) with 2Gb of memory and it's perfectly usable. The biggest constraint I find is screen real estate. I am way more productive on my 22" external screen. Go big if you get an iMac, or consider adding an external monitor to the base model.

A: I often use my PowerMac G5. Sure, you need to hack the developer tools to install on a PPC and there are some Device SDK issues, but it runs. Oh wait, you said "good". Nevermind.

A: I've bought the mid 2010 Unibody Mac mini and it's a good machine to do iPhone development. I didn't want to spend a lot of money buying a new computer, so I opted for the bare minimum necessary to develop for iPhone. The post below shows my impressions about it... Learning to develop for iPhone with a Mac mini

A: I'm also thinking of buying a Mac. I wanted to create a new question, but now I'm trying to ask with this 'answer'. There are a few possibilities: *iMac: Powerful hardware, large screen (27") -> perfect for development *MacBook Pro: portable, but you need a bigger screen than 13" -> expensive *Mac mini: small, no noise, as powerful as the 13" MacBook Pro, cheap; you need an external display and a RAM upgrade I have worked for a few months on a 13" MacBook Pro, but you really need a second screen if you want to develop (although scrolling with the touchpad is very easy). The hardware (2.66 Core2Duo, 4 GB RAM, 320 GB) was strong enough for development with Xcode. But how often do you really need a portable solution? Most of the time I was working in the same place. A 27" iMac would be great for that, but isn't as cheap as a Mac mini. You could buy a Mac mini with three 23" IPS panels (1080p) for the same money (including a Matrox DualHead2Go), but it's not as powerful as the 27" iMac with i5-680. Questions: *Is portability for you essential or a nice-to-have? *What is better for Xcode? More GHz or more cores? *What brings a faster experience? A faster CPU (e.g. 400 MHz faster) or an SSD instead? The best solution would be an iMac and a MacBook, I think. But for the beginning it's too much money. PS: you also need a device too. The cheapest device is an 8GB iPod Touch 4G.

A: Please get a Mac that has an SSD, either the MacBook Air, or the built-to-order options. Compiling big frameworks such as Three20 would be at least 2 to 3 times faster. Xcode 4 should open a lot faster with an SSD drive.

A: As with all development, screen size is paramount, so I would suggest the 24" iMac if your golden ticket stretches that far, or a Mac mini with a large (probably non-Apple) monitor if it doesn't.

A: The only other comment I have is that sometimes I wish I had the portable so I could code on the train, plane or sitting in the park!
I bought an iMac and have had no problems whatsoever developing my 'simple' app except for the scrolling thingee freezing on me sometimes.
{ "language": "en", "url": "https://stackoverflow.com/questions/22469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: How do I display records containing specific information in SQL How do I select all records that contain "LCS" within the title column in SQL?

A: SELECT * FROM TABLE WHERE TABLE.TITLE LIKE '%LCS%';

% is the wildcard matcher.

A: Look into the LIKE clause

A: Are you looking for all the tables with a column name which contains LCS in them? If yes, then do this:

select table_name
from information_schema.columns
where column_name like '%lcs%'
{ "language": "en", "url": "https://stackoverflow.com/questions/22474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using cached credentials to connect to SQL 2005 across a domain boundary Ever since moving to Vista some time ago on my development machine, connecting to SQL Servers in our DMZ Active Directory domain from client tools like SSMS has not worked like it used to. In XP, as long as I had authenticated in some way on the server (for example directing Explorer to \\server.dmzdomain\c$ and entering valid creds into the login prompt), SSMS would use those cached credentials to connect. However, since switching to Vista, when trying to connect SSMS to a server in the DMZ domain I get the message Login failed for user ''. The user is not associated with a trusted SQL Server connection. If I change the connection options to use Named Pipes instead of the default TCP/IP, my cached credentials are sent and everything works fine. This is the case whether Windows Firewall is off or on, and connections to servers in our internal domain (the same domain my dev PC is in) work fine over TCP/IP or named pipes. I don't mind too much using named pipes for these connections as a workaround, but it seems like TCP/IP is the recommended connection method and I don't like not understanding why it's not working as I'd expect. Any ideas?

A: "Login Failed for user ' ', the user is not associated with a trusted SQL Server connection". In this scenario, the client may make a TCP connection; plus, running under a local admin or non-admin machine account, no matter whether the SPN is registered or not, the client credential is obviously not recognized by SQL Server. The workaround here is: create the same account as the one on the client machine, with the same password, on the target SQL Server machine, and grant appropriate permission to the account. Let's explain in more detail: when you create the same NT account (let's call it usr1) on both workstations, you essentially connect and impersonate the local account of the connecting station. I.e., when you connect from station1 to station2, you're being authenticated via station2's account. So, if you set the startup account for SQL Server (let's assume it's running on station2) to be station2's usr1, when you connect to SQL from station1 with station1's usr1 login, SQL will authenticate you as station2's usr1. Now, within SQL, you can definitely access station1's resources. Though, how much access will depend on station1's usr1 permission. So far, SQL only deals with a user who is part of the sysadmin role within SQL Server. To allow other users (non-sysadmin) access to network resources, you will have to set the proxy account. Take a look at the article for additional info. Taken from http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

A: Have you tried running SSMS in elevated mode, and do you have the latest SP installed on the client?

A: I would assume that this is because Vista runs most applications in isolation from each other. I would recommend that you either set the DMZ username and password to match the internal domain username and password, or use named pipes to connect.
{ "language": "en", "url": "https://stackoverflow.com/questions/22493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the major differences between ANSI C and K&R C? The Wikipedia article on ANSI C says: One of the aims of the ANSI C standardization process was to produce a superset of K&R C (the first published standard), incorporating many of the unofficial features subsequently introduced. However, the standards committee also included several new features, such as function prototypes (borrowed from the C++ programming language), and a more capable preprocessor. The syntax for parameter declarations was also changed to reflect the C++ style. That makes me think that there are differences. However, I didn't see a comparison between K&R C and ANSI C. Is there such a document? If not, what are the major differences? EDIT: I believe the K&R book says "ANSI C" on the cover. At least I believe the version that I have at home does. So perhaps there isn't a difference anymore?

A: * *function prototypes *the const and volatile qualifiers *wide character support and internationalization *permitting function pointers to be used without dereferencing

A: There may be some confusion here about what "K&R C" is. The term refers to the language as documented in the first edition of "The C Programming Language." Roughly speaking: the input language of the Bell Labs C compiler circa 1978. Kernighan and Ritchie were involved in the ANSI standardization process. The "ANSI C" dialect superseded "K&R C", and subsequent editions of "The C Programming Language" adopt the ANSI conventions. "K&R C" is a "dead language," except to the extent that some compilers still accept legacy code.

A: Another difference is that function return types and parameter types did not need to be defined. They would be assumed to be ints.

f(x) { return x + 1; }

and

int f(x)
int x;
{ return x + 1; }

are identical.

A: The major differences between ANSI C and K&R C are as follows: * *function prototyping *support of the const and volatile data type qualifiers *support for wide characters and internationalization *permitting function pointers to be used without dereferencing ANSI C adopts the C++ function prototype technique, where function definitions and declarations include function names, argument data types, and return value data types. Function prototypes enable ANSI C compilers to check for function calls in user programs that pass invalid numbers of arguments or incompatible argument data types. These fix a major weakness of K&R C compilers. Example: to declare a function foo that requires two arguments:

unsigned long foo(char* fmt, double data)
{
    /* body of foo */
}

A: The difference is: * *Prototypes *wide character support and internationalisation *Support for the const and volatile keywords *permitting function pointers to be used without dereferencing

A: *FUNCTION PROTOTYPING: ANSI C adopts the C++ function prototype technique, where function definition and declaration include function names, argument data types, and return value data types. Function prototypes enable ANSI C compilers to check for function calls in user programs that pass an invalid number of arguments or incompatible argument data types. These fix a major weakness of the K&R C compilers: invalid calls in user programs often passed compilation but caused programs to crash when they were executed.

A: A major difference nobody has yet mentioned is that before ANSI, C was defined largely by precedent rather than specification; in cases where certain operations would have predictable consequences on some platforms but not others (e.g. using relational operators on two unrelated pointers), precedent strongly favored making platform guarantees available to the programmer. For example: * *On platforms which define a natural ranking among all pointers to all objects, application of the relational operators to arbitrary pointers could be relied upon to yield that ranking. *On platforms where the natural means of testing whether one pointer is "greater than" another never has any side-effect other than yielding a true or false value, application of the relational operators to arbitrary pointers could likewise be relied upon never to have any side-effects other than yielding a true or false value. *On platforms where two or more integer types shared the same size and representation, a pointer to any such integer type could be relied upon to read or write information of any other type with the same representation. *On two's-complement platforms where integer overflows naturally wrap silently, an operation involving an unsigned value smaller than "int" could be relied upon to behave as though the value was unsigned in cases where the result would be between INT_MAX+1u and UINT_MAX and it was not promoted to a larger type, nor used as the left operand of >>, nor either operand of /, %, or any comparison operator. Incidentally, the rationale for the Standard gives this as one of the reasons small unsigned types promote to signed. Prior to C89, it was unclear to what lengths compilers for platforms where the above assumptions wouldn't naturally hold might be expected to go to uphold those assumptions anyway, but there was little doubt that compilers for platforms which could easily and cheaply uphold such assumptions should do so. The authors of the C89 Standard didn't bother to expressly say that because: * *Compilers whose writers weren't being deliberately obtuse would continue doing such things when practical without having to be told (the rationale given for promoting small unsigned values to signed strongly reinforces this view). *The Standard only required implementations to be capable of running one possibly-contrived program without a stack overflow, and recognized that an obtuse implementation could treat any other program as invoking Undefined Behavior, but the authors didn't think it was worth worrying about obtuse compiler writers writing implementations that were "conforming" but useless. Although "C89" was interpreted contemporaneously as meaning "the language defined by C89, plus whatever additional features and guarantees the platform provides", the authors of gcc have been pushing an interpretation which excludes any features and guarantees beyond those mandated by C89.

A: Function prototypes were the most obvious change between K&R C and C89, but there were plenty of others. A lot of important work went into standardizing the C library, too. Even though the standard C library was a codification of existing practice, it codified multiple existing practices, which made it more difficult. P.J. Plauger's book, The Standard C Library, is a great reference, and also tells some of the behind-the-scenes details of why the library ended up the way it did. The ANSI/ISO standard C is very similar to K&R C in most ways. It was intended that most existing C code should build on ANSI compilers without many changes. Crucially, though, in the pre-standard era, the semantics of the language were open to interpretation by each compiler vendor. ANSI C brought in a common description of language semantics which put all the compilers on an equal footing. It's easy to take this for granted now, some 20 years later, but this was a significant achievement. For the most part, if you don't have a pre-standard C codebase to maintain, you should be glad you don't have to worry about it. If you do--or worse yet, if you're trying to bring an old program up to more modern standards--then you have my sympathies.

A: There are some minor differences, but I think later editions of K&R are for ANSI C, so there's no real difference anymore. "C Classic", for lack of a better term, had a slightly different way of defining functions, i.e.

int f(p, q, r)
int p;
float q;
double r;
{
    // Code goes here
}

I believe the other difference was function prototypes. Prototypes didn't have to - in fact they couldn't - take a list of arguments or types. In ANSI C they do.

A: The biggest single difference, I think, is function prototyping and the syntax for describing the types of function arguments.

A: Despite all the claims to the contrary, K&R was and is quite capable of providing any sort of stuff from low down close to the hardware on up. The problem now is to find a compiler (preferably free) that can give a clean compile on a couple of million lines of K&R C without having to mess with it, and running on something like an AMD multi-core processor. As far as I can see, having looked at the source of the GCC 4.x.x series, there is no simple hack to reactivate the -traditional and -cpp-traditional flag functionality to their previous working state without more effort than I am prepared to put in. It would be simpler to build a K&R pre-ANSI compiler from scratch.
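Coming back to the prototype point made in several answers above, a small hypothetical illustration of what prototypes actually buy you:

#include <stdio.h>

/* ANSI prototype: the compiler knows the argument must be a double */
double half(double x);

int main(void)
{
    /* With the prototype in scope, the int 5 is converted to 5.0.
       Under K&R rules (no prototype), 5 would be passed as an int
       while the function reads a double - garbage at runtime, and
       no compile-time diagnostic. */
    printf("%f\n", half(5));
    return 0;
}

double half(double x)
{
    return x / 2.0;
}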
{ "language": "en", "url": "https://stackoverflow.com/questions/22500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Formview Dropdown 2 depends on Dropdown 1 I have a form view; in the edit template I have two drop downs. Drop down 1 is explicitly set with a list of allowed values. It is also set to autopostback. Drop down 2 is databound to an ObjectDataSource; this ObjectDataSource uses the first dropdown as one of its parameters. (The idea is that drop down 1 limits what is shown in drop down 2.) On the first view of the edit template for an item it works fine. But if drop down 1 has a different item selected, it posts back and generates an error:

Databinding methods such as Eval(), XPath(), and Bind() can only be used in the context of a databound control.

Here is the drop down list #2:

<asp:DropDownList ID="ProjectList" runat="server"
    SelectedValue='<%# Bind("ConnectToProject_ID","{0:D}") %>'
    DataSourceID="MasterProjectsDataSource2"
    DataTextField="Name" DataValueField="ID"
    AppendDataBoundItems="true">
    <asp:ListItem Value="0" Text="{No Master Project}" Selected="True" />
</asp:DropDownList>

And here is the MasterProjectDataSource2:

<asp:ObjectDataSource ID="MasterProjectsDataSource2" runat="server"
    SelectMethod="GetMasterProjectList" TypeName="WebWorxData.Project">
    <SelectParameters>
        <asp:ControlParameter ControlID="RPMTypeList" Name="RPMType_ID"
            PropertyName="SelectedValue" Type="Int32" />
    </SelectParameters>
</asp:ObjectDataSource>

Any help on how to get this to work would be greatly appreciated.

A: I had a similar problem with bound dropdownlists in a FormView. I worked around it by setting the selected value manually in the FormView's "OnDataBound" (I don't know where you get ConnectToProject_ID from):

FormView fv = (FormView)sender;
DropDownList ddl = (DropDownList)fv.FindControl("ProjectList");
ddl.SelectedValue = String.Format("{0:D}", ConnectToProject_ID);

When you're ready to save, use the "OnItemInserting" event:

FormView fv = (FormView)sender;
DropDownList ddl = (DropDownList)fv.FindControl("ProjectList");
e.Values["ConnectToProject_ID"] = ddl.SelectedValue;

or, when updating, the "OnItemUpdating" event:

FormView fv = (FormView)sender;
DropDownList ddl = (DropDownList)fv.FindControl("ProjectList");
e.NewValues["ConnectToProject_ID"] = ddl.SelectedValue;

A: Sounds like the controls aren't being databound properly after the postback. Are you databinding the first dropdown in the page or in the codebehind? If codebehind, are you doing it in on_init or on_load every time? There might be an issue of the SelectedValue of the second drop down being set to a non-existent item after the postback.

A: Unless your 2nd dropdown is in a databound control (say, a Repeater) - I'm not sure what you're trying to bind SelectedValue to. Apparently, neither is .NET - since that's probably where the error is occurring. Where's Connect_ToProjectId supposed to come from?
{ "language": "en", "url": "https://stackoverflow.com/questions/22503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why is ASP.NET gzip compression corrupting CSS? I have an ASP.NET webforms application (3.5 SP1) that I'm working on, and I'm attempting to enable gzip for HTML and CSS that comes down the pipe. I'm using this implementation (and tried a few others that hook into Application_BeginRequest), and it seems to be corrupting the external CSS file that the pages use, but intermittently... suddenly all styles will disappear on a page refresh, stay that way for awhile, and then suddenly start working again. Both IE7 and FF3 exhibit this behavior. When viewing the CSS using the web developer toolbar, it returns gibberish. The cache-control header is coming through as "private," but I don't know enough to figure out if that's a contributing factor or not. Also, this is running on the ASP.NET Development Server. Maybe it'd be fine with IIS, but I'm developing on XP and it'd be IIS5.

A: Is it only CSS files that get corrupted? Do JS files (or any other static text files) come through OK? Also, can you duplicate the behavior if you browse directly to the CSS file? I've only enabled compression on a Windows 2003 server's IIS using this approach:

*IIS → Web Sites → Properties → Service tab, check both boxes
*IIS → Web Service Extensions → Right click, Add New. Name: Http Compression. Required Files: %systemroot%\system32\inetsrv\gzip.dll
*IIS → Right click top node, Internet Information Services, check Enable Direct Metabase Edit
*Backup and edit %systemroot%\system32\inetsrv\MetaBase.xml
* *Find Location ="/LM/W3SVC/Filters/Compression/gzip"
* *Add png, css, js and any other static file extensions to HcFileExtensions
*Add aspx and any other executable extensions to HcScriptFileExtensions
*Save
*Restart IIS (run iisreset)

If you have a Windows 2003/2008 server to play with you could try that approach.

A: If you will be deploying on IIS 6 or IIS 7, just use the built-in IIS compression. We're using it on production sites for compressing HTML, CSS, and JavaScript with no errors. It also caches the compressed version on the server, so the compression hit is only taken once.
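For reference, here is a hedged sketch of the Application_BeginRequest-style approach the question refers to - not the exact implementation linked, just the shape of it. The two usual corruption culprits are applying the filter without consistently setting Content-Encoding, and letting caches serve a compressed body to clients that didn't ask for it (hence the Vary header):

using System;
using System.IO.Compression;
using System.Web;

// Register in web.config under <httpModules>.
public class CompressionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            HttpContext ctx = ((HttpApplication)sender).Context;
            string accept = ctx.Request.Headers["Accept-Encoding"];
            if (accept != null && accept.Contains("gzip"))
            {
                // Wrap the response stream in a gzip filter and say so.
                ctx.Response.Filter = new GZipStream(ctx.Response.Filter, CompressionMode.Compress);
                ctx.Response.AppendHeader("Content-Encoding", "gzip");
                // Without Vary, a cached gzip body may be replayed to a
                // client that can't decode it - which looks like "corruption".
                ctx.Response.AppendHeader("Vary", "Accept-Encoding");
            }
        };
    }

    public void Dispose() { }
}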
{ "language": "en", "url": "https://stackoverflow.com/questions/22509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I secure a folder used to let users upload files? I have a folder in my web server used for the users to upload photos using an ASP page. Is it safe enough to give IUSR write permissions to the folder? Must I secure something else? I am afraid of hackers bypassing the ASP page and uploading content directly to the folder. I'm using ASP classic and IIS6 on Windows 2003 Server. The upload is through HTTP, not FTP. Edit: Changing the question for clarity and changing my answers as comments.

A: Also, I would recommend not letting users upload into a folder that's accessible from the web. Even the best MIME type detection may fail, and you absolutely don't want users to upload, say, an executable disguised as a jpeg in a case where your MIME sniffing fails but the one in IIS works correctly. In the PHP world it's even worse, because an attacker could upload a malicious PHP script and later access it via the webserver. Always, always store the uploaded files in a directory somewhere outside the document root and access them via some accessing script which does additional sanitizing (and at least explicitly sets an image/whatever MIME type).

A: How will the user upload the photos? If you are writing an ASP page to accept the uploaded files, then only the user that IIS runs as will need write permission to the folder, since IIS will be doing the file I/O. Your ASP page should check the file size and have some form of authentication to prevent hackers from filling your hard drive. If you are setting up an FTP server or some other file transfer method, then the answer will be specific to the method you choose.

A: You'll have to grant write permissions, but you can check the file's MIME type to ensure it's an image. You can use FSO like so:

set fs = Server.CreateObject("Scripting.FileSystemObject")
set f = fs.GetFile("upload.jpg")
' image MIME types are image/jpeg, image/gif, etc., so just check
' whether "image" appears in the type
if instr(f.type, "image") = 0 then
    f.delete
end if
set f = nothing
set fs = nothing

Also, most upload COM objects have a type property that you could check against before writing the file.

A: Your best bang for the buck would probably be to use an upload component (I've used ASPUpload) that allows you to upload/download files from a folder that isn't accessible from the website. You'll get some authentication hooks and won't have to worry about someone casually browsing the folder and downloading the files (or uploading in your case), since the files are only available through the component.
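Following the advice above to keep uploads out of the document root, here is a classic ASP sketch of a gatekeeper page that streams an image from a non-web folder - the D:\uploads path and the id parameter are hypothetical, and id must be validated against path traversal before use:

<%
Dim stream, path
path = "D:\uploads\" & Request.QueryString("id") & ".jpg"  ' validate "id" first!
Set stream = Server.CreateObject("ADODB.Stream")
stream.Type = 1              ' adTypeBinary
stream.Open
stream.LoadFromFile path
Response.ContentType = "image/jpeg"
Response.BinaryWrite stream.Read
stream.Close
Set stream = Nothing
%>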
{ "language": "en", "url": "https://stackoverflow.com/questions/22519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Execute shortcuts like programs Example: You have a shortcut s to SomeProgram in the current directory. In cmd.exe, you can type s and it will launch the program. In PowerShell, typing s gives: The term 's' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again. If you type s.lnk or SomeProgram, it runs the program just fine. How can I configure PowerShell to execute shortcuts just like programs? A: On my Vista system typing S won't launch a lnk file unless I have the environment variable PATHEXT set with .lnk in the list. When I do, S will work in cmd.exe and I have to do .\S in PowerShell. A: After adding ;.LNK to the end of my PATHEXT environment variable, I can now execute shortcuts even without the preceding ./ notation. (Thanks bruceatk!) I was also inspired by Steven's suggestion to create a little script that automatically aliases all the shortcuts in my PATH (even though I plan to stick with the simpler solution ;). $env:path.Split( ';' ) | Get-ChildItem -filter *.lnk | select @{ Name='Path'; Expression={ $_.FullName } }, @{ Name='Name'; Expression={ [IO.Path]::GetFileNameWithoutExtension( $_.Name ) } } | where { -not (Get-Alias $_.Name -ea 0) } | foreach { Set-Alias $_.Name $_.Path } A: I don't believe you can. You might be better off aliasing commonly used commands in a script that you call from your profile script. Example - Set-Alias np c:\windows\notepad.exe Then you have your short, easily typeable name available from the command line. A: For one, the shortcut is not "s", it is "s.lnk". E.g. you are not able to open a text file (say with notepad) by typing "t" when the name is "t.txt" :) Technet says The PATHEXT environment variable defines the list of file extensions checked by Windows NT when searching for an executable file. The default value of PATHEXT is .COM;.EXE;.BAT;.CMD You can dot-source as described by others here, or you could also use the invocation character "&". This means that PS treats your string as something to execute rather than just text. This might be more important in a script though. I'd add that you should pass any parameters OUTSIDE of the quotes (this one bit me before). Note that the "-r" is not in the quoted string, only the exe. & "C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -r | out-null A: You can also invoke a shortcut by using the "invoke-item" cmdlet. So for example if you wanted to launch "internet explorer.lnk" you can type the following command: invoke-item 'Internet Explorer.lnk' Or you could also use the alias ii 'internet explorer.lnk' Another cool thing is that you could do "invoke-item t.txt" and it would automatically open whatever the default handler for *.txt files was, such as notepad. Note If you want to execute an application, app.exe, in the current directory you have to actually specify the path, relative or absolute, to execute. ".\app.exe" is what you would need to type to execute the application. A: You can always use tab completion to type "s[TAB]" and press ENTER and that will execute it.
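If you'd rather make the PATHEXT change without hunting through the System Properties dialog, here is a small sketch of doing it from PowerShell itself. The first line only affects the current session; the second persists the updated value for your user account so new shells pick it up:

# Current session only
$env:PATHEXT += ';.LNK'

# Persist the updated value for the current user (new shells will see it)
[Environment]::SetEnvironmentVariable('PATHEXT', $env:PATHEXT, 'User')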
{ "language": "en", "url": "https://stackoverflow.com/questions/22524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: PHP includes vs OOP I would like to have a reference for the pros and cons of using include files vs objects (classes) when developing PHP applications. I know I would benefit from having one place to go for this answer...I have a few opinions of my own but I look forward to hearing others. A Simple Example: Certain pages on my site are only accessible to logged in users. I have two options for implementation (there are others but let's limit it to these two) * *Create an authenticate.php file and include it on every page. It holds the logic for authentication. *Create a user object, which has an authenticate function, reference the object for authentication on every page. Edit I'd like to see some way to weigh the benefits of one over the other. My current (and weak) reasons follow: Includes - Sometimes a function is just easier/shorter/faster to call Objects - Grouping of functionality and properties makes for easier long-term maintenance. Includes - Less code to write (no constructor, no class syntax); call me lazy, but this is true. Objects - Force formality and a single approach to functions and creation. Includes - Easier for a novice to deal with Objects - Harder for novices, but frowned upon by professionals. I look at these factors at the start of a project to decide if I want to do includes or objects. Those are a few pros and cons off the top of my head. A: While the question touches on a couple of very debatable issues (OOP, User authentication) I'll skip by those and second Konrad's comment about __autoload. Anyone who knows C/C++ knows how much of a pain including files can be. With autoload, a PHP5 addition, if you choose to use OOP (which I do almost exclusively) you only need to use some standard file naming convention and (I would recommend) restricting a single class per file and PHP will do the rest for you. Cleans up the code and you no longer have to worry about remembering to remove includes that are no longer necessary (one of the many problems with includes). A: These are not really opposite choices. You will have to include the checking code anyway. I read your question as procedural programming vs. OO programming. Writing a few lines of code, or a function, and including it in your page header was how things were done in PHP3 or PHP4. It's simple, it works (that's how we did it in osCommerce, for example, an eCommerce PHP application). But it's not easy to maintain and modify, as many developers can confirm. In PHP5 you'd write a user object which will carry its own data and methods for authentication. Your code will be clearer and easier to maintain as everything having to do with users and authentication will be concentrated in a single place. A: I don't have much PHP experience, although I'm using it at my current job. In general, I find that larger systems benefit from the readability and understandability that OO provides. But things like consistency (don't mix OO and non-OO) and your personal preferences (although only really on personal projects) are also important. A: I've learned never to use include in PHP except inside the core libraries that I use and one central include of these libraries (+ config) in the application. Everything else is handled by a global __autoload handler that can be configured to recognize the different classes needed. This can be done easily using appropriate naming conventions for the classes. This is not only flexible but also quite efficient and keeps the architecture clean. A: Can you be a bit more specific?
For the example you give you need to use include in both ways. In case 1 you only include a file, in case 2 you need to include the class file (for instance user.class.php) to allow instantiation of the User class. It depends on how the rest of the application is built: is it OO? Use OO. A: Whether you do it in classes or in a more procedural style, you simply need to check to ensure that: * *There is a session; *That the session is valid; and, *That the user in possession of the session has proper privileges. You can encapsulate all three steps into one function (or a static method in a Session class might work). Try this: class Session { const GUEST = 0; const SUBSCRIBER = 1; const ADMINISTRATOR = 2; public static function Type() { session_start(); // Depending on how you use sessions on // your site, you might just check for the // existence of PHPSESSID. If you track // every visitor with sessions, however, you // might want to assign some separate unique // number (that you can track in a DB) to // authenticated sessions if(!$_SESSION['uniqid']) { return Session::GUEST; } else { // For the best security, don't store the // user's access permissions in the $_SESSION, // but rather check against the DB. This will // ensure that recently deleted or downgraded // administrators will not be able to make use // of a previous session. return THE_ACCESS_LEVEL_ACCORDING_TO_THE_DB } } } // In your files that need to check for authentication (you // could also do this in a controller if you're going MVC) if(!(Session::Type() == Session::ADMINISTRATOR)) { // Redirect them to wherever you want them to go instead, // like a log in page or something like that. }
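To make the __autoload suggestion above concrete, here is a minimal sketch. It assumes the one-class-per-file convention mentioned earlier, with files named like user.class.php under a classes/ directory; both the path and the naming scheme are illustrative, so adapt them to your project:

function __autoload($class_name)
{
    // Maps a class name like "User" to "classes/user.class.php".
    // Purely a convention for this sketch.
    require_once 'classes/' . strtolower($class_name) . '.class.php';
}

// No explicit include needed; the first reference to an
// undefined class triggers __autoload for you.
$user = new User();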
{ "language": "en", "url": "https://stackoverflow.com/questions/22528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Passing a commented, multi-line (freespace) regex to preg_match I have a regex that is going to end up being a bit long and it'd make it much easier to read to have it across multiple lines. I tried this but it just barfs. preg_match( '^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+', $this->getResultVar('FullMessage'), $atmp ); Is there a way to pass a regex in the above form to preg_match? A: You can use the extended syntax: preg_match("/ test /x", $foo, $bar); A: Yes, you can add the /x Pattern Modifier. With this modifier, whitespace data characters in the pattern are ignored unless escaped or inside a character class, and characters between an unescaped # outside a character class and the next newline are ignored too, which is what makes the inline comments possible. For your example try this: preg_match('/ ^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+ /x', $this->getResultVar('FullMessage'), $atmp); A: OK, here's a solution: preg_match( '/(?x)^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+/' , $this->getResultVar('FullMessage'), $atmp); The key is (?x) at the beginning which makes whitespace insignificant and allows comments. It's also important that there's no whitespace between the starting and ending quotes and the start & end of the regex. My first attempt like this gave errors: preg_match(' /(?x)^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+/ ', $this->getResultVar('FullMessage'), $atmp); What Konrad said also works and feels a little easier than sticking (?x) at the beginning. A: * *You should add delimiters: the first character of the pattern is used as the delimiter, so it is also what marks the end of the pattern. *You should add the 'x' flag. This has the same result as putting (?x) at the beginning, but it is more readable imho. A: In PHP the comment syntax looks like this: (?# Your comment here) preg_match(' ^J[0-9]{7}:\s+ (.*?) (?#Extract the Transaction Start Date msg) \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) (?#Extract the Project Name) \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) (?#Extract the Job Name) \s+J[0-9]{7}:\s+ ', $this->getResultVar('FullMessage'), $atmp); For more information see the PHP Regular Expression Syntax Reference You can also use the PCRE_EXTENDED (or 'x') Pattern Modifier as Mark shows in his example.
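If you are already using /x for readability, named capture groups (supported by PCRE in PHP) can make the results easier to work with as well. A small sketch, with the pattern trimmed down for illustration:

preg_match('/
    ^J[0-9]{7}:\s+
    (?P<start>.*?)      # transaction start date msg
    \s+J[0-9]{7}:\s+Project\sname:\s+
    (?P<project>.*?)    # project name
    \s+J[0-9]{7}:
    /x', $this->getResultVar('FullMessage'), $matches);

echo $matches['start'];   // also available as $matches[1]
echo $matches['project']; // also available as $matches[2]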
{ "language": "en", "url": "https://stackoverflow.com/questions/22552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I read in the contents of a directory in Perl? How do I get Perl to read the contents of a given directory into an array? Backticks can do it, but is there some method using 'scandir' or a similar term? A: This will do it, in one line (note the '*' wildcard at the end) @files = </path/to/directory/*>; # To demonstrate: print join(", ", @files); A: IO::Dir is nice and provides a tied hash interface as well. From the perldoc: use IO::Dir; $d = IO::Dir->new("."); if (defined $d) { while (defined($_ = $d->read)) { something($_); } $d->rewind; while (defined($_ = $d->read)) { something_else($_); } undef $d; } tie %dir, 'IO::Dir', "."; foreach (keys %dir) { print $_, " " , $dir{$_}->size,"\n"; } So you could do something like: tie %dir, 'IO::Dir', $directory_name; my @dirs = keys %dir; A: opendir(D, "/path/to/directory") || die "Can't open directory: $!\n"; while (my $f = readdir(D)) { print "\$f = $f\n"; } closedir(D); EDIT: Oh, sorry, missed the "into an array" part: my $d = shift; opendir(D, "$d") || die "Can't open directory $d: $!\n"; my @list = readdir(D); closedir(D); foreach my $f (@list) { print "\$f = $f\n"; } EDIT2: Most of the other answers are valid, but I wanted to comment on this answer specifically, in which this solution is offered: opendir(DIR, $somedir) || die "Can't open directory $somedir: $!"; @dots = grep { (!/^\./) && -f "$somedir/$_" } readdir(DIR); closedir DIR; First, to document what it's doing since the poster didn't: it's passing the returned list from readdir() through a grep() that only returns those values that are files (as opposed to directories, devices, named pipes, etc.) and that do not begin with a dot (which makes the list name @dots misleading, but that's due to the change he made when copying it over from the readdir() documentation). Since it limits the contents of the directory it returns, I don't think it's technically a correct answer to this question, but it illustrates a common idiom used to filter filenames in Perl, and I thought it would be valuable to document. Another example seen a lot is: @list = grep !/^\.\.?$/, readdir(D); This snippet reads all contents from the directory handle D except '.' and '..', since those are very rarely desired to be used in the listing. A: You could use DirHandle: use DirHandle; $d = new DirHandle "."; if (defined $d) { while (defined($_ = $d->read)) { something($_); } $d->rewind; while (defined($_ = $d->read)) { something_else($_); } undef $d; } DirHandle provides an alternative, cleaner interface to the opendir(), closedir(), readdir(), and rewinddir() functions. A: A quick and dirty solution is to use glob @files = glob ('/path/to/dir/*'); A: Here's an example of recursing through a directory structure and copying files from a backup script I wrote. sub copy_directory { my ($source, $dest) = @_; my $start = time; # get the contents of the directory. opendir(D, $source); my @f = readdir(D); closedir(D); # recurse through the directory structure and copy files. foreach my $file (@f) { # Set up the full path to the source and dest files. my $filename = $source . "\\" . $file; my $destfile = $dest . "\\" . $file; # get the file info for the 2 files. my $sourceInfo = stat( $filename ); my $destInfo = stat( $destfile ); # make sure the destination directory exists. mkdir( $dest, 0777 ); if ($file eq '.' || $file eq '..') { } elsif (-d $filename) { # if it's a directory then recurse into it.
#print "entering $filename\n"; copy_directory($filename, $destfile); } else { # Only backup the file if it has been created/modified since the last backup if( (not -e $destfile) || ($sourceInfo->mtime > $destInfo->mtime ) ) { #print $filename . " -> " . $destfile . "\n"; copy( $filename, $destfile ) or print "Error copying $filename: $!\n"; } } } print "$source copied in " . (time - $start) . " seconds.\n"; } A: Similar to the above, but I think the best version is (slightly modified) from "perldoc -f readdir": opendir(DIR, $somedir) || die "can't opendir $somedir: $!"; @dots = grep { (!/^\./) && -f "$somedir/$_" } readdir(DIR); closedir DIR; A: You can also use the children method from the popular Path::Tiny module: use Path::Tiny; my @files = path("/path/to/dir")->children; This creates an array of Path::Tiny objects, which are often more useful than just filenames if you want to do things to the files, but if you want just the names: my @files = map { $_->stringify } path("/path/to/dir")->children; A: from: http://perlmeme.org/faqs/file_io/directory_listing.html #!/usr/bin/perl use strict; use warnings; my $directory = '/tmp'; opendir (DIR, $directory) or die $!; while (my $file = readdir(DIR)) { next if ($file =~ m/^\./); print "$file\n"; } The following example (based on a code sample from perldoc -f readdir) gets all the files (not directories) beginning with a period from the open directory. The filenames are found in the array @dots. #!/usr/bin/perl use strict; use warnings; my $dir = '/tmp'; opendir(DIR, $dir) or die $!; my @dots = grep { /^\./ # Begins with a period && -f "$dir/$_" # and is a file } readdir(DIR); # Loop through the array printing out the filenames foreach my $file (@dots) { print "$file\n"; } closedir(DIR); exit 0; closedir(DIR); exit 0;
{ "language": "en", "url": "https://stackoverflow.com/questions/22566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: What's a good way to check if two datetimes are on the same calendar day in TSQL? Here is the issue I am having: I have a large query that needs to compare datetimes in the where clause to see if two dates are on the same day. My current solution, which sucks, is to send the datetimes into a UDF to convert them to midnight of the same day, and then check those dates for equality. When it comes to the query plan, this is a disaster, as are almost all UDFs in joins or where clauses. This is one of the only places in my application that I haven't been able to root out the functions and give the query optimizer something it can actually use to locate the best index. In this case, merging the function code back into the query seems impractical. I think I am missing something simple here. Here's the function for reference. if not exists (select * from dbo.sysobjects where id = object_id(N'dbo.f_MakeDate') and type in (N'FN', N'IF', N'TF', N'FS', N'FT')) exec('create function dbo.f_MakeDate() returns int as begin declare @retval int return @retval end') go alter function dbo.f_MakeDate ( @Day datetime, @Hour int, @Minute int ) returns datetime as /* Creates a datetime using the year-month-day portion of @Day, and the @Hour and @Minute provided */ begin declare @retval datetime set @retval = cast( cast(datepart(m, @Day) as varchar(2)) + '/' + cast(datepart(d, @Day) as varchar(2)) + '/' + cast(datepart(yyyy, @Day) as varchar(4)) + ' ' + cast(@Hour as varchar(2)) + ':' + cast(@Minute as varchar(2)) as datetime) return @retval end go To complicate matters, I am joining on time zone tables to check the date against the local time, which could be different for every row: where dbo.f_MakeDate(dateadd(hh, tz.Offset + case when ds.LocalTimeZone is not null then 1 else 0 end, t.TheDateINeedToCheck), 0, 0) = @activityDateMidnight [Edit] I'm incorporating @Todd's suggestion: where datediff(day, dateadd(hh, tz.Offset + case when ds.LocalTimeZone is not null then 1 else 0 end, t.TheDateINeedToCheck), @ActivityDate) = 0 My misconception about how datediff works (the same day of year in consecutive years yields 366, not 0 as I expected) caused me to waste a lot of effort. But the query plan didn't change. I think I need to go back to the drawing board with the whole thing. A: This is much more concise: where datediff(day, date1, date2) = 0 A: where year(date1) = year(date2) and month(date1) = month(date2) and day(date1) = day(date2) A: Make sure to read Only In A Database Can You Get 1000% + Improvement By Changing A Few Lines Of Code so that you are sure that the optimizer can utilize the index effectively when messing with dates A: You pretty much have to keep the left side of your where clause clean. So, normally, you'd do something like: WHERE MyDateTime >= @activityDateMidnight AND MyDateTime < (@activityDateMidnight + 1) (Some folks prefer DATEADD(d, 1, @activityDateMidnight) instead - but it's the same thing). The TimeZone table complicates matters a bit though. It's a little unclear from your snippet, but it looks like t.TheDateInTable is in GMT with a Time Zone identifier, and that you're then adding the offset to compare against @activityDateMidnight - which is in local time. I'm not sure what ds.LocalTimeZone is, though. If that's the case, then you need to get @activityDateMidnight into GMT instead. A: this will remove the time component from a date for you: select dateadd(d, datediff(d, 0, current_timestamp), 0) A: Eric Z Beard: I do store all dates in GMT.
Here's the use case: something happened at 11:00 PM EST on the 1st, which is the 2nd GMT. I want to see activity for the 1st, and I am in EST so I will want to see the 11PM activity. If I just compared raw GMT datetimes, I would miss things. Each row in the report can represent an activity from a different time zone. Right, but when you say you're interested in activity for Jan 1st 2008 EST: SELECT @activityDateMidnight = '1/1/2008', @activityDateTZ = 'EST' you just need to convert that to GMT (I'm ignoring the complication of querying for the day before EST goes to EDT, or vice versa): Table: TimeZone Fields: TimeZone, Offset Values: EST, -4 --Multiply by -1, since we're converting EST to GMT. --Offsets are to go from GMT to EST. SELECT @activityGmtBegin = DATEADD(hh, Offset * -1, @activityDateMidnight) FROM TimeZone WHERE TimeZone = @activityDateTZ which should give you '1/1/2008 4:00 AM'. Then, you can just search in GMT: SELECT * FROM EventTable WHERE EventTime >= @activityGmtBegin --1/1/2008 4:00 AM AND EventTime < (@activityGmtBegin + 1) --1/2/2008 4:00 AM The event in question is stored with a GMT EventTime of 1/2/2008 3:00 AM. You don't even need the TimeZone in the EventTable (for this purpose, at least). Since EventTime is not in a function, this is a straight index scan - which should be pretty efficient. Make EventTime your clustered index, and it'll fly. ;) Personally, I'd have the app convert the search time into GMT before running the query. A: You're spoilt for choice in terms of options here. If you are using Sybase or SQL Server 2008 you can create variables of type date and assign them your datetime values. The database engine gets rid of the time for you. Here's a quick and dirty test to illustrate (Code is in Sybase dialect): declare @date1 date declare @date2 date set @date1='2008-1-1 10:00' set @date2='2008-1-1 22:00' if @date1=@date2 print 'Equal' else print 'Not equal' For SQL 2005 and earlier what you can do is convert the date to a varchar in a format that does not have the time component. For instance the following returns 2008.08.22 select convert(varchar,'2008-08-22 18:11:14.133',102) The 102 part specifies the formatting (Books online can list for you all the available formats) So, what you can do is write a function that takes a datetime and extracts the date element and discards the time. Like so: create function MakeDate (@InputDate datetime) returns datetime as begin return cast(convert(varchar,@InputDate,102) as datetime); end You can then use the function for comparisons Select * from Orders where dbo.MakeDate(OrderDate) = dbo.MakeDate(DeliveryDate) A: Eric Z Beard: the activity date is meant to indicate the local time zone, but not a specific one Okay - back to the drawing board. Try this: where t.TheDateINeedToCheck BETWEEN dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, @ActivityDate) AND dateadd(hh, (tz.Offset + ISNULL(ds.LocalTimeZone, 0)) * -1, (@ActivityDate + 1)) which will translate the @ActivityDate to local time, and compare against that. That's your best chance for using an index, though I'm not sure it'll work - you should try it and check the query plan. The next option would be an indexed view, with an indexed, computed TimeINeedToCheck in local time. Then you just go back to: where v.TheLocalDateINeedToCheck BETWEEN @ActivityDate AND (@ActivityDate + 1) which would definitely use the index - though you have a slight overhead on INSERT and UPDATE then.
A: I would use the dayofyear function of datepart: Select * from mytable where datepart(dy,date1) = datepart(dy,date2) and year(date1) = year(date2) --assuming you want the same year too See the datepart reference here. A: Regarding timezones, yet one more reason to store all dates in a single timezone (preferably UTC). Anyway, I think the answers using datediff, datepart and the different built-in date functions are your best bet.
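A quick self-contained test for anyone comparing the approaches in this thread. It shows the datediff check and the midnight-range check agreeing on two times from the same calendar day; the range form is the one that can actually use an index when the right-hand side is a column (a sketch, so adjust names to your schema):

declare @d1 datetime, @d2 datetime, @midnight datetime
set @d1 = '2008-01-01 09:30:00'
set @d2 = '2008-01-01 23:15:00'
set @midnight = dateadd(d, datediff(d, 0, @d1), 0) -- strip the time portion

-- Same-day check with a function around the values (not index-friendly on columns):
if datediff(day, @d1, @d2) = 0 print 'datediff says: same day'

-- Equivalent sargable range check (index-friendly when @d2 is a column):
if @d2 >= @midnight and @d2 < dateadd(d, 1, @midnight) print 'range says: same day'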
{ "language": "en", "url": "https://stackoverflow.com/questions/22570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Search Plugin for Safari How do we create a search plugin for Safari? Like this post A: Here is a Safari plugin that you can customize to search other sites. It may work with Stack Overflow (I haven't tried it). Check out that site too for other Safari plugins. A: I recently wrote Safari Omnibar which is a native extension for Safari that lets you add custom search keywords for searching directly on particular sites. You can also set the default search engine to any other site. Safari Omnikey Homepage Source Code A: If you're looking for a search plugin specifically for this site, someone will have to write one. A: AFAIK, Safari doesn't have a Search plugin capability. You could try Inquisitor; just add the URL https://stackoverflow.com/search?s=%@
{ "language": "en", "url": "https://stackoverflow.com/questions/22577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-3" }
Q: How do I cluster an upload folder with ASP.Net? We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes. This works fine in a single-server, non-failover environment, however we would like some sort of solution for sharing the files between servers that supports failover. Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that. A: In our scenario, we have a separate file server that both of our front end app servers write to; that way either server has access to the same set of files. A: At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content which might include photos and such. These were legacy applications and we did not want to edit them to use a database, and a SAN solution was too expensive for our situation. We ended up using a file replication service on the two clustered servers. This ran as a service on both machines using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service sync'd the data in the file system folders making it available to be served from either web server. Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync. A: The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and contain failover. This also has the benefit that you don't have to provide sticky load balancing, the upload can be handled by one server, and the edit by another. A: A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for windows? The problem with a simple shared filesystem is the lack of redundancy (what if the fileserver goes down)?
{ "language": "en", "url": "https://stackoverflow.com/questions/22590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ASP.NET Tutorials Can you recommend some good ASP.NET tutorials or a good book? Should I jump right to ASP.NET MVC/html/javascript or learn web forms first? Thanks A: A great book if you're just beginning is Matthew MacDonald's Beginning ASP.NET 3.5 in C# 2008: From Novice to Professional. Once you're done with that a great reference (also by MacDonald) is Pro ASP.NET 3.5 in C# 2008. One of my favorite sources of information online is 4GuysFromRolla. A: MVC or WebForms...it's your choice, but if I can offer one piece of advice regarding webforms: I know it'll be tempting to start dropping controls and playing with code, but it will help you A LOT if you don't skip over learning about the request and page lifecycles...a couple weeks later you'll thank yourself for spending the extra time there. A: MVC: www.asp.net/mvc (great videos). ASP.NET: www.asp.net A: If you're going to use ASP.NET MVC, then go straight to it. But it's a fairly new technology, not even in beta yet, so keep that in mind. However, the application model is totally different compared to ASP.NET, so it is not in fact a replacement. For tutorials, you can surely check out http://www.asp.net and http://www.asp.net/mvc - there's tons of information there. A: A site for web tutorials including ASP.net can be found here.
{ "language": "en", "url": "https://stackoverflow.com/questions/22598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Install Leopard inside VMWare I'm thinking about trying some development for the iPhone; is it possible to install Leopard inside VMWare? I already have a pretty high spec PC with a comfy setup that I'd like to use, or do I need to buy a real Mac? A: We are in no way affiliated with any of the providers below. We have tried many virtual cloud mac providers including: * *Mac in Cloud *virtualmacosx.com *xcodeclub By far the best support we got was from xcodeclub. Daniel (the owner) personally provisioned a trial virtual machine and allowed us to see if our programs ran before purchasing the service. Java 7 ended up not working on virtual Mac due to some graphics drivers issues, but Daniel spent over an hour of his personal time helping us. Now that is what customer service should be like. We highly recommend his service. A: You can rent a virtual Mac with a service like www.MacinCloud.com. A: Legally, you need to buy a Mac. It is "possible" to run (at least Tiger) in VMWare -- the experience is not optimal, but you can do it. It's also possible to run OS X on PC hardware; however, it's an exercise in illegal software and hacks. A: I've run OSX under VMWare, and I can tell you with confidence that it is not an environment that you would find comfortable for developing applications in. It was barely (not really) usable for testing Mac specific browser bugs that couldn't be reproduced in Safari on Windows. On the other hand, if your hardware is supported by OSx86, you can run it natively at reasonable speeds, and I would expect it to make a fairly nice dev environment. For all cases, I'm going to assume that you have a legal OS X license, and don't mind the legal ambiguity of running it on hardware which the license explicitly forbids (the legality is unclear, imo, but I really think you'd be ok as long as it's not a pirated copy). A: It is legal to run Mac OS X Server in a virtual machine on Apple hardware. All other forms of Mac OS X virtualization are currently forbidden. A: Unfortunately, there's no legal way to run OS X in a virtual machine. For developing iPhone apps you probably don't need a particularly beefy machine, so maybe look into grabbing a mac mini? They're the cheapest Macs you can get, and should probably be just fine for doing iPhone work. Plus, now you have a mac that you can use for testing other things too! :)
{ "language": "en", "url": "https://stackoverflow.com/questions/22607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Format numbers to strings in Python I need to find out how to format numbers as strings. My code is here: return str(hours)+":"+str(minutes)+":"+str(seconds)+" "+ampm Hours and minutes are integers, and seconds is a float. The str() function will convert all of these numbers to the tenths (0.1) place. So instead of my string outputting "5:30:59.07 pm", it would display something like "5.0:30.0:59.1 pm". Bottom line, what library / function do I need to do this for me? A: You can use C style string formatting: "%d:%d:%d" % (hours, minutes, seconds) See here, especially: https://web.archive.org/web/20120415173443/http://diveintopython3.ep.io/strings.html A: Starting with Python 3.6, formatting in Python can be done using formatted string literals or f-strings: hours, minutes, seconds = 6, 56, 33 f'{hours:02}:{minutes:02}:{seconds:02} {"pm" if hours > 12 else "am"}' or the str.format function starting with 2.7: "{:02}:{:02}:{:02} {}".format(hours, minutes, seconds, "pm" if hours > 12 else "am") or the string formatting % operator for even older versions of Python, but see the note in the docs: "%02d:%02d:%02d" % (hours, minutes, seconds) And for your specific case of formatting time, there’s time.strftime: import time t = (0, 0, 0, hours, minutes, seconds, 0, 0, 0) time.strftime('%I:%M:%S %p', t) A: Here are some examples using the existing string format operator (%) which has been around for as long as Python has been around: >>> "Name: %s, age: %d" % ('John', 35) 'Name: John, age: 35' >>> i = 45 >>> 'dec: %d/oct: %#o/hex: %#X' % (i, i, i) 'dec: 45/oct: 055/hex: 0X2D' >>> "MM/DD/YY = %02d/%02d/%02d" % (12, 7, 41) 'MM/DD/YY = 12/07/41' >>> 'Total with tax: $%.2f' % (13.00 * 1.0825) 'Total with tax: $14.07' >>> d = {'web': 'user', 'page': 42} >>> 'http://xxx.yyy.zzz/%(web)s/%(page)d.html' % d 'http://xxx.yyy.zzz/user/42.html' Starting in Python 2.6, there is an alternative: the str.format() method. Here are the equivalent snippets to the above but using str.format(): >>> "Name: {0}, age: {1}".format('John', 35) 'Name: John, age: 35' >>> i = 45 >>> 'dec: {0}/oct: {0:#o}/hex: {0:#X}'.format(i) 'dec: 45/oct: 0o55/hex: 0X2D' >>> "MM/DD/YY = {0:02d}/{1:02d}/{2:02d}".format(12, 7, 41) 'MM/DD/YY = 12/07/41' >>> 'Total with tax: ${0:.2f}'.format(13.00 * 1.0825) 'Total with tax: $14.07' >>> d = {'web': 'user', 'page': 42} >>> 'http://xxx.yyy.zzz/{web}/{page}.html'.format(**d) 'http://xxx.yyy.zzz/user/42.html' Like Python 2.6+, all Python 3 releases (so far) understand how to do both. I shamelessly ripped this stuff straight out of my hardcore Python intro book and the slides for the Intro+Intermediate Python courses I offer from time-to-time. :-) Aug 2018 UPDATE: Of course, now that we have the f-string feature introduced in 3.6, we need the equivalent examples of that; yes, another alternative: >>> name, age = 'John', 35 >>> f'Name: {name}, age: {age}' 'Name: John, age: 35' >>> i = 45 >>> f'dec: {i}/oct: {i:#o}/hex: {i:#X}' 'dec: 45/oct: 0o55/hex: 0X2D' >>> m, d, y = 12, 7, 41 >>> f"MM/DD/YY = {m:02d}/{d:02d}/{y:02d}" 'MM/DD/YY = 12/07/41' >>> f'Total with tax: ${13.00 * 1.0825:.2f}' 'Total with tax: $14.07' >>> d = {'web': 'user', 'page': 42} >>> f"http://xxx.yyy.zzz/{d['web']}/{d['page']}.html" 'http://xxx.yyy.zzz/user/42.html' A: Python 2.6+ It is possible to use the format() function, so in your case you can use: return '{:02d}:{:02d}:{:.2f} {}'.format(hours, minutes, seconds, ampm) There are multiple ways of using this function, so for further information you can check the documentation.
Python 3.6+ f-strings is a new feature that was added to the language in Python 3.6. It makes formatting strings considerably easier: return f'{hours:02d}:{minutes:02d}:{seconds:.2f} {ampm}' A: You can use the following to achieve the desired functionality: "%d:%d:%d" % (hours, minutes, seconds) A: You can use str.format() to convert objects to strings. A: I've tried this in Python 3.6.9 >>> hours, minutes, seconds = 9, 33, 35 >>> time = f'{hours:02}:{minutes:02}:{seconds:02} {"pm" if hours > 12 else "am"}' >>> print (time) 09:33:35 am >>> type(time) <class 'str'> A: str() in python on an integer will not print any decimal places. If you have a float that you want to ignore the decimal part, then you can use str(int(floatValue)). Perhaps the following code will demonstrate: >>> str(5) '5' >>> int(8.7) 8 A: If you have a value that includes a decimal, but the decimal value is negligible (i.e. 100.0) and you try to int that, you will get an error. It seems silly, but calling float first fixes this. str(int(float([variable])))
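Since the seconds value in the question is a float, one more small sketch tying it together: zero-pad the float so you get 59.07 back rather than 59.1 (the %-operator form works on both Python 2 and 3; the sample values are taken from the question):

hours, minutes, seconds, ampm = 5, 30, 59.07, 'pm'

# %02d zero-pads the minutes; %05.2f pads the float to a total
# width of 5 with 2 decimal places, printing "5:30:59.07 pm"
print("%d:%02d:%05.2f %s" % (hours, minutes, seconds, ampm))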
{ "language": "en", "url": "https://stackoverflow.com/questions/22617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "127" }
Q: Best practices for catching and re-throwing .NET exceptions What are the best practices to consider when catching exceptions and re-throwing them? I want to make sure that the Exception object's InnerException and stack trace are preserved. Is there a difference between the following code blocks in the way they handle this? try { //some code } catch (Exception ex) { throw ex; } Vs: try { //some code } catch { throw; } A: A few people actually missed a very important point - 'throw' and 'throw ex' may do the same thing but they don't give you a crucial piece of information which is the line where the exception happened. Consider the following code: static void Main(string[] args) { try { TestMe(); } catch (Exception ex) { string ss = ex.ToString(); } } static void TestMe() { try { //here's some code that will generate an exception - line #17 } catch (Exception ex) { //throw new ApplicationException(ex.ToString()); throw ex; // line# 22 } } When you do either a 'throw' or 'throw ex' you get the stack trace but the line# is going to be #22 so you can't figure out which line exactly was throwing the exception (unless you have only 1 or a few lines of code in the try block). To get the expected line #17 in your exception you'll have to throw a new exception with the original exception stack trace. A: I would definitely use: try { //some code } catch { //you should totally do something here, but feel free to rethrow //if you need to send the exception up the stack. throw; } That will preserve your stack. A: You may also use: try { // Dangerous code } finally { // clean up, or do nothing } And any exceptions thrown will bubble up to the next level that handles them. A: Actually, there are some situations in which the throw statement will not preserve the StackTrace information. For example, in the code below: try { int i = 0; int j = 12 / i; // Line 47 int k = j + 1; } catch { // do something // ... throw; // Line 54 } The StackTrace will indicate that line 54 raised the exception, although it was raised at line 47. Unhandled Exception: System.DivideByZeroException: Attempted to divide by zero. at Program.WithThrowIncomplete() in Program.cs:line 54 at Program.Main(String[] args) in Program.cs:line 106 In situations like the one described above, there are two options to preserve the original StackTrace: Calling the Exception.InternalPreserveStackTrace As it is a private method, it has to be invoked by using reflection: private static void PreserveStackTrace(Exception exception) { MethodInfo preserveStackTrace = typeof(Exception).GetMethod("InternalPreserveStackTrace", BindingFlags.Instance | BindingFlags.NonPublic); preserveStackTrace.Invoke(exception, null); } It has the disadvantage of relying on a private method to preserve the StackTrace information. It can be changed in future versions of .NET Framework. The code example above and proposed solution below were extracted from Fabrice MARGUERIE's weblog. Calling Exception.SetObjectData The technique below was suggested by Anton Tykhyy as an answer to the question In C#, how can I rethrow InnerException without losing stack trace.
static void PreserveStackTrace (Exception e) { var ctx = new StreamingContext (StreamingContextStates.CrossAppDomain) ; var mgr = new ObjectManager (null, ctx) ; var si = new SerializationInfo (e.GetType (), new FormatterConverter ()) ; e.GetObjectData (si, ctx) ; mgr.RegisterObject (e, 1, si) ; // prepare for SetObjectData mgr.DoFixups () ; // ObjectManager calls SetObjectData // voila, e is unmodified save for _remoteStackTraceString } Although it has the advantage of relying only on public methods, it also depends on the following exception constructor (which some exceptions developed by 3rd parties do not implement): protected Exception( SerializationInfo info, StreamingContext context ) In my situation, I had to choose the first approach, because the exceptions raised by a 3rd-party library I was using didn't implement this constructor. A: The way to preserve the stack trace is through the use of throw; This is valid as well try { // something that bombs here } catch (Exception ex) { throw; } throw ex; is basically like throwing an exception from that point, so the stack trace would only go to where you are issuing the throw ex; statement. Mike is also correct, assuming the exception allows you to pass an exception (which is recommended). Karl Seguin has a great write up on exception handling in his foundations of programming e-book as well, which is a great read. Edit: Working link to Foundations of Programming pdf. Just search the text for "exception". A: When you throw ex, you're essentially throwing a new exception, and will miss out on the original stack trace information. throw is the preferred method. A: The rule of thumb is to avoid Catching and Throwing the basic Exception object. This forces you to be a little smarter about exceptions; in other words you should have an explicit catch for a SqlException so that your handling code doesn't do something wrong with a NullReferenceException. In the real world though, catching and logging the base exception is also a good practice, but don't forget to walk the whole thing to get any InnerExceptions it might have. A: Nobody has explained the difference between ExceptionDispatchInfo.Capture( ex ).Throw() and a plain throw, so here it is. However, some people have noticed the problem with throw. The complete way to rethrow a caught exception is to use ExceptionDispatchInfo.Capture( ex ).Throw() (only available from .Net 4.5). Below there are the cases necessary to test this: 1. void CallingMethod() { //try { throw new Exception( "TEST" ); } //catch { // throw; } } 2. void CallingMethod() { try { throw new Exception( "TEST" ); } catch( Exception ex ) { ExceptionDispatchInfo.Capture( ex ).Throw(); throw; // So the compiler doesn't complain about methods which don't either return or throw. } } 3. void CallingMethod() { try { throw new Exception( "TEST" ); } catch { throw; } } 4. void CallingMethod() { try { throw new Exception( "TEST" ); } catch( Exception ex ) { throw new Exception( "RETHROW", ex ); } } Case 1 and case 2 will give you a stack trace where the source code line number for the CallingMethod method is the line number of the throw new Exception( "TEST" ) line. However, case 3 will give you a stack trace where the source code line number for the CallingMethod method is the line number of the throw call. This means that if the throw new Exception( "TEST" ) line is surrounded by other operations, you have no idea at which line number the exception was actually thrown.
Case 4 is similar to case 2 because the line number of the original exception is preserved, but it is not a real rethrow because it changes the type of the original exception. A: If you throw a new exception with the initial exception you will preserve the initial stack trace too. try{ } catch(Exception ex){ throw new MoreDescriptiveException("here is what was happening", ex); } A: You should always use "throw;" to rethrow the exceptions in .NET. Refer to this: http://weblogs.asp.net/bhouse/archive/2004/11/30/272297.aspx Basically MSIL (CIL) has two instructions - "throw" and "rethrow": * *C#'s "throw ex;" gets compiled into MSIL's "throw" *C#'s "throw;" - into MSIL "rethrow"! Basically I can see the reason why "throw ex" overrides the stack trace. A: FYI I just tested this and the stack trace reported by 'throw;' is not an entirely correct stack trace. Example: private void foo() { try { bar(3); bar(2); bar(1); bar(0); } catch(DivideByZeroException) { //log message and rethrow... throw; } } private void bar(int b) { int a = 1; int c = a/b; // Generate divide by zero exception. } The stack trace points to the origin of the exception correctly (reported line number) but the line number reported for foo() is the line of the throw; statement, hence you cannot tell which of the calls to bar() caused the exception.
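To round out the ExceptionDispatchInfo discussion, here is a sketch of its main use case: capturing an exception in one place and rethrowing it later, even from another thread, with the original stack trace intact. It requires .NET 4.5, and the Work method and its exception are illustrative:

using System;
using System.Runtime.ExceptionServices;

class Program
{
    static ExceptionDispatchInfo captured;

    static void Main()
    {
        try
        {
            Work();
        }
        catch (Exception ex)
        {
            // Snapshot the exception together with its stack trace now...
            captured = ExceptionDispatchInfo.Capture(ex);
        }

        // ...and rethrow it later; the trace still points inside Work().
        if (captured != null)
            captured.Throw();
    }

    static void Work()
    {
        throw new InvalidOperationException("TEST");
    }
}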
{ "language": "en", "url": "https://stackoverflow.com/questions/22623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "304" }
Q: What are the main differences between programming for Windows XP and for Vista? From a desktop application developer point of view, is there any difference between developing for Windows XP and developing for Windows Vista? A: Do not ever assume your user has access to certain key areas of the disk (i.e. program files, windows directory etc). Instead the default user account will only be able to write to a small section of their application data. Also, they won't be able to write to global areas of the registry - only the current user section. You can of course elevate their privileges, but that in itself is a task. Generally programming for Vista is the same as XP, it's just the new account restrictions you have to be wary of. Have a look at this page with regards to making your application "UAC aware" http://www.codeproject.com/KB/vista-security/MakingAppsUACAware.aspx A: User Interface Looking at the Windows Vista User Experience Guidelines you can see that they have changed many UI elements, which you should be aware of. Some major things to take note of: * *Larger icons *New font (Which affects some custom UI consistency) *New dialog box features (task dialogs) *Altered common dialogs (like File Open, Save As, etc.) *Dialog text style and tone, and look and feel *New Aero Wizards *Redesigned toolbars *Better notification UI *New recommended method of including a search control *Glass 64-bit Vista has a 64-bit edition, and although XP did too, your users are more likely to use Vista 64 than XP 64. Now you have to deal with: * *Registry virtualization *Registry redirection (Wow6432Node) *Registry reflection *Digital signatures for kernel modules *MSI installers have new properties to deal with UAC User Account Control vastly affects the default permissions that your application has when interacting with the OS. * *How UAC works and affects your application (also see the requirements doc) *Installers have to deal with UAC New APIs There are new APIs which are targeted at either new methods of application construction or allowing new functionality: * *Cryptography API: Next Generation (CNG) *Extensible Application Markup Language (XAML) *Windows Communication Foundation (WCF) *Windows Workflow Foundation (WF) *And many more smaller ones Installers Because installations can only use common runtimes they install after a transaction has completed, custom actions will fail if your custom action dll requires the Visual C++ runtimes above the VS 2005 CRT (non-SP1). A: There can be, but that's a conscious choice you make as the developer. You can use new Vista stuff, like UAC and CommandLinks and Aero and so forth. But you don't have to (even UAC can be programmed around -- just don't do anything that needs admin privileges). If you choose to ignore all of the Vista stuff, then there's absolutely no difference between the two. If you do want to include that stuff in your app, it makes a difference. But I'd say not a huge one. And if you abstract away the differences (for example, write your own function that shows a TaskDialog for Vista, but which dumbs down the input you give it into a MessageBox on XP), then you'll only be writing against your own code, and the differences will seem like almost nothing. Also, a lot of Vista's new stuff (for example, UAC or Aero) is stuff that you worry about once, when you create the first piece of functionality that uses it, get it working, and then never think about again while you're developing the app.
A: By far the most painful part of moving an application from XP to Vista (from my point of view) is dealing with the numerous services and IPv6 stuff that uses ports which were previously free, and dealing with the Wireless Provisioning -> Native WiFi transition. The UAC stuff is basically a moot point; there is very little the application developer needs to do.
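On the UAC point in the answers above: the usual first step is to embed an application manifest that declares the privilege level you actually need, so Vista neither virtualizes your writes nor guesses from heuristics. A sketch of the standard fragment (use asInvoker unless you genuinely need elevation):

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- asInvoker = run with the user's normal token;
             requireAdministrator = always prompt for elevation -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>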
{ "language": "en", "url": "https://stackoverflow.com/questions/22674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to download a file over HTTP? I have a small utility that I use to download an MP3 file from a website on a schedule and then builds/updates a podcast XML file which I've added to iTunes. The text processing that creates/updates the XML file is written in Python. However, I use wget inside a Windows .bat file to download the actual MP3 file. I would prefer to have the entire utility written in Python. I struggled to find a way to actually download the file in Python, thus why I resorted to using wget. So, how do I download the file using Python? A: You can get the progress feedback with urlretrieve as well: def report(blocknr, blocksize, size): current = blocknr*blocksize sys.stdout.write("\r{0:.2f}%".format(100.0*current/size)) def downloadFile(url): print "\n",url fname = url.split('/')[-1] print fname urllib.urlretrieve(url, fname, report) A: Use urllib.request.urlopen(): import urllib.request with urllib.request.urlopen('http://www.example.com/') as f: html = f.read().decode('utf-8') This is the most basic way to use the library, minus any error handling. You can also do more complex stuff such as changing headers. On Python 2, the method is in urllib2: import urllib2 response = urllib2.urlopen('http://www.example.com/') html = response.read() A: If speed matters to you, I made a small performance test for the modules urllib and wget, and regarding wget I tried once with status bar and once without. I took three different 500MB files to test with (different files- to eliminate the chance that there is some caching going on under the hood). Tested on debian machine, with python2. First, these are the results (they are similar in different runs): $ python wget_test.py urlretrive_test : starting urlretrive_test : 6.56 ============== wget_no_bar_test : starting wget_no_bar_test : 7.20 ============== wget_with_bar_test : starting 100% [......................................................................] 541335552 / 541335552 wget_with_bar_test : 50.49 ============== The way I performed the test is using "profile" decorator. This is the full code: import wget import urllib import time from functools import wraps def profile(func): @wraps(func) def inner(*args): print func.__name__, ": starting" start = time.time() ret = func(*args) end = time.time() print func.__name__, ": {:.2f}".format(end - start) return ret return inner url1 = 'http://host.com/500a.iso' url2 = 'http://host.com/500b.iso' url3 = 'http://host.com/500c.iso' def do_nothing(*args): pass @profile def urlretrive_test(url): return urllib.urlretrieve(url) @profile def wget_no_bar_test(url): return wget.download(url, out='/tmp/', bar=do_nothing) @profile def wget_with_bar_test(url): return wget.download(url, out='/tmp/') urlretrive_test(url1) print '==============' time.sleep(1) wget_no_bar_test(url2) print '==============' time.sleep(1) wget_with_bar_test(url3) print '==============' time.sleep(1) urllib seems to be the fastest A: Just for the sake of completeness, it is also possible to call any program for retrieving files using the subprocess package. Programs dedicated to retrieving files are more powerful than Python functions like urlretrieve. For example, wget can download directories recursively (-R), can deal with FTP, redirects, HTTP proxies, can avoid re-downloading existing files (-nc), and aria2 can do multi-connection downloads which can potentially speed up your downloads. 
import subprocess subprocess.check_output(['wget', '-O', 'example_output_file.html', 'https://example.com']) In Jupyter Notebook, one can also call programs directly with the ! syntax: !wget -O example_output_file.html https://example.com A: Late answer, but for python>=3.6 you can use: import dload dload.save(url) Install dload with: pip3 install dload A: use wget module: import wget wget.download('url') A: import os,requests def download(url): get_response = requests.get(url,stream=True) file_name = url.split("/")[-1] with open(file_name, 'wb') as f: for chunk in get_response.iter_content(chunk_size=1024): if chunk: # filter out keep-alive new chunks f.write(chunk) download("https://example.com/example.jpg") A: In 2012, use the python requests library >>> import requests >>> >>> url = "http://download.thinkbroadband.com/10MB.zip" >>> r = requests.get(url) >>> print len(r.content) 10485760 You can run pip install requests to get it. Requests has many advantages over the alternatives because the API is much simpler. This is especially true if you have to do authentication. urllib and urllib2 are pretty unintuitive and painful in this case. 2015-12-30 People have expressed admiration for the progress bar. It's cool, sure. There are several off-the-shelf solutions now, including tqdm: from tqdm import tqdm import requests url = "http://download.thinkbroadband.com/10MB.zip" response = requests.get(url, stream=True) with open("10MB", "wb") as handle: for data in tqdm(response.iter_content()): handle.write(data) This is essentially the implementation @kvance described 30 months ago. A: Source code can be: import urllib sock = urllib.urlopen("http://diveintopython.org/") htmlSource = sock.read() sock.close() print htmlSource A: I wrote the following, which works in vanilla Python 2 or Python 3. import sys try: import urllib.request python3 = True except ImportError: import urllib2 python3 = False def progress_callback_simple(downloaded,total): sys.stdout.write( "\r" + (len(str(total))-len(str(downloaded)))*" " + str(downloaded) + "/%d"%total + " [%3.2f%%]"%(100.0*float(downloaded)/float(total)) ) sys.stdout.flush() def download(srcurl, dstfilepath, progress_callback=None, block_size=8192): def _download_helper(response, out_file, file_size): if progress_callback!=None: progress_callback(0,file_size) if block_size == None: buffer = response.read() out_file.write(buffer) if progress_callback!=None: progress_callback(file_size,file_size) else: file_size_dl = 0 while True: buffer = response.read(block_size) if not buffer: break file_size_dl += len(buffer) out_file.write(buffer) if progress_callback!=None: progress_callback(file_size_dl,file_size) with open(dstfilepath,"wb") as out_file: if python3: with urllib.request.urlopen(srcurl) as response: file_size = int(response.getheader("Content-Length")) _download_helper(response,out_file,file_size) else: response = urllib2.urlopen(srcurl) meta = response.info() file_size = int(meta.getheaders("Content-Length")[0]) _download_helper(response,out_file,file_size) import traceback try: download( "https://geometrian.com/data/programming/projects/glLib/glLib%20Reloaded%200.5.9/0.5.9.zip", "output.zip", progress_callback_simple ) except: traceback.print_exc() input() Notes: * *Supports a "progress bar" callback. *Download is a 4 MB test .zip from my website. A: You can use PycURL on Python 2 and 3. 
import pycurl FILE_DEST = 'pycurl.html' FILE_SRC = 'http://pycurl.io/' with open(FILE_DEST, 'wb') as f: c = pycurl.Curl() c.setopt(c.URL, FILE_SRC) c.setopt(c.WRITEDATA, f) c.perform() c.close() A: Use Python Requests in 5 lines import requests as req remote_url = 'http://www.example.com/sound.mp3' local_file_name = 'sound.mp3' data = req.get(remote_url) # Save file data to local copy with open(local_file_name, 'wb') as file: file.write(data.content) Now do something with the local copy of the remote file A: This may be a little late, but I saw PabloG's code and couldn't help adding an os.system('cls') to make it look AWESOME! Check it out: import urllib2,os url = "http://download.thinkbroadband.com/10MB.zip" file_name = url.split('/')[-1] u = urllib2.urlopen(url) f = open(file_name, 'wb') meta = u.info() file_size = int(meta.getheaders("Content-Length")[0]) print "Downloading: %s Bytes: %s" % (file_name, file_size) os.system('cls') file_size_dl = 0 block_sz = 8192 while True: buffer = u.read(block_sz) if not buffer: break file_size_dl += len(buffer) f.write(buffer) status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size) status = status + chr(8)*(len(status)+1) print status, f.close() If running in an environment other than Windows, you will have to use something other than 'cls'. In Mac OS X and Linux it should be 'clear'. A: urlretrieve and requests.get are simple, but in practice they are often not enough. I have fetched data for a couple of sites, including text and images; the above two probably solve most of the tasks. But for a more universal solution I suggest the use of urlopen. As it is included in the Python 3 standard library, your code can run on any machine that runs Python 3 without pre-installing site-packages import urllib.request url_request = urllib.request.Request(url, headers=headers) url_connect = urllib.request.urlopen(url_request) #remember to open file in bytes mode with open(filename, 'wb') as f: while True: buffer = url_connect.read(buffer_size) if not buffer: break #an integer value of size of written data data_wrote = f.write(buffer) #you could probably use with-open-as manner url_connect.close() This answer provides a solution to HTTP 403 Forbidden when downloading a file over HTTP using Python. I have tried only the requests and urllib modules; other modules may provide something better, but this is the one I used to solve most of the problems.
A: New API: an urllib3-based implementation

>>> import urllib3
>>> http = urllib3.PoolManager()
>>> r = http.request('GET', 'your_url_goes_here')
>>> r.status
200
>>> r.data
*****Response Data****

More info: https://pypi.org/project/urllib3/

A: An improved version of the PabloG code for Python 2/3:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import ( division, absolute_import, print_function, unicode_literals )

import sys, os, tempfile, logging

if sys.version_info >= (3,):
    import urllib.request as urllib2
    import urllib.parse as urlparse
else:
    import urllib2
    import urlparse

def download_file(url, dest=None):
    """
    Download and save a file specified by url to dest directory,
    """
    u = urllib2.urlopen(url)

    scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
    filename = os.path.basename(path)
    if not filename:
        filename = 'downloaded.file'
    if dest:
        filename = os.path.join(dest, filename)

    with open(filename, 'wb') as f:
        meta = u.info()
        meta_func = meta.getheaders if hasattr(meta, 'getheaders') else meta.get_all
        meta_length = meta_func("Content-Length")
        file_size = None
        if meta_length:
            file_size = int(meta_length[0])
        print("Downloading: {0} Bytes: {1}".format(url, file_size))

        file_size_dl = 0
        block_sz = 8192
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break

            file_size_dl += len(buffer)
            f.write(buffer)

            status = "{0:16}".format(file_size_dl)
            if file_size:
                status += " [{0:6.2f}%]".format(file_size_dl * 100 / file_size)
            status += chr(13)
            print(status, end="")
        print()

    return filename

if __name__ == "__main__":  # Only run if this file is called directly
    print("Testing with 10MB download")
    url = "http://download.thinkbroadband.com/10MB.zip"
    filename = download_file(url)
    print(filename)

A: A simple yet Python 2 & Python 3 compatible way comes with the six library:

from six.moves import urllib
urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")

A: Following are the most commonly used calls for downloading files in Python:

* urllib.urlretrieve('url_to_file', file_name)
* urllib2.urlopen('url_to_file')
* requests.get(url)
* wget.download('url', file_name)

Note: urlopen and urlretrieve are found to perform relatively badly when downloading large files (size > 500 MB). requests.get stores the file in memory until the download is complete.

A: I wrote the wget library in pure Python just for this purpose. It is urlretrieve pumped up with these features as of version 2.0.

A: In Python 3 you can use the urllib3 and shutil libraries. Download them by using pip or pip3 (depending on whether python3 is the default or not):

pip3 install urllib3 shutil

Then run this code:

import urllib.request
import shutil

url = "http://www.somewebsite.com/something.pdf"
output_file = "save_this_name.pdf"
with urllib.request.urlopen(url) as response, open(output_file, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)

Note that you download urllib3 but use urllib in the code.

A: import urllib2
mp3file = urllib2.urlopen("http://www.example.com/songs/mp3.mp3")
with open('test.mp3', 'wb') as output:
    output.write(mp3file.read())

The wb in open('test.mp3','wb') opens a file (and erases any existing file) in binary mode so you can save data with it instead of just text.
A: Python 3

* urllib.request.urlopen

import urllib.request
response = urllib.request.urlopen('http://www.example.com/')
html = response.read()

* urllib.request.urlretrieve

import urllib.request
urllib.request.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')

Note: According to the documentation, urllib.request.urlretrieve is a "legacy interface" and "might become deprecated in the future" (thanks gerrit)

Python 2

* urllib2.urlopen (thanks Corey)

import urllib2
response = urllib2.urlopen('http://www.example.com/')
html = response.read()

* urllib.urlretrieve (thanks PabloG)

import urllib
urllib.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')

A: I agree with Corey, urllib2 is more complete than urllib and should likely be the module used if you want to do more complex things, but to make the answers more complete, urllib is a simpler module if you want just the basics:

import urllib
response = urllib.urlopen('http://www.example.com/sound.mp3')
mp3 = response.read()

Will work fine. Or, if you don't want to deal with the "response" object you can call read() directly:

import urllib
mp3 = urllib.urlopen('http://www.example.com/sound.mp3').read()

A: One more, using urlretrieve:

import urllib.request
urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")

(for Python 2 use import urllib and urllib.urlretrieve)

A: If you have wget installed, you can use parallel_sync.

pip install parallel_sync

from parallel_sync import wget
urls = ['http://something.png', 'http://somthing.tar.gz', 'http://somthing.zip']
wget.download('/tmp', urls)
# or a single file:
wget.download('/tmp', urls[0], filenames='x.zip', extract=True)

Doc: https://pythonhosted.org/parallel_sync/pages/examples.html

This is pretty powerful. It can download files in parallel, retry upon failure, and it can even download files on a remote machine.

A: You can use Python requests:

import os
import requests

outfile = os.path.join(SAVE_DIR, file_name)
response = requests.get(URL, stream=True)
with open(outfile, 'wb') as output:
    output.write(response.content)

You can use shutil:

import os
import requests
import shutil

outfile = os.path.join(SAVE_DIR, file_name)
response = requests.get(url, stream=True)
with open(outfile, 'wb') as f:
    shutil.copyfileobj(response.raw, f)

* If you are downloading from a restricted URL, don't forget to include the access token in the headers

A: I wanted to download all the files from a webpage. I tried wget but it was failing, so I decided on the Python route and found this thread.
After reading it, I have made a little command line application, soupget, expanding on the excellent answers of PabloG and Stan and adding some useful options. It uses BeautifulSoup to collect all the URLs of the page and then download the ones with the desired extension(s). Finally it can download multiple files in parallel. Here it is:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import (division, absolute_import, print_function, unicode_literals)
import sys, os, argparse
from bs4 import BeautifulSoup

# --- insert Stan's script here ---
# if sys.version_info >= (3,):
#...
#...
# def download_file(url, dest=None):
#...
#...
# --- new stuff ---
def collect_all_url(page_url, extensions):
    """
    Recovers all links in page_url checking for all the desired extensions
    """
    conn = urllib2.urlopen(page_url)
    html = conn.read()
    soup = BeautifulSoup(html, 'lxml')
    links = soup.find_all('a')

    results = []
    for tag in links:
        link = tag.get('href', None)
        if link is not None:
            for e in extensions:
                if e in link:
                    # Fallback for badly defined links
                    # checks for missing scheme or netloc
                    if bool(urlparse.urlparse(link).scheme) and bool(urlparse.urlparse(link).netloc):
                        results.append(link)
                    else:
                        new_url = urlparse.urljoin(page_url, link)
                        results.append(new_url)
    return results

if __name__ == "__main__":  # Only run if this file is called directly
    # Command line arguments
    parser = argparse.ArgumentParser(
        description='Download all files from a webpage.')
    parser.add_argument(
        '-u', '--url',
        help='Page url to request')
    parser.add_argument(
        '-e', '--ext', nargs='+',
        help='Extension(s) to find')
    parser.add_argument(
        '-d', '--dest', default=None,
        help='Destination where to save the files')
    parser.add_argument(
        '-p', '--par', action='store_true', default=False,
        help="Turns on parallel download")
    args = parser.parse_args()

    # Recover files to download
    all_links = collect_all_url(args.url, args.ext)

    # Download
    if not args.par:
        for l in all_links:
            try:
                filename = download_file(l, args.dest)
                print(l)
            except Exception as e:
                print("Error while downloading: {}".format(e))
    else:
        from multiprocessing.pool import ThreadPool
        results = ThreadPool(10).imap_unordered(
            lambda x: download_file(x, args.dest), all_links)
        for p in results:
            print(p)

An example of its usage is:

python3 soupget.py -p -e <list of extensions> -d <destination_folder> -u <target_webpage>

And an actual example if you want to see it in action:

python3 soupget.py -p -e .xlsx .pdf .csv -u https://healthdata.gov/dataset/chemicals-cosmetics

A: Another possibility is with built-in http.client:

from http import HTTPStatus, client
from shutil import copyfileobj

# using https
connection = client.HTTPSConnection("www.example.com")
connection.request("GET", "/noise.mp3")
response = connection.getresponse()
if response.status == HTTPStatus.OK:
    with open("noise.mp3", "wb") as out_file:
        copyfileobj(response, out_file)
else:
    raise Exception("request needs work")

The HTTPConnection object is considered “low-level” in that it performs the desired request once and assumes the developer will subclass it or script in a way to handle the nuances of HTTP. Libraries such as requests tend to handle more special cases such as automatically following redirects and so on.

A: You can use keras.utils.get_file to do it:

from tensorflow import keras

path_to_downloaded_file = keras.utils.get_file(
    fname="file name",
    origin="https://www.linktofile.com/link/to/file",
    extract=True,
    archive_format="zip",  # downloaded file format
    cache_dir="/",  # cache and extract in current directory
)

A: Another way is to call an external process such as curl.exe. Curl by default displays a progress bar, average download speed, time left, and more all formatted neatly in a table.
Put curl.exe in the same directory as your script

from subprocess import call
url = ""
call(["curl", url, '--output', "song.mp3"])

Note: --output also accepts a path, or you can do an os.rename afterwards to move the file
{ "language": "en", "url": "https://stackoverflow.com/questions/22676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1092" }
Q: Alternative SSH Application to Plink

I have recently started having problems with TortoiseCVS, or more specifically with plink, the SSH application that comes with it. The IP address it tries to connect to cannot be changed and is stuck with the old CVS repository's IP. Downloading plink from its home site and calling it from the command line still has this problem.
TortoiseCVS has the option to choose the SSH application it uses and I was wondering which other alternatives there are that I can use instead?

A: Are you sure this is a problem with plink? It sounds to me like you have CVS/Root files lying around that still point to the old cvs ip address. In general, CVS doesn't make changing repositories into a fun process.
Since you are using Windows, if you install WinCVS with macros support (Python module loaded) it has a macro that can be used to mass change CVS roots. Otherwise, it's up to you to script the process.
FWIW, I've used plink quite a bit and never had a similar problem.

A: PuTTY is probably the best SSH client out there:
http://www.chiark.greenend.org.uk/~sgtatham/putty/

A: I'd recommend you stick with PuTTY too. You might find it useful to run Pageant in conjunction with Plink to avoid having to type in the passphrase.
But if you want to research alternatives you should review this Wikipedia resource:
http://en.wikipedia.org/wiki/Comparison_of_SSH_clients

A: Thanks to jsight (and Mark Biek for pointing out the connection between plink and putty) I decided to investigate more fully. It turned out that plink had been using the "Default Settings" stored Session that I set up for putty and wasn't allowing them to be overridden.
edit:
The Geek: Also, this is a good example of why you should always, always use DNS/hostnames instead of the IP address directly.
The problem was nothing to do with the IP address change, and in this case the DNS changed as well. I can see your point, but this isn't the 'good example' you are looking for.

A: It might be worth trying Tunnelier from www.bitvise.com

A: For what it's worth, plink is just a command-line version of putty written by the same guy. I think jsight probably has the right idea.

A: TortoiseSVN, at least, has an option called Relocate which you can use if the location of the repository has changed.
Also, this is a good example of why you should always, always use DNS/hostnames instead of the IP address directly.

A: I'm using TortoiseCVS 1.10.9 on Vista Business, and ext connections to my server were regularly crashing TortoisePlink. I downloaded the latest PuTTY (0.60) and set TortoiseCVS to point to the plink included with this PuTTY (CVS->Preferences->Tools). The command line options appear to be the same, but one difference is that TortoisePlink pops up a password dialog if you don't have a keypair for your server. Regular plink does not. So you have to either create the keypair (puttygen, I believe) or specify -pw in the command line options (very BAD security idea).

A: +1 for PuTTY... been using it for the last decade and never needed anything else!
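A footnote on the accepted discovery: plink takes the host and connection options directly on the command line, which bypasses whatever is stored in the "Default Settings" session. A hedged example (the host name, user, and key path are hypothetical):

plink -ssh -batch -i C:\keys\cvs.ppk cvsuser@cvs.example.com

Here -ssh forces the protocol, -batch disables interactive prompts (useful when TortoiseCVS drives plink unattended), and -i points at a PuTTY-format private key, so Pageant or a key file replaces the insecure -pw option.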
{ "language": "en", "url": "https://stackoverflow.com/questions/22687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Use of 3rd party libraries/components in production

When using 3rd party libraries/components in production projects, are you rigorous about using only released versions of said libraries?
When do you consider using a pre-release or beta version of a library (in dev? in production, under certain circumstances)?
If you come across a bug or shortcoming of the library and you're already committed to using it, do you apply a patch to the library or create a workaround in your code?

A: I am a big fan of not coding something when someone else has a version that I could not code in a reasonable amount of time or would require me to become an expert on something that wouldn't matter in the long run.
There are several open source components and libraries I have used in our production environment such as Quartz.NET, Log4Net, nLog, SharpFTPLibrary (heavily modified) and more.
Quartz.NET was in beta when I first released an application using it into production. It was a very stable beta, and I had the source code so I could debug any issues, and there were a few. When I encountered a bug or an error I would fix it and post the issue to the bug tracker or author. I feel very comfortable using a beta product if the source is available for me to debug any issues or there is a strong following of developers hammering out any issues.

A: I've used beta libraries in commercial projects before but mostly during development and when the vendor is likely to release a final version before I finish the product.
For example, I developed a small desktop application using Visual Studio 2005 Beta 2 because I knew that the RTM version would be available before the final release of my app. Also I used a beta version of the FirebirdSQL ADO.NET Driver during development of another project.
For bugs I try to post complete bug reports whenever there's a way to reproduce them, but most of the time you have to find a workaround to release the application ASAP.

A: * Yes. Unless there's a feature we really need in a beta version.
* There's no point using a beta version in dev if you aren't certain you'll use it in production. That just seems like a wasted exercise
* I'll use the patch. Why write code for something you've paid for?

A: There's no point using a beta version in dev if you aren't certain you'll use it in production. That just seems like a wasted exercise

Good point, I was also considering the scenario of evaluating the pre-release version in dev, but I suppose that taints the dev -> test/qa -> prod path.

I'll use the patch. Why write code for something you've paid for?

What if it's not a commercial library, but an open source one? What if the patch to be applied is not from the releasing entity (e.g. your own patch)?

A: I use:

* Infragistics (.NET WinForms controls)
* LeadTools (video capture)
* Xtreme ToolkitPro (MFC controls)
* National Instruments Measurement Studio (computational libraries, plotting, and DAQ)

I've found significant bugs in every one of these, so I try to limit their use as much as possible. Infragistics is pretty good for what it is, and National Instruments is by far the best, although quite limited. I would avoid LeadTools at all cost.
{ "language": "en", "url": "https://stackoverflow.com/questions/22694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to filter by 2 fields when loading data into an access database table from an excel spreadsheet

OK, here is my problem, without being too specific for reasons of being fired for putting company practices on the internet.
There are spreadsheets made. These are uploaded to the database. I need to filter out duplicates from uploading. The only way to do this is by making sure that, for each and every entry, two fields aren't the same as an entry already in the database. Just because one field is the same does not mean it's a duplicate. There are two specific fields, let's call them FLDA and FLDB, that both must match up to an entry already in the database. I can filter by one field already. I'm thinking this has to be a subquery but I'm not sure how to apply it.
This is hard to describe. Just ask if you're not sure what I mean.

A: I had a similar problem. My solution was to:

* import into a staging-table.
* delete the duplicates
* copy what's left over into the live table

It's a little BFI, but it just plain works.

A: Would a query suit? For example:

INSERT INTO ToUpdate ( Field1, Field2 )
SELECT e.H1, e.H2
FROM (SELECT * FROM [Sheet1$] IN '' [Excel 8.0;HDR=YES;IMEX=1;database=C:\Docs\LTD.xls]) As e
LEFT JOIN ToUpdate ON (e.H2 = ToUpdate.Field2) AND (e.H1 = ToUpdate.Field1)
WHERE ToUpdate.Field1 Is Null AND ToUpdate.Field2 Is Null

A: How are you loading them into the database? Is this with your own code to read the Excel files?
You can read the Excel files using ADO/ADO.NET with the right connection string. Then you could read the data using a query that would weed out the dupes.

A: Building on CodeSlave's answer, Access provides a Find Duplicates Query wizard that can help you easily build the query to weed out the duplicates.
Another approach would be to set up a unique key on FLDA and FLDB (together). This will prevent duplicate entries from even getting written to the table. Of course you will also need to catch the exception that is thrown when the insert operation fails.

A: Is there a field FLDC that would be different for identifying duplicates? I presume there must be, as otherwise having (FLDA,FLDB) as a unique or primary key would solve your problem immediately.
Assuming there is such a field, you could try something like this:

SELECT T1.FLDA, T1.FLDB, T1.FLDC
FROM Table1 T1, Table1 T2
WHERE T1.FLDA = T2.FLDA
AND T1.FLDB = T2.FLDB
AND T1.FLDC <> T2.FLDC

The downside here is that both the original and the duplicate will be returned by something like this. If you only want to see the duplicates, you will probably have to figure out a way to identify an 'original' row and add another WHERE clause or two for this.
If you can get a query that gives you just the duplicate rows and not the originals, it should be pretty easy to change it to a DELETE query.

A: To avoid duplicates on imports:
1 - If there's not already a primary key on the table, put one on FLDA and FLDB (together). If there's already a primary key that's not FLDA and FLDB (together), place an index on the table on these two fields, unique yes, ignore nulls no.
2 - You can import from the spreadsheet to the table with the wizard or with a query. If you do it with the import spreadsheet wizard, you'll see this message before the import starts:
"DB name was unable to append all the data to the table.
"The contents of fields in 0 records were deleted and (xx) records were lost due to key violations. (These lost records were duplicates, so no real loss there.) ... Do you want to proceed anyway?"
Click yes to import the rows from the spreadsheet. No duplicates will be imported.
Or, to use a query for the import, paste this into a new query in SQL view (menu: Insert > Query > Design View, Close button; menu: View > SQL View.)

INSERT INTO tblInput
SELECT XLS.*
FROM tblInput AS T
RIGHT JOIN [Excel 8.0;IMEX=1;HDR=Yes;DATABASE=c:\data.xls;].[Sheet1$] AS XLS
  ON T.FLDA = XLS.FLDA AND T.FLDB = XLS.FLDB
WHERE ISNULL(T.FLDA) AND ISNULL(T.FLDB);

Change the path, c:\data.xls, to your path, Sheet1$ to your sheet name, tblInput to your table name, and FLDA and FLDB to your column names. If the spreadsheet doesn't have headers (column names), change HDR=Yes to HDR=No.

A: I did this by using a delete query and then a SELECT FROM Table1 GROUP BY X HAVING Y, Z, A. Then I put a Run Query button in the front end for users. Cheers for all your help.
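For completeness, CodeSlave's staging-table route can be expressed in two queries. A minimal sketch, where tblStaging is a hypothetical staging table loaded from the spreadsheet and tblLive is the destination; FLDA and FLDB are the two match fields from the question:

-- 1. Remove staging rows that already exist in the live table
DELETE FROM tblStaging
WHERE EXISTS (
    SELECT 1 FROM tblLive
    WHERE tblLive.FLDA = tblStaging.FLDA
      AND tblLive.FLDB = tblStaging.FLDB
);

-- 2. Copy what's left over into the live table
INSERT INTO tblLive (FLDA, FLDB)
SELECT FLDA, FLDB FROM tblStaging;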
{ "language": "en", "url": "https://stackoverflow.com/questions/22696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What's the best mock framework for Java?

What's the best framework for creating mock objects in Java? Why? What are the pros and cons of each framework?

A: I am the creator of PowerMock so obviously I must recommend that! :-)
PowerMock extends both EasyMock and Mockito with the ability to mock static methods, final and even private methods. The EasyMock support is complete, but the Mockito plugin needs some more work. We are planning to add JMock support as well.
PowerMock is not intended to replace other frameworks; rather it can be used in the tricky situations when other frameworks don't allow mocking. PowerMock also contains other useful features such as suppressing static initializers and constructors.

A: I used JMock early. I've tried Mockito at my last project and liked it. More concise, cleaner. PowerMock covers all needs which are absent in Mockito, such as mocking static code, mocking instance creation, and mocking final classes and methods. So I have all I need to perform my work.

A: I like JMock because you are able to set up expectations. This is totally different from the check-whether-a-method-was-called approach found in some mock libraries. Using JMock you can write very sophisticated expectations. See the jmock cheat-sheet.

A: Yes, Mockito is a great framework. I use it together with hamcrest and Google guice to setup my tests.

A: The JMockit project site contains plenty of comparative information for current mocking toolkits.
In particular, check out the feature comparison matrix, which covers EasyMock, jMock, Mockito, Unitils Mock, PowerMock, and of course JMockit. I try to keep it accurate and up-to-date, as much as possible.

A: The best solution to mocking is to have the machine do all the work with automated specification-based testing. For Java, see ScalaCheck and the Reductio framework included in the Functional Java library. With automated specification-based testing frameworks, you supply a specification of the method under test (a property about it that should be true) and the framework generates tests as well as mock objects, automatically.
For example, the following property tests the Math.sqrt method to see if the square root of any positive number n squared is equal to n.

val propSqrt = forAll { (n: Int) => (n >= 0) ==> scala.Math.sqrt(n*n) == n }

When you call propSqrt.check(), ScalaCheck generates hundreds of integers and checks your property for each, also automatically making sure that the edge cases are covered well.
Even though ScalaCheck is written in Scala, and requires the Scala Compiler, it's easy to test Java code with it. The Reductio framework in Functional Java is a pure Java implementation of the same concepts.

A: I've had good success using Mockito.
When I tried learning about JMock and EasyMock, I found the learning curve to be a bit steep (though maybe that's just me).
I like Mockito because of its simple and clean syntax that I was able to grasp pretty quickly. The minimal syntax is designed to support the common cases very well, although the few times I needed to do something more complicated I found what I wanted was supported and easy to grasp.
Here's an (abridged) example from the Mockito homepage:

import static org.mockito.Mockito.*;

List mockedList = mock(List.class);
mockedList.clear();
verify(mockedList).clear();

It doesn't get much simpler than that. The only major downside I can think of is that it won't mock static methods.
A: Mockito also provides the option of stubbing methods, matching arguments (like anyInt() and anyString()), verifying the number of invocations (times(3), atLeastOnce(), never()), and more.
I've also found that Mockito is simple and clean.
One thing I don't like about Mockito is that you can't stub static methods.

A: I've been having success with JMockit.
It's pretty new, and so it's a bit raw and under-documented. It uses ASM to dynamically redefine the class bytecode, so it can mock out all methods including static, private, constructors, and static initializers. For example:

import mockit.Mockit;
...
Mockit.redefineMethods(MyClassWithStaticInit.class, MyReplacementClass.class);
...
class MyReplacementClass {
  public void $init() {...} // replace default constructor
  public static void $clinit() {...} // replace static initializer
  public static void myStatic() {...} // replace static method
  // etc...
}

It has an Expectations interface allowing record/playback scenarios as well:

import mockit.Expectations;
import org.testng.annotations.Test;

public class ExpecationsTest {
  private MyClass obj;

  @Test
  public void testFoo() {
    new Expectations(true) {
      MyClass c;
      {
        obj = c;
        invokeReturning(c.getFoo("foo", false), "bas");
      }
    };
    assert "bas".equals(obj.getFoo("foo", false));
    Expectations.assertSatisfied();
  }

  public static class MyClass {
    public String getFoo(String str, boolean bool) {
      if (bool) {
        return "foo";
      } else {
        return "bar";
      }
    }
  }
}

The downside is that it requires Java 5/6.

A: For something a little different, you could use JRuby and Mocha, which are combined in JtestR, to write tests for your Java code in expressive and succinct Ruby. There are some useful mocking examples with JtestR here. One advantage of this approach is that mocking concrete classes is very straightforward.

A: You could also have a look at testing using Groovy. In Groovy you can easily mock Java interfaces using the 'as' operator:

def request = [isUserInRole: { roleName -> roleName == "testRole"}] as HttpServletRequest

Apart from this basic functionality Groovy offers a lot more on the mocking front, including the powerful MockFor and StubFor classes.
http://docs.codehaus.org/display/GROOVY/Groovy+Mocks

A: I started using mocks with EasyMock. Easy enough to understand, but the replay step was kinda annoying. Mockito removes this and also has a cleaner syntax, as it looks like readability was one of its primary goals. I cannot stress enough how important this is, since most developers will spend their time reading and maintaining existing code, not creating it.
Another nice thing is that interfaces and implementation classes are handled in the same way, unlike in EasyMock where you still need to remember (and check) to use the EasyMock Class Extension.
I've taken a quick look at JMockit recently, and while the laundry list of features is pretty comprehensive, I think the price of this is the legibility of the resulting code, and having to write more.
For me, Mockito hits the sweet spot, being easy to write and read, and dealing with the majority of situations most code will require. Using Mockito with PowerMock would be my choice.
One thing to consider is that the tool you would choose if you were developing by yourself, or in a small tight-knit team, might not be the best to get for a large company with developers of varying skill levels. Readability, ease of use and simplicity would need more consideration in the latter case.
No sense in getting the ultimate mocking framework if a lot of people end up not using it or not maintaining the tests.

A: We are heavily using EasyMock and EasyMock Class Extension at work and are pretty happy with it. It basically gives you everything you need. Take a look at the documentation, there's a very nice example which shows you all the features of EasyMock.

A: I started using mocks through JMock, but eventually transitioned to use EasyMock. EasyMock was just that, --easier-- and provided a syntax that felt more natural. I haven't switched since.
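For readers who haven't seen EasyMock's record/replay flow that the last two answers refer to, here is a minimal sketch; the List example is hypothetical, not taken from the EasyMock docs:

import static org.easymock.EasyMock.*;
import java.util.List;

public class EasyMockSketch {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Record phase: declare the calls we expect and their results.
        List<String> mock = createMock(List.class);
        expect(mock.get(0)).andReturn("hello");

        // Switch the mock from record mode to replay mode.
        replay(mock);

        // Exercise the "code under test".
        System.out.println(mock.get(0)); // prints "hello"

        // Verify that every recorded expectation was actually met.
        verify(mock);
    }
}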
{ "language": "en", "url": "https://stackoverflow.com/questions/22697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "347" }
Q: What strategies have you employed to improve web application performance?

* Any personal experience in overcoming web application performance hurdles?
* Any recommended strategies for improving the performance of a data-driven web application?

My development team works on a web application (JSP reports, HTML, JavaScript) that uses an Oracle database (PL/SQL). The key functionality the application delivers is in reporting, where a user can get PDFs of reports at a high level and drill down to lower levels of supporting details.
As the number of supporting detail records has grown into the millions, the performance of the system has significantly degraded. Based on our current analysis of the metrics, the bottleneck seems to be in the logic hitting the DB and the DB performance. Changing the DB model and re-doing some of the server side logic is currently being explored.
Partitioning, indexing, explain plans, and running statistics are things that have been done on the DB side to try to help improve performance. While they've helped, they haven't solved the issue satisfactorily. The toughest part in analyzing performance data is that the database and web servers are remotely administered by a different part of the IT organization, so the developers don't have regular, full access to see what's going on (especially in the production environment, which is not mirrored exactly in any other development/testing environment).

A: While my answer may not contain any concrete steps to help, this is always where I start.
First thing I would do is try to throw away all of your assumptions about what the trouble is and take steps to install metrics everywhere you can. Let the metrics guide you rather than your intuition. I've chased many, many, many white rabbits going on a hunch... they let me down more times than they've been right.

A: Have you considered building your data ahead of time? In other words, are there groups of data that are requested again and again? If so, have them ready before the user asks. I'm not exactly talking about caching, but I think that is part of the equation.
It might be worth it to take a step back from the code and examine the usage patterns of the system. For example, if you are showing people monthly inventory or sales information, do they look at it only at the end of the month? If so, just build the data on the last day and store it. If they look at it daily, maybe try building each previous day's results, storing them, and avoiding the calculation. I guess ultimately I am pushing you into a dynamic programming solution; if you know an answer, don't solve it again.

A: Have you checked this out?
Best practices for making web pages fast from Yahoo!'s Exceptional Performance team
If you really are having trouble at the backend, this won't help. But we used their advice to great effect to make our site faster, and there is still more to do.
Also use the YSlow add-on for Firebug. You may be surprised when you see where the actual time is being taken up.

A: As Webjedi says, metrics are your friend.
Also look at your stack and see where there are opportunities for caching - then employ mercilessly wherever possible!

A: As I said in another question:
Use a profiler. Yes they cost money, and using them can occasionally be a bit awkward, but they do provide you with a great deal more real evidence rather than guesswork.
Human beings are universally bad at guessing where performance bottlenecks are. It just seems to be something our brains aren't built to do very well.
It may seem obvious, and you may have great ideas about what the problem is, but the real world often turns out to be doing something different. And optimising the wrong part of code means, at best, lots of work for minimal benefit. More often it makes things slower, and sometimes it breaks things entirely. So before you make any changes for the sake of optimisation, you should always have real evidence from a profiler or other accurate tool.

A: Not all profilers cost (extra) money. For .Net, I'm successfully using an old build of NProf (currently abandoned but it still works for me) for profiling my ASP.Net applications. For SQL Server, the query profiler is part of the package. There's also the CLR Profiler from MS but I've never been able to get it to work successfully.
That being said, profilers are definitely the way to go. That way you can see where your program is spending most of its time, and not focus on things that you think are slow. Plus it means you don't have to write anything in your code to actually record the metrics.
As I hinted at the beginning, there are different types of profilers. The three I find most useful are application profilers, which let you see which functions you actually spend most of your time in; SQL profilers, which let you see how long your queries take to run; and memory profilers, which help to show you what types of objects your memory is being used up by. All three of these are really useful, and although you won't use them every day, the times you do use them will save you a lot of headache.
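Tying the earlier build-ahead suggestion back to the Oracle database from the question: a hedged sketch of pre-computing a monthly rollup with a materialized view, so report queries read stored results instead of re-aggregating millions of detail rows. All table and column names here are hypothetical:

-- Pre-aggregate the report detail once, refreshed nightly
CREATE MATERIALIZED VIEW mv_report_monthly
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1
AS
SELECT report_id,
       TRUNC(detail_date, 'MM') AS report_month,
       SUM(amount)              AS total_amount,
       COUNT(*)                 AS detail_rows
FROM   report_detail
GROUP  BY report_id, TRUNC(detail_date, 'MM');

The high-level PDF can then be driven from mv_report_monthly, with the drill-down hitting the detail table only when a user actually asks for it.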
{ "language": "en", "url": "https://stackoverflow.com/questions/22704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I find the Excel column name that corresponds to a given integer?

How would you determine the column name (e.g. "AQ" or "BH") of the nth column in Excel?
Edit: A language-agnostic algorithm to determine this is the main goal here.

A: Thanks, Joseph Sturtevant! Your code works perfectly - I needed it in VBScript, so figured I'd share my version:

Function ColumnLetter(ByVal intColumnNumber)
    Dim sResult
    intColumnNumber = intColumnNumber - 1
    If (intColumnNumber >= 0 And intColumnNumber < 26) Then
        sResult = Chr(65 + intColumnNumber)
    ElseIf (intColumnNumber >= 26) Then
        sResult = ColumnLetter(CLng(intColumnNumber \ 26)) _
                & ColumnLetter(CLng(intColumnNumber Mod 26 + 1))
    Else
        err.Raise 8, "Column()", "Invalid Column #" & CStr(intColumnNumber + 1)
    End If
    ColumnLetter = sResult
End Function

A: Joseph's code is good but, if you don't want or need to use a VBA function, try this. Assuming that the value of n is in cell A2, use this function:

=MID(ADDRESS(1,A2),2,LEN(ADDRESS(1,A2))-3)

A: I once wrote this function to perform that exact task:

public static string Column(int column)
{
    column--;
    if (column >= 0 && column < 26)
        return ((char)('A' + column)).ToString();
    else if (column > 25)
        return Column(column / 26) + Column(column % 26 + 1);
    else
        throw new Exception("Invalid Column #" + (column + 1).ToString());
}

A: IF(COLUMN()>=26,CHAR(ROUND(COLUMN()/26,1)+64)&CHAR(MOD(COLUMN(),26)+64),CHAR(COLUMN()+64))

This works for 2-letter columns (up until column ZZ). You'd have to nest another if statement for 3-letter columns.
The formula above fails on columns AY, AZ and each of the following nY and nZ columns. The corrected formula is:

=IF(COLUMN()>26,CHAR(ROUNDDOWN((COLUMN()-1)/26,0)+64)&CHAR(MOD((COLUMN()-1),26)+65),CHAR(COLUMN()+64))

A: From wcm:
If you don't want to use VBA, you can use this (replace colnr with the number you want):

=MID(ADDRESS(1,colnr),2,LEN(ADDRESS(1,colnr))-3)

Please be aware of the fact that this formula is volatile because of the usage of the ADDRESS function. Volatile functions are functions that are recalculated by Excel after EVERY change. Normally Excel recalculates formulas only when their dependent references change. Using this formula could be a performance killer.

A: Ruby one-liner:

def column_name_for(some_int)
  some_int.to_s(26).split('').map {|c| (c.to_i(26) + 64).chr }.join # 703 => "AAA"
end

It converts the integer to base26 then splits it and does some math to convert each character from ascii. Finally joins 'em all back together. No division, modulus, or recursion. Fun.

A: Here is the cleanest correct solution I could come up with (in Java, but feel free to use your favorite language):

String getNthColumnName(int n) {
    String name = "";
    while (n > 0) {
        n--;
        name = (char)('A' + n%26) + name;
        n /= 26;
    }
    return name;
}

But please do let me know if you find a mistake in this code, thank you.

A: And here is a conversion from the VBScript version to SQL Server 2000+.

CREATE FUNCTION [dbo].[GetExcelColRef]
(
    @col_seq_no int
)
RETURNS varchar(5)
AS
BEGIN
    declare @Result varchar(5)
    set @Result = ''
    set @col_seq_no = @col_seq_no - 1
    If (@col_seq_no >= 0 And @col_seq_no < 26)
    BEGIN
        set @Result = char(65 + @col_seq_no)
    END
    ELSE
    BEGIN
        set @Result = [dbo].[GetExcelColRef] (@col_seq_no / 26) + '' + [dbo].[GetExcelColRef] ((@col_seq_no % 26) + 1)
    END
    Return @Result
END
GO

A: This works fine in MS Excel 2003-2010.
Should work for previous versions supporting the Cells(...).Address function:

* For the 28th column - taking columnNumber=28; Cells(1, columnNumber).Address returns "$AB$1".
* Doing a split on the $ sign returns the array: ["","AB","1"]
* So Split(Cells(1, columnNumber).Address, "$")(1) gives you the column name "AB".

UPDATE: Taken from How to convert Excel column numbers into alphabetical characters

' The following VBA function is just one way to convert column number
' values into their equivalent alphabetical characters:

Function ConvertToLetter(iCol As Integer) As String
   Dim iAlpha As Integer
   Dim iRemainder As Integer
   iAlpha = Int(iCol / 27)
   iRemainder = iCol - (iAlpha * 26)
   If iAlpha > 0 Then
      ConvertToLetter = Chr(iAlpha + 64)
   End If
   If iRemainder > 0 Then
      ConvertToLetter = ConvertToLetter & Chr(iRemainder + 64)
   End If
End Function

APPLIES TO: Microsoft Office Excel 2007 SE / 2002 SE / 2000 SE / 97 SE

A: A language agnostic algorithm would be as follows:

function getNthColumnName(int n) {
    let result = ""
    while n > 0 {
        set n = n - 1                       // make the current digit 0-based
        let result = char('A' + n mod 26) + result
        set n = n / 26                      // integer division
    }
    return result
}

This algorithm also takes into account if Excel gets upgraded again to handle more than 16k columns. If you really wanted to go overboard, you could pass in an additional value and replace the instances of 26 with another number to accommodate alternate alphabets.

A: I suppose you need VBA code:

Public Function GetColumnAddress(nCol As Integer) As String
    Dim r As Range
    Set r = Range("A1").Columns(nCol)
    GetColumnAddress = r.Address
End Function

A: This does what you want in VBA

Function GetNthExcelColName(n As Integer) As String
    Dim s As String
    s = Cells(1, n).Address
    GetNthExcelColName = Mid(s, 2, InStr(2, s, "$") - 2)
End Function

A: This seems to work in vb.net

Public Function Column(ByVal pColumn As Integer) As String
    pColumn -= 1
    If pColumn >= 0 AndAlso pColumn < 26 Then
        Return ChrW(Asc("A"c) + pColumn).ToString
    ElseIf (pColumn > 25) Then
        Return Column(CInt(Math.Floor(pColumn / 26))) + Column((pColumn Mod 26) + 1)
    Else
        Throw New ArgumentException("Invalid column #" + (pColumn + 1).ToString)
    End If
End Function

I took Joseph's and tested it to BH, then fed it 980-1000 and it looked good.

A: In VBA, assuming lCol is the column number:

Function ColNum2Letter(lCol As Long) As String
    ColNum2Letter = Split(Cells(1, lCol).Address, "$")(1)
End Function

A: All these code samples that these good people have posted look fine. There is one thing to be aware of. Starting with Office 2007, Excel actually has up to 16,384 columns. That translates to XFD (the old max of 256 columns was IV). You will have to modify these methods somewhat to make them work for three characters.
Shouldn't be that hard...
A: Here's Gary Waters' solution

Function ConvertNumberToColumnLetter2(ByVal colNum As Long) As String
    Dim i As Long, x As Long
    For i = 6 To 0 Step -1
        x = (1 - 26 ^ (i + 1)) / (-25) - 1 ' Geometric Series formula
        If colNum > x Then
            ConvertNumberToColumnLetter2 = ConvertNumberToColumnLetter2 & Chr(((colNum - x - 1) \ 26 ^ i) Mod 26 + 65)
        End If
    Next i
End Function

via http://www.dailydoseofexcel.com/archives/2004/05/21/column-numbers-to-letters/

A: Considering the comment of wcm (top value = xfd), you can calculate it like this:

function IntToExcel(n: Integer): string;
begin
    Result := '';
    for i := 2 downto 0 do
    begin
        if ((n div 26^i) > 0) or (i = 0) then
            Result := Result + Char(Ord('A') + (n div (26^i)) - IIF(i > 0; 1; 0));
        n := n mod (26^i);
    end;
end;

There are 26 characters in the alphabet and we have a number system just like hex or binary, just with an unusual character set (A..Z), representing positionally the powers of 26: (26^2)(26^1)(26^0).

A: FYI T-SQL to give the Excel column name given an ordinal (zero-based), as a single statement. Anything below 0 or above 16,383 (max columns in Excel2010) returns NULL.

; WITH TestData AS ( -- Major change points
    SELECT -1 AS FieldOrdinal
    UNION ALL SELECT 0
    UNION ALL SELECT 25
    UNION ALL SELECT 26
    UNION ALL SELECT 701
    UNION ALL SELECT 702
    UNION ALL SELECT 703
    UNION ALL SELECT 16383
    UNION ALL SELECT 16384
)
SELECT FieldOrdinal
     , CASE WHEN FieldOrdinal < 0     THEN NULL
            WHEN FieldOrdinal < 26    THEN ''
            WHEN FieldOrdinal < 702   THEN CHAR (65 + FieldOrdinal / 26 - 1)
            WHEN FieldOrdinal < 16384 THEN CHAR (65 + FieldOrdinal / 676 - 1)
                                         + CHAR (65 + (FieldOrdinal / 26) - (FieldOrdinal / 676) * 26 - 1)
            ELSE NULL
       END
     + CHAR (65 + FieldOrdinal % 26)
FROM TestData
ORDER BY FieldOrdinal

A: =CHAR(64+COLUMN())

A: I currently use this, but I have a feeling that it can be optimized.

private String GetNthExcelColName(int n)
{
    String firstLetter = "";
    // if the number is under 26, it has a single-letter name;
    // otherwise, it starts with 'A' for 27-52, 'B' for 53-78, etc
    if (n > 26)
    {
        // the Converts to double and back to int are just so Floor() can be used
        Double value = Convert.ToDouble((n - 1) / 26);
        int firstLetterValue = Convert.ToInt32(Math.Floor(value)) - 1;
        firstLetter = Convert.ToChar(firstLetterValue + 65).ToString();
    }

    // the second letter repeats
    int secondLetterValue = (n - 1) % 26;
    String secondLetter = Convert.ToChar(secondLetterValue + 65).ToString();

    return firstLetter + secondLetter;
}
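For completeness, here is the same bijective base-26 conversion as a compact Python sketch (not taken from any answer above), with the inverse included so round-trips can be checked:

def column_name(n: int) -> str:
    """1 -> 'A', 26 -> 'Z', 27 -> 'AA', 703 -> 'AAA'."""
    name = ""
    while n > 0:
        # shift to 0-based before taking the digit, since there is no zero letter
        n, rem = divmod(n - 1, 26)
        name = chr(ord("A") + rem) + name
    return name

def column_number(name: str) -> int:
    """Inverse: 'A' -> 1, 'AA' -> 27, 'AAA' -> 703."""
    n = 0
    for ch in name:
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

assert column_name(703) == "AAA" and column_number("AAA") == 703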
{ "language": "en", "url": "https://stackoverflow.com/questions/22708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Configure a Java Socket to fail-fast on disconnect?

I have a listening port on my server that I'm connecting to using a Java class and the Socket interface, i.e.

Socket mySocket = new Socket(host, port);

I then grab an OutputStream, decorate with a PrintWriter in autoflush mode and I'm laughing - except if the listening port closes. Then I get

tcp4       0      0  *.9999            *.*               LISTEN
tcp        0      0  127.0.0.1.45737   127.0.0.1.9999    CLOSE_WAIT

and I can't seem to detect the problem in the program - I've tried using the isConnected() method on the socket but it doesn't seem to know that the connection is closed.
I want to be aware of the problem the next time I try and write to the Socket so that I can try and reconnect and report the issue.
Any advice please?
Thanks all

A: Set a short timeout?
Does isOutputShutdown() not get you what you want?
You could always build a SocketWatcher class that spins up in its own Thread and repeatedly tries to write empty strings to the Socket until that raises a SocketException.

A: The only reliable way to detect a broken connection in TCP is to write to it, which will eventually cause a 'connection reset' IOException. However, due to buffering, it won't happen on the first write after the disconnection but on a subsequent write. You can't do anything about this.

A: Set a different thread to reading from the socket. It will block until the socket is closed, and then an exception will be thrown. Catch that exception to detect the close immediately.
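A minimal sketch of that reader-thread idea; mySocket is the socket from the question and the handling is hypothetical. Note that the read consumes one byte of application data if the server ever sends any, so this fits best on connections where the peer is not expected to talk back:

import java.io.IOException;
import java.net.Socket;

public class SocketWatcher {
    public static void watch(Socket mySocket) {
        // A blocking read returns -1 on an orderly close and throws an
        // IOException on a reset; either way the disconnect is noticed
        // immediately instead of on some later write.
        Thread watcher = new Thread(() -> {
            try {
                int b = mySocket.getInputStream().read();
                if (b == -1) {
                    System.out.println("Peer closed the connection");
                }
            } catch (IOException e) {
                System.out.println("Connection broken: " + e.getMessage());
            }
        });
        watcher.setDaemon(true);
        watcher.start();
    }
}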
{ "language": "en", "url": "https://stackoverflow.com/questions/22720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I pass multiple string parameters to a PowerShell script?

I am trying to do some string concatenation/formatting, but it's putting all the parameters into the first placeholder.
Code

function CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)
{
  # Command to create an IIS application pool
  $AppPoolScript = "cscript adsutil.vbs CREATE ""w3svc/AppPools/$AppPoolName"" IIsApplicationPool`n"
  $AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserName"" ""$AppPoolUser""`n"
  $AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/WamUserPass"" ""$AppPoolPass""`n"
  $AppPoolScript += "cscript adsutil.vbs SET ""w3svc/AppPools/$AppPoolName/AppPoolIdentityType"" 3"

  return $AppPoolScript
}

$s = CreateAppPoolScript("name", "user", "pass")
write-host $s

Output

cscript adsutil.vbs CREATE "w3svc/AppPools/name user pass" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserName" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/WamUserPass" ""
cscript adsutil.vbs SET "w3svc/AppPools/name user pass/AppPoolIdentityType" 3

A: By the way, using a PowerShell here-string might make your function a little easier to read as well, since you won't need to double up all the "-marks:

function CreateAppPoolScript([string]$AppPoolName, [string]$AppPoolUser, [string]$AppPoolPass)
{
  # Command to create an IIS application pool
  return @"
cscript adsutil.vbs CREATE "w3svc/AppPools/$AppPoolName" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/WamUserName" "$AppPoolUser"
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/WamUserPass" "$AppPoolPass"
cscript adsutil.vbs SET "w3svc/AppPools/$AppPoolName/AppPoolIdentityType" 3
"@
}

A: Lose the parentheses and commas. Calling your function as:

$s = CreateAppPoolScript "name" "user" "pass"

gives:

cscript adsutil.vbs CREATE "w3svc/AppPools/name" IIsApplicationPool
cscript adsutil.vbs SET "w3svc/AppPools/name/WamUserName" "user"
cscript adsutil.vbs SET "w3svc/AppPools/name/WamUserPass" "pass"
cscript adsutil.vbs SET "w3svc/AppPools/name/AppPoolIdentityType" 3

A: Paul's right. In PowerShell, function parameters are not enclosed in parentheses. (Method parameters still are.)
Your initial call was just passing one big array to the function, rather than the three separate parameters you wanted.
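A quick way to see this difference is a throwaway function that echoes what each parameter received; a minimal sketch:

function Show-Args([string]$a, [string]$b, [string]$c) {
    "a='$a' b='$b' c='$c'"
}

Show-Args("name", "user", "pass")  # one array argument: a='name user pass' b='' c=''
Show-Args "name" "user" "pass"     # three arguments:    a='name' b='user' c='pass'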
{ "language": "en", "url": "https://stackoverflow.com/questions/22732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: How does Ruby 1.9 handle character cases in source code?

In Ruby 1.8 and earlier,

Foo

is a constant (a Class, a Module, or another constant). Whereas

foo

is a variable. The key difference is as follows:

module Foo
  bar = 7
  BAZ = 8
end

Foo::BAZ # => 8
Foo::bar # NoMethodError: undefined method 'bar' for Foo:Module

That's all well and good, but Ruby 1.9 allows UTF-8 source code. So is ℃ "uppercase" or "lowercase" as far as this is concerned? What about ⊂ (strict subset) or Ɖfoo?
Is there a general rule?
Later:
Ruby-core is already considering some of the mathematical operators. For example

module Kernel
  def √(num)
    ...
  end
  def ∑(*args)
    ...
  end
end

would allow

x = √2
y = ∑(1, 45, ...)

I would love to see

my_proc = λ { |...| ... }

x ∈ my_enumerable # same as my_enumerable.include?(x)

my_infinite_range = (1..∞)

return 'foo' if x ≠ y

2.21 ≈ 2.2

A: OK, my joking answer didn't go down so well.
This mailing list question, with an answer from Matz, indicates that Ruby 1.9's built-in String#upcase and String#downcase methods will only handle ASCII characters.
Without testing it myself, I would see this as strong evidence that all non-ASCII characters in source code will likely be considered lowercase. Can someone download and compile the latest 1.9 and see?

A: I don't know what ruby would do if you used extended UTF-8 characters as identifiers in your source code, but I know what I would do, which would be to slap you upside the back of the head and tell you DON'T DO THAT

A: I would love to see

my_proc = λ { |...| ... }

x ∈ my_enumerable # same as my_enumerable.include?(x)

my_infinite_range = (1..∞)

return 'foo' if x ≠ y

2.21 ≈ 2.2

I would love to see someone trying to type that program on an English keyboard :P

A: In Ruby 1.9.2-p0 (YARV) the result is the same as in the original post (i.e., Foo::bar # NoMethodError: undefined method 'bar' for Foo:Module). Also, accented letters are unfortunately considered neither upper nor lower case, and the related methods produce no change. Examples:

"á".upcase
=> "á"
"á" == "Á".downcase
=> false

A: I can't get IRB to accept UTF-8 characters, so I used a test script (/tmp/utf_test.rb).
"λ" works fine as a variable name:

# encoding: UTF-8
λ = 'foo'
puts λ

# from the command line:
> ruby -KU /tmp/utf_test.rb
foo

"λ" also works fine as a method name:

# encoding: UTF-8
Kernel.class_eval do
  alias_method :λ, :lambda
end

(λ { puts 'hi' }).call

# from the command line:
> ruby -KU /tmp/utf_test.rb:
hi

It doesn't work as a constant, though:

# encoding: UTF-8
Object.const_set :λ, 'bar'

# from the command line:
> ruby -KU /tmp/utf_test.rb:
utf_test.rb:2:in `const_set': wrong constant name λ (NameError)

Nor does the capitalized version:

# encoding: UTF-8
Object.const_set :Λ, 'bar'

# from the command line:
> ruby -KU /tmp/utf_test.rb:
utf_test.rb:2:in `const_set': wrong constant name Λ (NameError)

My suspicion is that constant names must start with a capital ASCII letter (must match /^[A-Z]/).
{ "language": "en", "url": "https://stackoverflow.com/questions/22764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is GDI+ actually still a "usable" technology?

I just wonder a bit whether or not GDI+ is still a technology worth using, especially in a .net sense.
Granted, GDI+ is still THE technology to handle Images in Windows, but it is also unmanaged code (obviously). Now, after discovering that GDI+ is actually not supported on ASP.net¹, I just wonder: Is it actually feasible to have Image Manipulation Libraries that run completely in Managed Code? XNA does it if I remember correctly, albeit it uses the graphics card for it.
Is there maybe even any .net Image Library that implements a sort of managed GDI+?
¹ Source, also Microsoft just offered something for ASP.net that uses GDI+.

A: System.Drawing is built on top of GDI+. It's just a wrapper.
http://msdn.microsoft.com/en-us/library/system.drawing.aspx

A: It's still a technology worth using. There are lots of Windows Forms and unmanaged apps around that use GDI+ that either won't be upgraded, or that will be upgraded, but that don't need more advanced rendering capabilities. GDI+ is a good bolt-on solution for older applications, and for new applications written in Windows Forms. That's the primary reason GDI+ wasn't axed in Vista in favour of a totally DirectX solution.
There's not specifically anything wrong with GDI/GDI+. True, it's not as advanced as Aero et al, but that doesn't always matter. Particularly in LOB applications (in companies that probably don't even have machines capable of running Vista - mine certainly doesn't), GDI+ is an extremely important technology. The fact that it's not supported (for drawing, at least... you CAN still use it for image manipulation) in ASP.NET is a red herring, since other drawing technologies are not supported for web applications either (plugin-based "applications" notwithstanding).

A: You can use AntiGrain instead of GDI+. As an example of an application that uses this library, see Creative Docs .NET.
SDL can also be pretty suitable for some types of applications.

A: Usable? Well, yes. To the extent that it ever was. It's always been horribly slow, text rendering has always been broken, and it's been apparent for some time now that it won't be The Next Official Graphics Layer for Windows.
If you can't live with that, then there are plenty of other graphics libraries out there, faster and/or higher quality / fewer system dependencies... although I'm not aware of any implemented in managed code.

A: "there are plenty of other graphics libraries out there, faster and/or higher quality / fewer system dependencies..."
Could you list some of those libraries that could be used instead of GDI+ with C++?
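Going back to the System.Drawing point above: for .NET code, the managed wrapper is usually all that is needed for the image-manipulation scenarios being discussed. A minimal C# sketch of the typical GDI+-backed operation (the file names are hypothetical):

using System.Drawing;
using System.Drawing.Imaging;

class Thumbnailer
{
    static void Main()
    {
        // Load a source image, scale it down, and save the result.
        using (var source = new Bitmap("photo.jpg"))
        using (var thumb = new Bitmap(160, 120))
        using (var g = Graphics.FromImage(thumb))
        {
            g.DrawImage(source, 0, 0, thumb.Width, thumb.Height);
            thumb.Save("thumb.png", ImageFormat.Png);
        }
    }
}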
{ "language": "en", "url": "https://stackoverflow.com/questions/22779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there an open source SQL Server DB compare tool?

I'm working on an open source project that uses SQL Server 2005 as the data store. We need a DB compare tool to generate diff scripts to be able to upgrade a DB from one version to another.
Is there an open source or free SQL Server DB diff tool out there that generates a convert script?

A: I'd recommend spending some cash and getting Red Gate's SQL Compare tool which does an excellent job, and can even compare databases to Visual Studio database projects to generate upgrade scripts. It's fast and easy to use, and works well. The upgrade scripts are also of decent quality.
It's not that expensive. Probably less expensive than your time. Just think about how much your hourly rate is, and how many hours it might take to investigate an open-source tool and get it working (and how many you have already spent), then multiply them together. That's how much a 'free' tool is really costing you, which is often significantly more than a commercial tool.

A: It's not open source, but is free (as in beer): Sql Effects Accord (aka Clarity) Community Edition

A: AdeptSQL Diff and DataDiff are wonderful products, much cheaper than Red Gate's and with a much simpler UI, and I have yet to run into a scenario they cannot handle.

A: Aloha
You might want to try SqlDbDiff. It can generate change scripts. The free edition does a good enough job.

A: I think that Open DBiff does a good job. It's simple and it works with SQL Server 2005/2008. But it only generates the change script. Nothing more and nothing less.

A: On CodePlex I noticed yesterday DbDiff (http://www.codeplex.com/OpenDBiff), which you could try. It supports SQL 2005 and 2008; I did not try it.

A: Anyone try xSQL Bundle (xSQL Data Compare and xSQL Object Compare)? Our place only uses it for DB diffs, no syncs, so I can't say for syncing, but the diff and reports are not bad.
Also, OpenDBDiff has a spin-off; not sure which is better: http://code.google.com/p/sql-dbdiff/
Anyone know if any of the free/open source DB diff tools mentioned here offer a scriptable/command-line interface to automate the diffs and syncing? I looked into the xSQL tools; they offer command-line access but, unfortunately, no scriptable command to export diff results to a (report) file.

A: While it's not exactly what you want, I found this for Postgres:
http://mbk.projects.postgresql.org/
It doesn't generate a diff to apply, but rather allows you to merge a full dump of the new version of the table with the previous version.

A: Hmm, none that I know of. You can always retrieve the definitions as SQL and then run a diff tool on them, but it's a bit of a pain in the rear.
Probably the best solution for this is using some kind of "Migrations" tool, so you can keep your database definitions together with your code, and version them, etc.

A: Update
On Sourceforge I found Whiz SQL Structure Compare with this description:
Whiz is a database diff utility which will be useful to find difference between two MS-SQL Server databases. It also able to generate SQL script to update the changes from one database to another database.
However, I've been unsuccessful in getting it to work so far...

A: We have both SQL Delta and SQL Compare. Each has strengths, but each also has weaknesses that make them quite a pain.
SQL Delta will miss some triggers in its comparison, and it will take actions not found in the action list, and it will sometimes take actions you did not want it to take. That was discovered at quite a cost in time.
SQL Compare will catch the triggers, but they are embedded within the table listings. On a large database, that means going through each table and sifting them out - something the tool should have isolated for us. Again, quite a cost in time.

A: It is a little late, but I just released a really simple project on CodePlex:
http://dbcompare.codeplex.com
Enter (or build) two connection strings and it will compare all Tables, Views and Stored Procedures.
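For anyone going the roll-your-own route suggested earlier (retrieve the definitions as SQL and diff them), here is a hedged T-SQL sketch for SQL Server 2005+; run it against each database version, save both result sets to files, and compare them with any text diff tool. It covers procedures, views, functions, and triggers; tables would still need to be scripted separately:

-- Dump the definition of every module-based object
SELECT o.type_desc,
       s.name + '.' + o.name AS object_name,
       m.definition
FROM   sys.sql_modules AS m
JOIN   sys.objects     AS o ON o.object_id = m.object_id
JOIN   sys.schemas     AS s ON s.schema_id = o.schema_id
ORDER  BY o.type_desc, object_name;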
{ "language": "en", "url": "https://stackoverflow.com/questions/22792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Drag and Drop an Email from MS Outlook to Eclipse RCP

Has anyone got a working solution without some Java/COM-bridge? E.g. process the email as a file (.msg) rather than locating the data that is referenced in the Clipboard?

A: I did make some headway on this sort of thing a few years back using Apache POI to extract the contents of an email from .msg files. I'm pretty sure they have a simple Swing explorer/viewer that you can use to examine the structure within the compound document format, but I can't find it right now. I was able to extract most information that I was interested in, but I ultimately wanted to create a MIME-format version of the message and couldn't extract all the information I needed in a format I could use.

A: Maybe this is a solution for your problem:
http://sourceforge.net/projects/javaoutlookdd/
It allows you to handle Outlook items like File objects during drag&drop.

A: I assume that you've already ruled out the tools in "org.eclipse.swt.dnd" for some reason? There are some examples here on how to go about using them, in case you haven't.
If what you really want to do is drag&drop, you're going to have to do some work with those tools. At that point, really the question becomes: what format is it in on the clipboard vs in a file, and which is easier to integrate into your app.
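If the org.eclipse.swt.dnd route is still open, here is a minimal sketch of a drop target that receives file paths. Whether Outlook actually hands over .msg paths through FileTransfer depends on the platform and Outlook version, so treat that part as an assumption:

import org.eclipse.swt.dnd.*;
import org.eclipse.swt.widgets.Control;

public class MsgDropSupport {
    public static void install(Control control) {
        DropTarget target = new DropTarget(control, DND.DROP_COPY | DND.DROP_MOVE);
        target.setTransfer(new Transfer[] { FileTransfer.getInstance() });
        target.addDropListener(new DropTargetAdapter() {
            @Override
            public void drop(DropTargetEvent event) {
                // FileTransfer delivers an array of absolute file paths
                String[] paths = (String[]) event.data;
                if (paths != null) {
                    for (String path : paths) {
                        System.out.println("Dropped: " + path); // e.g. a .msg file
                    }
                }
            }
        });
    }
}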
{ "language": "en", "url": "https://stackoverflow.com/questions/22798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: The difference between loops It's about PHP, but I've no doubt many of the same comments will apply to other languages. Simply put, what are the differences between the different types of loop in PHP? Is one faster/better than the others, or should I simply put in the most readable loop? for ($i = 0; $i < 10; $i++) { # code... } foreach ($array as $index => $value) { # code... } do { # code... } while ($flag == false); A: for and while loops are entry-condition loops. They evaluate the condition first, so the statement block associated with the loop won't run even once if the condition is not met. The statements inside this for loop block will run 10 times, with the value of $i going from 0 to 9: for ($i = 0; $i < 10; $i++) { # code... } The same thing done with a while loop: $i = 0; while ($i < 10) { # code... $i++ } A do-while loop is an exit-condition loop. It's guaranteed to execute once, then it will evaluate the condition before repeating the block do { # code... } while ($flag == false); foreach is used to access array elements from start to end. At the beginning of the foreach loop, the internal pointer of the array is set to the first element of the array; in the next step it is set to the 2nd element of the array, and so on till the array ends. In the loop block, the value of the current array item is available as $value and the key of the current item is available as $index. foreach ($array as $index => $value) { # code... } You could do the same thing with a while loop, like this: while (current($array)) { $index = key($array); // to get key of the current element $value = $array[$index]; // to get value of current element # code ... next($array); // advance the internal array pointer of $array } And lastly: The PHP Manual is your friend :) A: This is CS101, but since no one else has mentioned it: while loops evaluate their condition before the code block, and do-while evaluates after the code block, so do-while loops are always guaranteed to run their code block at least once, regardless of the condition. A: PHP Benchmarks A: @brendan: The article you cited is seriously outdated and the information is just plain wrong. Especially the last point (use for instead of foreach) is misleading, and the justification offered in the article no longer applies to modern versions of .NET. While it's true that the IEnumerator uses virtual calls, these can actually be inlined by a modern compiler. Furthermore, .NET now knows generics and strongly typed enumerators. There are a lot of performance tests out there that prove conclusively that for is generally no faster than foreach. Here's an example. A: I use the first loop when iterating over a conventional (indexed?) array and the foreach loop when dealing with an associative array. It just seems natural and helps the code flow and be more readable, in my opinion. As for do...while loops, I use those when I have to do more than just flip through an array. I'm not sure of any performance benefits, though. A: Performance is not significantly better in either case. While is useful for more complex tasks than iterating, but for and while are functionally equivalent. Foreach is nice, but has one important caveat: you can't modify the enumerable you're iterating. So no removing, adding or replacing entries to/in it. Modifying entries (like changing their properties) is OK, of course. A: With a foreach loop, a copy of the original array is made in memory to use inside. You shouldn't use them on large structures; a simple for loop is a better choice.
You can use a while loop more efficiently on a large non-numerically indexed structure like this: while(list($key, $value) = each($array)) { But that approach is particularly ugly for a simple small structure. while loops are better suited for looping through streams, or as in the following example that you see very frequently in PHP: while ($row = mysql_fetch_array($result)) { Almost all of the time the different loops are interchangeable, and it will come down to either a) efficiency, or b) clarity. If you know the efficiency trade-offs of the different types of loops, then yes, to answer your original question: use the one that looks the most clean. A: In regards to performance, a foreach is more expensive than a for http://forums.asp.net/p/1041090/1457897.aspx A: Each looping construct serves a different purpose. for - This is used to loop for a specific number of iterations. foreach - This is used to loop through all of the values in a collection. while - This is used to loop until you meet a condition. Of the three, "while" will most likely provide the best performance in most situations. Of course, if you do something like the following, you are basically rewriting the "for" loop (which in c# is slightly more performant). $count = 0; do { ... $count++; } while ($count < 10); They all have different basic purposes, but they can also be used in somewhat the same way. It completely depends on the specific problem that you are trying to solve. A: With a foreach loop, a copy of the original array is made in memory to use inside. Foreach is nice, but has one important caveat: you can't modify the enumerable you're iterating. Both of those won't be a problem if you pass by reference instead of value: foreach ($array as &$value) { I think this has been allowed since PHP 5. A: When accessing the elements of an array, for clarity I would use a foreach whenever possible, and only use a for if you need the actual index values (for example, the same index in multiple arrays). This also minimizes the chance for typo mistakes, since for loops make these all too easy. In general, PHP might not be the place to be worrying too much about performance. And last but not least, for and foreach have (or should have; I'm not a PHP-er) the same Big-O time (O(n)), so you are looking possibly at a small amount more of memory usage or a slight constant or linear hit in time.
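One caveat worth spelling out for the by-reference foreach mentioned above: the reference variable survives the loop, so it is good practice to unset it afterwards. A small self-contained sketch:

$array = array(1, 2, 3);

foreach ($array as &$value) {
    $value *= 2;   // modifies the original array in place
}
unset($value);     // break the reference; otherwise a later loop using
                   // $value can silently overwrite the last element

print_r($array);   // Array ( [0] => 2 [1] => 4 [2] => 6 )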
{ "language": "en", "url": "https://stackoverflow.com/questions/22801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Comparing effective dates in SQL Wondering if there is a better way in the WHERE clause of choosing records when you need to look at effective start and end dates? Currently this is how I've done it in the past on MS SQL Server. I'm just worried about the date and not the time. I'm using SQL Server 2005. AND Convert(datetime, Convert(char(10), ep.EffectiveStartDate, 101)) <= Convert(datetime, Convert(char(10), GetDate(), 101)) AND Convert(datetime, Convert(char(10), ep.EffectiveEndDate, 101)) >= Convert(datetime, Convert(char(10), GetDate(), 101)) A: That is terrible; take a look at Only In A Database Can You Get 1000% + Improvement By Changing A Few Lines Of Code to see how you can optimize this, since that is not sargable. Also check out Get Datetime Without Time and Query Optimizations With Dates A: @Darren Kopp - you can use set @date2 = '20201001' this will let you lose the cast. footndale - you can use date arithmetic to remove the time as well. Something like select dateadd(d, datediff(d, 0, CURRENT_TIMESTAMP), 0) to get today's date (without the time). I believe this is more efficient than casting back and forth. A: @Darren Kopp Be careful with BETWEEN; check out How Does Between Work With Dates In SQL Server? A: AND DateDiff(Day, 0, GetDate()) + 1 > ep.EffectiveStartDate AND DateDiff(Day, 0, GetDate()) < ep.EffectiveEndDate I think you will find that these conditions offer the best performance possible. This will happily utilize indexes. I am also very sure that this is right and will give the right data. No further calculation of dates without time portions is needed. A: try ep.EffectiveStartDate BETWEEN @date1 AND @date2 where you would do something like declare @date1 datetime, @date2 datetime; set @date1 = cast('10/1/2000' as datetime) set @date2 = cast('10/1/2020' as datetime)
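Pulling the advice above together, a common sargable pattern is to compare the raw columns against day boundaries instead of converting the columns themselves, so any indexes on the date columns stay usable. A sketch, keeping the question's column names (the surrounding query is assumed):

DECLARE @today datetime
SET @today = DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0) -- midnight today

-- started on or before today, and has not ended before today
AND ep.EffectiveStartDate < DATEADD(day, 1, @today)
AND ep.EffectiveEndDate >= @today

Because the date functions are applied to GETDATE() rather than to ep's columns, the optimizer can still seek on indexes covering EffectiveStartDate/EffectiveEndDate.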
{ "language": "en", "url": "https://stackoverflow.com/questions/22807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to decode viewstate I need to see the contents of the viewstate of an asp.net page. I looked for a viewstate decoder and found Fridz Onion's ViewState Decoder, but it asks for the URL of a page to get its viewstate. Since my viewstate is formed after a postback and comes as a result of an operation in an update panel, I cannot provide a URL. I need to copy & paste the viewstate string and see what's inside. Is there a tool or website that can help with viewing the contents of the viewstate? A: As another person just mentioned, it's a base64 encoded string. In the past, I've used this website to decode it: http://www.motobit.com/util/base64-decoder-encoder.asp A: JavaScript-ViewState-Parser: * *http://mutantzombie.github.com/JavaScript-ViewState-Parser/ *https://github.com/mutantzombie/JavaScript-ViewState-Parser/ The parser should work with most non-encrypted ViewStates. It doesn’t handle the serialization format used by .NET version 1 because that version is sorely outdated and therefore too unlikely to be encountered in any real situation. http://deadliestwebattacks.com/2011/05/29/javascript-viewstate-parser/ Parsing .NET ViewState * *A Spirited Peek into ViewState, Part I: http://deadliestwebattacks.com/2011/05/13/a-spirited-peek-into-viewstate-part-i/ *A Spirited Peek into ViewState, Part II: http://deadliestwebattacks.com/2011/05/25/a-spirited-peek-into-viewstate-part-ii/ A: Here's another decoder that works well as of 2014: http://viewstatedecoder.azurewebsites.net/ This worked on an input on which the Ignatu decoder failed with "The serialized data is invalid" (although it leaves the BinaryFormatter-serialized data undecoded, showing only its length). A: Here's an online ViewState decoder: http://ignatu.co.uk/ViewStateDecoder.aspx Edit: Unfortunately, the above link is dead - here's another ViewState decoder (from the comments): http://viewstatedecoder.azurewebsites.net/ A: Use Fiddler, grab the view state in the response, paste it into the bottom-left text box, then decode.
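If you'd rather decode it in code than through a website, here is a minimal sketch using the framework's own LosFormatter (the same class the longer answers below build on). It only works for ViewState that isn't encrypted, and it needs a reference to System.Web:

using System;
using System.Web.UI;

class ViewStateDump
{
    static void Main(string[] args)
    {
        // args[0] is the base64 ViewState string copied from the page source
        LosFormatter formatter = new LosFormatter();
        object root = formatter.Deserialize(args[0]);

        // The result is a graph of Pair/Triplet/ArrayList/primitive objects;
        // printing the root's type is the starting point for walking it.
        Console.WriteLine(root.GetType());
    }
}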
A: This is a somewhat "native" .NET way of converting ViewState from a string into a StateBag. The code is below: public static StateBag LoadViewState(string viewState) { System.Web.UI.Page converterPage = new System.Web.UI.Page(); HiddenFieldPageStatePersister persister = new HiddenFieldPageStatePersister(new Page()); Type utilClass = typeof(System.Web.UI.BaseParser).Assembly.GetType("System.Web.UI.Util"); if (utilClass != null && persister != null) { MethodInfo method = utilClass.GetMethod("DeserializeWithAssert", BindingFlags.NonPublic | BindingFlags.Static); if (method != null) { PropertyInfo formatterProperty = persister.GetType().GetProperty("StateFormatter", BindingFlags.NonPublic | BindingFlags.Instance); if (formatterProperty != null) { IStateFormatter formatter = (IStateFormatter)formatterProperty.GetValue(persister, null); if (formatter != null) { FieldInfo pageField = formatter.GetType().GetField("_page", BindingFlags.NonPublic | BindingFlags.Instance); if (pageField != null) { pageField.SetValue(formatter, null); try { Pair pair = (Pair)method.Invoke(null, new object[] { formatter, viewState }); if (pair != null) { MethodInfo loadViewState = converterPage.GetType().GetMethod("LoadViewStateRecursive", BindingFlags.Instance | BindingFlags.NonPublic); if (loadViewState != null) { FieldInfo postback = converterPage.GetType().GetField("_isCrossPagePostBack", BindingFlags.NonPublic | BindingFlags.Instance); if (postback != null) { postback.SetValue(converterPage, true); } FieldInfo namevalue = converterPage.GetType().GetField("_requestValueCollection", BindingFlags.NonPublic | BindingFlags.Instance); if (namevalue != null) { namevalue.SetValue(converterPage, new NameValueCollection()); } loadViewState.Invoke(converterPage, new object[] { ((Pair)((Pair)pair.First).Second) }); FieldInfo viewStateField = typeof(Control).GetField("_viewState", BindingFlags.NonPublic | BindingFlags.Instance); if (viewStateField != null) { return (StateBag)viewStateField.GetValue(converterPage); } } } } catch (Exception ex) { if (ex != null) { } } } } } } } return null; } A: You can ignore the URL field and simply paste the viewstate into the Viewstate string box. It does look like you have an old version; the serialisation methods changed in ASP.NET 2.0, so grab the 2.0 version A: The best way in Python is to use this link. A small Python 3.5+ library for decoding ASP.NET viewstate. First install it: pip install viewstate >>> from viewstate import ViewState >>> base64_encoded_viewstate = '/wEPBQVhYmNkZQ9nAgE=' >>> vs = ViewState(base64_encoded_viewstate) >>> vs.decode() ('abcde', (True, 1)) A: Here is the source code for a ViewState visualizer from Scott Mitchell's article on ViewState (25 pages) using System; using System.Collections; using System.Text; using System.IO; using System.Web.UI; namespace ViewStateArticle.ExtendedPageClasses { /// <summary> /// Parses the view state, constructing a visually-accessible object graph. /// </summary> public class ViewStateParser { // private member variables private TextWriter tw; private string indentString = " "; #region Constructor /// <summary> /// Creates a new ViewStateParser instance, specifying the TextWriter to emit the output to. /// </summary> public ViewStateParser(TextWriter writer) { tw = writer; } #endregion #region Methods #region ParseViewStateGraph Methods /// <summary> /// Emits a readable version of the view state to the TextWriter passed into the object's constructor. 
/// </summary> /// <param name="viewState">The view state object to start parsing at.</param> public virtual void ParseViewStateGraph(object viewState) { ParseViewStateGraph(viewState, 0, string.Empty); } /// <summary> /// Emits a readable version of the view state to the TextWriter passed into the object's constructor. /// </summary> /// <param name="viewStateAsString">A base-64 encoded representation of the view state to parse.</param> public virtual void ParseViewStateGraph(string viewStateAsString) { // First, deserialize the string into a Triplet LosFormatter los = new LosFormatter(); object viewState = los.Deserialize(viewStateAsString); ParseViewStateGraph(viewState, 0, string.Empty); } /// <summary> /// Recursively parses the view state. /// </summary> /// <param name="node">The current view state node.</param> /// <param name="depth">The "depth" of the view state tree.</param> /// <param name="label">A label to display in the emitted output next to the current node.</param> protected virtual void ParseViewStateGraph(object node, int depth, string label) { tw.Write(System.Environment.NewLine); if (node == null) { tw.Write(String.Concat(Indent(depth), label, "NODE IS NULL")); } else if (node is Triplet) { tw.Write(String.Concat(Indent(depth), label, "TRIPLET")); ParseViewStateGraph(((Triplet) node).First, depth+1, "First: "); ParseViewStateGraph(((Triplet) node).Second, depth+1, "Second: "); ParseViewStateGraph(((Triplet) node).Third, depth+1, "Third: "); } else if (node is Pair) { tw.Write(String.Concat(Indent(depth), label, "PAIR")); ParseViewStateGraph(((Pair) node).First, depth+1, "First: "); ParseViewStateGraph(((Pair) node).Second, depth+1, "Second: "); } else if (node is ArrayList) { tw.Write(String.Concat(Indent(depth), label, "ARRAYLIST")); // display array values for (int i = 0; i < ((ArrayList) node).Count; i++) ParseViewStateGraph(((ArrayList) node)[i], depth+1, String.Format("({0}) ", i)); } else if (node.GetType().IsArray) { tw.Write(String.Concat(Indent(depth), label, "ARRAY ")); tw.Write(String.Concat("(", node.GetType().ToString(), ")")); IEnumerator e = ((Array) node).GetEnumerator(); int count = 0; while (e.MoveNext()) ParseViewStateGraph(e.Current, depth+1, String.Format("({0}) ", count++)); } else if (node.GetType().IsPrimitive || node is string) { tw.Write(String.Concat(Indent(depth), label)); tw.Write(node.ToString() + " (" + node.GetType().ToString() + ")"); } else { tw.Write(String.Concat(Indent(depth), label, "OTHER - ")); tw.Write(node.GetType().ToString()); } } #endregion /// <summary> /// Returns a string containing the <see cref="IndentString"/> property value a specified number of times. /// </summary> /// <param name="depth">The number of times to repeat the <see cref="IndentString"/> property.</param> /// <returns>A string containing the <see cref="IndentString"/> property value a specified number of times.</returns> protected virtual string Indent(int depth) { StringBuilder sb = new StringBuilder(IndentString.Length * depth); for (int i = 0; i < depth; i++) sb.Append(IndentString); return sb.ToString(); } #endregion #region Properties /// <summary> /// Specifies the indentation to use for each level when displaying the object graph. 
/// </summary> /// <value>A string value; the default is three blank spaces.</value> public string IndentString { get { return indentString; } set { indentString = value; } } #endregion } } And here's a simple page to read the viewstate from a textbox and graph it using the above code private void btnParse_Click(object sender, System.EventArgs e) { // parse the viewState StringWriter writer = new StringWriter(); ViewStateParser p = new ViewStateParser(writer); p.ParseViewStateGraph(txtViewState.Text); ltlViewState.Text = writer.ToString(); } A: Online Viewstate Viewer made by Lachlan Keown: http://lachlankeown.blogspot.com/2008/05/online-viewstate-viewer-decoder.html A: Normally, ViewState should be decryptable if you have the machine-key, right? After all, ASP.net needs to decrypt it, and that is certainly not a black box.
{ "language": "en", "url": "https://stackoverflow.com/questions/22814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Libraries for pretty charts in SWT? I know the following libraries for drawing charts in an SWT/Eclipse RCP application: * *Eclipse BIRT Chart Engine (Links to an article on how to use it) *JFreeChart Which other libraries are there for drawing pretty charts with SWT? Or charts in Java generally? After all, you can always display an image... A: SWTChart gives good results for line, scatter, bar, and area charts. The API is straightforward and there are numerous examples on the website. I went from finding it on Google to viewing my data in less than an hour. SWTChart A: You might like this one too It has the ability to plot real-time data with your own data provider. A: The ones I've used are JChart2D and JFreeChart. I did a live plotter application over the summer and used JFreeChart for that. The guy who had started the project had used JChart2D, but I found that it doesn't have enough options for tweaking the chart look and feel. JChart2D is supposed to be very fast, so if you need to do live plotting have a look at it, although JFreeChart didn't have any problems doing a plot a few times per second. There is also quite a list of charting libraries on java2s.com A: I was also looking for a charting library for an Eclipse RCP app, stumbled on Caleb's post here and can definitely recommend SWTChart now myself. It is a lot faster than JFreeChart for me, plus easily extensible. If I really had to complain about something, I'd say the javadoc could be a bit more verbose, but this is just to say everything else is great. A: I have not used BIRT or JGraph; however, I use JFreeChart in my SWT application. I have found the best way to use JFreeChart in SWT is by making a composite an AWT frame and using the AWT functionality for JFreeChart. The way to do this is by creating a composite Composite comp = new Composite(parent, SWT.NONE | SWT.EMBEDDED); Frame frame = SWT_AWT.new_Frame(comp); JFreeChart chart = createChart(); ChartPanel chartPanel = new ChartPanel(chart); frame.add(chartPanel); There are several problems regarding implementations across different platforms, and the SWT code in it is very poor (in its defense, Mr. Gilbert does not know SWT well and it is made for AWT). My two biggest problems are that as AWT events bubble up through SWT, some erroneous events are fired, and that due to wrapping the AWT frame, JFreeChart becomes substantially slower. @zvikico The idea of putting the chart into a web page is probably not a great way to go. There are a few problems, the first being that how Eclipse handles integrating the web browser is inconsistent across platforms. Also, from my understanding of a few graphing packages for the web, they are server-side, requiring that setup; also many companies, including mine, use proxy servers, and sometimes this creates issues with the Eclipse web browsing. A: There’s also ILOG JViews Charts which looks pretty feature-complete… if you can afford it. Here is some additional info on using it with eclipse. A: I suggest you try jzy3d, a simple java library for plotting 3d data. It's for java, on AWT, Swing or SWT. A: After evaluating several options I decided to use a JavaScript library for showing plots in my Eclipse plugin. As zvikico already suggested, it is possible to show a html page in a browser. In the html page you can utilize one of the JavaScript libraries to do the actual plotting. If you use Chartist you can save the image as an SVG file from the context menu. 
Some JavaScript charting libraries: * *Chartist: http://gionkunz.github.io/chartist-js *D3js: http://d3js.org *Flot: http://www.flotcharts.org/ *Further JavaScript charting frameworks: https://en.wikipedia.org/wiki/Comparison_of_JavaScript_charting_frameworks Chartist Example image: Example java code: package org.treez.results.chartist; import java.net.URL; import javafx.application.Application; import javafx.concurrent.Worker; import javafx.geometry.HPos; import javafx.geometry.VPos; import javafx.scene.Scene; import javafx.scene.layout.Region; import javafx.scene.paint.Color; import javafx.scene.web.WebEngine; import javafx.scene.web.WebView; import javafx.stage.Stage; import netscape.javascript.JSObject; public class WebViewSample extends Application { private Scene scene; @Override public void start(Stage stage) { // create the scene stage.setTitle("Web View"); Browser browser = new Browser(); scene = new Scene(browser, 750, 500, Color.web("#666970")); stage.setScene(scene); stage.show(); } public static void main(String[] args) { launch(args); } } class Browser extends Region { final WebView browser = new WebView(); final WebEngine webEngine = browser.getEngine(); public Browser() { //add the web view to the scene getChildren().add(browser); //add finished listener webEngine.getLoadWorker().stateProperty().addListener((obs, oldState, newState) -> { if (newState == Worker.State.SUCCEEDED) { executeJavaScript(); } }); // load the web page URL url = WebViewSample.class.getResource("chartist.html"); String urlPath = url.toExternalForm(); webEngine.load(urlPath); } private void executeJavaScript() { String script = "var chartist = new Chartist.Line(" + "'#chart'," + " " + "{" + " labels: [1, 2, 3, 4, 5, 6, 7, 8]," + "series: [" + " [5, 9, 7, 8, 5, 3, 5, 44]" + "]" + "}, " + "" + "{" + " low: 0," + " showArea: true" + "}" + "" + ");" + " var get = function(){return chartist};"; webEngine.executeScript(script); Object resultJs = webEngine.executeScript("get()"); //get line JSObject line = (JSObject) resultJs; String getKeys = "{var keys = [];for (var key in this) {keys.push(key);} keys;}"; JSObject linekeys = (JSObject) line.eval(getKeys); JSObject options = (JSObject) line.eval("this.options"); JSObject optionkeys = (JSObject) options.eval(getKeys); options.eval("this.showLine=false"); } @Override protected void layoutChildren() { double w = getWidth(); double h = getHeight(); layoutInArea(browser, 0, 0, w, h, 0, HPos.CENTER, VPos.CENTER); } @Override protected double computePrefWidth(double height) { return 750; } @Override protected double computePrefHeight(double width) { return 500; } } Example html page: <!DOCTYPE html> <html> <head> <link rel="stylesheet" type="text/css" href="chartist.min.css"> </head> <body> <div class="ct-chart" id="chart"></div> <script type="text/javascript" src="chartist.js"></script> </body> </html> In order to get this working, chartist.js and chartist.min.css need to be downloaded and put at the same location as the html file. You could also include them from the web. See here for another example: https://www.snip2code.com/Snippet/233633/Chartist-js-example Edit I created a java wrapper for D3.js, see https://github.com/stefaneidelloth/javafx-d3 A: There's also JGraph, but I'm not sure if that's only for graphs (i.e. nodes and edges), or if it does charts also. A: Here's something different: it's very easy to embed web pages in SWT views. I recently tried it and it works very well. 
You can see where this is going: there are plenty of beautiful charting components for HTML; it could be an option. Just make sure the component is client-side only (unless you want to start a server). I haven't tested Flash, but I'm pretty sure you can get it to work (naturally, this means your software will require the Flash plug-in to be installed). A: JCharts is another option. It is similar to JFreeChart but the documentation is free. It does not have direct support for SWT, but you can always generate an image and embed it in an SWT frame.
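The "generate an image" route in the answer above is also easy with JFreeChart itself; a rough sketch (chart contents, file name and sizes are arbitrary):

import java.io.File;
import org.eclipse.swt.graphics.Image;
import org.eclipse.swt.widgets.Display;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

// Render a chart to a PNG on disk, then load it as an SWT Image
// that can be drawn on a Canvas or set on a Label.
void renderChartForSwt() throws java.io.IOException {
    DefaultPieDataset dataset = new DefaultPieDataset();
    dataset.setValue("A", 40);
    dataset.setValue("B", 60);
    JFreeChart chart = ChartFactory.createPieChart("Sketch", dataset, true, true, false);
    ChartUtilities.saveChartAsPNG(new File("chart.png"), chart, 400, 300);
    Image swtImage = new Image(Display.getDefault(), "chart.png");
    // ...hand swtImage to your widget; dispose it when done
}

This avoids the SWT_AWT bridge entirely, at the cost of losing interactivity (tooltips, zooming).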
{ "language": "en", "url": "https://stackoverflow.com/questions/22816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Where to put your code - Database vs. Application? I have been developing web/desktop applications for about 6 years now. During the course of my career, I have come across applications that were heavily written in the database using stored procedures, whereas a lot of applications just had only a few basic stored procedures (to read, insert, edit and delete entity records) for each entity. I have seen people argue that if you have paid for an enterprise database you should use its features extensively, whereas a lot of "object oriented architects" told me it's an absolute crime to put anything more than necessary in the database and you should be able to drive the application using the methods on those classes. Where do you think the balance is? Thanks, Krunal A: I think it's a business logic vs. data logic thing. If there is logic that ensures the consistency of your data, put it in a stored procedure. Same for convenience functions for data retrieval/update. Everything else should go into the code. A friend of mine is developing a host of stored procedures for data analysis algorithms in bioinformatics. I think his approach is quite interesting, but not the right way in the long run. My main objections are maintainability and lacking adaptability. A: I'm in the object oriented architects camp. It's not necessarily a crime to put code in the database, as long as you understand the caveats that go along with that. Here are some: * *It's not debuggable *It's not subject to source control *Permissions on your two sets of code will be different *It will make it more difficult to track where an error in the data came from if you're accessing info in the database from both places A: Anything that relates to Referential Integrity or Consistency should be in the database as a bare minimum. If it's in your application and someone wants to write an application against the database, they are going to have to duplicate your code in their code to ensure that the data remains consistent. PL/SQL for Oracle is a pretty good language for accessing the database, and it can also give performance improvements. Your application can also be much 'neater' as it can treat the database stored procedures as a 'black box'. The sprocs themselves can also be tuned and modified without you having to go near your compiled application; this is also useful if the supplier of your application has gone out of business or is unavailable. I'm not advocating 'everything' should be in the database, far from it. Treat each case separately and logically and you will see which makes more sense: put it in the app or put it in the database. A: I'm coming from almost the same background and have heard the same arguments. I do understand that there are very valid reasons to put logic into the database. However, which approach you should choose depends on the type of application and the way it handles data. In my experience, a typical data entry app like some customer (or xyz) management will massively benefit from using an ORM layer, as there are not so many different views of the data and you can reduce the boilerplate CRUD code to a minimum. On the other hand, assume you have an application with a lot of concurrency and calculations that span a lot of tables and that has a fine-grained column-level security concept with locking and so on; you're probably better off doing stuff like that directly in the database. As mentioned before, it also depends on the variety of views you anticipate for your data. 
If there are many different combinations of columns and tables that need to be presented to the user, you may also be better off just handing back different result sets rather than mapping your objects one-by-one to another representation. After all, the database is good at dealing with sets, whereas OO code is good at dealing with single entities. A: Reading these answers, I'm quite confused by the lack of understanding of database programming. I am an Oracle PL/SQL developer; we source control every bit of code that goes into the database. Many of the IDEs provide add-ins for most of the major source control products, from ClearCase to SourceSafe. The Oracle tools we use allow us to debug the code, so debugging isn't an issue. The issue is more one of logic and accessibility. As a manager of support for about 5000 users, the fewer places I have to look for the logic, the better. If I want to make sure the logic is applied for ALL applications that use the data, even business logic, I put it in the DB. If the logic is different depending on the application, they can be responsible for it. A: @DannySmurf: It's not debuggable Depending on your server, yes, they are debuggable. This provides an example for SQL Server 2000. I'm guessing the newer ones also have this. However, the free MySQL server does not have this (as far as I know). It's not subject to source control Yes, it is. Kind of. Database backups should include stored procedures. Those backup files might or might not be in your version control repository. But either way, you have backups of your stored procedures. A: My personal preference is to try and keep as much logic and configuration out of the database as possible. I am heavily dependent on Spring and Hibernate these days, so that makes it a lot easier. I tend to use Hibernate named queries instead of stored procedures, and the static configuration information goes in Spring application context XML files. Anything that needs to go into the database has to be loaded using a script, and I keep those scripts in version control. A: @Thomas Owens: (re source control) Yes, but that's not source control in the same sense that I can check in a .cs file (or .cpp file or whatever) and go and pick out any revision I want. To do that with database code requires a potentially significant amount of effort to either retrieve the procedure from the database and transfer it to somewhere in the source tree, or to do a database backup every time a minor change is made. In either case (and regardless of the amount of effort), it's not intuitive; and for many shops, it's not a good enough solution either. There is also the potential here for developers who may not be as studious as others to forget to retrieve and check in a revision. It's technically possible to put ANYTHING in source control; the disconnect here is what I would take issue with. (re debuggable) Fair enough, though that doesn't provide much integration with the rest of the application (where the majority of the code could live). That may or may not be important. A: Well, if you care about the consistency of your data, there are reasons to implement code within the database. As others have said, placing code (and/or RI/constraints) inside the database acts to enforce business logic, close to the data itself. And it provides a common, encapsulated interface, so that your new developer doesn't accidentally create orphan records or inconsistent data. A: Well, this one is difficult. 
As a programmer, you'll want to avoid TSQL and such "database languages" as much as possible, because they are horrendous, difficult to debug and not extensible, and there's nothing you can do with them that you won't be able to do using code in your application. The only reasons I see for writing stored procedures are: * *Your database isn't great (think how SQL Server doesn't implement LIMIT and you have to work around that using a procedure). *You want to be able to change a behaviour by changing code in just one place without re-deploying your client applications. *The client machines have big calculation-power constraints (think small embedded devices). For most applications, though, you should try to keep your code in the application, where you can debug it, keep it under version control and fix it using all the tools provided to you by your language.
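As a concrete footnote to the referential-integrity answer above: the "bare minimum" in the database doesn't even require stored procedures, since declarative constraints cover a lot of it. A small sketch with invented table and column names:

CREATE TABLE Orders (
    OrderId    int NOT NULL PRIMARY KEY,
    CustomerId int NOT NULL,
    Quantity   int NOT NULL,
    -- every order must point at an existing customer...
    CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerId) REFERENCES Customers (CustomerId),
    -- ...and simple consistency rules can be declared rather than coded
    CONSTRAINT CK_Orders_Quantity CHECK (Quantity > 0)
);

Any application that talks to this database, in any language, now gets these rules enforced for free.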
{ "language": "en", "url": "https://stackoverflow.com/questions/22818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I perform a Perl substitution on a string while keeping the original? In Perl, what is a good way to perform a replacement on a string using a regular expression and store the value in a different variable, without changing the original? I usually just copy the string to a new variable, then bind it to the s/// regex that does the replacement on the new string, but I was wondering if there is a better way to do this? $newstring = $oldstring; $newstring =~ s/foo/bar/g; A: The statement: (my $newstring = $oldstring) =~ s/foo/bar/g; is equivalent to: my $newstring = $oldstring; $newstring =~ s/foo/bar/g; Alternatively, as of Perl 5.13.2 you can use /r to do a non-destructive substitution: use 5.013; #... my $newstring = $oldstring =~ s/foo/bar/gr; A: This is the idiom I've always used to get a modified copy of a string without changing the original: (my $newstring = $oldstring) =~ s/foo/bar/g; In perl 5.14.0 or later, you can use the new /r non-destructive substitution modifier: my $newstring = $oldstring =~ s/foo/bar/gr; NOTE: The above solutions work without g too. They also work with any other modifiers. SEE ALSO: perldoc perlrequick: Perl regular expressions quick start A: Under use strict, say: (my $new = $original) =~ s/foo/bar/; instead. A: Another pre-5.14 solution: http://www.perlmonks.org/?node_id=346719 (see japhy's post) As his approach uses map, it also works well for arrays, but requires cascading map to produce a temporary array (otherwise the original would be modified): my @orig = ('this', 'this sucks', 'what is this?'); my @list = map { s/this/that/; $_ } map { $_ } @orig; # @orig unmodified A: The one-liner solution is more useful as a shibboleth than good code; good Perl coders will know it and understand it, but it's much less transparent and readable than the two-line copy-and-modify couplet you're starting with. In other words, a good way to do this is the way you're already doing it. Unnecessary concision at the cost of readability isn't a win. A: If you write Perl with use strict;, then you'll find that the one-line syntax isn't valid, even when declared. With: my ($newstring = $oldstring) =~ s/foo/bar/; You get: Can't declare scalar assignment in "my" at script.pl line 7, near ") =~" Execution of script.pl aborted due to compilation errors. Instead, the syntax that you have been using, while a line longer, is the syntactically correct way to do it with use strict;. For me, using use strict; is just a habit now. I do it automatically. Everyone should. #!/usr/bin/env perl -wT use strict; my $oldstring = "foo one foo two foo three"; my $newstring = $oldstring; $newstring =~ s/foo/bar/g; print "$oldstring","\n"; print "$newstring","\n"; A: I hate foo and bar... who dreamed up these non-descriptive terms in programming anyway? my $oldstring = "replace donotreplace replace donotreplace replace donotreplace"; my $newstring = $oldstring; $newstring =~ s/replace/newword/g; # inplace replacement print $newstring; %: newword donotreplace newword donotreplace newword donotreplace A: If I just use this in a one-liner, how about sprintf("%s", $oldstring)?
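Combining two of the answers above: on Perl 5.14 or later, the /r modifier makes the map-over-an-array version tidy, because the substitution returns the modified copy instead of touching $_:

use 5.014;

my @orig = ('this', 'this sucks', 'what is this?');
my @list = map { s/this/that/r } @orig;   # @orig is left unmodified

say for @list;   # that / that sucks / what is that?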
{ "language": "en", "url": "https://stackoverflow.com/questions/22836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "205" }
Q: ASP.NET - Building your own routing system In a recent project, I built my own MVC framework in PHP. One of the things I implemented was a routing system. I used Apache's mod_rewrite to send all requests to index.php, and then parsed the URI to extract information and route the request. I'm dabbling in ASP.NET now, and I'm wondering if/how I might perform something similar. Is there a way to route all requests (similar to the way WordPress does it) to one page where central route processing is performed? I'm aware of the MVC framework for ASP.NET, but I'd like to take a stab at this myself as I'm tinkering around and learning. EDIT: BTW, my hosting provider runs IIS 6 A: This is going to be a long answer, because I want to make sure you are fully aware of all the ways you can accomplish what you want to do. The routing engine that powers the ASP.NET MVC Framework will work with the traditional ASP.NET Framework. You can take advantage of the RouteTable and assign routes, just like you would in an ASP.NET MVC application. You just don't get the MVC portion in traditional ASP.NET sites. That was a huge enhancement for the ASP.NET Framework, and it was great to see them reuse that code and make it work in both frameworks. If you want to learn more about this, check out ScottGu's post and scroll down to URL Routing Improvements. Also, here is a reference on how to use System.Web.Routing in WebForms by Phil Haack. Now, if you still want to write your own, you will need to learn the ASP.NET HTTP pipeline and how to implement the IHttpModule and IHttpHandler interfaces to create your own HttpModule or HttpHandler class to handle your routing. These interfaces are the key to writing your own routing engine. To help put those interfaces in a working example, I couldn't recommend this MSDN article enough. It shows you how to use either interface and explains the differences when creating your own routing/URL rewriting engine. Now, if you find that this might be too much for you, there are third-party libraries from people who have already written a routing/URL rewriting engine in .NET. Here is a question that I saw not too long ago asking "What Url rewriter do you use for ASP.Net?" right here on SO.
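As a rough illustration of the IHttpModule route described in the answer, here is a minimal sketch; the /articles/ rule is invented for illustration, and a real engine would consult a route table rather than hard-coding a prefix:

using System;
using System.Web;

// A minimal URL-rewriting module: every request passes through
// BeginRequest and, if it matches, is rewritten to a central page.
public class SimpleRoutingModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;
        string path = app.Request.Path;

        // Hypothetical rule: /articles/42 -> /Article.aspx?id=42
        if (path.StartsWith("/articles/", StringComparison.OrdinalIgnoreCase))
        {
            string id = path.Substring("/articles/".Length);
            app.Context.RewritePath("/Article.aspx?id=" + id);
        }
    }

    public void Dispose() { }
}

The module still has to be registered in web.config (under httpModules on IIS 6), and on IIS 6 you'd also need a wildcard mapping so extensionless URLs reach ASP.NET at all.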
{ "language": "en", "url": "https://stackoverflow.com/questions/22869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Language Books/Tutorials for popular languages It wasn't that long ago that I was a beginning coder, trying to find good books/tutorials on languages I wanted to learn. Even still, there are times I need to pick up a language relatively quickly for a new project I am working on. The point of this post is to document some of the best tutorials and books for these languages. I will start the list with the best I can find, but hope you guys out there can help with better suggestions/new languages. Here is what I found: Since this is now wiki editable, I am giving control up to the community. If you have a suggestion, please put it in this section. I decided to also add a section for general "be a better programmer" books and online references as well. Once again, all recommendations are welcome. General Programming Online Tutorials Foundations of Programming By Karl Seguin - From CodeBetter, it's C#-based but the ideas ring true across the board; can't believe no one's posted this yet actually. How to Write Unmaintainable Code - An anti-manual that teaches you how to write code in the most unmaintainable way possible. It would be funny if a lot of these suggestions didn't ring so true. The Programming Section of Wiki Books - suggested by Jim Robert as having a large number of books/tutorials on multiple languages in various stages of completion Just the Basics To get a feel for a language. Books Code Complete - This book goes without saying; it is truly brilliant in too many ways to mention. The Pragmatic Programmer - The next best thing to working with a master coder, teaching you everything they know. Mastering Regular Expressions - Regular Expressions are an essential tool in every programmer's toolbox. This book, recommended by Patrick Lozzi, is a great way to learn what they are capable of. Algorithms in C, C++, and Java - A great way to learn all the classic algorithms if you find Knuth's books a bit too in-depth. C Online Tutorials This tutorial seems pretty concise and thorough; I looked over the material and it seems to be pretty good. Not sure how friendly it would be to new programmers though. Books K&R C - a classic for sure. It might be argued that all programmers should read it. C Primer Plus - Suggested by Imran as being the ultimate C book for beginning programmers. C: A Reference Manual - A great reference recommended by Patrick Lozzi. C++ Online Tutorials The tutorial on cplusplus.com seems to be the most complete. I found another tutorial here but it doesn't include topics like polymorphism, which I believe is essential. If you are coming from C, this tutorial might be the best for you. Another useful tutorial: C++ Annotations. In the Ubuntu family you can get the ebook in multiple formats (pdf, txt, PostScript, and LaTeX) by installing the c++-annotation package from Synaptic (the installed package can be found in /usr/share/doc/c++-annotation/). Books The C++ Programming Language - crucial for any C++ programmer. C++ Primer Plus - Originally added as a typo, but the Amazon reviews are so good, I am going to keep it here until someone says it is a dud. Effective C++ - Ways to improve your C++ programs. More Effective C++ - Continuation of Effective C++. Effective STL - Ways to improve your use of the STL. Thinking in C++ - Great book, both volumes. Written by Bruce Eckel and Chuck Ellison. Programming: Principles and Practice Using C++ - Stroustrup's introduction to C++. 
Accelerated C++ - Andy Koenig and Barbara Moo - An excellent introduction to C++ that doesn't treat C++ as "C with extra bits bolted on"; in fact you dive straight in and start using the STL early on. Forth Books FORTH, a text and reference. Mahlon G. Kelly and Nicholas Spies. ISBN 0-13-326349-5 / ISBN 0-13-326331-2. 1986 Prentice-Hall. Leo Brodie's books are good, but this book is even better. For instance, it covers defining words and the interpreter in depth. Java Online Tutorials Sun's Java Tutorials - An official tutorial that seems thorough, but I am not a Java expert. Do you guys know of any better ones? Books Head First Java - Recommended as a great introductory text by Patrick Lozzi. Effective Java - Recommended by pek as a great intermediate text. Core Java Volume 1 and Core Java Volume 2 - Suggested by FreeMemory as some of the best Java references available. Java Concurrency in Practice - Recommended by MDC as a great resource for concurrent programming in Java. The Java Programming Language Python Online Tutorials Python.org - The online documentation for this language is pretty good. If you know of any better, let me know. Dive Into Python - Suggested by Nickola. Seems to be a Python book online. Perl Online Tutorials perldoc perl - This is how I personally got started with the language, and I don't think you will be able to beat it. Books Learning Perl - a great way to introduce yourself to the language. Programming Perl - often referred to as the Perl Bible. Essential reference for any serious Perl programmer. Perl Cookbook - A great book that has solutions to many common problems. Modern Perl Programming - newly released, contains the latest wisdom on modern techniques and tools, including Moose and DBIx::Class. Ruby Online Tutorials Adam Mika suggested Why's (Poignant) Guide to Ruby, but after taking a look at it, I don't know if it is for everyone. Found this site which seems to offer several tutorials for Ruby on Rails. Books Programming Ruby - suggested as a great reference for all things Ruby. Visual Basic Online Tutorials Found this site which seems to devote itself to Visual Basic tutorials. Not sure how good they are though. PHP Online Tutorials The main PHP site - A simple tutorial that allows user comments for each page, which I really like. PHPFreaks Tutorials - Various tutorials of varying difficulty and length. Quakenet/PHP tutorials - PHP tutorial that will guide you from the ground up. JavaScript Online Tutorials Found a decent tutorial here geared toward non-programmers. Found another more advanced one here. Nickolay suggested A re-introduction to JavaScript as a good read here. Books Head First JavaScript JavaScript: The Good Parts (with a Google Tech Talk video by the author) C# Online Tutorials C# Station Tutorial - Seems to be a decent tutorial that I dug up, but I am not a C# guy. C# Language Specification - Suggested by tamberg. 
Not really a tutorial, but a great reference on all the elements of C# Books C# to the Point - suggested by tamberg as a short text that explains the language in amazing depth ocaml Books nlucaroni suggested the following: OCaml for Scientists Introduction to ocaml Using, Understanding, and Unraveling OCaml: practice to theory and vice versa Developing Applications using Ocaml - O'Reilly The Objective Caml System - Official Manual Haskell Online Tutorials nlucaroni suggested the following: Explore functional programming with Haskell Books Real World Haskell Total Functional Programming LISP/Scheme Books wfarr suggested the following: The Little Schemer - Introduction to Scheme and functional programming in general The Seasoned Schemer - Followup to Little Schemer. Structure and Interpretation of Computer Programs - The definitive book on Lisp (also available online). Practical Common Lisp - A good introduction to Lisp with several examples of practical use. On Lisp - Advanced Topics in Lisp How to Design Programs - An Introduction to Computing and Programming Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp - an approach to high quality Lisp programming What about you guys? Am I totally off on some of these? Did I leave out your favorite language? I will take the best comments and modify the question with the suggestions. A: For C++, I suggest Accelerated C++ by Koenig and Moo as a beginning text, though I don't know how it would be for an absolute novice. It focuses on using the STL right away, which makes getting things done much easier. A: Haskell: O'Reilly Book: * *Real World Haskell, a great tutorial-oriented book on Haskell, available online and in print. My favorite general, less academic online tutorials: * *The Haskell wikibook which contains all of the excellent Yet Another Haskell Tutorial. (This tutorial helps with specifics of setting up a Haskell distro and running example programs, for example.) *Learn You a Haskell for Great Good, in the spirit of Why's Poignant Guide to Ruby but more to the point. *Write Yourself a Scheme in 48 Hours. Get your hands dirty learning Haskell with a real project. Books on Functional Programming with Haskell: * *Lambda calculus, combinators, more theoretical, but in a very down-to-earth manner: Davie's Introduction to Functional Programming Systems Using Haskell *Laziness and program correctness, thinking functionally: Bird's Introduction to Functional Programming Using Haskell A: Effective Java is a must, but I recommend being comfortable with Java first to fully understand the examples. A: Ruby * *The free Ruby on Rails Training Online Course by Sang Shin isn't too bad. It also has a decent number of further-reading links on each subject in the course A: I'd add Bruce Eckel's programming books: * *Thinking in Java (print version: 4th edition; 3rd ed. is online: http://www.mindview.net/Books/TIJ/) *Thinking in C++ (2nd ed, freely available online: http://mindview.net/Books/TICPP/ThinkingInCPP2e.html) In general, his "Books" page (http://mindview.net/Books/) is a good resource. The freely available books can also be found at http://www.ibiblio.org/pub/docs/books/eckel/ A: Can't believe nobody has mentioned the Perl Best Practices. There's also a Twitter feed that delivers one PBP per day. I learned Perl from Robert's Perl Tutorial, which I recommend, but it hasn't been updated since 1999. A newer recommended tutorial is Steve's Perl Tutorial. 
For web development with Perl, the clear winner is Catalyst, and the Catalyst wiki is the starting point for learning. A: I know this is going to seem old-fashioned, but I don't think much of using online tutorials to learn programming languages or platforms. These generally give you no more than a little taste of the language. To really learn a language, you need the equivalent of a "book", and in many cases, this means a real dead-tree book. If you want to learn C, read K&R. If you want to learn C++, read Stroustrup. If you want to learn Lisp/Scheme, read SICP. Etc. If you're not willing to spend more than $30 and a few hours to learn a language, you probably aren't going to learn it. A: For Lisp and Scheme (hell, functional programming in general), there are few things that provide a more solid foundation than The Little Schemer and The Seasoned Schemer. Both provide a very simple and intuitive introduction to both Scheme and functional programming that proves far simpler for new students or hobbyists than any of the typical volumes that read like a nonfiction rendition of War & Peace. Once they've moved beyond the Schemer series, SICP and On Lisp are both fantastic choices. A: Check out the programming section of wikibooks. Many of them are fully formed, and quite a few have more advanced sections (which are in varying states of completion) on specific functionality. Also, W3Schools has a great PHP tutorial and reference section; their HTML and CSS sections are good for reference too. A: C++ *Thinking in C++ by Bruce Eckel *C++ Coding Standards by Herb Sutter & Andrei Alexandrescu The first one is good for beginners and the second one requires a more advanced level in C++. A: * *C - The C Programming Language - Obviously I had to reference K&R, one of the best programming books out there, full stop. *C++ - Accelerated C++ - This clear, well-written introduction to C++ goes straight to using the STL and gives nice, clear, practical examples. Lives up to its name. *C# - Pro C# 2008 and the .NET 3.5 Platform - Bit of a mouthful, but wonderfully written and with huge depth. *F# - Expert F# - Designed to take experienced programmers from zero to expert in F#. Very well written; one of the authors invented F#, so you can't go far wrong! *Scheme - The Little Schemer - A really unique approach to teaching a programming language, done really well. *Ruby - Programming Ruby - Affectionately known as the 'pickaxe' book, this is THE de facto introduction to Ruby. Very well written, clear and detailed. A: For Javascript: * *Javascript: The Definitive Guide *Pro Javascript Techniques For PHP: * *PHP Objects, Patterns, and Practice For OO design & programming, patterns: * *Object-Oriented Software Construction (a bible; maybe the Head First OO book would be nice, I don't know it) *Head First Design Patterns (I so love this book) *Design Patterns For Refactoring: * *Refactoring: Improving the Design of Existing Code *Working Effectively with Legacy Code For SQL/MySQL: * *Joe Celko: Trees and Hierarchies in SQL (only on a specific subject, but I found it interesting) *Pro MySQL A: These are all really good, written by academics, and some are books (one, for example, is an unpublished O'Reilly book translated from French, with no issues I've found). I've *'d my favorite ones, the ones that helped me the most. 
ocaml : * **Introduction to ocaml *Using, Understanding, and Unraveling OCaml: practice to theory and vice versa **Developing Applications using Ocaml - O'Reilly *The Objective Caml System - Official Manual *A Concise Introduction to Objective Caml *Practical Ocaml Haskell : * *Explore functional programming with Haskell **Real World Haskell **Total Functional Programming A: Python: http://diveintopython.net/ JS: A re-introduction to JavaScript is the introduction to the language (not the browser specifics) for programmers. I don't know a good tutorial on JS in the browser. Great idea by the way! A: C Primer Plus, 5th Edition - The C book to get if you're learning C without any prior programming experience. It's a personal favorite of mine, as I learned to program from this book. It has all the qualities a beginner-friendly book should have: * *Doesn't assume any prior exposure to programming *Enjoyable to read (without becoming annoying like the For Dummies series) *Doesn't oversimplify A: Let's not forget Head First Java, which could be considered the essential first step in this language, or maybe the step after the online tutorials by Sun. It's great for the purpose of grasping the language concisely, while adding a bit of fun, serving as a stepping stone for the more in-depth books already mentioned. Sedgewick offers a great series on algorithms, which is a must-have if you find Knuth's books to be too in-depth. Knuth aside, Sedgewick brings a solid approach to the field and he offers his books in C, C++ and Java. The C++ books could also be used for C, since he doesn't make a very large distinction between the two languages in his presentation. Whenever I'm working on C, C: A Reference Manual, by Harbison and Steele, goes with me everywhere. It's concise and efficient while being extremely thorough, making it priceless (to me, anyway). Languages aside, if this thread is to become a go-to for references (and I think it's heading that way, given the number of solid contributions), please include Mastering Regular Expressions, for reasons I think most of us are aware of... some would also say that regex can be considered a language in its own right. Further, its usefulness in a wide array of languages makes it invaluable. A: Common Lisp For a good reference on CL, check out Common Lisp the Language, 2nd Edition A: For Objective C: Cocoa Programming for Mac OS X - Third Edition, Aaron Hillegass, published by Addison Wesley Programming in Objective-C, Stephen G Kochan A: Head First JavaScript is a good intro to JS for beginning programmers - it creatively explains basic programming concepts using JS syntax. The Head First series is based on researched techniques for helping you learn and remember new information. They have you do a lot of exercises and puzzles which might seem juvenile, but really help cement the knowledge in your brain. One exercise I really liked was that after they explained data types, they show a picture of a city street and say "label all the data types you can find in this picture." So the blinker on a car is a boolean, the sign on the store is a string, and the address is a number. That helped me get the idea of how to translate real information into a program. Based only on this book, I'd say the Head First series is a great way to learn something the first time, but the story-like format would make them difficult to use as references.
A: The Ruby Way by Hal Fulton A: Given recent developments, I think it's important to include the recent explosion of free online course offerings from universities and private companies. The New Boston is a tutorial site I've used for numerous languages over the years; it's a great place for beginners to start. http://www.udacity.com/ https://www.coursera.org/ http://www.coursehero.org/ http://www.codecademy.com/ http://mitx.mit.edu/ http://www.khanacademy.org/ http://thenewboston.org/ A: For C#: * *CLR via C# * *C# in Depth A: I second Kristopher's recommendation of K&R for C. I've found the "Essential Actionscript 2.0" book quite useful for AS coding (there's an AS3 version out now, I believe). I've found that having real books to thumb through is more helpful than an online reference in some cases. Not really sure why though. A: hmm, I don't know if I would say that online materials are useless, but I do agree that there is something about books. Maybe they are better written, or maybe it is the act of forking over $50 that makes you more inclined to study the material. Either way, I agree that books should be part of this question. If anyone has any suggestions for books for languages I will edit the post with the best suggestions. A: The reference you have listed for Ruby is for Ruby on Rails. While still Ruby deep down, it is definitely not a place to start for people wanting to learn Ruby. For Ruby tutorials, I would suggest Why's (Poignant) Guide to Ruby as a great starting point for anyone interested in the language. If you want to get into more detail, I would recommend the book Programming Ruby, which has become the standard for all things Ruby. The third edition is currently being written, highlighting Ruby 1.9 features, so I would hold off for a while if anyone is considering buying this book. A: For J2EE you have a very comprehensive tutorial at: http://java.sun.com/javaee/5/docs/tutorial/doc/ A: For Java, I highly recommend Core Java. It's a large tome (or two large tomes), but I've found it to be one of the best references on Java I've read. A: I know this is a cross post from here... but, I think one of the best Java books is Java Concurrency in Practice by Brian Goetz. A rather advanced book - but it will wear well on your concurrent code and Java development in general. A: The de facto standard for learning Grails is the excellent Getting Started with Grails by Jason Rudolph. You can debate whether it is an online tutorial or a book, since it can be purchased but is available as a free download. There are more "real" books being published, and I recommend Beginning Groovy and Grails. A: C# C# to the Point by Hanspeter Mössenböck. In a mere 200 pages he explains C# in astonishing depth, focusing on underlying concepts and concise examples rather than hand waving and Visual Studio screenshots. For additional information on specific language features, check the C# language specification ECMA-334. Framework Design Guidelines, a book by Krzysztof Cwalina and Brad Abrams from Microsoft, provides further insight into the main design decisions behind the .NET library. A: For Python, I would like to suggest 'A Byte of Python'. Disclosure: I'm the author of this book, but the user feedback on the main page and the book should hopefully speak for itself :) A: I'll second Real World Haskell. 
Since visiting the #stackoverflow IRC channel (irc.freenode.net), I have spoken to two of the authors - one on Reddit and one in the #haskell channel on the same server as the SO channel - and they have been nothing but helpful in my quest to learn Haskell. It's the first time I would so highly recommend a programming book to anyone.

A: Some books on Java I'd recommend:
For beginners: Head First Java is an excellent introduction to the language. And I must also mention Head First Design Patterns, which is a great resource for learners to grasp what can be quite challenging concepts. The easy-going, fun style of these books is ideal for people new to programming.
A really thorough, comprehensive book on Java SE is Bruce Eckel's Thinking in Java v4. (At just under 1500 pages it's good for weight-training as well!) For those of us not on fat bank bonuses, there are older versions available for free download.
Of course, as many people have already mentioned, Josh Bloch's Effective Java v2 is an essential part of any Java developer's library.

A: Smalltalk
* Pharo by Example
* Seaside book

A: MSDN http://msdn.microsoft.com/en-us/library/ms229335.aspx

A: For C++ I am a big fan of C++ Common Knowledge: Essential Intermediate Programming. I like that it is organized into small sections (usually less than 5 pages per topic), so it is easy for me to grab it and read up on concepts that I need to review. It is a must-read for me the night before and on the plane to a job interview.

A: For Java EE 5 there's a separate tutorial, the Java EE tutorial. That's useful, as people often ask about persistence and XML binding in Java.

A: C# - Dot Net Book Zero

A: Java: SCJP for Java 6. I still use it as a reference.

A: For REALbasic: Beginning REALbasic, From Novice to Professional by Jerry Lee Ford. Very basic, but a good way to get started.

A: Common Lisp
I would add "Practical Common Lisp" by Peter Seibel to the Lisp list. It is particularly good at providing examples (MP3 parsing, shoutcast server, HTML compiler) that are topical. http://gigamonkeys.com/book/

A: Java
Java in a Nutshell. The name is a bit of a misnomer because it's quite thick, but it really has everything you need to learn Java.

A: For PHP, I'd recommend Advanced PHP Programming by George Schlossnagle. If you're just getting started in PHP, it's probably not the best book to start with, but after you have an idea of what you are doing, it's a book that (in my opinion) tells you a lot of best practices and tips that you might miss out on otherwise.
For learning Lisp, I've been recommended to read Practical Common Lisp by Peter Seibel. This one is available online at http://www.gigamonkeys.com/book/.
For Lua, I recommend Programming in Lua by Roberto Ierusalimschy. This book is not the best programming book out there, but among the current selection of Lua books it is the best. The first edition of the book is also available online at http://www.lua.org/pil/. As the back cover of the book mentions, the book is oriented towards those who already have some programming experience in another language.

A: One site I keep coming back to is http://www.javapractices.com. It covers most of the techniques that are discussed in the Effective Java book. Another good site for coding examples (from basic to advanced) is http://www.java2s.com

A: Design Patterns in Ruby: http://www.amazon.com/Design-Patterns-Ruby-Addison-Wesley-Professional/dp/0321490452#reader

A: Erlang
I've found Programming Erlang to be an excellent book for learning Erlang.
It's written by the guy who created the language, and covers both basic and advanced topics very well. It has some great examples, too.

A: C: “Programming in C”, Stephen G. Kochan, Developer's Library. Organized, clear, elaborate, beautiful.

A: Java
Java Notes - very neat for the novice Java programmer.

A: C
K.N. King has a list of recommended C books on his personal page:
* The C Puzzle Book (Revised Edition)
* C: A Reference Manual, Fifth Edition
* C Unleashed
* C Traps and Pitfalls
* Expert C Programming

A: Core Java, Vols. 1 and 2, by Cay S. Horstmann and Gary Cornell. Best Java book ever!

A: Perl Core Language - Little Black Book - excellent reference!

A: System: Computer Systems: A Programmer's Perspective, 2/E
Lisp: Let Over Lambda

A: For C and C++ online tutorials (and other topics), http://www.cprogramming.com/tutorial.html
{ "language": "en", "url": "https://stackoverflow.com/questions/22873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "249" }
Q: How do you prevent leading zeros from being stripped when importing an Excel doc using C#? I'm able to connect to and read an Excel file, no problem. But when importing data such as zip codes that have leading zeros, how do you prevent Excel from guessing the datatype and, in the process, stripping out the leading zeros?

A: I believe you have to set an option in your connection string to force a textual import rather than auto-detecting it.
Provider=Microsoft.ACE.OLEDB.12.0; Data Source=c:\path\to\myfile.xlsx; Extended Properties="Excel 12.0 Xml;IMEX=1";
Your mileage may vary depending on the version you have installed. The IMEX=1 extended property tells Excel to treat intermixed data as text.

A: Prefix with '

A: Prefixing the contents of the cell with ' forces Excel to see it as text instead of a number. The ' won't be displayed in Excel.

A: There is a registry hack that can force Excel to read more than the first 8 rows when reading a column to determine the type:
Change HKLM\Software\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows to 0 to read all rows, or to another number to set it to that number of rows. Note that this will have a slight performance hit.

A: I think the way to do this would be to format the source Excel file such that the column is formatted as Text instead of General. Select the entire column, right-click and select Format Cells, then select Text from the list of options. I think that would explicitly define that the column content is text and should be treated as such. Let me know if that works.

A: Saving the file as a tab-delimited text file has also worked well.
---old
Unfortunately, we can't rely on the columns of the Excel doc to stay in a particular format, as the users will be pasting data into it regularly. I don't want the app to crash if we're relying on a certain datatype for a column. Prefixing with ' would work; is there a reasonable way to do that programmatically once the data already exists in the Excel doc?

A: Sending the value 00022556 as ="00022556" from SQL Server is an excellent way to handle the leading-zero problem.

A: Add "\t" before your string. The leading tab character makes Excel treat the value as text.
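To make the IMEX=1 approach above concrete, here is a minimal C# sketch of reading a sheet as text through OleDb. The file path, sheet name and the "ZipCode" column header are assumptions - adjust them to your workbook (and use the Jet 4.0 provider with "Excel 8.0;IMEX=1" for old .xls files):

using System;
using System.Data;
using System.Data.OleDb;

class ExcelTextImport
{
    static void Main()
    {
        // IMEX=1 tells the driver to treat intermixed columns as text,
        // which preserves leading zeros in values such as zip codes.
        string connectionString =
            "Provider=Microsoft.ACE.OLEDB.12.0;" +
            @"Data Source=c:\path\to\myfile.xlsx;" +
            "Extended Properties=\"Excel 12.0 Xml;IMEX=1\";";

        using (OleDbConnection connection = new OleDbConnection(connectionString))
        {
            OleDbDataAdapter adapter =
                new OleDbDataAdapter("SELECT * FROM [Sheet1$]", connection);
            DataTable table = new DataTable();
            adapter.Fill(table);

            foreach (DataRow row in table.Rows)
            {
                // "ZipCode" is a hypothetical column header from the first row
                Console.WriteLine(row["ZipCode"]);
            }
        }
    }
}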
{ "language": "en", "url": "https://stackoverflow.com/questions/22879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is the best way to prevent session hijacking? Specifically, this is regarding using a client session cookie to identify a session on the server.
Is the best answer to use SSL/HTTPS encryption for the entire web site, so you have the best guarantee that no man-in-the-middle attack will be able to sniff an existing client session cookie? And perhaps second best, to use some sort of encryption on the session value itself that is stored in your session cookie?
If a malicious user has physical access to a machine, they can still look at the filesystem to retrieve a valid session cookie and use that to hijack a session?

A: Try the Secure Cookie protocol described in this paper by Liu, Kovacs, Huang, and Gouda.
As stated in the document: A secure cookie protocol that runs between a client and a server needs to provide the following four services: authentication, confidentiality, integrity and anti-replay.
As for ease of deployment: In terms of efficiency, our protocol does not involve any database lookup or public key cryptography. In terms of deployability, our protocol can be easily deployed on an existing web server, and it does not require any change to the Internet cookie specification.
In short: it is secure, lightweight, works for me just great.

A: The SSL only helps with sniffing attacks. If an attacker has access to your machine, I will assume they can copy your secure cookie too.
At the very least, make sure old cookies lose their value after a while. Even a successful hijacking attack will be thwarted when the cookie stops working. If the user has a cookie from a session that logged in more than a month ago, make them re-enter their password. Make sure that whenever a user clicks on your site's "log out" link, the old session UUID can never be used again.
I'm not sure if this idea will work, but here goes: add a serial number into your session cookie, maybe a string like this:
SessionUUID, Serial Num, Current Date/Time
Encrypt this string and use it as your session cookie. Regularly change the serial num - maybe when the cookie is 5 minutes old - and then reissue the cookie. You could even reissue it on every page view if you wanted to. On the server side, keep a record of the last serial num you've issued for that session. If someone ever sends a cookie with the wrong serial number, it means that an attacker may be using a cookie they intercepted earlier, so invalidate the session UUID and ask the user to re-enter their password, then reissue a new cookie.
Remember that your user may have more than one computer, so they may have more than one active session. Don't do something that forces them to log in again every time they switch between computers.

A: Ensure you don't use incrementing integers for session IDs. Much better to use a GUID, or some other long randomly generated character string.

A: There are many ways to create protection against session hijacking; however, all of them either reduce user satisfaction or are not secure.
* IP and/or X-FORWARDED-FOR checks. These work, and are pretty secure... but imagine the pain for users. They come to an office with WiFi, they get a new IP address and lose the session. They have to log in again.
* User agent checks. Same as above: a new version of the browser comes out, and you lose a session. Additionally, these are really easy to "hack". It's trivial for hackers to send fake UA strings.
* localStorage token. On log-on, generate a token, store it in browser storage and store it in an encrypted cookie (encrypted on the server side).
This has no side effects for the user (localStorage persists through browser upgrades). It's not as secure, as it's just security through obscurity. Additionally, you could add some logic (encryption/decryption) to the JS to further obscure it.
* Cookie reissuing. This is probably the right way to do it. The trick is to only allow one client to use a cookie at a time. So an active user will have the cookie reissued every hour or less. The old cookie is invalidated when a new one is issued. Hacks are still possible, but much harder to do - either the hacker or the valid user will get access rejected.

A: AFAIK the session object is not accessible at the client, as it is stored at the web server. However, the session id is stored as a cookie and it lets the web server track the user's session.
To prevent session hijacking using the session id, you can store a hashed string inside the session object, made using a combination of two attributes, remote addr and remote port, that can be accessed at the web server inside the request object. These attributes tie the user session to the browser where the user logged in.
If the user logs in from another browser or an incognito mode on the same system, the IP addr would remain the same, but the port will be different. Therefore, when the application is accessed, the user would be assigned a different session id by the web server.
Below is the code I have implemented and tested by copying the session id from one session into another. It works quite well. If there is a loophole, let me know how you simulated it.

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    HttpSession session = request.getSession();
    String sessionKey = (String) session.getAttribute("sessionkey");
    String remoteAddr = request.getRemoteAddr();
    int remotePort = request.getRemotePort();
    String sha256Hex = DigestUtils.sha256Hex(remoteAddr + remotePort);
    if (sessionKey == null || sessionKey.isEmpty()) {
        session.setAttribute("sessionkey", sha256Hex);
        // save mapping to memory to track which user attempted
        Application.userSessionMap.put(sha256Hex, remoteAddr + remotePort);
    } else if (!sha256Hex.equals(sessionKey)) {
        session.invalidate();
        response.getWriter().append(Application.userSessionMap.get(sessionKey));
        response.getWriter().append(" attempted to hijack session id ").append(request.getRequestedSessionId());
        response.getWriter().append(" of user ").append(Application.userSessionMap.get(sha256Hex));
        return;
    }
    response.getWriter().append("Valid Session\n");
}

I used the SHA-2 algorithm to hash the value, using the example given at SHA-256 Hashing at Baeldung. Looking forward to your comments.

A: Have you considered reading a book on PHP security? Highly recommended.
I have had much success with the following method for non-SSL-certified sites.
* Disallow multiple sessions under the same account, making sure you aren't checking this solely by IP address. Rather, check by a token generated upon login, which is stored with the user's session in the database, as well as IP address, HTTP_USER_AGENT and so forth.
* Use relation-based hyperlinks. This generates a link (e.g. http://example.com/secure.php?token=2349df98sdf98a9asdf8fas98df8 ); the link is appended with an x-byte (preferred size) random salted MD5 string. Upon page redirection, the randomly generated token is matched against the requested page.
* Upon reload, several checks are done:
  * Originating IP address
  * HTTP_USER_AGENT
  * Session token
  * you get the point.
* Short-lifespan session authentication cookie. As posted above, a cookie containing a secure string, which is one of the direct references to the session's validity, is a good idea. Make it expire every x minutes, reissuing that token and re-syncing the session with the new data. If there is any mismatch in the data, either log the user out or have them re-authenticate their session.
I am by no means an expert on the subject; I've had a bit of experience in this particular topic. Hope some of this helps anyone out there.

A: // Collect this information on every request
$aip = $_SERVER['REMOTE_ADDR'];
$bip = $_SERVER['HTTP_X_FORWARDED_FOR'];
$agent = $_SERVER['HTTP_USER_AGENT'];
session_start();

// Do this each time the user successfully logs in.
$_SESSION['ident'] = hash("sha256", $aip . $bip . $agent);

// Do this every time the client makes a request to the server, after authenticating
$ident = hash("sha256", $aip . $bip . $agent);
if ($ident != $_SESSION['ident'])
{
    session_destroy(); // tear down the suspect session
    header("Location: login.php");
    // add some fancy pants GET/POST var headers for login.php, that lets you
    // know in the login page to notify the user of why they're being challenged
    // for login again, etc.
}

What this does is capture 'contextual' information about the user's session: pieces of information which should not change during the life of a single session. A user isn't going to be at a computer in the US and in China at the same time, right? So if the IP address changes suddenly within the same session, that strongly implies a session hijacking attempt, so you secure the session by ending it and forcing the user to re-authenticate. This thwarts the hack attempt; the attacker is also forced to log in instead of gaining access to the session. Notify the user of the attempt (ajax it up a bit), and voilà: a slightly annoyed but informed user, and their session/information is protected.
We throw in User Agent and X-FORWARDED-FOR to do our best to capture the uniqueness of a session for systems behind proxies/networks. You may be able to use more information than that; feel free to be creative.
It's not 100%, but it's pretty damn effective.
There's more you can do to protect sessions: expire them, and when a user leaves a website and comes back, maybe force them to log in again. You can detect a user leaving and coming back by capturing a blank HTTP_REFERER (the domain was typed in the URL bar), or by checking whether the value in HTTP_REFERER equals your domain or not (the user clicked an external/crafted link to get to your site).
Expire sessions; don't let them remain valid indefinitely.
Don't rely on cookies: they can be stolen; it's one of the vectors of attack for session hijacking.

A: Encrypting the session value will have zero effect. The session cookie is already an arbitrary value; encrypting it will just generate another arbitrary value that can be sniffed.
The only real solution is HTTPS. If you don't want to do SSL on your whole site (maybe you have performance concerns), you might be able to get away with SSL protecting only the sensitive areas. To do that, first make sure your login page is HTTPS. When a user logs in, set a secure cookie (meaning the browser will only transmit it over an SSL link) in addition to the regular session cookie. Then, when a user visits one of your "sensitive" areas, redirect them to HTTPS, and check for the presence of that secure cookie. A real user will have it; a session hijacker will not.
EDIT: This answer was originally written in 2008.
It's 2016 now, and there's no reason not to have SSL across your entire site. No more plaintext HTTP!

A: There is no way to prevent session hijacking 100%, but with some approaches we can reduce the window available for an attacker to hijack a session.
Methods to prevent session hijacking:
1 - always use sessions with an SSL certificate;
2 - send the session cookie only with httponly set to true (prevents JavaScript from accessing the session cookie);
3 - regenerate the session id at login and logout (note: do not regenerate the session id on each request, because if you have consecutive ajax requests you risk creating multiple sessions);
4 - set a session timeout;
5 - store the browser user agent in a $_SESSION variable and compare it with $_SERVER['HTTP_USER_AGENT'] on each request;
6 - set a token cookie, and set the expiration time of that cookie to 0 (until the browser is closed). Regenerate the cookie value on each request. (For ajax requests, do not regenerate the token cookie.) Example:

//set a token cookie if one does not exist
if(!isset($_COOKIE['user_token'])){
    //generate a random string for the cookie value
    $cookie_token = bin2hex(mcrypt_create_iv(16, MCRYPT_DEV_URANDOM));
    //set a session variable with that random string
    $_SESSION['user_token'] = $cookie_token;
    //set cookie with the random value
    setcookie('user_token', $cookie_token, 0, '/', 'donategame.com', true, true);
}

//set a session variable counting requests to www.example.com
if(!isset($_SESSION['request'])){
    $_SESSION['request'] = -1;
}

//increment $_SESSION['request'] by 1 for each request at www.example.com
$_SESSION['request']++;

//verify that $_SESSION['user_token'] equals $_COOKIE['user_token'], only for $_SESSION['request'] > 0
if($_SESSION['request'] > 0){
    //if they are equal, regenerate the value of the token cookie; if not, destroy the session
    if($_SESSION['user_token'] === $_COOKIE['user_token']){
        $cookie_token = bin2hex(mcrypt_create_iv(16, MCRYPT_DEV_URANDOM));
        $_SESSION['user_token'] = $cookie_token;
        setcookie('user_token', $cookie_token, 0, '/', 'donategame.com', true, true);
    }else{
        //code for session_destroy
    }
}

//prevent session hijacking with the browser user agent
if(!isset($_SESSION['user_agent'])){
    $_SESSION['user_agent'] = $_SERVER['HTTP_USER_AGENT'];
}
if($_SESSION['user_agent'] != $_SERVER['HTTP_USER_AGENT']){
    die('session hijacking - user agent');
}

note: do not regenerate the token cookie on ajax requests
note: the code above is an example
note: if the user logs out, the cookie token must be destroyed as well as the session
7 - it's not a good approach to use the user's IP for preventing session hijacking, because some users' IPs change with each request. THAT AFFECTS VALID USERS
8 - personally, I store session data in the database; it's up to you what method you adopt
If you find a mistake in my approach, please correct me. If you have more ways to prevent session hijacking, please tell me.

A: Let us consider that during the login phase the client and server can agree on a secret salt value. Thereafter the server provides a count value with each update and expects the client to respond with the hash of the (secret salt + count). The potential hijacker does not have any way to obtain this secret salt value and thus cannot generate the next hash.

A: To reduce the risk, you can also associate the originating IP with the session. That way an attacker has to be within the same private network to be able to use the session.
Checking referer headers can also be an option, but those are more easily spoofed.
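As a concrete illustration of the IP-binding idea above, here is a minimal PHP sketch. It is an example under stated assumptions, not a drop-in solution: it binds the session to the first IP seen, which (as noted in another answer) can log out legitimate users whose IP changes mid-session, and the login page name is hypothetical.

<?php
session_start();

// Bind the session to the IP address that created it.
if (!isset($_SESSION['bound_ip'])) {
    $_SESSION['bound_ip'] = $_SERVER['REMOTE_ADDR'];
} elseif ($_SESSION['bound_ip'] !== $_SERVER['REMOTE_ADDR']) {
    // IP changed mid-session: treat it as a possible hijack.
    session_unset();
    session_destroy();
    header('Location: login.php'); // hypothetical login page
    exit;
}
?>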
A: Use SSL only, and instead of encrypting the HTTP_USER_AGENT into the session id and verifying it on every request, just store the HTTP_USER_AGENT string in your session db as well. Now you only have a simple server-side string compare with the HTTP_USER_AGENT environment value. Or you can add a certain tolerance to your string compare to be more robust against browser version updates. And you could reject certain HTTP_USER_AGENT ids (empty ones, for instance).
This does not resolve the problem completely, but it adds at least a bit more complexity.
Another method could be using more sophisticated browser fingerprinting techniques, combining these values with the HTTP_USER_AGENT and sending them from time to time in separate header values. But then you should encrypt the data in the session id itself. That makes it far more complex, though, and raises the CPU usage for decryption on every request.

A: If the ISP hijacks certificate verification, the ISP can possibly mount a man-in-the-middle attack, especially with a compromised certificate authority. So I believe you cannot prevent session hijacking by an ISP - especially when legal forces come with a fake certificate obtained from a CA under law enforcement. You would need something outside the network to protect your session, for example a one-time pad. This is why one-time pads are so sensitive and can only be sold by a few companies. Be careful: a one-time pad may be exploited. Choose your one-time pad professionally.

A: Protect by:
$ip = $_SERVER['REMOTE_ADDR'];
$_SESSION['ip'] = $ip;
{ "language": "en", "url": "https://stackoverflow.com/questions/22880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "133" }
Q: What is the best way to communicate with a MySQL server? I am going to be using C/C++, and would like to know the best way to talk to a MySQL server. Should I use the library that comes with the server installation? Are there any good libraries I should consider other than the official one?

A: MySQL++

A: That depends a bit on what you want to do.
First, check out libraries that provide connectivity to more than one DBMS platform. For example, Qt makes it very easy to connect to MySQL, MS SQL Server and a bunch of others, and to change the database driver (connection type) at runtime - with just a few lines of code.
MySQL-specific libraries are fine, but bear in mind that you are locking yourself down to one DB implementation - if you ever need to change in the future, it's gonna be a whole lot of work - even if you design your code such that the DB-specific stuff is behind a facade. Why not use a library that provides connectivity to multiple platforms, and save yourself the trouble?

A: OTL is a solid cross-DBMS solution for C++ that my project has been using for years. We use it to talk to SQL Server (via ODBC) and Oracle (via OCI). It's fairly easy to drive, and comes with a large number of examples across all the supported databases.

A: There is nothing wrong with MySQL's own client libraries. If you are willing to settle for reduced functionality, you can buy yourself some extra portability by using ODBC, UDBC, apr_dbd, or some other database-abstraction library (such as the OTL offered already). This will make switching a back-end easier, but, as I mentioned, at the expense of offering less functionality compared to that of the native client. Because DB vendors differ, the abstraction libraries can only really offer the functions common to all (or most) of the back-ends. Whether you prefer to optimize for a particular DB or would rather make it easier to switch back-ends is up to you (and, perhaps, your manager).
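For reference, here is a minimal sketch of the official client library (libmysqlclient) that ships with the server, which the question alludes to. The host, credentials, database and query are placeholders; compile flags typically come from `mysql_config --cflags --libs`.

#include <stdio.h>
#include <mysql.h>

int main(void)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL) {
        fprintf(stderr, "mysql_init failed\n");
        return 1;
    }

    /* hypothetical host/user/password/database */
    if (mysql_real_connect(conn, "localhost", "user", "password",
                           "mydb", 0, NULL, 0) == NULL) {
        fprintf(stderr, "connect error: %s\n", mysql_error(conn));
        mysql_close(conn);
        return 1;
    }

    if (mysql_query(conn, "SELECT id, name FROM users") == 0) {
        MYSQL_RES *result = mysql_store_result(conn);
        MYSQL_ROW row;
        while ((row = mysql_fetch_row(result)) != NULL) {
            printf("%s: %s\n", row[0], row[1]);
        }
        mysql_free_result(result);
    } else {
        fprintf(stderr, "query error: %s\n", mysql_error(conn));
    }

    mysql_close(conn);
    return 0;
}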
{ "language": "en", "url": "https://stackoverflow.com/questions/22901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I stop Windows applications from stealing focus? I know this isn't strictly a programming question, but y'all must have experienced this.
So... you have four or five RDP sessions open over the corp VPN, you're bashing away inside your favourite IDE, your VPN to the data centre bounces briefly then recovers, and all your RDP sessions start re-establishing their connections and, whilst doing so, sequentially keep grabbing focus, one after the other. Pretty bloody annoying and downright rude.
Any idea how to prevent this behaviour and just make the RDP client flash its taskbar button instead of totally grabbing focus away from whatever you were doing?
@Jason - thanks for the reply. I'm running 64-bit Vista and 64-bit Windows 2008. Any ideas how well it plays?
@Jason - good idea. Done.
@Ryan - thanks also for the answer. I tried Terminals a few times before, but quite often I need to see two or three sessions side by side, which the tabbing doesn't really facilitate too well; it would've been nice to have a 'pop out in own window' button. I did once grab the source code to fix stuff like that, but never got the time. I also found it behaved oddly whenever there was a brief network disconnect (e.g. xDSL flapping): it would reconnect to the wrong session (usually a new one) and leave the session I had opened in a disconnected state on the server. Otherwise Terminals would've been really cool; we have 200+ Windows servers, and organising all those .rdp files can be a pain.

A: I use Tweak UI to configure Explorer so that apps don't steal focus; you can configure how many times they flash in the taskbar as well.
EDIT: Once you are within Tweak UI, these options are found under General > Focus.
EDIT: @Kev, apparently there is a 64-bit version (not MS approved, apparently; I would scan it for viruses, of course) that works successfully with the 64-bit version of XP. From what I understand, you download that and then run it in XP compatibility mode as administrator and it will do the trick. Tweak UI is basically a nice wrapper around a collection of registry hacks, so I imagine you could find the hacks themselves if you didn't care for running Tweak UI in this manner. Hope that works for you!

A: As an alternative, you could try using something like Terminals. It allows you to have multiple remote desktop windows open at once, all as tabs in the same window. Quite cool. Also, it is open source, so you can change its behavior if needed (although I don't believe it steals focus like a normal RDP session does).

A: Since I don't think there's an approved version of Tweak UI other than for XP, apparently making this change in the registry has a similar impact on Vista:
[HKEY_CURRENT_USER\Control Panel\Desktop]
ForegroundLockTimeout = 0
However, I found (Vista x64) that while focus on the original was maintained, the offending window would still take the foreground - quite distracting.
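For reference, the registry value above can be applied from a command prompt rather than regedit; this is a sketch of the same tweak (ForegroundLockTimeout is in milliseconds, and a log off/on is usually needed for the change to take effect):

reg add "HKCU\Control Panel\Desktop" /v ForegroundLockTimeout /t REG_DWORD /d 0 /f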
{ "language": "en", "url": "https://stackoverflow.com/questions/22903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Which is better: ad hoc queries or stored procedures? Assuming you can't use LINQ for whatever reason, is it a better practice to place your queries in stored procedures, or is it just as good a practice to execute ad hoc queries against the database (say, SQL Server for argument's sake)?

A: Stored procedures are definitely the way to go... they are compiled, have an execution plan beforehand, and you can do rights management on them.
I do not understand this whole source control issue with stored procedures. You definitely can source control them, if only you are a little disciplined.
Always start with a .sql file that is the source of your stored procedure. Put it in version control once you have written your code. The next time you want to edit your stored procedure, get it from your source control rather than your database. If you follow this, you will have as good source control as your code.
I would like to quote Tom Kyte from Oracle here... Here's his rule on where to write code... though a bit unrelated, but good to know, I guess.
* Start with stored procedures in PL/SQL...
* If you think something can't be done using a stored procedure in PL/SQL, use a Java stored procedure.
* If you think something can't be done using a Java stored procedure, consider Pro*C.
* If you think you can't achieve something using Pro*C, you might want to rethink what you need to get done.

A: My answer from a different post: stored procedures are MORE maintainable because:
* You don't have to recompile your C# app whenever you want to change some SQL.
* You end up reusing SQL code.
Code repetition is the worst thing you can do when you're trying to build a maintainable application! What happens when you find a logic error that needs to be corrected in multiple places? You're more apt to forget to change that last spot where you copy & pasted your code.
In my opinion, the performance & security gains are an added plus. You can still write insecure/inefficient SQL stored procedures.
"Easier to port to another DB - no procs to port": it's not very hard to script out all your stored procedures for creation in another DB. In fact, it's easier than exporting your tables because there are no primary/foreign keys to worry about.

A: In our application, there is a layer of code that provides the content of the query (and is sometimes a call to a stored procedure). This allows us to:
* easily have all the queries under version control
* make whatever changes are required to each query for different database servers
* eliminate repetition of the same query code throughout our code
Access control is implemented in the middle layer, rather than in the database, so we don't need stored procedures there. This is in some ways a middle road between ad hoc queries and stored procs.

A: There are persuasive arguments for both - stored procedures are all located in a central repository, but are (potentially) hard to migrate, while ad hoc queries are easier to debug, as they are with your code, but they can also be harder to find in the code.
The argument that stored procedures are more efficient doesn't hold water anymore.
Doing a Google search for "stored procedure vs dynamic query" will show decent arguments either way; it's probably best for you to make your own decision...

A: Stored procedures should be used as much as possible. If you're writing SQL into code, you're already setting yourself up for headaches in the future. It takes about the same time to write a SPROC as it does to write it in code.
Consider a query that runs great under a medium load, but once it goes into full-time production your badly optimized query hammers the system and brings it to a crawl. In most SQL servers you are not the only application/service that is using it. Your application has now brought a bunch of angry people to your door.
If you have your queries in SPROCs, you also allow your friendly DBA to manage and optimize without recompiling or breaking your app. Remember, DBAs are experts in this field; they know what to do and not to do. It makes sense to utilise their greater knowledge!
EDIT: someone said that recompile is a lazy excuse! yeah, let's see how lazy you feel when you have to recompile and deploy your app to 1000s of desktops, all because the DBA has told you that your ad-hoc query is eating up too much server time!

A: I can't speak to anything other than SQL Server, but the performance argument is not significantly valid there unless you're on 6.5 or earlier. SQL Server has been caching ad-hoc execution plans for roughly a decade now.

A: I think this is a basic conflict between people who must maintain the database and people who develop the user interfaces.
As a data person, I would not consider working with a database that is accessed through ad hoc queries, because they are difficult to effectively tune or manage. How can I know what effect a change to the schema will have? Additionally, I do not think users should ever be granted direct access to the database tables, for security reasons (and I do not just mean SQL injection attacks, but also because it is a basic internal control to not allow direct rights and to require all users to use only the procs designed for the app. This is to prevent possible fraud. Any financial system which allows direct insert, update or delete rights to tables has a huge risk for fraud. This is a bad thing.).
Databases are not object-oriented, and code which seems good from an object-oriented perspective can be extremely bad from a database perspective.
Our developers tell us they are glad that all our database access is through procs, because it makes it much faster to fix a data-centered bug and then simply run the proc on the production environment, rather than create a new branch of the code and recompile and reload to production. We require all our procs to be in Subversion, so source control is not an issue at all. If it isn't in Subversion, it will periodically get dropped by the DBAs, so there is no resistance to using source control.

A: Some things to think about here: Who Needs Stored Procedures, Anyways?
Clearly it's a matter of your own needs and preferences, but one very important thing to think about when using ad hoc queries in a public-facing environment is security. Always parameterize them and watch out for the typical vulnerabilities like SQL-injection attacks.

A: "someone said that recompile is a lazy excuse! yeah, let's see how lazy you feel when you have to recompile and deploy your app to 1000s of desktops, all because the DBA has told you that your ad-hoc query is eating up too much server time!"
Is it good system architecture if you let 1000 desktops connect directly to the database?

A: Stored procedures represent a software contract that encapsulates the actions taken against the database. The code in the procedures, and even the schema of the database itself, can be changed without affecting compiled, deployed code, so long as the inputs and outputs of the procedure remain the same.
By embedding queries in your application, you are tightly coupling yourself to your data model. For the same reason, it is also not good practice to simply create stored procedures that are just CRUD queries against every table in your database, since this is still tight coupling. The procedures should instead be bulky, coarse-grained operations.
From a security perspective, it is good practice to disallow db_datareader and db_datawriter from your application and only allow access to stored procedures.

A: In my experience writing mostly WinForms client/server apps, these are the simple conclusions I've come to:
Use stored procedures:
* For any complex data work. If you're going to be doing something truly requiring a cursor or temp tables, it's usually fastest to do it within SQL Server.
* When you need to lock down access to the data. If you don't give table access to users (or roles or whatever), you can be sure that the only way to interact with the data is through the SPs you create.
Use ad-hoc queries:
* For CRUD, when you don't need to restrict data access (or are doing so in another manner).
* For simple searches. Creating SPs for a bunch of search criteria is a pain and difficult to maintain. If you can generate a reasonably fast search query, use that.
In most of my applications I've used both SPs and ad-hoc SQL, though I find I'm using SPs less and less, as they end up being code just like C#, only harder to version control, test, and maintain. I would recommend using ad-hoc SQL unless you can find a specific reason not to.

A: Stored procedures are great because they can be changed without a recompile. I would try to use them as often as possible. I only use ad-hoc SQL for queries that are dynamically generated based on user input.

A: Procs, for the reasons mentioned by others, and also because it is easier to tune a proc (or parts of a proc) with Profiler. This way you don't have to tell someone to run his app to find out what is being sent to SQL Server.
If you do use ad-hoc queries, make sure that they are parameterized (a short sketch appears further down this thread).

A: Parameterized SQL or SPROC... doesn't matter from a performance standpoint... you can query-optimize either one.
For me, the last remaining benefit of a SPROC is that I can eliminate a lot of SQL rights management by only granting my login rights to execute sprocs... if you use parameterized SQL, the login within your connection string has a lot more rights (writing ANY kind of select statement on one of the tables they have access to, for example).
I still prefer parameterized SQL, though...

A: I haven't found any compelling argument for using ad-hoc queries. Especially those mixed up with your C#/Java/PHP code.

A: The sproc performance argument is moot - the top 3 RDBMSs use query plan caching and have been for a while. It's been documented... Or is it still 1995?
However, embedding SQL in your app is a terrible design too - code maintenance seems to be a missing concept for many.
If an application can start from scratch with an ORM (greenfield applications are few and far between!), it's a great choice, as your class model drives your DB model - and saves LOTS of time.
If an ORM framework is not available, we have taken a hybrid approach of creating an SQL resource XML file to look up SQL strings as we need them (they are then cached by the resource framework). If the SQL needs any minor manipulation, it's done in code; if major SQL string manipulation is needed, we rethink the approach.
This hybrid approach lends itself to easy management by the developers (maybe we are the minority, as my team is bright enough to read a query plan) and deployment is a simple checkout from SVN. Also, it makes switching RDBMSs easier - just swap out the SQL resource file (not as easy as an ORM tool, of course, but for connecting to legacy systems or an unsupported database this works).

A: Depends what your goal is. If you want to retrieve a list of items and it happens once during your application's entire run, for example, it's probably not worth the effort of using a stored procedure. On the other hand, a query that runs repeatedly and takes a (relatively) long time to execute is an excellent candidate for database storage, since the performance will be better.
If your application lives almost entirely within the database, stored procedures are a no-brainer. If you're writing a desktop application to which the database is only tangentially important, ad-hoc queries may be a better option, as it keeps all of your code in one place.
@Terrapin: I think your assertion that the fact that you don't have to recompile your app to make modifications makes stored procedures a better option is a non-starter. There may be reasons to choose stored procedures over ad-hoc queries, but in the absence of anything else compelling, the compile issue seems like laziness rather than a real reason.

A: My experience is that 90% of queries and/or stored procedures should not be written at all (at least by hand). Data access should be generated somehow automatically. You can decide if you'd like to statically generate procedures at compile time or dynamically at run time, but when you want to add a column to the table (a property to the object) you should have to modify only one file.

A: I prefer keeping all data access logic in the program code, in which the data access layer executes straight SQL queries. On the other hand, data management logic I put in the database in the form of triggers, stored procedures, custom functions and whatnot. An example of something I deem worthy of database-ifying is data generation - assume our customer has a FirstName and a LastName. Now, the user interface needs a DisplayName, which is derived from some nontrivial logic. For this generation, I create a stored procedure which is then executed by a trigger whenever the row (or other source data) is updated.
There appears to be this somewhat common misunderstanding that the data access layer IS the database and everything about data and data access goes in there "just because". This is simply wrong, but I see a lot of designs which derive from this idea. Perhaps this is a local phenomenon, though.
I may just be turned off the idea of SPs after seeing so many badly designed ones. For example, one project I participated in used a set of CRUD stored procedures for every table and every possible query they encountered. In doing so, they simply added another completely pointless layer. It is painful to even think about such things.

A: These days I hardly ever use stored procedures. I only use them for complicated SQL queries that can't easily be done in code. One of the main reasons is that stored procedures do not work as well with OR mappers. These days I think you need a very good reason to write a business application / information system that does not use some sort of OR mapper.
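Here is the parameterized ad-hoc query sketch promised above - a minimal C# example. The connection string, table and column names are assumptions; the point is that user input travels as a typed parameter and is never concatenated into the SQL text.

using System;
using System.Data.SqlClient;

class ParameterizedQueryExample
{
    static void Main()
    {
        string connectionString = "Server=.;Database=Northwind;Integrated Security=true";
        string userInput = "Smith"; // imagine this came from a form field

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers WHERE ContactName = @name", conn))
        {
            // The value is sent as data, so a crafted string such as
            // "x'; DROP TABLE Customers;--" cannot change the query shape.
            cmd.Parameters.AddWithValue("@name", userInput);

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerID"], reader["CompanyName"]);
                }
            }
        }
    }
}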
Another thing is stored procedure give recompile option which the best part of SQL you just use this for stored procedures nothing like this in adhoc query. Some result in query and stored procedure are different that's my personal exp. Use cast and covert function for check this. Must use stored procedure for big projects to improve the performance. I had 420 procedures in my project and it's work fine for me. I work for last 3 years on this project. So use only procedures for any transaction. A: is it good system architecture if you let connect 1000 desktops directly to database? No it's obviously not, it's maybe a poor example but I think the point I was trying to make is clear, your DBA looks after your database infrastructure this is were their expertise is, stuffing SQL in code locks the door to them and their expertise.
{ "language": "en", "url": "https://stackoverflow.com/questions/22907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Is there some way to show HTML content inside Flash? I want to show HTML content inside Flash. Is there some way to do this? I am talking about full-blown HTML (with JavaScript if possible).

A: flashQuery supports HTML tags and CSS rules for Flash. It transforms Flash into a real browser. Here it is: http://www.flashquery.org/

A: You could also try http://motionandcolor.com/
Wrapper is a cross-browser compliant HTML/CSS rendering engine written in ActionScript that sits on top of your standards-compliant HTML page. JavaScript might be trickier, though.

A: How complex HTML are we talking about? Simple HTML, like <b> and <i>, is supported in text fields if you use the htmlText property. You can also attach a CSS style sheet to the text field for more styling. Have a look at TextField in the Flash API documentation (I'm sure you can just google it).

A: Here is a decent article on how to accomplish that.
@Flubba: I didn't say "great" article, I said "decent" - there is a big difference. Besides, no one else had answered and it had been around a while. I figured a "decent" answer was better than none. I am no Flash expert, so...

A: @JasonBunting: "Here is a decent article on how to accomplish that."
That's not a great article - it's seven years old and doesn't mention the CSS capabilities of Flash. It covers only the basics of HTML support in Flash. Adobe have a more authoritative page here: Using HTML text formatting in Flash CS3 Professional.
Things have moved on a lot since then. Flash MX 2004 added CSS capabilities, and there is a good article from Kirupa.com about that - Using CSS Styles in Flash MX 2004.
Don't be thinking you'll just import a modern page into Flash and it'll render - that ain't going to happen. This stuff is for styling text areas. You won't get JavaScript executing, because you're reliant on the subset of HTML and CSS that Flash supports in a text object, and Flash has a different object model from a web page.

A: If it is complex HTML and JavaScript, one possible way is HTMLComponent, a method that uses an iframe over your Flash to make it appear like the HTML is in your app. There are a few downsides to this method, however - most of them described in detail at Deitte.com. If this can move offline, you could use AIR (it has an mx:HTML component built in). Deitte.com has details of this technique as well.
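As a concrete illustration of the htmlText/StyleSheet approach described above, here is a minimal ActionScript 3 sketch. The class name and styles are made up, only Flash's limited HTML/CSS subset is supported, and the style sheet must be assigned before htmlText is set:

import flash.text.TextField;
import flash.text.StyleSheet;

var css:StyleSheet = new StyleSheet();
css.parseCSS(".headline { color: #990000; font-weight: bold; }");

var tf:TextField = new TextField();
tf.width = 400;
tf.height = 200;
tf.multiline = true;
tf.wordWrap = true;
tf.styleSheet = css; // must be set before assigning htmlText
tf.htmlText = "<span class='headline'>Breaking:</span> some <i>italic</i> text and " +
              "<a href='http://example.com'>a link</a>";
addChild(tf); // assumes this runs inside a Sprite/MovieClip, e.g. the document class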
{ "language": "en", "url": "https://stackoverflow.com/questions/22909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: CSV (or sheet in XLS) to SQL create (and insert) statements with .NET? Does anyone have a technique for generating SQL table CREATE (and data INSERT) commands programmatically from a CSV (or sheet in a .xls) file?
I've got a third-party database system which I'd like to populate with data from a CSV file (or sheet in an XLS file), but the importer supplied can't create the table structure automatically as it does the import. My CSV file has lots of tables with lots of columns, so I'd like to automate the table creation process as well as the data importing if possible, but I'm unsure about how to go about generating the CREATE statement...

A: In SQL Server it is as easy as
SELECT * INTO NewTableNameHere FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\testing.xls', 'SELECT * FROM [Sheet1$]')

A: BULK INSERT CSVTest
FROM 'c:\csvtest.txt'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)

A: I recommend having a look at csvkit. Its csvsql function can generate table insert statements, or even execute them for you, from most tabular data sources.

A: Unfortunately, I'm using an SQL engine for embedded systems, so it does not support BULK INSERT or OLEDB data sources, which is why I was thinking of taking the SQL statement generation approach.
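Since none of the answers show the generation step itself, here is a minimal C# sketch of deriving a CREATE TABLE and INSERT statements from a CSV header row. It is a naive example under stated assumptions: every column is treated as NVARCHAR(255), and the simple Split does not handle quoted fields containing commas - a real CSV parser would be needed for that.

using System;
using System.IO;
using System.Linq;
using System.Text;

class CsvToSql
{
    // Generates CREATE TABLE + INSERT statements from a CSV file.
    // Assumes the first line holds column names and no field contains
    // an embedded comma or quote.
    static string Generate(string csvPath, string tableName)
    {
        string[] lines = File.ReadAllLines(csvPath);
        string[] columns = lines[0].Split(',');

        StringBuilder sql = new StringBuilder();
        sql.AppendLine("CREATE TABLE [" + tableName + "] (");
        sql.AppendLine(string.Join(",\n",
            columns.Select(c => "    [" + c.Trim() + "] NVARCHAR(255)").ToArray()));
        sql.AppendLine(");");

        foreach (string line in lines.Skip(1))
        {
            string values = string.Join(", ",
                line.Split(',')
                    .Select(f => "'" + f.Trim().Replace("'", "''") + "'")
                    .ToArray());
            sql.AppendLine("INSERT INTO [" + tableName + "] VALUES (" + values + ");");
        }
        return sql.ToString();
    }

    static void Main(string[] args)
    {
        Console.WriteLine(Generate(args[0], args[1]));
    }
}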
{ "language": "en", "url": "https://stackoverflow.com/questions/22935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Does anybody know of existing code to read a mork file (Thunderbird Address Book)? I have the need to read the Thunderbird address book on the fly. It is stored in a file format called Mork. Not a pleasant file format to read. I found a 1999 article explaining the file format. I would love to know if someone already has gone through this process and could make the code available. I found mork.pl by Jamie Zawinski (he worked on Netscape Navigator), but I was hoping for a .NET solution. I'm hoping StackOverflow will come to the rescue, because this just seems like a waste of my time to write something to read this file format when it should be so simple. I love the comments that Jamie put in his perl script. Here is my favorite part: # Let me make it clear that McCusker is a complete barking lunatic. # This is just about the stupidest file format I've ever seen. A: The Beagle search engine had code to parse Mork files. It's not the most memory efficient solution, but it worked and could be a useful starting point. Here's a link to the file: http://svn.gnome.org/viewvc/beagle/tags/BEAGLE_0_2_18/Util/Mork.cs?view=markup (These days Beagle doesn't use this parser anymore; we took the easier (and supported) path of writing a Thunderbird extension which just sent the data to Beagle itself. Has the disadvantage of not working while Thunderbird is closed, but has the advantage of not instilling the desire to bash your head in with the nearest blunt instrument.)
{ "language": "en", "url": "https://stackoverflow.com/questions/22943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to process Excel files stored in an image data type column using an SSIS package? I have a .NET webforms front end that allows admin users to upload two .xls files for offline processing. As these files will be used for validation (and aggregation), I store them in an image field in a table.
My ultimate goal is to create an SSIS package that will process these files offline. Does anyone know how to use SSIS to read a blob from a table into its native (in this case .xls) format for use in a Data Flow task?

A: In my (admittedly limited) experience with SSIS, it is quite good at rapidly getting something up and running, but frustratingly limited in getting something that "feels" like the most elegant, efficient solution to a programmer.
Since the Excel Source Editor seems to take only files as input, you need to give it a file or reimplement its functionality in code that can take a blob. I understand that this is unsatisfying, but in the end, this is a time-saving tool.
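Building on the answer above, one workable pattern is to stage the blob to a temporary file (for example from a Script Task that runs before the Data Flow) and point the Excel Source at that file. A minimal C# sketch, with hypothetical connection string, table, column and path names:

using System;
using System.Data.SqlClient;
using System.IO;

class StageBlobToFile
{
    static void Main()
    {
        // Connection string and schema are assumptions - adjust to your environment.
        string connectionString = "Server=.;Database=Uploads;Integrated Security=true";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT FileData FROM UploadedFiles WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", 1);
            conn.Open();

            // The image column comes back as byte[]; write it out as an .xls file.
            byte[] blob = (byte[])cmd.ExecuteScalar();
            File.WriteAllBytes(@"C:\temp\staged.xls", blob);
        }
        // An Excel Source in the Data Flow can now read C:\temp\staged.xls.
    }
}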
{ "language": "en", "url": "https://stackoverflow.com/questions/22968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to provide namespaces in JavaScript with instanced objects. I've got a JavaScript "object", built this way:

function foo() {
    this.length = 0;
}
foo.prototype.getLength = function() {
    return this.length;
}
...

I know how to emulate namespaces with singleton JavaScript objects, but what is the best way to "namespace" an object such as that above that will be instanced?
I know that several JavaScript libraries have namespacing capabilities, but I'm using jQuery and would rather not add another library to the mix. I'd like to be able to provide my own - perhaps by exploiting jQuery - intrinsic namespacing scheme for the JS objects of mine that need to be instanced.
Thanks, rp

A: JavaScript doesn't really have namespaces or packages like other languages. Instead it has closures. If you have an application that consists of multiple functions, variables and objects, then you should put them inside a single global object. This will have the same effect as a namespace. For example:

var namespace = {};
namespace.foo = function() { ... };
namespace.foo.prototype.getLength = function() { ... };

You could also create a set of nested objects and simulate packages:

loadPackage = function() {
    var path = arguments[0];
    for (var i = 1; i < arguments.length; i++) {
        if (!path[arguments[i]]) {
            path[arguments[i]] = {};
        }
        path = path[arguments[i]];
    }
    return path;
}

var mail = loadPackage(this, "com", "google", "mail");
mail.username = "gundersen";
mail.login = function(password) { ... };

this.com.google.mail.login("mySecretPassword");

A: Shouldn't be much different:

namespace.foo = function foo() { ... };
namespace.foo.prototype.getLength = function() { ... };

or you could use

(function() {
    function foo() { ... }
    foo.prototype...
    namespace.foo = foo;
})();

to save some typing.

A: Both answers were very helpful! Here's what I ended up with:

if (typeof(rpNameSpace) == "undefined") rpNameSpace = {};

rpNameSpace.foo = function() {
    this.length = 613;
}
rpNameSpace.foo.prototype.getLength = function() {
    return this.length * 2;
}

Then, to use the resulting "namespaced" object:

var x = new rpNameSpace.foo();
display(x.getLength());

A: Simple:

var MyNamespace = MyNamespace || {};
MyNamespace.foo = function() {
    this.length = 0;
};
MyNamespace.foo.prototype.getLength = function() {
    return this.length;
};

A: Another alternative may be the bob.js framework:

bob.ns.setNs('myApp.myFunctions', {
    say: function(msg) {
        console.log(msg);
    }
});

// sub-namespace
bob.ns.setNs('myApp.myFunctions.mySubFunctions', {
    hello: function(name) {
        myApp.myFunctions.say('Hello, ' + name);
    }
});

// call:
myApp.myFunctions.mySubFunctions.hello('Bob');
{ "language": "en", "url": "https://stackoverflow.com/questions/22976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Detecting if SQL Server is running. I'm looking for a way to poll different servers and check that SQL Server is up and running. I'm writing my code in C#. I don't particularly care about individual databases, just that SQL Server is running and responsive.
Any ideas?

A: Well, the brute-force solution is to attempt to initiate a connection with the database on each server. That will tell you whether it's running, though you could have timeout issues.
The more elegant (but more difficult... isn't that always the way?) solution would be to use WMI to connect to the remote machine and find out if the SQL Server process is running.

A: System.Data.Sql.SqlDataSourceEnumerator will return all instances of SQL Server currently running. MSDN Link

A: Use the TcpClient class to create a generic function that connects over TCP to a given IP address. Then iterate over the list of servers you want to test and try to open a connection to port 1433.

A: If you need specific servers, use WMI. If you just want all available servers: http://support.microsoft.com/kb/q287737/

A: SqlDataSourceEnumerator gives you all instances, but they are not necessarily running. For local instances of SQL, you can use the ServiceController object (namespace System.ServiceProcess). The service name is a concatenation of "MSSQL$" and the InstanceName from SqlDataSourceEnumerator. Set the ServiceName property of the ServiceController object, and you can check the Status property - Stopped, Running, Pending, etc. Hence, you can filter out the "Running" ones.

A: I would certainly go with Vincent's answer. Just make absolutely certain you are closing and disposing the TCP connections properly, etc. WMI seems a bit of overkill to me if that is all you're after.
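A minimal C# sketch of the TcpClient polling approach suggested above. The server names are placeholders, and port 1433 is only the default-instance port - named instances may listen elsewhere:

using System;
using System.Net.Sockets;

class SqlServerProbe
{
    static bool IsSqlServerUp(string host, int port)
    {
        try
        {
            using (TcpClient client = new TcpClient())
            {
                IAsyncResult result = client.BeginConnect(host, port, null, null);
                // Wait up to 2 seconds so a dead host doesn't block the poll loop
                bool connected = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2), false);
                return connected && client.Connected;
            }
        }
        catch (SocketException)
        {
            return false;
        }
    }

    static void Main()
    {
        string[] servers = { "dbserver01", "dbserver02" }; // hypothetical names
        foreach (string server in servers)
        {
            Console.WriteLine("{0}: {1}", server, IsSqlServerUp(server, 1433) ? "up" : "down");
        }
    }
}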
{ "language": "en", "url": "https://stackoverflow.com/questions/22979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Acts-as-readable Rails plugin issue. I'm using Intridea's Acts as Readable Rails plugin for a messaging system I'm currently building. I've defined my message class accordingly:

class Post < ActiveRecord::Base
  acts_as_readable
end

And everything seems to be working according to plan, but when trying to make the app show unread messages in my message view, I run into problems. Their example:

bob = User.find_by_name("bob")
bob.readings                 # => []
Post.find_unread_by(bob)     # => [<Post 1>,<Post 2>,<Post 3>...]
Post.find_read_by(bob)       # => []
Post.find(1).read_by?(bob)   # => false
Post.find(1).read_by!(bob)   # => <Reading 1>
Post.find(1).read_by?(bob)   # => true
Post.find(1).users_who_read  # => [<User bob>]
Post.find_unread_by(bob)     # => [<Post 2>,<Post 3>...]
Post.find_read_by(bob)       # => [<Post 1>]
bob.readings                 # => [<Reading 1>]

So it seems that if I wanted to list the number of unread messages sitting in a mailbox (for example Inbox (39)), I should be able to do something like:

<%= Post.find_unread_by(current_user).count %>

But to no avail. I always seem to get stuck on the simple view issues after everything's set. Any ideas?

A: The following will work:

<%= Post.find_unread_by(current_user).size %>

or

<%= Post.find_unread_by(current_user).length %>

However, if you check your development.log you should see that it gets the unread count by:
* retrieving all the posts
* retrieving all the posts read by the user
* removing all of 2. from 1. in Ruby
This will be very bad performance-wise with lots of posts. A better way would be to retrieve the posts read by the current user and then use ActiveRecord::Calculations to get a count without retrieving all the posts in the database:

Post.count(:conditions => ["id NOT IN (?)", Post.find_read_by(current_user)])

This should go into your Post model, to follow the best practice of not having finders in the view or controller.

Post.rb:
def self.unread_post_count_for_user(user)
  count(:conditions => ["id NOT IN (?)", Post.find_read_by(user)])
end

Then your view will just be:

<%= Post.unread_post_count_for_user(current_user) %>
{ "language": "en", "url": "https://stackoverflow.com/questions/22980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }