Q: What's a good way to write a Cocoa front-end to an Erlang application? I'm exploring the possibility of writing an application in Erlang, but it would need to have a portion written in Cocoa (presumably Objective-C). I'd like the front-end and back-end to be able to communicate easily. How can this best be done? I can think of using C ports and connected processes, but I think I'd like a reverse situation (the front-end starting and connecting to the back-end). There are named pipes (FIFOs), or I could use network communications over a TCP port or a named BSD socket. Does anyone have experience in this area?

A: One way is Theo's way with NSTask, NSPipe and NSFileHandle. You can start by looking at the code to CouchDBX http://couchprojects.googlecode.com/svn/trunk/unofficial-binary-releases/CouchDBX/ Ports are possible but not nice at all. Is there some reason why this communication can't simply be handled with mochiweb and JSON communication?

A: One way would be to have the Erlang core of the application be a daemon that the Cocoa front-end communicates with over a Unix-domain socket using some simple protocol you devise. The use of a Unix-domain socket means that the Erlang daemon could be launched on-demand by launchd and the Cocoa front-end could find the path to the socket to use via an environment variable. That makes the rendezvous between the app and the daemon trivial, and it also makes it straightforward to develop multiple front-ends (or possibly a framework that wraps communication with the daemon). The Mac OS X launchd system is really cool this way. If you specify that a job should be launched on-demand via a secure Unix-domain socket, launchd will actually create the socket itself with appropriate permissions, and advertise its location via the environment variable named in the job's property list. The job, when started, will actually be passed a file descriptor to the socket by launchd when it does a simple check-in. Ultimately this means that the entire process of the front-end opening the socket to communicate with the daemon, launchd launching the daemon, and the daemon responding to the communication can be secure, even if the front-end and the daemon run at different privilege levels.

A: Usually when creating Cocoa applications that front UNIX commands or other headless programs you use an NSTask:

Using the NSTask class, your program can run another program as a subprocess and can monitor that program’s execution. An NSTask object creates a separate executable entity; it differs from NSThread in that it does not share memory space with the process that creates it. A task operates within an environment defined by the current values for several items: the current directory, standard input, standard output, standard error, and the values of any environment variables. By default, an NSTask object inherits its environment from the process that launches it. If there are any values that should be different for the task, for example, if the current directory should change, you must change the value before you launch the task. A task’s environment cannot be changed while it is running.

You can communicate with the backend process by way of stdin/stdout/stderr. Basically NSTask is a high-level wrapper around exec (or fork or system, I always forget the difference). As I understand it you don't want the Erlang program to be a background daemon that runs continuously, but if you do, go with @Chris's suggestion.
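For illustration, here is a minimal C sketch of the front-end side of the launchd rendezvous described above. The environment variable name is hypothetical; in practice it would be whatever the daemon's launchd property list declares.

#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Connect to the Unix-domain socket whose path launchd advertises via an
   environment variable. "ERLANG_DAEMON_SOCKET" is a made-up name for the
   sake of the example. */
int connect_to_daemon(void)
{
    const char *path = getenv("ERLANG_DAEMON_SOCKET");
    if (path == NULL)
        return -1;

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd; /* speak your protocol over this descriptor */
}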
A: The NSTask and Unix domain socket approaches are both great suggestions. Something to keep an eye on is an Erlang FFI implementation that's in the works: http://muvara.org/crs4/erlang/ffi

A: erl_call should be usable from an NSTask. I use it from a TextMate command and it is very fast. Combining erl_call with an OTP gen_server would let you keep a persistent backend state with relative ease. See my post on erl_call at my blog for more details.

A: When using NSTask you may also consider PseudoTTY.app (which allows interactive communication)! Another code sample of interest is BigSQL, a PostgreSQL client that enables the user to send SQL to a server and display the result. open -a Safari http://web.archive.org/web/20080324145441/http://www.bignerdranch.com/applications.shtml
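As a sketch of the erl_call approach mentioned above (the node and module names here are hypothetical, and the exact flags are from memory, so check erl_call's documentation):

# -s starts the Erlang node if it is not already running,
# -n names the node, -a applies Module Function [Args]
erl_call -s -n backend@localhost -a 'backend_server get_state []'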
{ "language": "en", "url": "https://stackoverflow.com/questions/37381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Having MSDN on a USB key Is there a way to have the MSDN documentation on a USB key? Either the web version or the MSDN Library program. I've been setting up my USB key with PortableApps stuff.

A: I think when you do step 2 and install the documentation, just direct it to the USB key drive letter. Easy peasy.

A: Have a look at the Visual Studio 2012/2013 Help Downloader. This tool allows Visual Studio 2012/2013 packages to be downloaded to an offline cache location before being imported into the Microsoft Help Viewer 2.0/2.1 for offline viewing.

A: @Oleg You can use MSDN to USB, it works offline with any Visual Studio 2010|2015|2017|2019 IDE product. Apparently, you have to download the docs first, then use this tool to "Backup MSDN" to your USB drive, and later use the same tool to "Locate MSDN".
{ "language": "en", "url": "https://stackoverflow.com/questions/37388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Arithmetic with Arbitrarily Large Integers in PHP Ok, so PHP isn't the best language to be dealing with arbitrarily large integers in, considering that it only natively supports 32-bit signed integers. What I'm trying to do though is create a class that could represent an arbitrarily large binary number and be able to perform simple arithmetic operations on two of them (add/subtract/multiply/divide). My target is dealing with 128-bit integers. There are a couple of approaches I'm looking at, and problems I see with them. Any input or commentary on what you would choose and how you might go about it would be greatly appreciated.

Approach #1: Create a 128-bit integer class that stores its integer internally as four 32-bit integers. The only problem with this approach is that I'm not sure how to go about handling overflow/underflow issues when manipulating individual chunks of the two operands.

Approach #2: Use the bcmath extension, as this looks like something it was designed to tackle. My only worry in taking this approach is the scale setting of the bcmath extension, because there can't be any rounding errors in my 128-bit integers; they must be precise. I'm also worried about being able to eventually convert the result of the bcmath functions into a binary string (which I'll later need to shove into some mcrypt encryption functions).

Approach #3: Store the numbers as binary strings (probably LSB first). Theoretically I should be able to store integers of any arbitrary size this way. All I would have to do is write the four basic arithmetic functions to perform add/sub/mult/div on two binary strings and produce a binary string result. This is exactly the format I need to hand over to mcrypt as well, so that's an added plus. This is the approach I think has the most promise at the moment, but the one sticking point I've got is that PHP doesn't offer me any way to manipulate the individual bits (that I know of). I believe I'd have to break it up into byte-sized chunks (no pun intended), at which point my questions about handling overflow/underflow from Approach #1 apply.

A: The PHP GMP extension will be better for this. As an added bonus, you can use it to do your decimal-to-binary conversion, like so:

gmp_strval(gmp_init($n, 10), 2);

A: There are already various classes available for this so you may wish to look at them before writing your own solution (if indeed writing your own solution is still needed).

A: As far as I can tell, the bcmath extension is the one you'll want. The data in the PHP manual is a little sparse, but you ought to be able to set the precision to be exactly what you need by using the bcscale() function, or the optional third parameter in most of the other bcmath functions. Not too sure on the binary strings thing, but a bit of googling tells me you ought to be able to do this by making use of the pack() function.

A: I implemented the following PEMDAS-compliant BC evaluator which may be useful to you.

function BC($string, $precision = 32)
{
    if (extension_loaded('bcmath') === true) {
        if (is_array($string) === true) {
            if ((count($string = array_slice($string, 1)) == 3) && (bcscale($precision) === true)) {
                $callback = array('^' => 'pow', '*' => 'mul', '/' => 'div', '%' => 'mod', '+' => 'add', '-' => 'sub');

                if (array_key_exists($operator = current(array_splice($string, 1, 1)), $callback) === true) {
                    $x = 1;
                    $result = @call_user_func_array('bc' . $callback[$operator], $string);

                    if ((strcmp('^', $operator) === 0) && (($i = fmod(array_pop($string), 1)) > 0)) {
                        $y = BC(sprintf('((%1$s * %2$s ^ (1 - %3$s)) / %3$s) - (%2$s / %3$s) + %2$s', $string = array_shift($string), $x, $i = pow($i, -1)));

                        do {
                            $x = $y;
                            $y = BC(sprintf('((%1$s * %2$s ^ (1 - %3$s)) / %3$s) - (%2$s / %3$s) + %2$s', $string, $x, $i));
                        } while (BC(sprintf('%s > %s', $x, $y)));
                    }

                    if (strpos($result = bcmul($x, $result), '.') !== false) {
                        $result = rtrim(rtrim($result, '0'), '.');

                        if (preg_match(sprintf('~[.][9]{%u}$~', $precision), $result) > 0) {
                            $result = bcadd($result, (strncmp('-', $result, 1) === 0) ? -1 : 1, 0);
                        } else if (preg_match(sprintf('~[.][0]{%u}[1]$~', $precision - 1), $result) > 0) {
                            $result = bcmul($result, 1, 0);
                        }
                    }

                    return $result;
                }

                return intval(version_compare(call_user_func_array('bccomp', $string), 0, $operator));
            }

            $string = array_shift($string);
        }

        $string = str_replace(' ', '', str_ireplace('e', ' * 10 ^ ', $string));

        while (preg_match('~[(]([^()]++)[)]~', $string) > 0) {
            $string = preg_replace_callback('~[(]([^()]++)[)]~', __FUNCTION__, $string);
        }

        foreach (array('\^', '[\*/%]', '[\+-]', '[<>]=?|={1,2}') as $operator) {
            while (preg_match(sprintf('~(?<![0-9])(%1$s)(%2$s)(%1$s)~', '[+-]?(?:[0-9]++(?:[.][0-9]*+)?|[.][0-9]++)', $operator), $string) > 0) {
                $string = preg_replace_callback(sprintf('~(?<![0-9])(%1$s)(%2$s)(%1$s)~', '[+-]?(?:[0-9]++(?:[.][0-9]*+)?|[.][0-9]++)', $operator), __FUNCTION__, $string, 1);
            }
        }
    }

    return (preg_match('~^[+-]?[0-9]++(?:[.][0-9]++)?$~', $string) > 0) ? $string : false;
}

It automatically deals with rounding errors; just set the precision to whatever digits you need.
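To make the GMP suggestion above concrete, here is a minimal sketch (not the asker's class, and the sample values are arbitrary) of 128-bit arithmetic with results reduced mod 2^128 to emulate fixed-width integers, ending in the 16-byte binary string mcrypt wants:

<?php
// Fixed-width 128-bit arithmetic via GMP.
$mod128 = gmp_pow(2, 128);

$a = gmp_init('123456789012345678901234567890', 10);
$b = gmp_init('98765432109876543210', 10);

$sum     = gmp_mod(gmp_add($a, $b), $mod128);
$product = gmp_mod(gmp_mul($a, $b), $mod128);

// Convert to a fixed-width 16-byte (128-bit) big-endian binary string,
// e.g. for handing to mcrypt.
$hex    = str_pad(gmp_strval($product, 16), 32, '0', STR_PAD_LEFT);
$binary = pack('H*', $hex);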
{ "language": "en", "url": "https://stackoverflow.com/questions/37391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Linux Lightweight Distro and X Windows for Development I want to build a lightweight Linux configuration to use for development. The first idea is to use it inside a virtual machine under Windows, or on old laptops with 1GB RAM tops. Maybe even a distributable environment for developers. So the whole idea is to use a LAMP server, a Java application server (Tomcat or Jetty) and X Windows (any window manager, from FVWM to Enlightenment), Eclipse, maybe jEdit and of course Firefox.

Edit: I am changing this post to compile a possible list of distros and window managers that can be used to configure a real lightweight development environment. I am using personal experiences on this matter as a base. Info about the distros can easily be found on their sites. So please, focus on personal use of those systems.

Distros

Ubuntu / Xubuntu Pros:
* Personal experience in old systems or low-RAM environments - @Schroeder, @SCdF
* Several suggestions based on personal knowledge - @Kyle, @Peter Hoffmann

Gentoo Pros:
* Not targeted at desktop users - @paan
* Doesn't come with a huge amount of applications - @paan

Slackware Pros:
* Suggested as giving the best performance with a wise install/configuration - @Ryan

Damn Small Linux Pros:
* Main focus is the lightweight factor - 50MB LiveCD - @Ryan

Debian Pros:
* Very versatile, can be configured for both heavy and lightweight computers - @Ryan
* APT as package manager - @Kyle
* Based on compatibility and usability - @Kyle

-- Feel free to add pros and cons to this, so we can compile a good reference. --

X Windows suggestions keep coming back to XFCE. If others are to be added here, open a section for them like the distro one :)

A: Try using Gentoo. Most distros with X are targeted towards desktop users and by default include a lot of other applications you don't need, while at the same time lacking a lot of the stuff you do need. You could customize the install, but usually a lot of useless stuff will get into the 'base' install anyway. If you are worried about compile time, you can tell portage (the Gentoo package management system) to fetch binaries when available instead of compiling. It gives you the flexibility of installing a system with only the stuff you want. I used Gentoo and never went back. http://www.gentoo.org/

A: I installed Arch (www.archlinux.org) on my old MacMini (there is a PPC version) which only has 512MB RAM and a single 2.05GHz processor and it absolutely flies! It is almost bare after installation, so it's about as lightweight as you can get... but it comes with pacman, a software package manager, which is as good as apt-get (Ubuntu/Debian) if not better. You have a choice of installing many desktop managers such as awesome, dwm, wmii, fvwm, GNOME, XFCE, KDE, etc., straight from pacman using a single line of code. In my opinion(!!) it's lightweight like Gentoo but a binary distro, so it isn't as much hassle (although I can imagine it can be a little daunting if you're new to Linux). I had a system running (with X and awesome WM) in about 1.5 hours!

A: My 2c: I'd recommend basing your system on Debian - the apt system has become the de-facto way to quickly install and update programs on Linux. Ubuntu is Debian based with an emphasis on usability and compatibility. As for window managers, in my opinion Xfce hits the right balance between being lightweight and functional. The Ubuntu-based Xubuntu would probably be a good match. Remember - for security, only install essential network services like SSH.
If it were my decision, I would set up a PXE boot server to easily install Ubuntu Server Edition to any computer on the network. The reason why I would choose Ubuntu is because it's the one I've had the most experience with and the one I can easily find help for. If I needed a windowing manager for the particular installation, I would also install either Xfce or Blackbox. In fact, I have an old laptop in my basement that I've set up in exactly this way and it's worked out quite well for me.

A: I'm in a similar situation to Schroeder; having a laptop with 512MB RAM is a PITA. I tried running Xubuntu but to be honest I didn't find that it was either usable or a great saver on RAM. So I switched to Ubuntu and it's worked out pretty well.

A: I would recommend Archlinux, which I'm using now. XFCE is my choice of desktop environment for now, but if you prefer a more lightweight one you can try LXDE. Archlinux is much like Gentoo but with prebuilt binary packages and simpler configuration. If all those distros still won't work for you, you may want to try LFS - Linux From Scratch.

A: I am writing this on a Centrino 1.5GHz, 512MB RAM running Ubuntu. It's Debian based and is the first Linux distro I have tried that actually worked with my laptop on first install. Find more info here.

A: I would recommend Xubuntu. It's based on Ubuntu/Debian and optimized for a small footprint with the Xfce desktop environment.

A: Second the Arch suggestion. You will be tinkering with quite a few configuration files to get everything going, but I've found none better for a lean and mean setup.

A: I suggest you check out the following three distros:

* Damn Small Linux - Very lightweight. Includes its own lightweight browser (Dillo), but you can install Firefox easily. The entire distro fits on a 50MB LiveCD.
* Slackware - Performance wise Slackware will probably perform the best out of the three, but I'd suggest running your own benchmarks with your hardware.
* Debian - Debian is extremely versatile. This is the only distro of the three I'd recommend for both a 32-bit 1GB RAM laptop and also a 4GB RAM 64-bit machine.

A: I would recommend something much lighter than XFCE: IceWM. It takes some time to configure it to be really usable, but it's worth it. I have a fully running IceWM which only takes about 5MB of RAM.

A: The primary reason I use Linux is because it can be lightweight. In 1999, I used Redhat, Mandrake (now Mandriva), and Debian. All were faster and more lightweight than my typical Windows 98 installations. Not so anymore. I now have to research and experiment in order to find distros that are lightweight in both storage and memory footprint, and speedy. These are the ones I have played with lately:

* Slitaz, a French distro (I use the English version and it works well).
* Crunchbang, a lightweight Ubuntu and Debian-derived distro
* Crux, which is source-only and very low-level geeky (I chose it because it has good support for PowerPC, and I was using it on my aging Powerbook G4)

Currently, however, I use Archlinux for most of my work, as it offers a good compromise between lightweight and feature-full. But if you decide to roll your own distro from scratch, you may want to try Buildroot or Openembedded.
I do not have much experience yet with Openembedded, but using Buildroot I have been able to create a very simple OS that boots quickly, loads only what I want, and only takes up 7 MB of storage space (adding development tools will increase this greatly, of course; I am merely using it as an SSH terminal, although I can do some editing with vi, and some text-only web browsing). As far as window managers, I have been very happy with OpenBox. I frequently experiment with lighter-weight window managers listed on this page, however.

A: Here are my opinions as well. I have used Fedora, Gentoo, SliTaz, Archlinux, and Puppy Linux for development. The constraints: the system virtual image had to be under 800MB to allow for easy download and include all necessary software. The system had to be fast and customizable. It had to support version control with SVN and Git, XAMPP or LAMP, an SSH client, a window environment (X or whatever) with the latest video drivers/higher resolution, and some graphical manipulation software for images. I tried Archlinux, Puppy, and SliTaz. I have to say that SliTaz was the easiest to work with and to set up. The complete base-OS install from the image is around 120MB using the cooking version. TazPkg is a great package manager but some of the listed packages were outdated. Some of the latest versions needed to be built from source code. SliTaz is extremely lightweight and you have to live with some older packages in the supported TazPkg package list. There is increasing support, and XAMPP, Java, Perl, Python, and SVN port well using TazPkg with the latest versions. SliTaz is all about customization and lightweight. The final size was 800MB with all necessary software. Archlinux and Puppy, although also lightweight, were over 1.5GB after all of the software was installed. The base systems were not comparable to SliTaz. If anyone is interested in a virtual image for SliTaz with XAMPP to try out, contact away and a link will be posted. All the best and happy development! :)
{ "language": "en", "url": "https://stackoverflow.com/questions/37396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I make a fully statically linked .exe with Visual Studio Express 2005? My current preferred C++ environment is the free and largely excellent Microsoft Visual Studio 2005 Express edition. From time to time I have sent release .exe files to other people with pleasing results. However recently I made the disturbing discovery that the pleasing results were based on more luck than I would like. Attempting to run one of these programs on an old (2001 vintage, not scrupulously updated) XP box gave me nothing but a nasty "System cannot run x.exe" (or similar) message. Some googling revealed that with this toolset, even specifying static linking results in a simple hello-world.exe actually relying on extra .dll files (msvcm80.dll etc.). An incredibly elaborate version scheming system (manifest files anyone?) then will not let the .exe run without exactly the right .dll versions. I don't want or need this stuff, I just want an old fashioned self contained .exe that does nothing but lowest common denominator Win32 operations and runs on any old win32 OS. Does anyone know if it's possible to do what I want to do with my existing toolset? Thank you.

A: I've had this same dependency problem and I also know that you can include the VS 8.0 DLLs (release only! not debug!---and your program has to be release, too) in a folder of the appropriate name, in the parent folder with your .exe: How to: Deploy using XCopy (MSDN) Also note that things are guaranteed to go awry if you need to have C++ and C code in the same statically linked .exe, because you will get linker conflicts that can only be resolved by ignoring the correct libXXX.lib and then linking dynamically (DLLs). Lastly, with a different toolset (VC++ 6.0) things "just work", since Windows 2000 and above have the correct DLLs installed.

A: My experience in Visual Studio 2010 is that there are two changes needed so as to not need DLLs. From the project property page (right click on the project name in the Solution Explorer window):

* Under Configuration Properties --> General, change the "Use of MFC" field to "Use MFC in a Static Library".
* Under Configuration Properties --> C/C++ --> Code Generation, change the "Runtime Library" field to "Multi-Threaded (/MT)"

Not sure why both were needed. I used this to remove a dependency on glut32.dll. Added later: When making these changes to the configurations, you should make them to "All Configurations" --- you can select this at the top of the Properties window. If you make the change to just the Debug configuration, it won't apply to the Release configuration, and vice-versa.

A: In regard to Jared's response, having Windows 2000 or better will not necessarily fix the issue at hand. Rob's response does work; however, it is possible that this fix introduces security issues, as Windows updates will not be able to patch applications built as such. In another post, Nick Guerrera suggests packaging the Visual C++ Runtime Redistributable with your applications, which installs quickly, and is independent of Visual Studio.

A: For the C runtime go to the project settings, choose C/C++ then 'Code Generation'. Change the 'runtime library' setting to 'multithreaded' instead of 'multithreaded dll'. If you are using any other libraries you may need to tell the linker to ignore the dynamically linked CRT explicitly.
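For a plain console program, the same effect as the /MT project setting can be had from the command line; a sketch, assuming a Visual Studio command prompt:

rem /MT selects the static multi-threaded CRT, so the resulting .exe
rem does not depend on msvcr80.dll / msvcm80.dll at runtime
cl /MT /EHsc hello.cpp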
{ "language": "en", "url": "https://stackoverflow.com/questions/37398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: What is the best way to interpret Perfmon analysis into application-specific observations/data? Many of us have used the Perfmon tool to do performance analysis, especially with .NET counters, but there are so many variables going on in Perfmon that it always becomes hard to interpret Perfmon results into valuable feedback about my application. I want to use Perfmon (not a tool like Ants Profiler etc.), but how do I accurately interpret the observations? Any inputs are welcome.

A: I use the Performance Analysis of Logs (PAL) tool: http://pal.codeplex.com/ It's not an "official" Microsoft tool, but I believe the author works for Microsoft. The project seems to be fairly active. In addition to the canned threshold files provided (which are pretty good), you can write your own thresholds to analyze what your app needs. The generation of the HTML report with charts is also very nice. UPDATE: PAL 2.3.2 no longer depends on the MS LogParser or MS Office Web Components; it uses PowerShell v2.0 or greater, MS .NET Framework 3.5 SP1, and the MS Chart Controls for .NET 3.5.
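For what it's worth, one way to capture a counter log that PAL can analyze is the built-in typeperf command; this is a sketch, and the counter list, interval, and sample count are just examples to adapt:

typeperf "\Processor(_Total)\% Processor Time" "\.NET CLR Memory(_Global_)\% Time in GC" -si 15 -sc 240 -f CSV -o perf.csv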
{ "language": "en", "url": "https://stackoverflow.com/questions/37425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Get back to basics. How do I get back into C++? I haven't used C++ since college. Even though I've wanted to, I haven't needed to do any until I started wanting to write plugins for Launchy. Is there a good book to read to get back into it? My experience since college is mainly C# and recently Ruby. I bought a book for C# developers and it ended up being on how to write C++ with CLI. While a good book, it wasn't quite what I was looking for.

A: The best way to get back into C++ is to jump in. You can't learn a real language without spending any serious time in a country where they speak it. I wouldn't try to learn a programming language without spending time coding in it either. I wouldn't recommend learning C first though. That's a good way to pick up some bad habits in C++.

A: I haven't tried it myself but have heard from people and sources I trust that "Accelerated C++" by Koenig and Moo is a good book for people who want to pick up C++ quickly. Compared to the more traditional route of learning C first then C++ as a kind of C with classes, the K+M approach helps you become productive quickly while avoiding pitfalls and bad habits associated with the legacy of the language.

A: A good starting place is "Thinking in C++" by Bruce Eckel. I've rarely had anyone complain about the book. Well written, and it also has a version available online.

A: Another online book that I pick up whenever I need to get back into C++ is "C++ In Action" by Bartosz Milewski. It's online at his site.

A: My favorites are Effective C++, More Effective C++, and Effective STL by Scott Meyers. Also C++ Coding Standards by Sutter and Alexandrescu.

A: The C++ Programming Language by Bjarne Stroustrup covers C++ in depth. Bjarne is the inventor of C++. It also provides insights into why the language is the way it is. Some people find the book a little terse. I found it to be an enjoyable read. If you have done some C++ before it's a great place to start. It is by no means a beginner's book on C++.

A: My book recommendations:

* Essential C++ (Lippman)
* C++ Common Knowledge: Essential Intermediate Programming (Dewhurst)

...and I second the Effective C++ suggestion above. A very handy alternative to buying books in meatspace is to subscribe to a service like Safari Books Online. For a not unreasonable monthly fee you'll get access to all of the above books plus a bajillion others. If you desire fast random access to more than a couple of books, it pretty much pays for itself. It's an easy case to make if you want to convince your employer to pay for it. Beyond that, sit yourself in front of an IDE that has a C++ code completion feature (I use Eclipse/CDT most of the time).
{ "language": "en", "url": "https://stackoverflow.com/questions/37428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Reasons for SQL differences Why are SQL distributions so non-standard despite an ANSI standard existing for SQL? Are there really that many meaningful differences in the way SQL databases work, or is it just the two databases with which I have been working: MS-SQL and PostgreSQL? Why do these differences arise?

A: The ANSI standard specifies only a limited set of commands and data types. Once you go beyond those, the implementors are on their own. And some very important concepts aren't specified at all, such as auto-incrementing columns. SQLite just picks the first non-null integer, MySQL requires AUTO INCREMENT, PostgreSQL uses sequences, etc. It's a mess, and that's only among the OSS databases! Try getting Oracle, Microsoft, and IBM to collectively decide on a tricky bit of functionality.

A: It's a form of "Stealth lock-in". Joel goes into great detail here:

* http://www.joelonsoftware.com/articles/fog0000000056.html
* http://www.joelonsoftware.com/articles/fog0000000052.html

Companies end up tying their business functionality to non-standard or weird unsupported functionality in their implementation; this restricts their ability to move away from their vendor to a competitor. On the other hand, it's pretty short-sighted because anyone with half a brain will tend to abstract away the proprietary pieces, or avoid the lock-in altogether, if it gets too egregious.

A: First, I don't find databases to be as incompatible as, say, browsers or operating systems. Anyone with a few hours of training can start doing selects, inserts, deletes and updates on any SQL database. Meanwhile, it's difficult to write HTML that renders identically on every browser or write system code for more than one OS. Generally, differences in SQL are related to performance or fairly esoteric features. The major exception seems to be date formats and functions. Second, database developers generally are motivated to add features that differentiate their product from everyone else's. Products like Oracle, MS SQL Server and MySQL are vast ecosystems that rarely cross-pollinate in practice. At my workplace, we use Oracle and MySQL, but we could probably switch over to 100% Oracle in about a day if needed or desired. So I care a lot about the shiny toys Oracle gives us with each release, but I don't even know what version of MySQL we are using. IBM, Microsoft, PostgreSQL and the rest might as well not exist as far as we are concerned. Having the features to get and keep customers and users is far more important than compatibility in the database world. (That's the positive spin on the "lock-in" answer, I suppose.) Third, there are legitimate reasons for different companies to implement SQL differently. For instance, Oracle has a multi-versioning system that allows very fast and scalable consistent reads. Other databases lack that feature, but usually are faster at inserting rows and rolling back transactions. This is a fundamental difference in these systems. It doesn't make one better than the other (at least in the general case), just different. One should not be surprised if the SQL on top of a database engine takes advantage of its strengths and attempts to minimize its weaknesses. In fact, it would be irresponsible of the developers not to do this.

A: John: The standard actually covers lots of subjects, including identity columns, sequences, triggers, routines, upsert, etc.
But of course, many of these standard components may have been put in place later than the first implementations, and this could be a reason why SQL standards compliance is somewhat low, generally.

Neall: There are actually areas where the SQL standard is ahead of the implementations. For example, it would be nice to have CREATE ASSERTION, but as far as I know, no DBMS implements assertions yet. Personally, I believe that the closed nature of some ISO standards (like the SQL standard) is part of the problem: when a standard is not readily available online, it's less likely to be known by implementors/planners, and too few customers ask for compliance because they don't know what to ask for.

A: It's certainly effective lock-in, as 1800 says. But in fairness to the database vendors, the SQL standard is always playing catch-up to current databases' feature sets. Most databases we have today are of pretty ancient lineage. If you trace Microsoft SQL Server back to its roots, I think you'll find Ingres - one of the very first relational databases, written in the '70s. And Postgres was originally written by some of the same people in the '80s as a successor to Ingres. Oracle goes way back, and I'm not sure where MySQL came in. Database non-portability does suck, but it could be a lot worse.
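To make the auto-increment divergence mentioned in the first answer concrete, here is roughly how the same table is declared in several dialects (a sketch, not an exhaustive survey):

-- SQLite: an INTEGER PRIMARY KEY column auto-assigns when you insert NULL
CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT);

-- MySQL
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100));

-- PostgreSQL: SERIAL creates and wires up a sequence behind the scenes
CREATE TABLE t (id SERIAL PRIMARY KEY, name VARCHAR(100));

-- SQL Server
CREATE TABLE t (id INT IDENTITY(1,1) PRIMARY KEY, name VARCHAR(100));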
{ "language": "en", "url": "https://stackoverflow.com/questions/37441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Equivalent to StAX for C I've used the StAX API in Java quite a bit, and find it quite a clean way of dealing with XML files. Is there any equivalent library I could use for performing similar processing in C?

A: libxml is a heavily used and documented XML library for C, which provides a SAX API. Expat is another, but in my experience is not as well documented.

A: I have used Expat pretty extensively - I like it for its simplicity and small footprint.

A: If you are not opposed to C++ then try LLama.

A: Expat does StAX:

#include "expat.h"

XML_Parser VRM_parser = XML_ParserCreate("ISO-8859-1");
XML_SetElementHandler(VRM_parser, CbStartTagHandler, CbEndTagHandler);

XML_Parse(VRM_parser, text, strlen(text), 0); // start of XML
XML_Parse(VRM_parser, text, strlen(text), 0); // more XML
XML_Parse(VRM_parser, text, strlen(text), 0); // more XML
XML_Parse(VRM_parser, text, strlen(text), 0); // more XML
XML_Parse(VRM_parser, "", 0, 1); // to finish parsing
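For completeness: libxml also ships a pull-style API, the xmlreader module, which is arguably the closest C analogue to StAX since the caller drives the parse loop. A minimal sketch, assuming libxml2 is installed:

#include <stdio.h>
#include <libxml/xmlreader.h>

/* Pull parsing: each call to xmlTextReaderRead() advances the cursor
   to the next node, StAX-style. */
static void walk(const char *filename)
{
    xmlTextReaderPtr reader = xmlReaderForFile(filename, NULL, 0);
    if (reader == NULL)
        return;

    while (xmlTextReaderRead(reader) == 1) {
        const xmlChar *name = xmlTextReaderConstName(reader);
        switch (xmlTextReaderNodeType(reader)) {
        case XML_READER_TYPE_ELEMENT:
            printf("start: %s\n", name ? (const char *)name : "(none)");
            break;
        case XML_READER_TYPE_END_ELEMENT:
            printf("end: %s\n", name ? (const char *)name : "(none)");
            break;
        }
    }
    xmlFreeTextReader(reader);
}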
{ "language": "en", "url": "https://stackoverflow.com/questions/37449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: iPhone App Minus App Store? If I create an application on my Mac, is there any way I can get it to run on an iPhone without going through the App Store? It doesn't matter if the iPhone has to be jailbroken, as long as I can still run an application created using the official SDK. For reasons I won't get into, I can't have this program going through the App Store.

A: With the help of this post, I have made a script that will install via the app Installous for rapid deployment:

# compress application.
/bin/mkdir -p $CONFIGURATION_BUILD_DIR/Payload
/bin/cp -R $CONFIGURATION_BUILD_DIR/MyApp.app $CONFIGURATION_BUILD_DIR/Payload
/bin/cp iTunesCrap/logo_itunes.png $CONFIGURATION_BUILD_DIR/iTunesArtwork
/bin/cp iTunesCrap/iTunesMetadata.plist $CONFIGURATION_BUILD_DIR/iTunesMetadata.plist
cd $CONFIGURATION_BUILD_DIR
# zip up the HelloWorld directory
/usr/bin/zip -r MyApp.ipa Payload iTunesArtwork iTunesMetadata.plist

What is missing in the post referenced above is the iTunesMetadata. Without this, Installous will not install apps correctly. Here is an example of an iTunesMetadata:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>appleId</key> <string></string>
    <key>artistId</key> <integer>0</integer>
    <key>artistName</key> <string>MYCOMPANY</string>
    <key>buy-only</key> <true/>
    <key>buyParams</key> <string></string>
    <key>copyright</key> <string></string>
    <key>drmVersionNumber</key> <integer>0</integer>
    <key>fileExtension</key> <string>.app</string>
    <key>genre</key> <string></string>
    <key>genreId</key> <integer>0</integer>
    <key>itemId</key> <integer>0</integer>
    <key>itemName</key> <string>MYAPP</string>
    <key>kind</key> <string>software</string>
    <key>playlistArtistName</key> <string>MYCOMPANY</string>
    <key>playlistName</key> <string>MYAPP</string>
    <key>price</key> <integer>0</integer>
    <key>priceDisplay</key> <string>nil</string>
    <key>rating</key>
    <dict>
        <key>content</key> <string></string>
        <key>label</key> <string>4+</string>
        <key>rank</key> <integer>100</integer>
        <key>system</key> <string>itunes-games</string>
    </dict>
    <key>releaseDate</key> <string>Sunday, December 12, 2010</string>
    <key>s</key> <integer>143441</integer>
    <key>softwareIcon57x57URL</key> <string></string>
    <key>softwareIconNeedsShine</key> <false/>
    <key>softwareSupportedDeviceIds</key>
    <array>
        <integer>1</integer>
    </array>
    <key>softwareVersionBundleId</key> <string>com.mycompany.myapp</string>
    <key>softwareVersionExternalIdentifier</key> <integer>0</integer>
    <key>softwareVersionExternalIdentifiers</key>
    <array>
        <integer>1466803</integer>
        <integer>1529132</integer>
        <integer>1602608</integer>
        <integer>1651681</integer>
        <integer>1750461</integer>
        <integer>1930253</integer>
        <integer>1961532</integer>
        <integer>1973932</integer>
        <integer>2026202</integer>
        <integer>2526384</integer>
        <integer>2641622</integer>
        <integer>2703653</integer>
    </array>
    <key>vendorId</key> <integer>0</integer>
    <key>versionRestrictions</key> <integer>0</integer>
</dict>
</plist>

Obviously, replace all instances of MyApp with the name of your app and MyCompany with the name of your company. Basically, this will install on any jailbroken device with Installous installed. After it is set up, this results in very fast deployment, as it can be installed from anywhere: just upload it to your company's website, download the file directly to the device, and copy/move it to ~/Documents/Installous/Downloads.
A: With the upcoming Xcode 7 it's now possible to install apps on your devices without an Apple developer license, so now it is possible to skip the App Store, and you don't have to jailbreak your device. Now everyone can get their app on their Apple device.

Xcode 7 and Swift now make it easier for everyone to build apps and run them directly on their Apple devices. Simply sign in with your Apple ID, and turn your idea into an app that you can touch on your iPad, iPhone, or Apple Watch. Download Xcode 7 beta and try it yourself today. Program membership is not required.

Quoted from: https://developer.apple.com/xcode/

Update: Xcode 7 is now released:

Free On-Device Development Now everyone can run and test their own app on a device—for free. You can run and debug your own creations on a Mac, iPhone, iPad, iPod touch, or Apple Watch without any fees, and no programs to join. All you need to do is enter your free Apple ID into Xcode. You can even use the same Apple ID you already use for the App Store or iTunes. Once you’ve perfected your app the Apple Developer Program can help you get it on the App Store.

See Launching Your App on Devices for detailed information about installing and running on devices.

A: It's worth noting that if you go the jailbroken route, it's possible (likely?) that an iPhone OS update would kill your ability to run these apps. I'd go the official route and pay the $99 to get authorized. In addition to not having to worry about your apps being clobbered, you also get the opportunity (should you choose) to release your apps on the store.

A: After copying the app to the iPhone in the way described by @Jason Weathered, make sure to "chmod +x" the app, otherwise it won't run.

A: Official Developer Program For a standard iPhone you'll need to pay the US$99/yr to be a member of the developer program. You can then use the ad-hoc system to install your application onto up to 100 devices. The developer program has the details, but it involves adding UUIDs for each of the devices to your application package. UUIDs can be most easily retrieved using Ad Hoc Helper, available from the App Store. For further details on this method, see Craig Hockenberry's "Beta testing on iPhone 2.0" article.

Jailbroken iPhone For jailbroken iPhones, you can use the following method, which I have personally tested using the AccelerometerGraph sample app on iPhone OS 3.0.

Create Self-Signed Certificate First you'll need to create a self-signed certificate and patch your iPhone SDK to allow the use of this certificate:

* Launch Keychain Access.app. With no items selected, from the Keychain menu select Certificate Assistant, then Create a Certificate. Name: iPhone Developer; Certificate Type: Code Signing; Let me override defaults: Yes
* Click Continue. Validity: 3650 days
* Click Continue
* Blank out the Email address field.
* Click Continue until complete. You should see "This root certificate is not trusted". This is expected.
* Set the iPhone SDK to allow the self-signed certificate to be used:

sudo /usr/bin/sed -i .bak 's/XCiPhoneOSCodeSignContext/XCCodeSignContext/' /Developer/Platforms/iPhoneOS.platform/Info.plist

If you have Xcode open, restart it for this change to take effect.

Manual Deployment over WiFi The following steps require openssh and uikittools to be installed first. Replace jasoniphone.local with the hostname of the target device. Be sure to set your own password on both the mobile and root users after installing SSH.
To manually compile and install your application on the phone as a system app (bypassing Apple's installation system):

* Project, Set Active SDK, Device and Set Active Build Configuration, Release.
* Compile your project normally (using Build, not Build & Go).
* In the build/Release-iphoneos directory you will have an app bundle. Use your preferred method to transfer this to /Applications on the device:

scp -r AccelerometerGraph.app root@jasoniphone:/Applications/

* Let SpringBoard know the new application has been installed:

ssh root@jasoniphone.local uicache

This only has to be done when you add or remove applications. Updated applications just need to be relaunched. To make life easier for yourself during development, you can set up SSH key authentication and add these extra steps as a custom build step in your project. Note that if you wish to remove the application later you cannot do so via the standard SpringBoard interface; you'll need to use SSH and update the SpringBoard:

ssh root@jasoniphone.local rm -r /Applications/AccelerometerGraph.app && ssh root@jasoniphone.local uicache

A: Yes, once you have joined the iPhone Developer Program, and paid Apple $99, you can provision your applications on up to 100 iOS devices.

A: * Build your app
* Upload to a crack site
* (If your app is good enough) the crack version will be posted minutes later and ready for everyone to download ;-)

A: Changes/Notes to make this work for Xcode 3.2.1 and iPhone SDK 3.1.2:

Manual Deployment over WiFi
2) Be sure to restart Xcode after modifying the Info.plist
3) The "uicache" command is not found; using killall -HUP SpringBoard worked fine for me.

Other than that, I can confirm this works fine. Mac users, using PwnageTool 3.1.4 worked great for jailbreaking (DL via torrent).

A: If you patch /Developer/Platforms/iPhoneOS.platform/Info.plist and then try to debug an application running on the device using a real development provisioning profile from Apple, it will probably not work. Symptoms are weird error messages from com.apple.debugserver, and that you can use any bundle identifier without getting an error when building in Xcode. The solution is to restore Info.plist.
{ "language": "en", "url": "https://stackoverflow.com/questions/37464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "201" }
Q: How to Determine the Installed ASP.NET Version of Host from a Web Page I have a site running in a Windows shared hosting environment. In their control panel for the shared hosting account I have it set to use ASP.NET version 3.0, but it doesn't say 3.5 SP1 specifically. How can I view the installed version running on the server where my website is hosted in an ASP.NET page?

A: Thanks! I just dropped <%=Environment.Version%> on a page and got 2.0.50727.3053

A: @Jon Limjap: Unfortunately, this tells you the .NET CLR (runtime library) version, not the version of the .NET Framework. These two version numbers are not always the same; in particular, the .NET Framework 3.0 and 3.5 both use the .NET CLR 2.0. So the OP may indeed have only .NET 2.0 SP1, as the Environment.Version indicates, or he may also have the .NET 3.5 SP1 which he is looking for.

A: One way is to throw an exception in Page Load, but don't catch it. At the bottom of the page, you'll see the version number.

A: The hint from Brian Boatright - putting <%=Environment.Version%> on a page, saving it as DotNetVersion.aspx, uploading it, and testing it at the right URL - worked great. Sadly it was too old a version for me: 1.1.4322.2443
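Since Environment.Version alone cannot distinguish 2.0 from 3.0/3.5 (they share the CLR), one hedged approach is to probe for an assembly that only ships with the later framework; a sketch (the System.Core strong name below is believed correct, but verify it for your target):

<%@ Page Language="C#" %>
<%
    // CLR version (2.0.x for .NET 2.0, 3.0 and 3.5 alike)
    Response.Write("CLR: " + Environment.Version);

    // Probe for System.Core, which first shipped with .NET 3.5
    bool hasV35;
    try {
        System.Reflection.Assembly.Load(
            "System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089");
        hasV35 = true;
    } catch (Exception) {
        hasV35 = false;
    }
    Response.Write(", .NET 3.5 present: " + hasV35);
%>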
{ "language": "en", "url": "https://stackoverflow.com/questions/37468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Is this minimum spanning tree algorithm correct? The minimum spanning tree problem is to take a connected weighted graph and find the subset of its edges with the lowest total weight while keeping the graph connected (and as a consequence resulting in an acyclic graph). The algorithm I am considering is:

* Find all cycles.
* Remove the largest edge from each cycle.

The impetus for this version is an environment that is restricted to "rule satisfaction" without any iterative constructs. It might also be applicable to insanely parallel hardware (i.e. a system where you expect to have several times more degrees of parallelism than cycles). Edits: The above is done in a stateless manner (all edges that are not the largest edge in any cycle are selected/kept/ignored, all others are removed).

A: What happens if two cycles overlap? Which one has its longest edge removed first? Does it matter if the longest edge of each is shared between the two cycles or not? For example:

V = { a, b, c, d }
E = { (a,b,1), (b,c,2), (c,a,4), (b,d,9), (d,a,3) }

There's an a -> b -> c -> a cycle, and an a -> b -> d -> a

A: Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing all the cycles in a graph can take exponential time. Elaboration: In a graph with n nodes and an edge between every pair of nodes, there are, if I have my math right, n!/(2k(n-k)!) cycles of size k, if you're counting a cycle as some subgraph of k nodes and k edges with each node having degree 2.

A: @shrughes.blogspot.com: I don't know about removing all but two - I've been sketching out various runs of the algorithm and, assuming that parallel runs may remove an edge more than once, I can't find a situation where I'm left without a spanning tree. Whether or not it's minimal I don't know.

A: For this to work, you'd have to detail how you would want to find all cycles, apparently without any iterative constructs, because that is a non-trivial task. I'm not sure that's possible. If you really want to find an MST algorithm that doesn't use iterative constructs, take a look at Prim's or Kruskal's algorithm and see if you could modify those to suit your needs. Also, is recursion barred in this theoretical architecture? If so, it might actually be impossible to find an MST on a graph, because you'd have no means whatsoever of inspecting every vertex/edge on the graph.

A: I dunno if it works, but no matter what, your algorithm is not even worth implementing. Finding all cycles will be the freaking huge bottleneck that will kill it. Also, doing that without iterations is impossible. Why don't you implement some standard algorithm, let's say Prim's?

A: @Tynan The system can be described (somewhat oversimplified) as a system of rules describing categorizations. "Things are in category A if they are in B but not in C", "Nodes connected to nodes in Z are also in Z", "Every category in M is connected to a node N and has 'child' categories, also in M, for every node connected to N". It's slightly more complicated than this. (I have shown that by creating unstable rules you can model a Turing machine, but that's beside the point.) It can't explicitly define iteration or recursion but can operate on recursive data with rules like the 2nd and 3rd ones.

@Marcin, assume that there are an unlimited number of processors. It is trivial to show that the program can be run in O(n^2) for n being the longest cycle.
With better data structures, this can be reduced to O(n*O(set lookup function)). I can envision hardware (quantum computers?) that can evaluate all cycles in constant time, giving an O(1) solution to the MST problem. The Reverse-delete algorithm seems to provide a partial proof of correctness (that the proposed algorithm will not produce a non-minimal spanning tree); this is derived by arguing that my algorithm will remove every edge that the Reverse-delete algorithm will. However, I'm not sure how to show that my algorithm won't delete more than that algorithm. Hhmm....

A: OK, this is an attempt to finish the proof of correctness. By analogy to the Reverse-delete algorithm, we know that enough edges will be removed. What remains is to show that there will not be too many edges removed. Removing too many edges can be described as removing all the edges between the sides of a binary partition of the graph nodes. However, only edges in a cycle are ever removed; therefore, for all edges between partitions to be removed, there needs to be a return path to complete the cycle. If we only consider edges between the partitions, then the algorithm can at most remove the larger of each pair of edges; it can never remove the smallest bridging edge. Therefore, for any arbitrary binary partitioning, the algorithm can't sever all links between the sides. What remains is to show that this extends to >2-way partitions.
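Since the Reverse-delete algorithm carries the weight of the argument above, a minimal Python sketch of it may help (this is the standard algorithm, not the rule-based proposal):

def reverse_delete(nodes, edges):
    """edges: list of (weight, u, v) tuples; nodes must be non-empty.
    Returns the edges kept, i.e. a minimum spanning tree."""
    def connected(edge_set):
        # Depth-first search over the remaining edges.
        adj = {n: [] for n in nodes}
        for _, u, v in edge_set:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(adj[n])
        return len(seen) == len(nodes)

    kept = sorted(edges, reverse=True)  # heaviest first
    for e in list(kept):
        kept.remove(e)
        if not connected(kept):  # e was a bridge; put it back
            kept.append(e)
    return kept

On the example graph from the first answer (edges (a,b,1), (b,c,2), (c,a,4), (b,d,9), (d,a,3), written here as (1,'a','b') and so on), this keeps the edges of weight 1, 2 and 3, for a total weight of 6.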
{ "language": "en", "url": "https://stackoverflow.com/questions/37471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I assert() without using abort()? If I use assert() and the assertion fails then assert() will call abort(), ending the running program abruptly. I can't afford that in my production code. Is there a way to assert at runtime yet be able to catch failed assertions so I have the chance to handle them gracefully?

A: Asserts in C/C++ only run in debug builds. So this won't happen at runtime. In general asserts should mark things that, if they happen, indicate a bug, and generally show assumptions in your code etc. If you want to have code that checks for errors at runtime (in release) you should probably use exceptions rather than asserts, as these are what they are designed to do. Your answer basically wraps an exception thrower in assert syntax. While this will work, there is no particular advantage to this that I can see over just throwing the exception in the first place.

A: Here's what I have in my "assert.h" (Mac OS 10.4):

#define assert(e) ((void) ((e) ? 0 : __assert (#e, __FILE__, __LINE__)))
#define __assert(e, file, line) ((void)printf ("%s:%u: failed assertion `%s'\n", file, line, e), abort(), 0)

Based on that, replace the call to abort() by a throw( exception ). And instead of printf you can format the string into the exception's error message. In the end, you get something like this:

#define assert(e) ((void) ((e) ? 0 : my_assert (#e, __FILE__, __LINE__)))
#define my_assert( e, file, line ) ( throw std::runtime_error( \
    std::string(file) + ":" + boost::lexical_cast<std::string>(line) + ": failed assertion " + e))

I haven't tried to compile it, but you get the meaning. Note: you'll need to make sure that the "exception" header is always included, as well as boost's (if you decide to use it for formatting the error message). But you can also make "my_assert" a function and only declare its prototype. Something like:

void my_assert( const char* e, const char* file, int line);

And implement it somewhere where you can freely include all the headers you require. Wrap it in some #ifdef DEBUG if you need it, or not if you always want to run those checks.

A: Yes, as a matter of fact there is. You will need to write a custom assert function yourself, as C++'s assert() is exactly C's assert(), with the abort() "feature" bundled in. Fortunately, this is surprisingly straightforward.

Assert.hh

template <typename X, typename A>
inline void Assert(A assertion)
{
    if( !assertion ) throw X();
}

The above function will throw an exception if a predicate doesn't hold. You will then have the chance to catch the exception. If you don't catch the exception, terminate() will be called, which will end the program similarly to abort(). You may wonder what about optimizing away the assertion when we're building for production. In this case, you can define constants that will signify that you're building for production and then refer to the constant when you Assert().

debug.hh

#ifdef NDEBUG
const bool CHECK_WRONG = false;
#else
const bool CHECK_WRONG = true;
#endif

main.cc

#include <iostream>

struct Wrong { };

int main()
{
    try {
        Assert<Wrong>(!CHECK_WRONG || 2 + 2 == 5);
        std::cout << "I can go to sleep now.\n";
    }
    catch( Wrong e ) {
        std::cerr << "Someone is wrong on the internet!\n";
    }
    return 0;
}

If CHECK_WRONG is a constant then the call to Assert() will be compiled away in production, even if the assertion is not a constant expression. There is a slight disadvantage in that by referring to CHECK_WRONG we type a little more.
But in exchange we gain an advantage in that we can classify various groups of assertions and enable and disable each of them as we see fit. So, for example, we could define a group of assertions that we want enabled even in production code, and then define a group of assertions that we only want to see in development builds. The Assert() function is equivalent to typing

if( !assertion ) throw X();

but it clearly indicates the intent of the programmer: make an assertion. Assertions are also easier to grep for with this approach, just like plain assert()s. For more details on this technique see Bjarne Stroustrup's The C++ Programming Language 3e, section 24.3.7.2.

A: glib's error reporting functions take the approach of continuing after an assert. glib is the underlying platform independence library that Gnome (via GTK) uses. Here's a macro that checks a precondition and prints a stack trace if the precondition fails.

#define RETURN_IF_FAIL(expr) do { \
    if (!(expr)) \
    { \
        fprintf(stderr, \
            "file %s: line %d (%s): precondition `%s' failed.", \
            __FILE__, \
            __LINE__, \
            __PRETTY_FUNCTION__, \
            #expr); \
        print_stack_trace(2); \
        return; \
    }; } while(0)

#define RETURN_VAL_IF_FAIL(expr, val) do { \
    if (!(expr)) \
    { \
        fprintf(stderr, \
            "file %s: line %d (%s): precondition `%s' failed.", \
            __FILE__, \
            __LINE__, \
            __PRETTY_FUNCTION__, \
            #expr); \
        print_stack_trace(2); \
        return val; \
    }; } while(0)

Here's the function that prints the stack trace, written for an environment that uses the gnu toolchain (gcc):

void print_stack_trace(int fd)
{
    void *array[256];
    size_t size;
    size = backtrace (array, 256);
    backtrace_symbols_fd(array, size, fd);
}

This is how you'd use the macros:

char *doSomething(char *ptr)
{
    RETURN_VAL_IF_FAIL(ptr != NULL, NULL); // same as assert(ptr != NULL), but returns NULL if it fails.

    if( ptr != NULL ) // Necessary if you want to define the macro only for debug builds
    {
        ...
    }

    return ptr;
}

void doSomethingElse(char *ptr)
{
    RETURN_IF_FAIL(ptr != NULL);
}

A: If you want to throw a character string with information about the assertion: http://xll8.codeplex.com/SourceControl/latest#xll/ensure.h

A: _set_error_mode(_OUT_TO_MSGBOX); believe me, this function can help you.
{ "language": "en", "url": "https://stackoverflow.com/questions/37473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: How can I simply inherit methods from an existing instance? Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.

import cgi

class ClassX(object):
    pass # ... with own __repr__

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True

class HTMLDecorator(object):
    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()

Output:

Traceback (most recent call last):
  File "html.py", line 21, in
    print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters

Is what I'm trying to do possible? If so, what am I doing wrong?

A: Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.

Looks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like bool, but it will for your user-defined classes:

def HTMLDecorator (obj):
    def html ():
        sep = cgi.escape (repr (obj))
        return sep.join (("<H1>", "</H1>"))
    obj.html = html
    return obj

And here is the proxy version:

class HTMLDecorator(object):
    def __init__ (self, wrapped):
        self.__wrapped = wrapped

    def html (self):
        sep = cgi.escape (repr (self.__wrapped))
        return sep.join (("<H1>", "</H1>"))

    def __getattr__ (self, name):
        return getattr (self.__wrapped, name)

    def __setattr__ (self, name, value):
        if not name.startswith ('_HTMLDecorator__'):
            setattr (self.__wrapped, name, value)
            return
        super (HTMLDecorator, self).__setattr__ (name, value)

    def __delattr__ (self, name):
        delattr (self.__wrapped, name)

A: Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:

import cgi

class ClassX(object):
    pass # ... with own __repr__

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()

class HTMLDecorator:
    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

ClassX.__bases__ += (HTMLDecorator,)
ClassY.__bases__ += (HTMLDecorator,)

print inst_x.html()
print inst_y.html()

Be warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.

A: Is what I'm trying to do possible? If so, what am I doing wrong?

It's certainly possible. What's wrong is that HTMLDecorator.__init__() doesn't accept parameters. Here's a simple example:

def decorator (func):
    def new_func ():
        return "new_func %s" % func ()
    return new_func

@decorator
def a ():
    return "a"

def b ():
    return "b"

print a() # new_func a
print decorator (b)() # new_func b

A: @John (37448): Sorry, I might have misled you with the name (bad choice).
I'm not really looking for a decorator function, or anything to do with decorators at all. What I'm after is for the html(self) def to use ClassX or ClassY's __repr__. I want this to work without modifying ClassX or ClassY. A: Ah, in that case, perhaps code like this will be useful? It doesn't really have anything to do with decorators, but demonstrates how to pass arguments to a class's initialization function and to retrieve those arguments for later. import cgi class ClassX(object): def __repr__ (self): return "<class X>" class HTMLDecorator(object): def __init__ (self, wrapped): self.__wrapped = wrapped def html (self): sep = cgi.escape (repr (self.__wrapped)) return sep.join (("<H1>", "</H1>")) inst_x=ClassX() inst_b=True print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_b).html() A: @John (37479): Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way. import cgi from math import sqrt class ClassX(object): def __repr__(self): return "Best Guess" class ClassY(object): pass # ... with own __repr__ inst_x=ClassX() inst_y=ClassY() inst_z=[ i*i for i in range(25) ] inst_b=True avoid="__class__ __init__ __dict__ __weakref__" class HTMLDecorator(object): def __init__(self,master): self.master = master for attr in dir(self.master): if ( not attr.startswith("__") or attr not in avoid.split() and "attr" not in attr): self.__setattr__(attr, self.master.__getattribute__(attr)) def html(self): # an "enhanced" version of __repr__ return cgi.escape(self.__repr__()).join(("<H1>","</H1>")) def length(self): return sqrt(sum(self.__iter__())) print HTMLDecorator(inst_x).html() print HTMLDecorator(inst_y).html() wrapped_z = HTMLDecorator(inst_z) print wrapped_z.length() inst_z[0] += 70 #wrapped_z[0] += 71 wrapped_z.__setitem__(0,wrapped_z.__getitem__(0)+ 71) print wrapped_z.html() print HTMLDecorator(inst_b).html() Output: <H1>Best Guess</H1> <H1><__main__.ClassY object at 0x891df0c></H1> 70.0 <H1>[141, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400, 441, 484, 529, 576]</H1> <H1>True</H1>
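For completeness, a middle ground between the proxy approach and the __bases__ monkey-patch above is to build a subclass of the instance's own class on the fly and reassign the instance's __class__. This is only a sketch, with the same restriction the other answers note (it works for instances of ordinary user-defined classes, not for builtins like bool or list), and HTMLMixin and decorate are invented names for illustration, not from the original posts:

import cgi

class HTMLMixin(object):
    def html(self):
        # an "enhanced" version of __repr__, as in the question
        return cgi.escape(repr(self)).join(("<H1>", "</H1>"))

def decorate(obj):
    # Swap in a one-off subclass that inherits from both the object's
    # original class and the mixin; the instance keeps all its state
    # and its own __repr__.
    obj.__class__ = type('HTML' + obj.__class__.__name__,
                         (obj.__class__, HTMLMixin), {})
    return obj

inst_x = ClassX()
print decorate(inst_x).html()   # uses ClassX's own __repr__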
{ "language": "en", "url": "https://stackoverflow.com/questions/37479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Calculate Video Duration I suck at math. I need to figure out how to calculate a video duration with only a few examples of values. For example, a value of 70966 is displayed as 1:10 minutes. A value of 30533 displays as 30 seconds. A value of 7007 displays as 7 seconds. A: Looks like the numbers are in milliseconds. So to convert to seconds, divide by 1000, then divide by 60 to find minutes etc. A: It's a simple matter of division: * *70966 / 70 seconds (1:10 minutes) = 1013.8 *30533 / 30 = 1017.76 *7007 / 7 = 1001 Looks like the numbers are nothing but milliseconds. 70966 displays as 1:10 minutes because it shaves off the millisecond part (last 3 digits). A: I'm not sure if I completely understand this, but: 70966 / 70 seconds = 1013.8 So dividing the "value" by 1013.8 should get the duration, approximately... Edit: Yes, Ben is right, you should divide by 1000. I got 1013.8 because the 70 seconds was rounded down from 70.966 seconds to 70. A: To expand on what Ben said, it looks like they are milliseconds, and the display value is rounded slightly, possibly to the nearest 100 milliseconds and then 'cropped' to seconds. This would explain why 30533 is 30s and 70966 is 70s.
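To make the milliseconds theory concrete, here is a minimal sketch (in Python, purely for illustration, since the question names no language) of the conversion the player is presumably doing:

def format_duration(value_ms):
    # Drop the millisecond part, then split into minutes and seconds.
    total_seconds = value_ms // 1000
    minutes, seconds = divmod(total_seconds, 60)
    return "%d:%02d" % (minutes, seconds)

print format_duration(70966)   # 1:10
print format_duration(30533)   # 0:30
print format_duration(7007)    # 0:07

Note the question's examples show 30533 as plain "30 seconds" rather than "0:30", so the real display logic presumably omits the minutes part when it is zero.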
{ "language": "en", "url": "https://stackoverflow.com/questions/37483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Filter out HTML tags and resolve entities in python Because regular expressions scare me, I'm trying to find a way to remove all HTML tags and resolve HTML entities from a string in Python. A: While I agree with Lucas that regular expressions are not all that scary, I still think that you should go with a specialized HTML parser. This is because the HTML standard is hairy enough (especially if you want to parse arbitrary "HTML" pages taken off the Internet) that you would need to write a lot of code to handle the corner cases. It seems that python includes one out of the box. You should also check out the python bindings for TidyLib which can clean up broken HTML, making the success rate of any HTML parsing much higher. A: How about parsing the HTML data and extracting the data with the help of the parser? I'd try something like the author described in chapter 8.3 in the Dive Into Python book A: Use lxml which is the best xml/html library for python. import lxml.html t = lxml.html.fromstring("...") t.text_content() And if you just want to sanitize the html look at the lxml.html.clean module A: if you use django you might also use http://docs.djangoproject.com/en/dev/ref/templates/builtins/#striptags ;) A: Use BeautifulSoup! It's perfect for this, where you have incoming markup of dubious virtue and need to get something reasonable out of it. Just pass in the original text, extract all the strings, and join them. A: You might need something more complicated than a regular expression. Web pages often have angle brackets that aren't part of a tag, like this: <div>5 < 7</div> Stripping the tags with regex will return the string "5 " and treat < 7</div> as a single tag and strip it out. I suggest looking for already-written code that does this for you. I did a search and found this: http://zesty.ca/python/scrape.html It also can resolve HTML entities. A: Regular expressions are not scary, but writing your own regexes to strip HTML is a sure path to madness (and it won't work, either). Follow the path of wisdom, and use one of the many good HTML-parsing libraries. Lucas' example is also broken because "sub" is not a method of a Python string. You'd have to "import re", then call re.sub(pattern, repl, string). But that's neither here nor there, as the correct answer to your question does not involve writing any regexes. A: Looking at the amount of sense people are demonstrating in other answers here, I'd say that using a regex probably isn't the best idea for your situation. Go for something tried and tested, and treat my previous answer as a demonstration that regexes need not be that scary.
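If you want to stay in the standard library, here is a rough sketch using Python 2's HTMLParser module (the class name TagStripper is invented for this example; in Python 3 the same idea lives in html.parser, and html.unescape handles the entities). It keeps the text, drops the tags, and resolves both named and numeric entities:

from HTMLParser import HTMLParser
import htmlentitydefs

class TagStripper(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.pieces = []
    def handle_data(self, data):
        # Plain text between tags is kept as-is.
        self.pieces.append(data)
    def handle_entityref(self, name):
        # Named entities like &amp; -- look them up in the standard table.
        codepoint = htmlentitydefs.name2codepoint.get(name)
        if codepoint is not None:
            self.pieces.append(unichr(codepoint))
    def handle_charref(self, name):
        # Numeric references like &#62; or &#x3E;
        if name.lower().startswith('x'):
            self.pieces.append(unichr(int(name[1:], 16)))
        else:
            self.pieces.append(unichr(int(name)))
    def get_text(self):
        return u''.join(self.pieces)

stripper = TagStripper()
stripper.feed(u'<div>5 &lt; 7 &amp; 3 &#62; 1</div>')
print stripper.get_text()   # 5 < 7 & 3 > 1

As the answers above warn, this is exactly the kind of corner-case-ridden job where a tolerant parser like BeautifulSoup or lxml will hold up better on real-world pages.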
{ "language": "en", "url": "https://stackoverflow.com/questions/37486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to track if browser is Silverlight enabled I'm trying to get some stats on how many of the visitors to our website have Silverlight enabled browsers. We currently use Google Analytics for the rest of our stats so ideally we'd like to just add 'Silverlight enabled' tracking in with the rest of our Google Analytics stats. But if it has to get written out to a DB etc then so be it. Nikhil has some javascript to add Silverlight tracking to Google Analytics. I have tried this code but Google Analytics doesn't pick it up. Does anyone have any other ideas/techniques? A: In case you missed it, there's a link to a more detailed article as well in the comments: http://blogs.msdn.com/jeffwilcox/archive/2007/10/01/using-google-analytics-with-rich-managed-web-applications-in-silverlight.aspx Edit: As David pointed out, this article covers the reverse scenario more (how to write your silverlight app so that it plays well with Analytics). A: I think you answered it yourself. The page you are linking to does just that: detect which version of Silverlight the user has (not if s/he installs it). From the page: After a little poking around, I found that Google Analytics has support for reporting a user-defined field. ... Basically this detects the presence of Silverlight, and if it's available, it records the version as the value of the user-defined field. Now your analytics reports will have one of three values: "(not set)", "Silverlight/1.0" or "Silverlight/2.0". A: @Vaibhav The Using Google Analytics with rich (managed) web applications in Silverlight article is very interesting but is more focused on how to write your Silverlight app to send messages to Google Analytics. @Cd-MaN Yeah, I thought that too but I have tried running my page with Nikhil's javascript and Google Analytics didn't pick it up. But I could have screwed something up somewhere. I'm just interested to know if anyone else has managed to do this (track Silverlight-ness) successfully. A: I've written a lightweight Silverlight library that helps make it easy to integrate Google Analytics in your silverlight app. You can download the code or binaries here. A: I think the code posted on Nikhil's blog is out of date if you are using ga.js and not urchin.js. The use of the global function __utmSetVar() is replaced by the tracker method _setCustomVar() http://code.google.com/apis/analytics/docs/gaJS/gaJSApiBasicConfiguration.html#_gat.GA_Tracker_._setCustomVar
{ "language": "en", "url": "https://stackoverflow.com/questions/37495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: List of ASP.NET Versions? Thanks to a Q&A on stackoverflow, I just found out how to determine the installed version on my hosting provider's server. Now I need to know what that number means. Using <%=Environment.Version%> on my local machine returns 2.0.50727.3053. Can someone give me a list of the version 1, 1.1, 2, etc. to the actual Environment.Version codes or break down what that code means? A: List of .NET Framework versions. In short, Environment.Version reports the CLR version: 1.0.3705 is .NET 1.0, 1.1.4322 is .NET 1.1, 2.0.50727 covers .NET 2.0 through 3.5 (3.0 and 3.5 run on the 2.0 CLR), and 4.0.30319 covers .NET 4.x -- so your 2.0.50727.3053 means the 2.0 CLR. A: http://msdn.microsoft.com/en-us/library/bb822049.aspx This link may also be helpful. The ASP.NET version, the CLR version, and the .NET Framework version are different things. A: When you refer to a given .NET Framework, all of the following can have different versions at a given point in time: * *.NET Framework *Common Language Runtime (CLR): the virtual execution environment of a process running a .NET EXE *C#: the programming language you're using to write code *ASP.NET (the part of the .NET Framework library used for web development) The <%=Environment.Version%> code referred to in your post gives the version of the CLR (the 2nd bullet point above), while you seem to be interested in the version of the ASP.NET library. Each release of the .NET Framework has the ASP.NET library built in. All its versions can be found at the link below: https://en.wikipedia.org/wiki/ASP.NET
{ "language": "en", "url": "https://stackoverflow.com/questions/37501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it possible to convert project wizards created for MSVS 2005 to MSVS 2008 format automatically I have a bunch of wizards created for MSVS 2005. Is it possible to convert them to MSVS 2008 format automatically? A: Looking at these posts: * *How to use solutions and projects between Visual Studio 2005 and 2008 *Running VS2008 & VS 2005 side by side - project file difference You will only need to change a few lines if you do it manually. So I guess this is trivial if you automate it.
{ "language": "en", "url": "https://stackoverflow.com/questions/37503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Recursion in production-quality VC++ Code When writing production-quality VC++ code, is the use of recursion acceptable? Why or why not? A: Is there a way to determine at what point I would encounter a stack overflow? Not really. A stack overflow happens when you exhaust the stack space - however... * *The initial stack size can be changed programmatically and may default to different amounts depending on your OS/compiler/etc *How much of it is already used up depends on what your app (and the libraries your app uses) has previously done - this is often impossible to predict *How much of the stack each call requires depends on what you do in your function. If you only allocate say 1 integer on the stack, you may be able to recurse an enormous amount of times, but if you are allocating a 200k buffer on the stack, not so much. The only times I've ever hit one is in an infinite loop, or using the aforementioned 200k buffer. I find it far more preferable for my app to just crash, than for it to loop forever using 100% CPU and have to be forcefully killed (this is a right PITA on a remote server over a bad connection as Windows lacks SSH) A rough guideline: Do you think your recursive function is likely to call itself more than say 10,000 times consecutively? Or are you doing something dumb like allocating 200k buffers on the stack? If yes, worry about it. If no, carry on with more important things. A: Yes. But never in dead code. That would be silly. A: Sure - e.g. if you want to traverse a tree structure what else would you use? Maybe you would like to have something like a maximum depth to be sure you're not writing an infinite loop. (if this makes sense in your example) A: Is there a way to determine at what point I would encounter a stack overflow? Depends how deep you go, and how large the actual recursion is. I take it you understand what recursion does? A: Recursion is almost essential to traverse file structures like folders/directories. Traversing a tree-like structure is very easy if recursion is used.
{ "language": "en", "url": "https://stackoverflow.com/questions/37516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Add XML Comments to class properties generated by the LINQ to SQL designer I used the LINQ to SQL designer in Visual Studio to create an object model of a database. Now, I want to add XML comments to each generated property but I can't figure out how to do it without erasing the properties the next time the dbml file is refreshed. How can this be done? A: I believe it's not possible to keep xml comments in sync with autogenerated code automatically. However, xml comments can live in a separate file (just set the "XML documentation file" option on the "Project properties"->"Build" tab). You can create an initial version of the XML documentation file and update it manually if necessary A: This tool can do it: http://www.huagati.com/dbmltools/
{ "language": "en", "url": "https://stackoverflow.com/questions/37519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What steps can I give a Windows user to make a given file writeable Imagine we have a program trying to write to a particular file, but failing. On the Windows platform, what are the possible things which might be causing the file to be un-writable, and what steps could be suggested to an end user/administrator to fix it. Please include steps which might require administrator permissions (obviously users may not be administrators, but for this question, let's assume they are (or can become) administrators.) Also, I'm not really familiar with how permissions are calculated in Windows. Does the user need write access to each directory up the tree, or anything similar to that? A: Some suggestions: * *No write permission (get permission through Security tab on file Properties window; you must be the file owner or an Administrator) *File is locked (close any program that may have the file open, then reboot if that doesn't help) *File has the read-only DOS attribute set (unset it from file Properties window, or with attrib -r; you must be the file owner or an Administrator) Edit 1: Only the second item (file is locked) has a possible solution that all users are likely to be able to do without help. For the first and third, you'll probably want to provide guidance (and hope the file wasn't made read-only intentionally!). Edit 2: Technically, the user does need write and execute (chdir) permissions on all directories up to the root. Windows may skip some of the recursive checks up the tree as a performance optimization, but you should not rely on this because admins can force on these so-called "traverse checks" for certain users. Edit 3: @RobM: Yes, you should check that there is no obvious reason that the user should not have the permissions she needs but does not have. I alluded to this in a less direct way in my first edit. However, in some cases users should have write permission to a file but do not because of filesystem corruption, a misbehaving program, or a mistake on their own part. A: If you are having trouble working out if the file is locked, try using Unlocker - it's a really useful free utility that shows you the process that has locked the file and lets you force an unlock if you need to. A: On Vista could it also be that it's "marked" as unsafe because it's been downloaded from the internet and you have to click the unblock button on its explorer properties dialog? A: Let's change this around a bit. If your program is trying to write to a file and failing you either need to change the location of the file to one where the user can write to, or check the correct rights when the program starts and refuse to run if the user doesn't have them. Trampling over the system permissions is not the answer.
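As a programmatic complement to the steps above, here is a rough diagnostic sketch in Python (assuming Python 3.5+ on Windows; the path is a hypothetical placeholder) that checks the two most common culprits -- the read-only attribute, and a permission or lock problem -- before telling the user what to do:

import os
import stat

path = r"C:\some\file.txt"   # hypothetical path, for illustration only

if os.stat(path).st_file_attributes & stat.FILE_ATTRIBUTE_READONLY:
    # The DOS read-only attribute is set; clearing it is often enough.
    print("File is read-only; clear it via Properties or 'attrib -r'.")
else:
    try:
        # Try to open for appending without actually writing anything.
        with open(path, "a"):
            pass
        print("File looks writable.")
    except PermissionError:
        print("Write denied; check the Security tab (ACLs) or whether "
              "another process has the file locked.")

Note that os.access() is deliberately avoided here: on Windows it only reflects the read-only attribute, not the ACLs, so a trial open is a more honest first-pass check.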
{ "language": "en", "url": "https://stackoverflow.com/questions/37525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Simple audio input API on a Mac? I'd like to pull a stream of PCM samples from a Mac's line-in or built-in mic and do a little live analysis (the exact nature doesn't pertain to this question, but it could be an FFT every so often, or some basic statistics on the sample levels, or what have you). What's a good fit for this? Writing an AudioUnit that just passes the sound through and incidentally hands it off somewhere for analysis? Writing a JACK-aware app and figuring out how to get it to play with the JACK server? Ecasound? This is a cheesy proof-of-concept hobby project, so simplicity of API is the driving factor (followed by reasonable choice of programming language). A: The principal framework for audio development in Mac OS X is Core Audio; it's the basis for all audio I/O. There are layers on top of it like Audio Toolbox, Audio Queue Services, QuickTime, and QTKit that you can use if you want a simplified API for common tasks. To just pull a stream of samples, you'd probably want to use Audio Queue Services; the AudioQueueNewInput function will set up recording of PCM data and pass it to a callback you supply. On your Mac there's a set of Core Audio examples in /Developer/Examples/CoreAudio/SimpleSDK that includes a use (AQRecord in AudioQueueTools) of the Audio Queue Services recording APIs. A: I think portaudio is what you need. Reading from the mike from a console app is a 10 line C file (see patests in the portaudio distrib). A: Apple provides sample code for reading and writing audio data. Additionally there is a lot of good information in the Audio section of the Apple Developer site.
{ "language": "en", "url": "https://stackoverflow.com/questions/37529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Is it possible to start a scheduled Windows task from a package? Does anyone know whether (and how) you can start a scheduled Windows task on a remote server from within a SQL Server Integration Services (SSIS) package? A: Assuming you run it on Windows Server 2003/2008 or Vista, use SSIS Execute Process Task to start SCHTASKS.EXE with appropriate params (SCHTASKS /Run /? to see details). A: It should be possible as the Task Scheduler has a scriptable COM API that can be used for interacting with tasks. You could therefore either create a custom task that uses COM interop to call the Task Scheduler API, or it'd probably be quicker to use an ActiveX Script task to do your dirty work. A: I invested a lot of time in the aforementioned COM API back in 2002. It was, to put it mildly, "flakey". What we ended up doing instead is having our tasks run every minute. The first thing the task did was check the database to see if it should continue running or not. Then "starting" a scheduled task from SSIS was as simple as changing a database field.
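For reference, a hedged example of the command line the Execute Process Task would run (the server and task names here are placeholders, not from the original posts):

SCHTASKS /Run /S MyRemoteServer /TN "MyTaskName"

/S names the remote machine and /TN the task to start; add /U and /P if the task must be triggered under different credentials. SCHTASKS /Run /? lists the full parameter set, as the first answer suggests.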
{ "language": "en", "url": "https://stackoverflow.com/questions/37532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the easiest way to read a FoxPro DBF file from Python? I've got a bunch of FoxPro (VFP9) DBF files on my Ubuntu system, is there a library to open these in Python? I only need to read them, and would preferably have access to the memo fields too. Update: Thanks @cnu, I used Yusdi Santoso's dbf.py and it works nicely. One gotcha: The memo file name extension must be lower case, i.e. .fpt, not .FPT which was how the filename came over from Windows. A: If you're still checking this, I have a GPL FoxPro-to-PostgreSQL converter at https://github.com/kstrauser/pgdbf . We use it to routinely copy our tables into PostgreSQL for fast reporting. A: You can try this recipe on Active State. There is also a DBFReader module which you can try, with support for memo fields. A: Check out http://groups.google.com/group/python-dbase It currently supports dBase III and Visual Foxpro 6.0 db files... not sure if the file layout changed in VFP 9 or not... A: It's 2016 now and I had to fiddle with the dbf package to get it to work. Here is a python3 version to just export a dbf file to a csv import dbf d=dbf.Table('mydbf.dbf') d.open() dbf.export(d, filename='mydf_exported.csv', format='csv', header=True) I had some unicode error at first, but got around that by turning off memos. import dbf d=dbf.Table('mydbf.dbf', ignore_memos=True) d.open() dbf.export(d, filename='mydf_exported.csv', format='csv', header=True) A: I prefer dbfpy. It supports both reading and writing of .DBF files and can cope with most variations of the format. It's the only implementation I have found that could both read and write the legacy DBF files of some older systems I have worked with. A: I was able to read a DBF file (with associated BAK, CDX, FBT, TBK files**) using the dbf package from PyPI http://pypi.python.org/pypi/dbf . I am new to python and know nothing about DBF files, but it worked easily to read a DBF file from my girlfriend's business (created with a music store POS application called AIMsi). After installing the dbf package (I used aptitude and installed dbf version 0.88 I think), the following python code worked: from dbf import * test = Table("testfile.dbf") for record in test: print record x = raw_input("") # to pause between showing records That's all I know for now, but hopefully it's a useful start for someone else who finds this question! April 21, 2012 SJK Edit: Per Ethan Furman's comment, I should point out that I actually don't know which of the data files were necessary, besides the DBF file. The first time I ran the script, with only the DBF available, it complained of a missing support file. So, I just copied over the BAK, CDX, FPT (not FBT as I said before edit), TBK files and then it worked.
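One more read-only option, not mentioned in the answers above: the dbfread package on PyPI takes a similar approach. A minimal sketch (the filename is a placeholder):

from dbfread import DBF

# Each record comes back as an ordered mapping of field name to value;
# memo fields are read from the companion .fpt file when present.
for record in DBF('mytable.dbf'):
    print record

If the memo file is missing or broken, dbfread accepts an ignore_missing_memofile=True argument, much like the ignore_memos workaround shown above for the dbf package.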
{ "language": "en", "url": "https://stackoverflow.com/questions/37535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: How do I determine the size of my array in C? How do I determine the size of my array in C? That is, the number of elements the array can hold? A: The sizeof "trick" is the best way I know, with one small but (to me, this being a major pet peeve) important change in the use of parentheses. As the Wikipedia entry makes clear, C's sizeof is not a function; it's an operator. Thus, it does not require parentheses around its argument, unless the argument is a type name. This is easy to remember, since it makes the argument look like a cast expression, which also uses parentheses. So: If you have the following: int myArray[10]; You can find the number of elements with code like this: size_t n = sizeof myArray / sizeof *myArray; That, to me, reads a lot easier than the alternative with parentheses. I also favor use of the asterisk in the right-hand part of the division, since it's more concise than indexing. Of course, this is all compile-time too, so there's no need to worry about the division affecting the performance of the program. So use this form wherever you can. It is always best to use sizeof on an actual object when you have one, rather than on a type, since then you don't need to worry about making an error and stating the wrong type. For instance, say you have a function that outputs some data as a stream of bytes, for instance across a network. Let's call the function send(), and make it take as arguments a pointer to the object to send, and the number of bytes in the object. So, the prototype becomes: void send(const void *object, size_t size); And then you need to send an integer, so you code it up like this: int foo = 4711; send(&foo, sizeof (int)); Now, you've introduced a subtle way of shooting yourself in the foot, by specifying the type of foo in two places. If one changes but the other doesn't, the code breaks. Thus, always do it like this: send(&foo, sizeof foo); Now you're protected. Sure, you duplicate the name of the variable, but that has a high probability of breaking in a way the compiler can detect, if you change it. A: If you really want to do this to pass around your array I suggest implementing a structure to store a pointer to the type you want an array of and an integer representing the size of the array. Then you can pass that around to your functions. Just assign the array variable value (pointer to first element) to that pointer. Then you can go Array.arr[i] to get the i-th element and use Array.size to get the number of elements in the array. I included some code for you. It's not very useful but you could extend it with more features. To be honest though, if these are the things you want you should stop using C and use another language with these features built in. /* Absolutely no one should use this... By the time you're done implementing it you'll wish you just passed around an array and size to your functions */ /* This is a static implementation.
You can get a dynamic implementation and cut out the array in main by using the stdlib memory allocation methods, but it will work much slower since it will store your array on the heap */ #include <stdio.h> #include <string.h> /* #include "MyTypeArray.h" */ /* MyTypeArray.h #ifndef MYTYPE_ARRAY #define MYTYPE_ARRAY */ typedef struct MyType { int age; char name[20]; } MyType; typedef struct MyTypeArray { int size; MyType *arr; } MyTypeArray; MyType new_MyType(int age, char *name); MyTypeArray new_MyTypeArray(int size, MyType *first); /* #endif End MyTypeArray.h */ /* MyTypeArray.c */ MyType new_MyType(int age, char *name) { MyType d; d.age = age; strcpy(d.name, name); return d; } MyTypeArray new_MyTypeArray(int size, MyType *first) { MyTypeArray d; d.size = size; d.arr = first; return d; } /* End MyTypeArray.c */ void print_MyType_names(MyTypeArray d) { int i; for (i = 0; i < d.size; i++) { printf("Name: %s, Age: %d\n", d.arr[i].name, d.arr[i].age); } } int main() { /* First create an array on the stack to store our elements in. Note we could create an empty array with a size instead and set the elements later. */ MyType arr[] = {new_MyType(10, "Sam"), new_MyType(3, "Baxter")}; /* Now create a "MyTypeArray" which will use the array we just created internally. Really it will just store the value of the pointer "arr". Here we are manually setting the size. You can use the sizeof trick here instead if you're sure it will work with your compiler. */ MyTypeArray array = new_MyTypeArray(2, arr); /* MyTypeArray array = new_MyTypeArray(sizeof(arr)/sizeof(arr[0]), arr); */ print_MyType_names(array); return 0; } A: The best way is to save this information, for example, in a structure: typedef struct { int *array; int elements; } list_s; Implement all necessary functions such as create, destroy, check equality, and everything else you need. It is easier to pass as a parameter. A: The sizeof operator returns the number of bytes which is used by your array in memory. If you want to calculate the number of elements in your array, you should divide that number by the size of the array's element type. Let's say you have int array[10];. If the integer type on your computer is 32 bits (4 bytes), then to get the number of elements in your array, you should do the following: int array[10]; size_t sizeOfArray = sizeof(array)/sizeof(int); A: int size = (&arr)[1] - arr; Check out this link for explanation A: I would advise to never use sizeof (even if it can be used) to get any of the two different sizes of an array, either in number of elements or in bytes, which are the last two cases I show here. For each of the two sizes, the macros shown below can be used to make it safer. The reason is to make obvious the intention of the code to maintainers, and to differentiate sizeof(ptr) from sizeof(arr) at first glance (which written this way isn't obvious), so that bugs are then obvious for everyone reading the code.
TL;DR: #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + must_be_array(arr)) #define ARRAY_BYTES(arr) (sizeof(arr) + must_be_array(arr)) must_be_array(arr) (defined below) IS needed as -Wsizeof-pointer-div is buggy (as of april/2020): #define is_same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b)) #define is_array(arr) (!is_same_type((arr), &(arr)[0])) #define must_be(e) \ ( \ 0 * (int)sizeof( \ struct { \ static_assert(e); \ char ISO_C_forbids_a_struct_with_no_members__; \ } \ ) \ ) #define must_be_array(arr) must_be(is_array(arr)) There have been important bugs regarding this topic: https://lkml.org/lkml/2015/9/3/428 I disagree with the solution that Linus provides, which is to never use array notation for parameters of functions. I like array notation as documentation that a pointer is being used as an array. But that means that a fool-proof solution needs to be applied so that it is impossible to write buggy code. From an array we have three sizes which we might want to know: * *The size of the elements of the array *The number of elements in the array *The size in bytes that the array uses in memory The size of the elements of the array The first one is very simple, and it doesn't matter if we are dealing with an array or a pointer, because it's done the same way. Example of usage: void foo(size_t nmemb, int arr[nmemb]) { qsort(arr, nmemb, sizeof(arr[0]), cmp); } qsort() needs this value as its third argument. For the other two sizes, which are the topic of the question, we want to make sure that we're dealing with an array, and break the compilation if not, because if we're dealing with a pointer, we will get wrong values. When the compilation is broken, we will be able to easily see that we weren't dealing with an array, but with a pointer instead, and we will just have to write the code with a variable or a macro that stores the size of the array behind the pointer. The number of elements in the array This one is the most common, and many answers have provided you with the typical macro ARRAY_SIZE: #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) Recent versions of compilers, such as GCC 8, will warn you when you apply this macro to a pointer, so it is safe (there are other methods to make it safe with older compilers). It works by dividing the size in bytes of the whole array by the size of each element. Examples of usage: void foo(size_t nmemb) { char buf[nmemb]; fgets(buf, ARRAY_SIZE(buf), stdin); } void bar(size_t nmemb) { int arr[nmemb]; for (size_t i = 0; i < ARRAY_SIZE(arr); i++) arr[i] = i; } If these functions didn't use arrays, but got them as parameters instead, the former code would not compile, so it would be impossible to have a bug (given that a recent compiler version is used, or that some other trick is used), and we need to replace the macro call by the value: void foo(size_t nmemb, char buf[nmemb]) { fgets(buf, nmemb, stdin); } void bar(size_t nmemb, int arr[nmemb]) { for (size_t i = nmemb - 1; i < nmemb; i--) arr[i] = i; } The size in bytes that the array uses in memory ARRAY_SIZE is commonly used as a solution to the previous case, but this case is rarely written safely, maybe because it's less common. The common way to get this value is to use sizeof(arr). The problem: the same as with the previous one; if you have a pointer instead of an array, your program will go nuts. 
The solution to the problem involves using the same macro as before, which we know to be safe (it breaks compilation if it is applied to a pointer): #define ARRAY_BYTES(arr) (sizeof((arr)[0]) * ARRAY_SIZE(arr)) How it works is very simple: it undoes the division that ARRAY_SIZE does, so after mathematical cancellations you end up with just one sizeof(arr), but with the added safety of the ARRAY_SIZE construction. Example of usage: void foo(size_t nmemb) { int arr[nmemb]; memset(arr, 0, ARRAY_BYTES(arr)); } memset() needs this value as its third argument. As before, if the array is received as a parameter (a pointer), it won't compile, and we will have to replace the macro call by the value: void foo(size_t nmemb, int arr[nmemb]) { memset(arr, 0, sizeof(arr[0]) * nmemb); } Update (23/apr/2020): -Wsizeof-pointer-div is buggy: Today I found out that the new warning in GCC only works if the macro is defined in a header that is not a system header. If you define the macro in a header that is installed in your system (usually /usr/local/include/ or /usr/include/) (#include <foo.h>), the compiler will NOT emit a warning (I tried GCC 9.3.0). So we have #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0])) and want to make it safe. We will need C2X static_assert() and some GCC extensions: Statements and Declarations in Expressions, __builtin_types_compatible_p: #include <assert.h> #define is_same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b)) #define is_array(arr) (!is_same_type((arr), &(arr)[0])) #define Static_assert_array(arr) static_assert(is_array(arr)) #define ARRAY_SIZE(arr) \ ({ \ Static_assert_array(arr); \ sizeof(arr) / sizeof((arr)[0]); \ }) Now ARRAY_SIZE() is completely safe, and therefore all its derivatives will be safe. Update: libbsd provides __arraycount(): Libbsd provides the macro __arraycount() in <sys/cdefs.h>, which is unsafe because it lacks a pair of parentheses, but we can add those parentheses ourselves, and therefore we don't even need to write the division in our header (why would we duplicate code that already exists?). That macro is defined in a system header, so if we use it we are forced to use the macros above. #include <assert.h> #include <stddef.h> #include <sys/cdefs.h> #include <sys/types.h> #define is_same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b)) #define is_array(arr) (!is_same_type((arr), &(arr)[0])) #define Static_assert_array(arr) static_assert(is_array(arr)) #define ARRAY_SIZE(arr) \ ({ \ Static_assert_array(arr); \ __arraycount((arr)); \ }) #define ARRAY_BYTES(arr) (sizeof((arr)[0]) * ARRAY_SIZE(arr)) Some systems provide nitems() in <sys/param.h> instead, and some systems provide both. You should check your system, and use the one you have, and maybe use some preprocessor conditionals for portability and support both. Update: Allow the macro to be used at file scope: Unfortunately, the ({}) gcc extension cannot be used at file scope. To be able to use the macro at file scope, the static assertion must be inside sizeof(struct {}). Then, multiply it by 0 to not affect the result. A cast to (int) might be good to simulate a function that returns (int)0 (in this case it is not necessary, but then it is reusable for other things). Additionally, the definition of ARRAY_BYTES() can be simplified a bit.
#include <assert.h> #include <stddef.h> #include <sys/cdefs.h> #include <sys/types.h> #define is_same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b)) #define is_array(arr) (!is_same_type((arr), &(arr)[0])) #define must_be(e) \ ( \ 0 * (int)sizeof( \ struct { \ static_assert(e); \ char ISO_C_forbids_a_struct_with_no_members__; \ } \ ) \ ) #define must_be_array(arr) must_be(is_array(arr)) #define ARRAY_SIZE(arr) (__arraycount((arr)) + must_be_array(arr)) #define ARRAY_BYTES(arr) (sizeof(arr) + must_be_array(arr)) Notes: This code makes use of the following extensions, which are completely necessary, and their presence is absolutely necessary to achieve safety. If your compiler doesn't have them, or some similar ones, then you can't achieve this level of safety. * *__builtin_types_compatible_p() *typeof() I also make use of the following C2X feature. However, its absence by using an older standard can be overcome using some dirty tricks (see for example: What is “:-!!” in C code?) (in C11 you also have static_assert(), but it requires a message). * *static_assert() A: You can use the sizeof operator, but it will not work for function parameters, because inside the function the array has decayed to a pointer. You can do the following to find the length of an array: len = sizeof(arr)/sizeof(arr[0]) The code was originally found here: C program to find the number of elements in an array A: A more elegant solution would be size_t size = sizeof(a) / sizeof(*a); A: If you know the data type of the array, you can use something like: int arr[] = {23, 12, 423, 43, 21, 43, 65, 76, 22}; int noofele = sizeof(arr)/sizeof(int); Or if you don't know the data type of array, you can use something like: noofele = sizeof(arr)/sizeof(arr[0]); Note: This thing only works if the array is not defined at run time (like malloc) and the array is not passed in a function. In both cases, arr (array name) is a pointer. A: The macro ARRAYELEMENTCOUNT(x) that everyone is making use of evaluates incorrectly. This, realistically, is just a sensitive matter, because you can't have expressions that result in an 'array' type. /* Compile as: CL /P "macro.c" */ # define ARRAYELEMENTCOUNT(x) (sizeof (x) / sizeof (x[0])) ARRAYELEMENTCOUNT(p + 1); Actually evaluates as: (sizeof (p + 1) / sizeof (p + 1[0])); Whereas /* Compile as: CL /P "macro.c" */ # define ARRAYELEMENTCOUNT(x) (sizeof (x) / sizeof (x)[0]) ARRAYELEMENTCOUNT(p + 1); It correctly evaluates to: (sizeof (p + 1) / sizeof (p + 1)[0]); This really doesn't have a lot to do with the size of arrays explicitly. I've just noticed a lot of errors from not truly observing how the C preprocessor works. You always wrap the macro parameter, not an expression it might be involved in. This is correct; my example was a bad one. But that's actually exactly what should happen. As I previously mentioned p + 1 will end up as a pointer type and invalidate the entire macro (just like if you attempted to use the macro in a function with a pointer parameter). At the end of the day, in this particular instance, the fault doesn't really matter (so I'm just wasting everyone's time; huzzah!), because you don't have expressions with a type of 'array'. But really the point about preprocessor evaluation subtleties I think is an important one.
Here is the source code: #include<stdio.h> #include<stdlib.h> int main(){ int a[10]; printf("%p\n", (void *)a); printf("%p\n", (void *)(&a+1)); printf("---- diff----\n"); printf("%zu\n", sizeof(a[0])); printf("The size of array a is %zu\n", ((char *)(&a+1)-(char *)a)/(sizeof(a[0]))); return 0; } Here is the sample output: 1549216672 1549216712 ---- diff---- 4 The size of array a is 10 A: For multidimensional arrays it is a tad more complicated. Often people define explicit macro constants, i.e. #define g_rgDialogRows 2 #define g_rgDialogCols 7 static char const* g_rgDialog[g_rgDialogRows][g_rgDialogCols] = { { " ", " ", " ", " 494", " 210", " Generic Sample Dialog", " " }, { " 1", " 330", " 174", " 88", " ", " OK", " " }, }; But these constants can be evaluated at compile-time too with sizeof: #define rows_of_array(name) \ (sizeof(name ) / sizeof(name[0][0]) / columns_of_array(name)) #define columns_of_array(name) \ (sizeof(name[0]) / sizeof(name[0][0])) static char* g_rgDialog[][7] = { /* ... */ }; assert( rows_of_array(g_rgDialog) == 2); assert(columns_of_array(g_rgDialog) == 7); Note that this code works in C and C++. For arrays with more than two dimensions use sizeof(name[0][0][0]) sizeof(name[0][0][0][0]) etc., ad infinitum. A: It is worth noting that sizeof doesn't help when dealing with an array value that has decayed to a pointer: even though it points to the start of an array, to the compiler it is the same as a pointer to a single element of that array. A pointer does not "remember" anything else about the array that was used to initialize it. int a[10]; int* p = a; assert(sizeof(a) / sizeof(a[0]) == 10); assert(sizeof(p) == sizeof(int*)); assert(sizeof(*p) == sizeof(int)); A: Size of an array in C: int a[10]; size_t size_of_array = sizeof(a); // Size of array a int n = sizeof (a) / sizeof (a[0]); // Number of elements in array a size_t size_of_element = sizeof(a[0]); // Size of each element in array a // Size of each element = size of type A: Executive summary: int a[17]; size_t n = sizeof(a)/sizeof(a[0]); Full answer: To determine the size of your array in bytes, you can use the sizeof operator: int a[17]; size_t n = sizeof(a); On my computer, ints are 4 bytes long, so n is 68. To determine the number of elements in the array, we can divide the total size of the array by the size of the array element. You could do this with the type, like this: int a[17]; size_t n = sizeof(a) / sizeof(int); and get the proper answer (68 / 4 = 17), but if the type of a changed you would have a nasty bug if you forgot to change the sizeof(int) as well. So the preferred divisor is sizeof(a[0]) or the equivalent sizeof(*a), the size of the first element of the array. int a[17]; size_t n = sizeof(a) / sizeof(a[0]); Another advantage is that you can now easily parameterize the array name in a macro and get: #define NELEMS(x) (sizeof(x) / sizeof((x)[0])) int a[17]; size_t n = NELEMS(a); A: sizeof(array) / sizeof(array[0]) A: #define SIZE_OF_ARRAY(_array) (sizeof(_array) / sizeof(_array[0])) A: The sizeof way is the right way iff you are dealing with arrays not received as parameters. An array sent as a parameter to a function is treated as a pointer, so sizeof will return the pointer's size, instead of the array's. Thus, inside functions this method does not work. Instead, always pass an additional parameter size_t size indicating the number of elements in the array.
Test: #include <stdio.h> #include <stdlib.h> void printSizeOf(int intArray[]); void printLength(int intArray[]); int main(int argc, char* argv[]) { int array[] = { 0, 1, 2, 3, 4, 5, 6 }; printf("sizeof of array: %d\n", (int) sizeof(array)); printSizeOf(array); printf("Length of array: %d\n", (int)( sizeof(array) / sizeof(array[0]) )); printLength(array); } void printSizeOf(int intArray[]) { printf("sizeof of parameter: %d\n", (int) sizeof(intArray)); } void printLength(int intArray[]) { printf("Length of parameter: %d\n", (int)( sizeof(intArray) / sizeof(intArray[0]) )); } Output (in a 64-bit Linux OS): sizeof of array: 28 sizeof of parameter: 8 Length of array: 7 Length of parameter: 2 Output (in a 32-bit Windows OS): sizeof of array: 28 sizeof of parameter: 4 Length of array: 7 Length of parameter: 1 A: The simplest answer: #include <stdio.h> int main(void) { int a[] = {2,3,4,5,4,5,6,78,9,91,435,4,5,76,7,34}; // For example only int size; size = sizeof(a)/sizeof(a[0]); // Method printf("size = %d", size); return 0; } A: For a predefined array: int a[] = {1, 2, 3, 4, 5, 6}; Calculating number of elements in the array: element_count = sizeof(a) / sizeof(a[0]); A: Note: This one can give you undefined behaviour as pointed out by M.M in the comment. int a[10]; int size = (*(&a+1)-a); For more details, see here and also here. A: To know the size of a fixed array declared explicitly in code and referenced by its variable, you can use sizeof, for example: int a[10]; int len = sizeof(a)/sizeof(int); But this is usually useless, because you already know the answer. But if you have a pointer you can’t use sizeof, it's a matter of principle. But... Since arrays are presented as linear memory for the user, you can calculate the size if you know the last element's address and the size of the type, and then you can count how many elements it has. For example: #include <stdio.h> int main(){ int a[10]; printf("%d\n", sizeof(a)/sizeof(int)); int *first = a; int *last = &(a[9]); printf("%d\n", (last-first) + 1); } Output: 10 10 Also if you can't take advantage of compile time you can: #include <stdio.h> int main(){ int a[10]; printf("%d\n", sizeof(a)/sizeof(int)); void *first = a; void *last = &(a[9]); printf("%d\n", (last-first)/sizeof(int) + 1); } A: Besides the answers already provided, I want to point out a special case by the use of sizeof(a) / sizeof (a[0]) If a is either an array of char, unsigned char or signed char you do not need to use sizeof twice, since a sizeof expression with one operand of these types always results in 1. Quote from C18,6.5.3.4/4: "When sizeof is applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1." Thus, sizeof(a) / sizeof (a[0]) would be equivalent to NUMBER OF ARRAY ELEMENTS / 1 if a is an array of type char, unsigned char or signed char. The division by 1 is redundant. In this case, you can simply abbreviate and do: sizeof(a) For example: char a[10]; size_t length = sizeof(a); If you want a proof, here is a link to GodBolt. Nonetheless, the division maintains safety if the type significantly changes (although these cases are rare). A: "you've introduced a subtle way of shooting yourself in the foot" C 'native' arrays do not store their size.
It is therefore recommended to save the length of the array in a separate variable/const, and pass it whenever you pass the array, that is: #define MY_ARRAY_LENGTH 15 int myArray[MY_ARRAY_LENGTH]; If you are writing C++, you SHOULD always avoid native arrays anyway (unless you can't, in which case, mind your foot). If you are writing C++, use the STL's 'vector' container. "Compared to arrays, they provide almost the same performance", and they are far more useful! #include <iostream> #include <vector> using namespace std; // vector is a template, the <int> means it is a vector of ints vector<int> numbers; // push_back() puts a new value at the end (or back) of the vector for (int i = 0; i < 10; i++) numbers.push_back(i); // Determine the size of the array cout << numbers.size(); See: http://www.cplusplus.com/reference/stl/vector/
{ "language": "en", "url": "https://stackoverflow.com/questions/37538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1352" }
Q: Multiple threads stuck in native calls (Java) I have a problem with an application running on Fedora Core 6 with JDK 1.5.0_08. After some amount of uptime (usually some days) threads begin getting stuck in native methods. The threads are locked in something like this: "pool-2-thread-2571" prio=1 tid=0x08dd0b28 nid=0x319e waiting for monitor entry [0xb91fe000..0xb91ff7d4] at java.lang.Class.getDeclaredConstructors0(Native Method) or "pool-2-thread-2547" prio=1 tid=0x75641620 nid=0x1745 waiting for monitor entry [0xbc7fe000..0xbc7ff554] at sun.misc.Unsafe.defineClass(Native Method) Especially puzzling to me is this one: "HealthMonitor-10" daemon prio=1 tid=0x0868d1c0 nid=0x2b72 waiting for monitor entry [0xbe5ff000..0xbe5ff4d4] at java.lang.Thread.dumpThreads(Native Method) at java.lang.Thread.getStackTrace(Thread.java:1383) The threads remain stuck until the VM is restarted. Can anyone give me an idea as to what is happening here, what might be causing the native methods to block? The monitor entry address range at the top of each stuck thread is different. How can I figure out what is holding this monitor? A: My initial suspicion would be that you are experiencing some sort of class-loader-related deadlock. I imagine that class loading needs to be synchronized at some level because class information will become available for the entire VM, not just the thread where it was initially loaded. The fact that the methods on top of the stack are native methods seems to be pure coincidence, since part of the class loading mechanism happens to be implemented that way. I would investigate further what is going on class-loading wise. Maybe some thread uses the class loader to load a class from a network location which is slow/unavailable and thus blocks for a really long time, not yielding the monitor to other threads that want to load a class. Investigating the output when starting the JVM with -verbose:class might be one thing to try. A: I was having similar problems a few months ago and found the jstack utility to be invaluable. You give it the process ID for your Java application and it will dump the entire stack for each thread in your process. From the output of jstack, I could see one thread was trying to obtain a lock after having entered a monitor and another thread was trying to enter the monitor after obtaining the lock. A recipe for deadlock. I was also wondering if your application was running into a garbage collection issue. You say it runs for a couple days before it stops like this. How long have you let it sit in the stuck state to see if maybe the GC ever finishes? A: Can you find out which thread is actually synchronizing on the monitor on which the native method is waiting? At least the thread-dump you get from the VM when you send it a SIGQUIT (kill -3) should show this information, as in "Thread-0" prio=5 tid=0x0100b060 nid=0x84c000 waiting for monitor entry [0xb0c8a000..0xb0c8ad90] at Deadlock$1.run(Deadlock.java:8) - waiting to lock <0x255e5b38> (a java.lang.Object) ... "main" prio=5 tid=0x01001350 nid=0xb0801000 waiting on condition [0xb07ff000..0xb0800148] at java.lang.Thread.sleep(Native Method) at Deadlock.main(Deadlock.java:21) - locked <0x255e5b38> (a java.lang.Object) In the dumps you've posted so far, I can't see any thread that is actually waiting to lock a specific monitor... A: Maybe you should use another jdk version. For your "puzzling one", there is a bug entry for 1.5.0_08.
A memory leak is reported (I do not know if this is related to your problem): http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6469701 Also you could get the source code and look at what happens at line 1383. On the other hand, it could just be the stack dump after the original error occurred. A: I found this thread after hitting the same problem - JDK 1.6.0_23 running on Linux with Tomcat 6.0.29. Not sure those bits are relevant, though - what I did notice was that aside from many threads getting "stuck" in the getDeclaredConstructors() native method, the CPU was at 100% for the java process. So, all request threads getting stuck here, CPU at 100%, thread dumps not showing any deadlocks (and no other threads doing any significant activity), it smelled like a thrashing garbage collector to me. Sure enough, I checked the server logs and there were numerous OutOfMemory errors - heap space was exhausted. Can't say that this is going to be the root cause of threads getting stuck here every time, but hopefully the info here will help others at least rule out this as a possible cause...
{ "language": "en", "url": "https://stackoverflow.com/questions/37551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Adding server-side event to extender control I have an extender control that raises a textbox's OnTextChanged event 500ms after the user has finished typing. The problem with this is that OnTextChanged gets raised when the textbox loses focus, which causes problems (because of the postback). What I'd like to do is give the extender control its own server-side event (say, OnDelayedSubmit) so I can handle it separately. The event will originate in the extender control's behavior script (after the 500ms delay), so putting a __doPostBack in onchanged is not an option. Can anyone shed light on how to go about this? A: After plenty of reading up on extender controls and JavaScript, I've cobbled together a solution that seems to be working so far. The main trick was getting the necessary postback code from server-side to the client-side behavior script. I did this by using an ExtenderControlProperty (which is set in the control's OnPreRender function), and then eval'd in the behavior script. The rest was basic event-handling stuff. So now my extender control's .cs file looks something like this: public class DelayedSubmitExtender : ExtenderControlBase, IPostBackEventHandler { // This is where we'll give the behavior script the necessary code for the // postback event protected override void OnPreRender(EventArgs e) { string postback = Page.ClientScript.GetPostBackEventReference(this, "DelayedSubmit") + ";"; PostBackEvent = postback; } // This property matches up with a pair of get & set functions in the behavior script [ExtenderControlProperty] public string PostBackEvent { get { return GetPropertyValue<string>("PostBackEvent", ""); } set { SetPropertyValue<string>("PostBackEvent", value); } } // The event handling stuff public event EventHandler Submit; // Our event protected void OnSubmit(EventArgs e) // Called to raise the event { if (Submit != null) { Submit(this, e); } } public void RaisePostBackEvent(string eventArgument) // From IPostBackEventHandler { if (eventArgument == "DelayedSubmit") { OnSubmit(new EventArgs()); } } } And my behavior script looks something like this: DelayedSubmitBehavior = function(element) { DelayedSubmitBehavior.initializeBase(this, [element]); this._postBackEvent = null; // Stores the script required for the postback } DelayedSubmitBehavior.prototype = { // Delayed submit code removed for brevity, but normally this would be where // initialize, dispose, and client-side event handlers would go // This is the client-side part of the PostBackEvent property get_PostBackEvent: function() { return this._postBackEvent; }, set_PostBackEvent: function(value) { this._postBackEvent = value; }, // This is the client-side event handler where the postback is initiated from _onTimerTick: function(sender, eventArgs) { // The following line evaluates the string var as javascript, // which will cause the desired postback eval(this._postBackEvent); } } Now the server-side event can be handled the same way you'd handle an event on any other control.
{ "language": "en", "url": "https://stackoverflow.com/questions/37555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What exactly is Appdomain recycling I am trying to figure out what exactly AppDomain recycling is. When an aspx page is requested for the first time from a DotNet application, I understand that an appdomain for that app is created, and required assemblies are loaded into that appdomain, and the request will be served. Now, if the web.config file or the contents of the bin folder, etc are modified, the appdomain will be "recycled". My question is, at the end of the recycling process, will the appdomain be loaded with assemblies and ready to serve the next request, or does a page have to be requested to trigger the assemblies to load? A: Take a look at this - that might explain it: http://weblogs.asp.net/owscott/archive/2006/02/21/ASP.NET-v2.0-2D00-AppDomain-recycles_2C00_-more-common-than-before.aspx#440333 In general, what is called the "first hit" on an ASP.NET website usually takes longer, due to compilation and the creation of an AppDomain. Whenever you deploy a site - make sure to use the "Publish Website" function in Visual Studio, to pre-compile your website. Then the "first hit" penalty is reduced. And remember to set the configuration to Release, and not Debug! A: Well, I think the thread was getting smoothly to a final conclusion, but in the end, it was otherwise. I'll try to answer the question based on my understanding and leveraging what I've just read about in other web sites. First of all, I myself try to avoid the term recycle other than for Application Pools since this may render someone confused. Now, getting to processes, pools and AppDomains, I see the picture as follows: An Application Pool is, in short, a region of memory that is maintained up and running by a process called W3WP.exe, aka Worker Process. Recycling an Application Pool means bringing that process down, eliminating it from memory and then originating a brand new Worker Process, with a newly assigned process ID. Regarding Application Domains, I see them as subsets of memory regions, within the aforementioned region that plays the role of a container. In other words, the process in memory, W3WP.exe in this case, is a macro memory region for applications that stores subset regions, called Application Domains. Having said that, one process in memory may store different Application Domains, one for each application that is assigned to run within a given Application Pool. When it comes to recycling, as I said initially, it's something that I myself reserve only for Application Pools. For AppDomains, I prefer using the term 'restart', in order to avoid misconception. Based on this, restarting an AppDomain means starting over a given application with the newly added settings, such as refreshing the existing configuration. That happens within the boundaries of that sub-region of memory, called AppDomain, that ultimately lies within the process associated with a respective Application Pool. Those new settings may come from files such as web.config, machine.config, global.asax, Bin directory, App_Code, and there may be others. AppDomains are isolated from each other, which makes total sense. If they were not, and changes to the web.config of, let's say, application 1 required a recycle of the pool, all other applications assigned to that pool would get restarted, which was definitely not desired by Microsoft and by anyone else.
Summarizing my point:

*Process (W3WP.exe)
 *AppDomain 1
 *AppDomain 2
 *AppDomain 3
 *AppDomain n

n = the number of applications assigned to the Application Pool managed by the given W3WP.exe

*Processes are memory regions isolated from one another
*AppDomains are sub-memory regions isolated from one another, within the same process
*Global IIS settings changes may require an Application Pool recycle (killing and starting a new Worker Process, W3WP.exe)
*Application-wide settings changes concern AppDomains, and they may get restarted after changes to specific files such as the ones outlined above

For further information, I recommend:
http://blogs.msdn.com/b/david.wang/archive/2006/03/12/thoughts-on-iis-configuration-changes-and-when-it-takes-effect.aspx
What causes an application pool in IIS to recycle?
http://blogs.msdn.com/b/tess/archive/2006/08/02/asp-net-case-study-lost-session-variables-and-appdomain-recycles.aspx
Regards from Brazil!

A: Recycling shuts down the process hosting the AppDomain. You'll notice that the PID changes when you recycle it. Unloading the AppDomain simply unloads all of the assemblies in the AppDomain, which can then be reused. The important thing to remember is that once the CLR is loaded into a process, it can't be removed. So if you needed to do something as soon as the CLR is loaded, then simply unloading the AppDomain won't help, because the CLR won't be reloaded. Also note that IIS isn't the only process which can host the AppDomain - any process can, and you don't always want to kill the whole process just to unload your assemblies.

A: If your pages are "updatable," they must be compiled before use. That means, yes, on first request the assemblies are loaded, compiled, and made ready for access. Whenever these files are changed (even some virus software can trigger this by changing the modified date of the files!), the AppDomain gets recycled. You can configure your web application to not be updatable. Everything gets compiled into DLLs, and you won't see any .ASPX or .CS files in the virtual directory. It makes your code harder to update (need to put some additional text on your webpage? Recompile time!), but it increases the availability of your web app. However, this still won't prevent your web app from being recycled if any of the files are altered. For example, if you edit web.config, your AppDomain will recycle even if it's compiled.
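To observe the difference between an AppDomain restart and a full pool recycle in practice, a small diagnostic can be dropped into Global.asax. This is a minimal sketch (names are illustrative): after touching web.config, the AppDomain ID changes while the process ID stays the same; after an Application Pool recycle, the process ID changes as well.

using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // The PID identifies the worker process (W3WP.exe); the AppDomain id
        // identifies the sub-region that is restarted on config changes.
        Debug.WriteLine(string.Format(
            "PID={0}, AppDomainId={1}, FriendlyName={2}",
            Process.GetCurrentProcess().Id,
            AppDomain.CurrentDomain.Id,
            AppDomain.CurrentDomain.FriendlyName));
    }
}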
Q: find duplicate addresses in database, stop users entering them early? How do I find duplicate addresses in a database, or better, stop people while they are still filling in the form? I guess the earlier the better. Is there any good way of abstracting street, postal code, etc. so that typos and simple attempts to get 2 registrations can be detected? Like:
Quellenstrasse 66/11
Quellenstr. 66a-11
I'm talking German addresses... Thanks!

A: The earlier you can stop people, the easier it'll be in the long run! Not being too familiar with your db schema or data entry form, I'd suggest a route something like the following:

*have distinct fields in your db for each address "part", e.g. street, city, postal code, Länder, etc.
*have your data entry form broken down similarly, e.g. street, city, etc.

The reasoning behind the above is that each part will likely have its own particular "rules" for checking slightly-changed addresses ("Quellenstrasse"->"Quellenstr.", "66/11"->"66a-11" above), so your validation code can check whether the values as presented for each field exist in their respective db fields. If not, you can have a class that applies the transformation rules for each given field (e.g. "strasse" stemmed to "str") and checks again for duplicates. Obviously the above method has its drawbacks:

*it can be slow, depending on your data set, leaving the user waiting
*users may try to get around it by putting address "parts" in the wrong fields (appending the post code to the city, etc.)

but from experience we've found that introducing even simple checking like the above will prevent a large percentage of users from entering pre-existing addresses. Once you have the basic checking in place, you can look at optimising the db accesses required, refining the rules, etc. to meet your particular schema. You might also take a look at MySQL's match() function for working out similar text.

A: Johannes: @PConroy: This was my initial thought also. The interesting part of this is finding good transformation rules for the different parts of the address! Any good suggestions?
When we were working on this type of project before, our approach was to take our existing corpus of addresses (150k or so), then apply the most common transformations for our domain (Ireland, so "Dr"->"Drive", "Rd"->"Road", etc). I'm afraid there was no comprehensive online resource for such things at the time, so we ended up basically coming up with a list ourselves, checking things like the phone book (pressed for space there, addresses are abbreviated in all manner of ways!). As I mentioned earlier, you'd be amazed how many "duplicates" you'll detect with the addition of only a few common rules! I've recently stumbled across a page with a fairly comprehensive list of address abbreviations, although it's American English, so I'm not sure how useful it'd be in Germany! A quick google turned up a couple of sites, but they seemed like spammy newsletter sign-up traps. Although that was me googling in English, so you may have more luck with "german address abbreviations" in German :)

A: Before you start searching for duplicate addresses in your database, you should first make sure you store the addresses in a standard format. Most countries have a standard way of formatting addresses; in the US it's the USPS CASS system: http://www.usps.com/ncsc/addressservices/certprograms/cass.htm
But most other countries have a similar service/standard.
Try this site for more international formats: http://bitboost.com/ref/international-address-formats.html
This not only helps in finding duplicates, but also saves you money when mailing your customers (the postal service charges less if the address is in a standard format). Depending on your application, in some cases you might want to store a "vanity" address record as well as the standard address record. This keeps your VIP customers happy. A "vanity" address might be something like:
62 West Ninety First Street Apartment 4D Manhattan, New York, NY 10001
While the standard address might look like this:
62 W 91ST ST APT 4D NEW YORK NY 10024-1414

A: One thing you might want to look at are Soundex searches, which are quite useful for misspellings and contractions. This however is not an in-database validation, so it may or may not be what you're looking for.

A: Another possible solution (assuming you actually need reliable address data and you're not just using addresses as a way to prevent duplicate accounts) is to use a third-party web service to standardize the addresses provided by your users. It works this way: your system accepts a user's address via an online form. Your form hands off the user's address to the third-party address standardization web service. The web service gives you back the same address, but now with the data standardized into discrete address fields, and with the standard abbreviations and formats applied. Your application displays this standardized address to your user for their confirmation before attempting to save the data in your DB. If all the user addresses go through a standardization step and only standardized addresses are saved to your DB, then finding duplicate records should be greatly simplified, since you are now comparing apples to apples. One such third-party service is Global Address's Interactive Service, which includes Germany in the list of supported countries and also has an online demo that demonstrates how their service works (the demo link can be found on that web page). There's a cost disadvantage to this approach, obviously. However, on the plus side:

*you would not need to create and maintain your own address standardization metadata
*you won't need to continuously enhance your address standardization routines, and
*you're free to focus your software development energy on the parts of the application that are unique to your requirements

Disclaimer: I don't work for Global Address and have not tried using their service. I'm merely mentioning them as an example since they have an online demo that you can actually play with.

A: I realize that the original post is specific to German addresses, but this is a good question for addresses in general. In the United States, there is a part of an address called a delivery point barcode. It's a unique 12-digit number that identifies a single point of delivery and can serve as the unique identifier of an address. To get this value you'll want to use an address verification or address standardization web service API, which can cost about $20/mo depending upon the volume of requests you make to it. In the interest of full disclosure, I'm the founder of SmartyStreets. We offer just such an address validation web service API called LiveAddress. You're more than welcome to contact me personally with any questions you have.

A: You could use the Google Geocode API, which in fact gives results for both of your examples (I just tried it). That way you get structured results that you can save in your database.
If the lookup fails, ask the user to write the address in another way.

A: To add an answer to my own question: a different way of doing it is to ask users for their mobile phone number and send them a text message for verification. This stops most people from creating duplicate accounts with slightly different addresses. I'm talking from personal experience (thanks pigsback!). They introduced confirmation through mobile phone. That stopped me from having 2 accounts! :-)

A: Machine learning and AI have algorithms for finding string similarities and duplicate measures. Record linkage, the task of matching equivalent records that differ syntactically, was first explored in the late 1950s and 1960s. You can represent every pair of records using a vector of features that describe the similarity between individual record fields. For example, see Adaptive Duplicate Detection Using Learnable String Similarity Measures. From that doc:

*You can use generic or manually tuned distance metrics for estimating the similarity of potential duplicates.
*You can use adaptive name matching algorithms, like the Jaro metric, which is based on the number and order of common characters between two strings.
*Token-based and hybrid distance. In such cases, we can convert the strings s and t to token multisets (where each token is a word) and consider similarity metrics on these multisets.

A: Often you use constraints in a database to ensure data is "unique" in the database sense. Regarding "isomorphisms" I think you are on your own, i.e. writing the code yourself. In the database you could use a trigger.

A: I'm looking for an answer addressing United States addresses. The issue in question is preventing users from entering duplicates like
Quellenstrasse 66/11
Quellenstr. 66a-11
This happens when you let your user enter the complete address in one input box. There are some methods you can use to prevent this.

1. Uniform formatting using RegEx

*You can prompt users to enter the details in a uniform format.
*That is very efficient while querying too.
*Test the user-entered value against some regular expressions; if it fails, ask the user to correct it.

2. Use a map API like Google Maps and ask the user to select details from it.

*If you choose Google Maps, you can achieve it using reverse geocoding. From Google's developer guide: the term geocoding generally refers to translating a human-readable address into a location on a map. The process of doing the opposite, translating a location on the map into a human-readable address, is known as reverse geocoding.

3. Allow heterogeneous data as shown in the question and compare it with different formattings.

*In the question, the OP allows addresses in different formats.
*In such a case, you can change an address into different forms and check each against the database to find duplicates.
*This may take more time, and the time depends entirely on the number of test cases.

4. Split the address into different parts, store them in the db, and provide such a form to the user.

*That is, provide different fields to store street, city, state, etc. in the database.
*Also provide different input fields for the user to enter street, city, state, etc. in top-down order.
*When the user enters the state, narrow the duplicate query to that state only.
*When the user enters the city, narrow it to that city only.
*When the user enters the street, narrow it to that street.

And finally:

*When the user enters the address, convert it to the different formats and test them against the database.
This is efficient: even if the number of test cases is high, the number of entries you test against will be very small, so it will take very little time.

A: In the USA, you can use the USPS Address Standardization Web Tool. It verifies and normalizes addresses for you. This way, you can normalize the address before checking whether it already exists in the database. If all the addresses in the database are already normalized, you'll be able to spot duplicates easily.
Sample URL:
https://production.shippingapis.com/ShippingAPI.dll?API=Verify&XML=insert_request_XML_here
Sample request:

<AddressValidateRequest USERID="XXXXX">
  <IncludeOptionalElements>true</IncludeOptionalElements>
  <ReturnCarrierRoute>true</ReturnCarrierRoute>
  <Address ID="0">
    <FirmName />
    <Address1 />
    <Address2>205 bagwell ave</Address2>
    <City>nutter fort</City>
    <State>wv</State>
    <Zip5></Zip5>
    <Zip4></Zip4>
  </Address>
</AddressValidateRequest>

Sample response:

<AddressValidateResponse>
  <Address ID="0">
    <Address2>205 BAGWELL AVE</Address2>
    <City>NUTTER FORT</City>
    <State>WV</State>
    <Zip5>26301</Zip5>
    <Zip4>4322</Zip4>
    <DeliveryPoint>05</DeliveryPoint>
    <CarrierRoute>C025</CarrierRoute>
  </Address>
</AddressValidateResponse>

Other countries might have their own APIs. Other people mentioned 3rd-party APIs that support multiple countries, which might be useful in some cases.

A: Just as Google fetches suggestions as you search, you can fetch suggestions from your database's address fields. First, let's create an index.htm(l) file:

<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Language" content="en-us">
<title>Address Autocomplete</title>
<meta charset="utf-8">
<link href="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" rel="stylesheet">
<script src="//code.jquery.com/jquery-2.1.4.min.js"></script>
<script src="//maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<script src="//netsh.pp.ua/upwork-demo/1/js/typeahead.js"></script>
<style>
h1 {
    font-size: 20px;
    color: #111;
}
.content {
    width: 80%;
    margin: 0 auto;
    margin-top: 50px;
}
.tt-hint, .city {
    border: 2px solid #CCCCCC;
    border-radius: 8px;
    font-size: 24px;
    height: 45px;
    line-height: 30px;
    outline: medium none;
    padding: 8px 12px;
    width: 400px;
}
.tt-dropdown-menu {
    width: 400px;
    margin-top: 5px;
    padding: 8px 12px;
    border: 1px solid rgba(0, 0, 0, 0.2);
    border-radius: 8px;
    font-size: 18px;
    color: #111;
    background-color: #F1F1F1;
}
</style>
<script>
$(document).ready(function() {
    $('input.city').typeahead({
        name: 'city',
        remote: 'city.php?query=%QUERY'
    });
})
</script>
<script>
function register_address() {
    $.ajax({
        type: "POST",
        data: {
            City: $('#city').val()
        },
        url: "addressexists.php",
        success: function(data) {
            if (data === 'ADDRESS_EXISTS') {
                $('#address')
                    .css('color', 'red')
                    .html("This address already exists!");
            }
        }
    })
}
</script>
</head>
<body>
<div class="content">
    <form>
        <h1>Try it yourself</h1>
        <input type="text" name="city" size="30" id="city" class="city" placeholder="Please Enter City or ZIP code">
        <span id="address"></span>
    </form>
</div>
</body>
</html>

Now we will create a city.php file which runs our query against the MySQL DB and returns the response as JSON. Here is the code:

<?php
// CREDENTIALS FOR DB
define('DBSERVER', 'localhost');
define('DBUSER', 'user');
define('DBPASS', 'password');
define('DBNAME', 'dbname');

// LET'S INITIATE CONNECT TO DB
$connection = mysqli_connect(DBSERVER, DBUSER, DBPASS, DBNAME)
    or die("Can't connect to server. Please check credentials and try again");

// CREATE QUERY TO DB AND PUT RECEIVED DATA INTO AN ASSOCIATIVE ARRAY
if (isset($_REQUEST['query'])) {
    // escape the input to prevent SQL injection
    $query = mysqli_real_escape_string($connection, $_REQUEST['query']);
    $sql = mysqli_query($connection, "SELECT zip, city FROM zips WHERE city LIKE '%{$query}%' OR zip LIKE '%{$query}%'");
    $array = array();
    while ($row = mysqli_fetch_array($sql, MYSQLI_ASSOC)) {
        $array[] = array(
            'label' => $row['city'] . ', ' . $row['zip'],
            'value' => $row['city'],
        );
    }
    // RETURN JSON ARRAY
    echo json_encode($array);
}
?>

Then prevent saving the address into the database if a duplicate is found in the table column. For your addressexists.php, the code is:

<?php
// CREDENTIALS FOR DB
define('DBSERVER', 'localhost');
define('DBUSER', 'user');
define('DBPASS', 'password');
define('DBNAME', 'dbname');
define('TABLENAME', 'addresses'); // table name assumed for this example

// LET'S INITIATE CONNECT TO DB
$connection = mysqli_connect(DBSERVER, DBUSER, DBPASS, DBNAME)
    or die("Can't connect to server. Please check credentials and try again");

// $_POST is an array (not a function);
// mysqli_real_escape_string is to prevent SQL injection
$city = mysqli_real_escape_string($connection, $_POST['city']);

// City must be enclosed in quotes
$sql = "SELECT username FROM " . TABLENAME . " WHERE city='" . $city . "'";
$query = mysqli_query($connection, $sql);

if (mysqli_num_rows($query) != 0) {
    echo('ADDRESS_EXISTS');
}
?>

A: Match addresses against the addresses provided by the Deutsche Bundespost to detect duplicates. The Bundespost probably sells a CD like the USA does. The problem then becomes matching to the Bundespost addresses: just a long process of replacing abbreviations with the post-approved abbreviations and such. Same way in the USA: match against US Post Office addresses (sorry, these cost money so it's not entirely open; CDs are available from the US Post Office) to find duplicates.

A: In my opinion, assuming that you already have a lot of dirty data in your DB, you will have to build your own "handmade" dirty filter which detects as many German abbreviations as possible. But if you process a lot of data, you run the risk of getting some false positives and false negatives. In the end, a semi-automated job (a machine with human assistance whenever the probability of a false positive or false negative is too high) will be the best solution. The more "exceptions" you handle (because humans raise exceptions when filling in data), the better your "handmade" filter will fit your requirements. On the other hand, you may also use a Germany address verification service on the user side, and store only the verified addresses...
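Pulling together the normalization ideas from several answers above, a minimal sketch of a rule-based canonicalizer for German street lines might look like this. The rules, names, and regexes are illustrative assumptions, not a complete list:

import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

// Minimal sketch: canonicalize a German street line before a duplicate check.
public class AddressNormalizer {

    // Illustrative abbreviation rules; a real list would be much longer.
    private static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("strasse", "str");
        RULES.put("straße", "str");
        RULES.put("str.", "str");
        RULES.put("platz", "pl");
        RULES.put("pl.", "pl");
    }

    public static String normalize(String street) {
        String s = street.toLowerCase(Locale.GERMAN).trim();
        s = s.replaceAll("[/\\-]", " ");            // "66/11" and "66-11" -> "66 11"
        s = s.replaceAll("(\\d+)([a-z])\\b", "$1"); // drop house-number suffixes: "66a" -> "66"
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            s = s.replace(rule.getKey(), rule.getValue());
        }
        return s.replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        // Both inputs from the question normalize to the same key.
        System.out.println(normalize("Quellenstrasse 66/11")); // quellenstr 66 11
        System.out.println(normalize("Quellenstr. 66a-11"));   // quellenstr 66 11
    }
}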
Q: Integrating InstantRails with Aptana or any other IDE So I've been using InstantRails to check out Ruby on Rails. I've been using Notepad++ for the editing. Now I don't want to install Ruby or Rails on my machine. Is there any walkthrough/tutorial on how to integrate RadRails or NetBeans with InstantRails?

A: Here's a tutorial: http://ruby.meetup.com/73/boards/view/viewthread?thread=2203432 (I don't know if it's any good.) And here's one with InstantRails + NetBeans: https://web.archive.org/web/20100505044104/http://weblogs.java.net/blog/bleonard/archive/2007/03/instant_rails_w.html

A: I recommend learning Rails and Ruby themselves first, and then picking up something like InstantRails. Having too many layers when learning something new can make it hard to know which features are part of which language, and potentially confuse you when trying to determine where a bug is occurring.
Q: Queue alternatives to MSMQ on Windows? If you want to use a queuing product for durable messaging under Windows, running .NET 2.0 and above, which alternatives to MSMQ exist today? I know of ActiveMQ (http://activemq.apache.org/), and I've seen references to WSMQ (pointing to http://wsmq.net), but the site seems to be down. Are there any other alternatives?

A: What about SQL 2005's Service Broker?

A: If cost isn't an issue (there is also an Express SKU) then take a look at the 800,000-pound gorilla: WebSphere MQ (MQ Series). It runs on practically any platform and supports so many different queue managers and messaging patterns that it really isn't appropriate to list them here.

*IBM's WebSphere MQ site: http://www.ibm.com/software/integration/wmq/
*The MQ support forum: http://www.mqseries.net/phpBB2/index.php

A: Why not use ActiveMQ? :)

A: This may not be "best practice" advice, but it is based on real-life needs and experience: we have a distributed system of 60 boxes, each running 10 clients, and all of them take the next task X from a queue. The queue is fed by one other "client"... We tried inter-process communication, we used MSMQ, we tried Service Broker... It just doesn't work in the long term, because you are giving away control of your application to Microsoft. It works great as long as your needs are satisfied; it becomes hell when you need something that isn't supported. The best solution for us was: use a SQL database table as the queue. Don't reinvent the wheel there, since you will make mistakes (locks). There is info out there on how to do it; it is very easy, and we handled over 200K messages per 24h (with 60x10 = 600 concurrent reads and writes to the queue). A sketch of this pattern appears at the end of this thread. That is in addition to the same SQL Server handling the rest of the application's work... Some reasons why MSMQ doesn't work:

*When you need to change the logic of the queue from FIFO to something like "the oldest RED message" or "the oldest BLUE message", you can't do it. (I know what people will say: you can do it by having a red queue and a blue queue... But what if the number/types of queues is dynamic, based on how the application is administered, and changes daily?)
*It adds a point of failure and a deployment nightmare (the queue is a point of failure, and you need to deal with setting the right permissions on all boxes to read/write messages, etc. In enterprise software you pay in blood for these types of things.) With SQL Server... all clients are already writing to and reading from the DB; it is just one more table.

A: I can't begin to say enough good things about Tibco EMS - an implementation of the Java JMS messaging spec. Tibco EMS has superb support for .NET clients - including Compact Framework .NET on WinCE. (They also have C client libraries.) So if you're building a heterogeneous distributed application involving messaging code running on Windows, Unix (AIX/Solaris), Linux, or Mac OS X, then Tibco EMS is the ticket. Check out my article here: Using JMS For Distributed Software Development
I used to work at Microsoft and did some implementation with MSMQ while there. But you know, Microsoft just concerns itself with Windows. They depended on 3rd parties to provide MSMQ clients for other platforms. My encounter with Tibco EMS was a much better experience. It was very evident that Tibco understood messaging much more than Microsoft did. And Tibco put the effort into supporting diverse client bindings themselves.
That is why they eventually changed the product name from Tibco JMS to Tibco EMS (Enterprise Messaging Service). And I did build heterogeneous software systems around Tibco EMS: C# .NET WinForms clients interacting with a Java/JBoss middle tier via Tibco EMS messaging. (And also WinCE industrial embedded computers that use the Compact Framework .NET Tibco client.) Links To My JMS Writings

A: The RabbitMQ framework seems to have been overlooked here. If folks still care, it does have a .NET 2.0 code base, and it comes with a WCF binding similar to netMsmqBinding. The binding naturally requires at least .NET 3.0, and it has more features than the built-in netMsmqBinding. On top of it all, it is Mono friendly. It's worth a look.

A: If high availability is important, Amazon SQS is worth looking at. There's not much additional overhead if messages come from different physical locations. Cheap and scalable!

A: Redis is another hot breed on this platform. Check out its set-based queueing implementation and also the Pub/Sub pattern. It looks promising.
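As a footnote to the "SQL database table as the queue" answer above: the usual trick on SQL Server for safe concurrent dequeues is a single statement with locking hints, so that hundreds of readers neither block nor double-take rows. A minimal sketch; the table and column names are assumptions:

CREATE TABLE TaskQueue (
    Id      INT IDENTITY PRIMARY KEY,
    Payload NVARCHAR(MAX) NOT NULL,
    Status  TINYINT NOT NULL DEFAULT 0,  -- 0 = pending, 1 = taken
    TakenAt DATETIME NULL
);

-- Atomically claim the oldest pending task. READPAST skips rows locked by
-- other workers, so 600 concurrent consumers don't block one another.
WITH next_task AS (
    SELECT TOP (1) *
    FROM TaskQueue WITH (ROWLOCK, UPDLOCK, READPAST)
    WHERE Status = 0
    ORDER BY Id
)
UPDATE next_task
SET Status = 1, TakenAt = GETDATE()
OUTPUT inserted.Id, inserted.Payload;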
Q: InfoPath 2003 and the xs:any type I am implementing exception handling for our BizTalk services, and have run into a fairly major stumbling block. In order to make the exception processing as generic as possible, and therefore to allow us to use it for any BizTalk application, our XML error schema includes an xs:any node, into which we can place a variety of data, depending on the actual exception. The generated XML should then be presented to a user through an InfoPath 2003 form for manual intervention before being re-presented back to BizTalk. The problem is that InfoPath 2003 doesn't like schemas with an xs:any node. What we'd really like to do is show the content of the exception report in a form with all relevant parameters mapped, and the entire content of the xs:any node in a text box, since users who are able to see these messages will be conversant with XML. Unfortunately, I am unable to make InfoPath even load the schema at design time. Does anyone have any recommendation for how to achieve what we need, please?

A: Does your xs:any element have a minOccurs > 0? http://msdn.microsoft.com/en-us/library/bb251017.aspx#UnsupportedConstructs
I've also read that, due to the way InfoPath works, it cannot handle more than one schema per namespace. Hence, your xs:any (and the sequence that it defines) should have a unique namespace.

A: Unfortunately, things have moved on, and we have (almost) made the decision not to use InfoPath for this requirement. It's only partially to do with the xs:any issue, but more to do with (external) audit trails, calls to custom code and web services, and a couple of other factors.
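To make the two suggestions above concrete, the schema's extension point might be declared along these lines. This is a minimal sketch with made-up element and namespace names, showing a minOccurs of 0 and a namespace constraint on the xs:any:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:error-report"
           xmlns="urn:example:error-report"
           elementFormDefault="qualified">
  <xs:element name="errorReport">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="application" type="xs:string"/>
        <xs:element name="message" type="xs:string"/>
        <!-- Extension point: minOccurs="0" plus a namespace constraint,
             per the compatibility notes above. -->
        <xs:any namespace="##other" processContents="lax"
                minOccurs="0" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>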
Q: Consuming web services from Oracle PL/SQL Our application is interfacing with a lot of web services these days. We have our own package that someone wrote a few years back using UTL_HTTP, and it generally works, but needs some hard-coding of the SOAP envelope to work with certain systems. I would like to make it more generic, but lack the experience to know how many scenarios I would have to deal with. The variations are in which namespaces need to be declared and the format of the elements. We have to handle both simple calls with a few parameters and those that pass a large amount of data in an encoded string. I know that 10g has UTL_DBWS, but there are not a huge number of use cases online. Is it stable and flexible enough for general use? Documentation

A: I have used UTL_HTTP, which is simple and works. If you face a challenge with your own package, you can probably find a solution in one of the many wrapper packages around UTL_HTTP on the net (Google "consuming web services from pl/sql", leading you to e.g. http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php). The reason nobody is using UTL_DBWS is that it is not functional in a default installed database. You need to load a ton of Java classes into the database, but the standard instructions seem to be defective - the process spews Java errors right and left and ultimately fails. It seems very few people have been willing to take the time to track down the package dependencies in order to make this approach work.

A: I had this challenge and found and installed the 'SOAP API' package that Sten suggests on Oracle-Base. It provides some good envelope-creation functionality on top of UTL_HTTP. However, there were some limitations that pertain to your question. SOAP_API assumes all requests are simple XML - i.e. only a one-layer tag hierarchy. I extended the SOAP_API package to allow the client code to arbitrarily insert an extra tag. So you can insert a sub-level tag, continue to build the request, and remember to insert a closing tag. The namespace issue was a bear for the project - different levels of XML had different namespaces. A nice debugging tool that I used is TCP Trace from Pocket Soap: www.pocketsoap.com/tcptrace/. You set it up like a proxy and watch the HTTP request and response objects between the client and server code. Having said all that, we really like having a SOAP client in the database - we have full access to all data and existing PL/SQL code, and can easily loop through cursors and call the external app via SOAP when needed. It was a lot quicker and easier than deploying a middle tier with lots of custom Java or .NET code. Good luck, and let me know if you'd like to see my enhanced SOAP API code.

A: We have also used UTL_HTTP in a manner similar to what you have described. I don't have any direct experience with UTL_DBWS, so I hope you can follow up with any information/experience you can gather.
@kogus, no, it's quite a good design for many applications. PL/SQL is a full-fledged programming language that has been used for many big applications.

A: Check out this older post. I have to agree with that post's #1 answer; it's hard to imagine a scenario where this could be a good design. Can't you write a service, or standalone application, which would talk to a table in your database? Then you could implement whatever you want as a trigger on that table.
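For reference, a bare-bones version of the UTL_HTTP approach discussed above might look like the following sketch. The endpoint URL, SOAPAction, and envelope are placeholders, not a real service:

-- Minimal sketch: POST a SOAP envelope with UTL_HTTP.
-- Requires SET SERVEROUTPUT ON to see the response.
DECLARE
  l_req   UTL_HTTP.req;
  l_resp  UTL_HTTP.resp;
  l_env   VARCHAR2(32767) :=
    '<?xml version="1.0"?>' ||
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' ||
    '<soap:Body><getQuote xmlns="urn:example"><symbol>ORCL</symbol></getQuote>' ||
    '</soap:Body></soap:Envelope>';
  l_line  VARCHAR2(32767);
BEGIN
  l_req := UTL_HTTP.begin_request('http://example.com/service', 'POST', 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Content-Type', 'text/xml; charset=utf-8');
  UTL_HTTP.set_header(l_req, 'Content-Length', TO_CHAR(LENGTH(l_env)));
  UTL_HTTP.set_header(l_req, 'SOAPAction', '"urn:example#getQuote"');
  UTL_HTTP.write_text(l_req, l_env);

  l_resp := UTL_HTTP.get_response(l_req);
  BEGIN
    LOOP
      UTL_HTTP.read_line(l_resp, l_line, TRUE);  -- read the SOAP response line by line
      DBMS_OUTPUT.put_line(l_line);
    END LOOP;
  EXCEPTION
    WHEN UTL_HTTP.end_of_body THEN
      UTL_HTTP.end_response(l_resp);
  END;
END;
/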
Q: Displaying XML data in a Winforms control I would like to display details of an XML error log to a user in a WinForms application and am looking for the best control to do the job. The error data contains all of the server variables at the time that the error occurred. These have been formatted into an XML document that looks something like this:

<error>
  <serverVariables>
    <item>
      <value />
    </item>
  </serverVariables>
  <queryString>
    <item name="">
      <value string="" />
    </item>
  </queryString>
</error>

I would like to read this data from the string it is stored in and display it to the user via a Windows form in a useful way. XML Notepad does a cool job of formatting XML, but it is not really what I am looking for, since I would prefer to display item details in a Name : string format. Any suggestions, or am I looking at a custom implementation?
[EDIT] A section of the data that needs to be displayed:

<?xml version="1.0" encoding="utf-8"?>
<error host="WIN12" type="System.Web.HttpException" message="The file '' does not exist."
       source="System.Web"
       detail="System.Web.HttpException: The file '' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at"
       time="2008-09-01T07:13:08.9171250+02:00" statusCode="404">
  <serverVariables>
    <item name="ALL_HTTP">
      <value string="HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1) " />
    </item>
    <item name="AUTH_TYPE">
      <value string="" />
    </item>
    <item name="HTTPS">
      <value string="off" />
    </item>
    <item name="HTTPS_KEYSIZE">
      <value string="" />
    </item>
    <item name="HTTP_USER_AGENT">
      <value string="Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)" />
    </item>
  </serverVariables>
  <queryString>
    <item name="tid">
      <value string="196" />
    </item>
  </queryString>
</error>

A: You can transform your XML data using XSLT. Another option is to use XLinq. If you want a concrete code example, provide us with sample data.
EDIT: here is a sample XSLT transform for your XML file:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="//error/serverVariables">
    <xsl:text>Server variables:
</xsl:text>
    <xsl:for-each select="item">
      <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/>
      <xsl:text>
</xsl:text>
    </xsl:for-each>
  </xsl:template>
  <xsl:template match="//error/queryString">
    <xsl:text>Query string items:
</xsl:text>
    <xsl:for-each select="item">
      <xsl:value-of select="@name"/>:<xsl:value-of select="value/@string"/>
      <xsl:text>
</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

You can apply this transform using the XslCompiledTransform class. It should give output like this:

Server variables:
ALL_HTTP:HTTP_CONNECTION:close HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)
AUTH_TYPE:
HTTPS:off
HTTPS_KEYSIZE:
HTTP_USER_AGENT:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)

Query string items:
tid:196

A: You could try using the DataGridView control. To see an example, load an XML file in DevStudio and then right-click on the XML and select "View Data Grid". You'll need to read the API documentation on the control to use it.

A: You could use a treeview control and use a recursive XLinq algorithm to put the data in there. I've done that myself with an interface allowing a user to build up a custom XML representation, and it worked really well.

A: See XML data binding.
Use Visual Studio or xsd.exe to generate a DataSet or classes from an XSD, then use System.Xml.Serialization.XmlSerializer if needed to turn your XML into objects or a DataSet. Massage the objects, then display them in a grid.
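As a concrete version of that data-binding suggestion, a minimal sketch (the file, control, and form names are assumed) could read the error XML into a DataSet and bind the inferred "item" table to a grid:

using System.Data;
using System.Windows.Forms;

public partial class ErrorViewerForm : Form
{
    public ErrorViewerForm()
    {
        InitializeComponent();

        DataSet ds = new DataSet();
        ds.ReadXml("error.xml"); // infers tables from the XML shape

        // One row per <item>; attributes such as "name" become columns.
        dataGridView1.ReadOnly = true;
        dataGridView1.DataSource = ds.Tables["item"];
    }
}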
Q: Where can I get free Vista style developer graphics? What is the best source of free Vista-style graphics for application development? I want 32x32 and 16x16 icons that I can use in a WinForms application.

A: The best place I've found for commercial toolbar icons etc. is glyfx.com.

A: If you're using Visual Studio Professional or above, you've got a zip file of icons in your VS path under Common7\VS2008ImageLibrary. Some of the images use the Vista style.

A: The Tango project has some good icons. For areas that only need 16x16, the silk icons from famfamfam are good too. Both are Creative Commons licensed.
Q: Why does TreeNodeCollection not implement IEnumerable<TreeNode>? TreeNodeCollection, like some of the other control collections in System.Windows.Forms, implements only the non-generic IEnumerable. Is there any design reason behind this, or is it just a hangover from the days before generics?

A: Yes, there are many .NET Framework collections that do not implement the generic IEnumerable<T>. I think that's because after 2.0 there was no (or at least not much) development of the core part of the Framework. Meanwhile, I suggest you make use of the following workaround:

using System.Linq;
...
var nodes = GetTreeNodeCollection().OfType<TreeNode>();

A: Yes, Windows Forms dates back to before generics in .NET.
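As a usage note on the workaround above: because the collection only exposes the non-generic IEnumerable, LINQ needs a Cast or OfType step before you can query it with type safety. A small sketch (the treeView1 parameter is an assumption):

using System.Linq;
using System.Windows.Forms;

static string[] CheckedNodeTexts(TreeView treeView1)
{
    // Cast<TreeNode>() bridges from the non-generic IEnumerable to
    // IEnumerable<TreeNode>; this covers the top-level nodes only.
    return treeView1.Nodes
        .Cast<TreeNode>()
        .Where(n => n.Checked)
        .Select(n => n.Text)
        .ToArray();
}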
Q: Connecting Team Explorer to Codeplex anonymously I was using CodePlex and tried connecting to their source control using Team Explorer, with no joy. I also tried connecting with HTTPS or HTTP, using the server name and the project name. As I do not have a user account on CodePlex, I could not log in. I am just trying to check out some code without changing it. My question is: how can I connect Team Explorer to a CodePlex server anonymously?

A: As the person primarily responsible for making anonymous access work against the TFS CodePlex servers, I can tell you that it isn't possible with Team Explorer. We tried to make it happen, but the way you'd get anonymous access to work would've caused a pretty stellar-sized security hole with Team Explorer. So, as others have mentioned, the custom-written clients (CPC and SvnBridge) do support anonymous access. I know the Teamprise guys were talking about adding it to Teamprise for a while, but I'm not sure if they ever got around to it. It would've been a pretty big change in the way they work (since it basically has to be Workspace-less).
Edit: Brannon helped, too. Wrote all the horrible C++ that I refuse to write. He just bugged me on IM, so I'd better amend my previous remarks. :-p

A: I think you have to use the CodePlex Source Control Client. It includes cpc.exe, which supports the anonymous access features of the CodePlex TFS servers for non-coordinator/developer access. But according to the site: "The CodePlex Client is not currently being maintained. The focus of the CodePlex team now is on the SvnBridge." I'm using TortoiseSVN with SvnBridge with no problems.

A: I have used SvnBridge with TortoiseSVN, which works like a charm. What I was looking for here is a way for anonymous access that is directly integrated with VS. Guess that's not possible at the moment. Also, I just found out you can connect directly via TortoiseSVN, without SvnBridge; look for the "SvnBridge on the CodePlex servers?" heading.

A: I think it's not possible with Team Explorer. But you can with the CodePlex Source Control Client or Tortoise.
Q: What is reflection and why is it useful? I'm particularly interested in Java, but I assume the principles are the same in any language.

A: Uses of Reflection
Reflection is commonly used by programs which require the ability to examine or modify the runtime behavior of applications running in the Java virtual machine. This is a relatively advanced feature and should be used only by developers who have a strong grasp of the fundamentals of the language. With that caveat in mind, reflection is a powerful technique and can enable applications to perform operations which would otherwise be impossible.
Extensibility Features: An application may make use of external, user-defined classes by creating instances of extensibility objects using their fully-qualified names.
Class Browsers and Visual Development Environments: A class browser needs to be able to enumerate the members of classes. Visual development environments can benefit from making use of type information available in reflection to aid the developer in writing correct code.
Debuggers and Test Tools: Debuggers need to be able to examine private members in classes. Test harnesses can make use of reflection to systematically call a discoverable set of APIs defined on a class, to ensure a high level of code coverage in a test suite.
Drawbacks of Reflection
Reflection is powerful, but should not be used indiscriminately. If it is possible to perform an operation without using reflection, then it is preferable to avoid using it. The following concerns should be kept in mind when accessing code via reflection.

*Performance Overhead: Because reflection involves types that are dynamically resolved, certain Java virtual machine optimizations cannot be performed. Consequently, reflective operations have slower performance than their non-reflective counterparts and should be avoided in sections of code which are called frequently in performance-sensitive applications.
*Security Restrictions: Reflection requires a runtime permission which may not be present when running under a security manager. This is an important consideration for code which has to run in a restricted security context, such as in an Applet.
*Exposure of Internals: Since reflection allows code to perform operations that would be illegal in non-reflective code, such as accessing private fields and methods, the use of reflection can result in unexpected side effects, which may render code dysfunctional and may destroy portability. Reflective code breaks abstractions and therefore may change behavior with upgrades of the platform.

Source: The Reflection API

A: From the java documentation page: the java.lang.reflect package provides classes and interfaces for obtaining reflective information about classes and objects. Reflection allows programmatic access to information about the fields, methods and constructors of loaded classes, and the use of reflected fields, methods, and constructors to operate on their underlying counterparts, within security restrictions. AccessibleObject allows suppression of access checks if the necessary ReflectPermission is available.
Classes in this package, along with java.lang.Class, accommodate applications such as debuggers, interpreters, object inspectors, class browsers, and services such as Object Serialization and JavaBeans that need access to either the public members of a target object (based on its runtime class) or the members declared by a given class. It includes the following functionality:
*Obtaining Class objects,
*Examining the properties of a class (fields, methods, constructors),
*Setting and getting field values,
*Invoking methods,
*Creating new instances of objects.

Have a look at this documentation link for the methods exposed by the Class class.
From this article (by Dennis Sosnoski, President, Sosnoski Software Solutions, Inc) and this article (security-explorations pdf), I can see considerably more drawbacks than uses of Reflection.
Uses of Reflection:

*It provides a very versatile way of dynamically linking program components
*It is useful for creating libraries that work with objects in very general ways

Drawbacks of Reflection:

*Reflection is much slower than direct code when used for field and method access.
*It can obscure what's actually going on inside your code
*Because it bypasses the source code, it can create maintenance problems
*Reflection code is also more complex than the corresponding direct code
*It allows violation of key Java security constraints such as data access protection and type safety

General abuses:

*Loading of restricted classes,
*Obtaining references to constructors, methods or fields of a restricted class,
*Creation of new object instances, method invocation, getting or setting field values of a restricted class.

Have a look at this SE question regarding abuse of the reflection feature: How do I read a private field in Java?
Summary: Insecure use of reflection from within system code can easily lead to the compromise of the Java security model. So use this feature sparingly.

A: As the name itself suggests, reflection exposes what a class holds (its methods, fields, and so on) and, beyond that, provides the ability to invoke methods and create instances dynamically at runtime. It is used by many frameworks and applications under the hood to invoke services without actually knowing the code.

A: Reflection gives you the ability to write more generic code. It allows you to create an object at runtime and call its methods at runtime. Hence the program can be made highly parameterized. It also allows introspecting the object and class to detect the variables and methods exposed to the outer world.

A: Reflection has many uses. The one I am most familiar with is the ability to create code on the fly, i.e. dynamic classes, functions, and constructors, based on any data (XML / arrays / SQL results / hardcoded values / etc.).

A: Reflection is a key mechanism to allow an application or framework to work with code that might not have even been written yet! Take for example your typical web.xml file. This will contain a list of servlet elements, which contain nested servlet-class elements. The servlet container will process the web.xml file and create a new instance of each servlet class through reflection. Another example would be the Java API for XML Parsing (JAXP), where an XML parser provider is 'plugged in' via well-known system properties, which are used to construct new instances through reflection. And finally, the most comprehensive example is Spring, which uses reflection to create its beans, and for its heavy use of proxies.

A: Not every language supports reflection, but the principles are usually the same in languages that support it. Reflection is the ability to "reflect" on the structure of your program. Or, more concretely, to look at the objects and classes you have and programmatically get back information on the methods, fields, and interfaces they implement. You can also look at things like annotations. It's useful in a lot of situations.
Everywhere you want to be able to dynamically plug classes into your code. Lots of object-relational mappers use reflection to be able to instantiate objects from databases without knowing in advance what objects they're going to use. Plug-in architectures are another place where reflection is useful. Being able to dynamically load code and determine whether there are types there that implement the right interface to use as a plugin is important in those situations.

A: Reflection allows the instantiation of new objects, invocation of methods, and get/set operations on class variables dynamically at run time, without having prior knowledge of the implementation.

Class<?> myObjectClass = MyObject.class;
Method[] methods = myObjectClass.getMethods();

// Here the method takes a String parameter; if it has no parameters, pass no argument types.
Method method = myObjectClass.getMethod("method_name", String.class);
Object returnValue = method.invoke(null, "parameter-value1");

In the above example, the null parameter is the object you want to invoke the method on. If the method is static, you supply null. If the method is not static, then while invoking you need to supply a valid MyObject instance instead of null. Reflection also allows you to access private members/methods of a class:

public class A {

    private String str = null;

    public A(String str) {
        this.str = str;
    }
}

...

A obj = new A("Some value");

Field privateStringField = A.class.getDeclaredField("str");

// Turn off access checks for this field
privateStringField.setAccessible(true);

String fieldValue = (String) privateStringField.get(obj);
System.out.println("fieldValue = " + fieldValue);

*For inspection of classes (also known as introspection) you don't need to import the reflection package (java.lang.reflect). Class metadata can be accessed through java.lang.Class.

Reflection is a very powerful API, but it may slow down the application if used in excess, as it resolves all the types at runtime.

A: I want to answer this question by example. First of all, the Hibernate project uses the Reflection API to generate CRUD statements to bridge the chasm between the running application and the persistence store. When things change in the domain, Hibernate has to know about them to persist them to the data store, and vice versa.
Alternatively, Project Lombok works differently. It just injects code at compile time, resulting in code being inserted into your domain classes. (I think that is OK for getters and setters.)
Hibernate chose reflection because it has minimal impact on the build process for an application.
And from Java 7 we have MethodHandles, which work like the Reflection API. In projects, to set up loggers we just copy-paste the following code:

Logger LOGGER = Logger.getLogger(MethodHandles.lookup().lookupClass().getName());

Because it is hard to make a typo this way.

A: As I find it best to explain by example and none of the answers seem to do that... A practical example of using reflection would be a Java Language Server written in Java or a PHP Language Server written in PHP, etc. A Language Server gives your IDE abilities like autocomplete, jump to definition, context help, type hinting and more. In order to have all tag names (words that can be autocompleted) show all the possible matches as you type, the Language Server has to inspect everything about the class, including doc blocks and private members. For that it needs a reflection of said class.
A different example would be a unit test of a private method.
One way to do so is to obtain the method via reflection and make it accessible in the test's set-up phase. Of course, one can argue private methods shouldn't be tested directly, but that's not the point.

A: Reflection is a language's ability to inspect and dynamically call classes, methods, attributes, etc. at runtime. For example, all objects in Java have the method getClass(), which lets you determine the object's class even if you don't know it at compile time (e.g. if you declared it as an Object) - this might seem trivial, but such reflection is not possible in less dynamic languages such as C++. More advanced uses let you list and call methods, constructors, etc. Reflection is important since it lets you write programs that do not have to "know" everything at compile time, making them more dynamic, since they can be tied together at runtime. The code can be written against known interfaces, but the actual classes to be used can be instantiated using reflection from configuration files. Lots of modern frameworks use reflection extensively for this very reason. Most other modern languages use reflection as well, and in scripting languages (such as Python) it is even more tightly integrated, since it feels more natural within the general programming model of those languages.

A: Java Reflection is quite powerful and can be very useful. Java Reflection makes it possible to inspect classes, interfaces, fields and methods at runtime, without knowing the names of the classes, methods, etc. at compile time. It is also possible to instantiate new objects, invoke methods and get/set field values using reflection. A quick Java Reflection example to show you what using reflection looks like:

Method[] methods = MyObject.class.getMethods();

for (Method method : methods) {
    System.out.println("method = " + method.getName());
}

This example obtains the Class object from the class called MyObject. Using the class object, the example gets a list of the methods in that class, iterates over the methods and prints out their names. Exactly how all this works is explained here.
Edit: After almost 1 year I am editing this answer, as while reading about reflection I found a few more uses of Reflection.

*Spring uses bean configuration such as:

<bean id="someID" class="com.example.Foo">
    <property name="someField" value="someValue" />
</bean>

When the Spring context processes this <bean> element, it will use Class.forName(String) with the argument "com.example.Foo" to instantiate that class. It will then again use reflection to get the appropriate setter for the <property> element and set its value to the specified value.

*JUnit uses reflection, especially for testing private/protected methods.

For private methods:

Method method = targetClass.getDeclaredMethod(methodName, argClasses);
method.setAccessible(true);
return method.invoke(targetObject, argObjects);

For private fields:

Field field = targetClass.getDeclaredField(fieldName);
field.setAccessible(true);
field.set(object, value);

A: Simple example of reflection: in a chess game, you do not know what the user will move at run time.
Reflection can be used to call methods which are already implemented, at run time:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class Test {

    public void firstMoveChoice() {
        System.out.println("First Move");
    }

    public void secondMoveChoice() {
        System.out.println("Second Move");
    }

    public void thirdMoveChoice() {
        System.out.println("Third Move");
    }

    public static void main(String[] args)
            throws IllegalAccessException, IllegalArgumentException,
                   InvocationTargetException, NoSuchMethodException {
        Test test = new Test();
        // Look the methods up by name: getMethods() returns them in no
        // particular order, so indexing into the array is unreliable.
        for (String choice : new String[] {
                "firstMoveChoice", "secondMoveChoice", "thirdMoveChoice" }) {
            Method method = test.getClass().getMethod(choice);
            method.invoke(test);
        }
    }
}

A: Example: Take for example a remote application which gives your application an object that you obtain using its API methods. Now, based on the object, you might need to perform some sort of computation. The provider guarantees that the object can be of 3 types, and we need to perform the computation based on which type of object it is. So we might implement 3 classes, each containing a different logic. Obviously, the object information is available only at runtime, so you cannot statically code the computation; hence, reflection is used to instantiate the object of the class that you require to perform the computation based on the object received from the provider.

A: The name reflection is used to describe code which is able to inspect other code in the same system (or itself). For example, say you have an object of an unknown type in Java, and you would like to call a 'doSomething' method on it if one exists. Java's static typing system isn't really designed to support this unless the object conforms to a known interface, but using reflection, your code can look at the object and find out if it has a method called 'doSomething' and then call it if you want to. So, to give you a code example of this in Java (imagine the object in question is foo):

Method method = foo.getClass().getMethod("doSomething", null);
method.invoke(foo, null);

One very common use case in Java is the usage with annotations. JUnit 4, for example, will use reflection to look through your classes for methods tagged with the @Test annotation, and will then call them when running the unit test. There are some good reflection examples to get you started at http://docs.oracle.com/javase/tutorial/reflect/index.html
And finally, yes, the concepts are pretty much the same in other statically typed languages which support reflection (like C#). In dynamically typed languages, the use case described above is less necessary (since the compiler will allow any method to be called on any object, failing at runtime if it does not exist), but the second case of looking for methods which are marked or work in a certain way is still common.
Update from a comment: The ability to inspect the code in the system and see object types is not reflection, but rather Type Introspection. Reflection is then the ability to make modifications at runtime by making use of introspection. The distinction is necessary here, as some languages support introspection but do not support reflection. One such example is C++.

A: As per my understanding, reflection allows the programmer to access entities in the program dynamically, i.e. while coding an application, if the programmer is unaware of a class or its methods, he can make use of such a class dynamically (at run time) by using reflection. It is frequently used in scenarios where a class name changes frequently.
If such a situation arises, then it is complicated for the programmer to rewrite the application and change the name of the class again and again. Instead, by using reflection, there is no need to worry about a possibly changing class name.

A: Reflection is an API which is used to examine or modify the behaviour of methods, classes and interfaces at runtime.

*The required classes for reflection are provided under the java.lang.reflect package.
*Reflection gives us information about the class to which an object belongs, and also the methods of that class which can be executed by using the object.
*Through reflection we can invoke methods at runtime irrespective of the access specifier used with them.

The java.lang and java.lang.reflect packages provide classes for Java reflection. Reflection can be used to get information about:

*Class: the getClass() method is used to get the name of the class to which an object belongs.
*Constructors: the getConstructors() method is used to get the public constructors of the class to which an object belongs.
*Methods: the getMethods() method is used to get the public methods of the class to which an object belongs.

The Reflection API is mainly used in IDEs (Integrated Development Environments) such as Eclipse, MyEclipse, NetBeans, and in debuggers and test tools, etc.
Advantages of using reflection: Extensibility features: an application may make use of external, user-defined classes by creating instances of extensibility objects using their fully-qualified names. Debugging and testing tools: debuggers use the property of reflection to examine private members of classes.
Drawbacks: Performance overhead: reflective operations have slower performance than their non-reflective counterparts, and should be avoided in sections of code which are called frequently in performance-sensitive applications. Exposure of internals: reflective code breaks abstractions and therefore may change behaviour with upgrades of the platform.
Ref: Java Reflection javarevisited.blogspot.in

A: Reflection is a set of functions which allows you to access the runtime information of your program and modify its behavior (with some limitations). It's useful because it allows you to change the runtime behavior depending on the meta-information of your program; that is, you can check the return type of a function and change the way you handle the situation. In C#, for example, you can load an assembly (a .dll) at runtime and examine it, navigating through the classes and taking actions according to what you found. It also lets you create an instance of a class at runtime, invoke its methods, etc. Where can it be useful? It is not useful every time, but for concrete situations. For example, you can use it to get the name of the class for logging purposes, to dynamically create handlers for events according to what's specified in a configuration file, and so on...

A: I just want to add some points to all that was listed. With the Reflection API you can write a universal toString() method for any object. It could be useful for debugging. Here is an example:

import java.lang.reflect.AccessibleObject;
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;

class ObjectAnalyzer {

    private ArrayList<Object> visited = new ArrayList<Object>();

    /**
     * Converts an object to a string representation that lists all fields.
     * @param obj an object
     * @return a string with the object's class name and all field names and
     * values
     */
    public String toString(Object obj) {
        if (obj == null) return "null";
        if (visited.contains(obj)) return "...";
        visited.add(obj);
        Class cl = obj.getClass();
        if (cl == String.class) return (String) obj;
        if (cl.isArray()) {
            String r = cl.getComponentType() + "[]{";
            for (int i = 0; i < Array.getLength(obj); i++) {
                if (i > 0) r += ",";
                Object val = Array.get(obj, i);
                if (cl.getComponentType().isPrimitive()) r += val;
                else r += toString(val);
            }
            return r + "}";
        }

        String r = cl.getName();
        // inspect the fields of this class and all superclasses
        do {
            r += "[";
            Field[] fields = cl.getDeclaredFields();
            AccessibleObject.setAccessible(fields, true);
            // get the names and values of all fields
            for (Field f : fields) {
                if (!Modifier.isStatic(f.getModifiers())) {
                    if (!r.endsWith("[")) r += ",";
                    r += f.getName() + "=";
                    try {
                        Class t = f.getType();
                        Object val = f.get(obj);
                        if (t.isPrimitive()) r += val;
                        else r += toString(val);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
            r += "]";
            cl = cl.getSuperclass();
        } while (cl != null);

        return r;
    }
}

A: One of my favorite uses of reflection is the below Java dump method. It takes any object as a parameter and uses the Java reflection API to print out every field name and value.

import java.lang.reflect.Array;
import java.lang.reflect.Field;

public static String dump(Object o, int callCount) {
    callCount++;
    StringBuffer tabs = new StringBuffer();
    for (int k = 0; k < callCount; k++) {
        tabs.append("\t");
    }
    StringBuffer buffer = new StringBuffer();
    Class oClass = o.getClass();
    if (oClass.isArray()) {
        buffer.append("\n");
        buffer.append(tabs.toString());
        buffer.append("[");
        for (int i = 0; i < Array.getLength(o); i++) {
            if (i > 0) buffer.append(","); // separate the array elements
            Object value = Array.get(o, i);
            if (value.getClass().isPrimitive()
                    || value.getClass() == java.lang.Long.class
                    || value.getClass() == java.lang.String.class
                    || value.getClass() == java.lang.Integer.class
                    || value.getClass() == java.lang.Boolean.class) {
                buffer.append(value);
            } else {
                buffer.append(dump(value, callCount));
            }
        }
        buffer.append(tabs.toString());
        buffer.append("]\n");
    } else {
        buffer.append("\n");
        buffer.append(tabs.toString());
        buffer.append("{\n");
        while (oClass != null) {
            Field[] fields = oClass.getDeclaredFields();
            for (int i = 0; i < fields.length; i++) {
                buffer.append(tabs.toString());
                fields[i].setAccessible(true);
                buffer.append(fields[i].getName());
                buffer.append("=");
                try {
                    Object value = fields[i].get(o);
                    if (value != null) {
                        if (value.getClass().isPrimitive()
                                || value.getClass() == java.lang.Long.class
                                || value.getClass() == java.lang.String.class
                                || value.getClass() == java.lang.Integer.class
                                || value.getClass() == java.lang.Boolean.class) {
                            buffer.append(value);
                        } else {
                            buffer.append(dump(value, callCount));
                        }
                    }
                } catch (IllegalAccessException e) {
                    buffer.append(e.getMessage());
                }
                buffer.append("\n");
            }
            oClass = oClass.getSuperclass();
        }
        buffer.append(tabs.toString());
        buffer.append("}\n");
    }
    return buffer.toString();
}

A: Reflection is about letting an object see its own appearance. This may seem to have nothing to do with reflection, but in fact it is the ability of "self-identification". Reflection itself is a word for languages that lack the capability of self-knowledge and self-sensing, such as Java and C#. Because they do not have the capability of self-knowledge, when we want to observe how an object looks, we must have another thing to reflect it.
Excellent dynamic languages such as Ruby and Python can perceive the reflection of their own without the help of other individuals. We can say that the object of Java cannot perceive how it looks like without a mirror, which is an object of the reflection class, but an object in Python can perceive it without a mirror. So that's why we need reflection in Java. A: I am using reflection to create an object based on class name(class name in String) and call the method of that class Object obj = Class.forName(config.getClassPath()) .getDeclaredConstructor() .newInstance(); Method method = obj.getClass().getMethod("getCustomer", SearchObject.class, ObjectConfig.class, HttpServletRequest.class); method.invoke(obj, searchObject, config, request); But one major problem is that if you Autowired something on that class that will re-initialized to null A: IMPORTANT Starting from Java 9 you can no longer use reflection, unless the package-info.java opens the module to reflection access. By default, "reflection" access is denied to all packages in the module. See Understanding Java 9 Modules
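For illustration, here is a minimal sketch of such a module descriptor (module-info.java); the module and package names are made up for the example:

module com.example.app {
    // Allow any module to reflect over this package at runtime,
    // including setAccessible(true) on its private members:
    opens com.example.app.internal;

    // Or open the package only to one specific consumer module:
    // opens com.example.app.internal to some.framework.module;
}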
{ "language": "en", "url": "https://stackoverflow.com/questions/37628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2505" }
Q: Examining Berkeley DB files from the CLI I have a set of Berkeley DB files on my Linux file system that I'd like to examine. What useful tools exist for getting a quick overview of the contents? I can write Perl scripts that use BDB modules for examining them, but I'm looking for some CLI utility to be able to take a look inside without having to start writing scripts. A: Use the db_dump program. It is contained in the package core/db (Arch), db-util (Debian, Ubuntu), sys-libs/db (Gentoo; note that here the binary is called db4.8_dump or whatever version you are using). On some systems the man pages are not installed; in that case the documentation can be found here. By default, db_dump outputs some hex numbers, which is not really useful if you are trying to analyse the content of a database. Use the -p argument to change this. Show everything that's in the file database.db: db_dump -p database.db List the databases in the file database.db: db_dump -l database.db Show only the content of the database mydb in the file database.db: db_dump -p -s mydb database.db A: The db_hotbackup utility creates "hot backup" or "hot failover" snapshots of Berkeley DB database environments. Install it with the following apt-get install db-util then run the following command to take a hot backup db_hotbackup [-cDEguVv] [-d data_dir ...] [-h home] [-l log_dir] [-P password] -b backup_dir A: Once you have installed the db utils you can simply do a db_dump on the db file. A: Note that the initial answer says to use the "db-utils" package, but the example shows the correct "db-util" package (with no "s"). A: Check out the db-utils package. If you use apt, you can install it with the following: apt-get install db-util (or apt-get install db4.8-util or whatever version you have or prefer.) Additional links: * *http://rpmfind.net/linux/rpm2html/search.php?query=db-utils *https://packages.ubuntu.com/search?suite=default&section=all&arch=any&keywords=db-util&searchon=names *Man page of db4.4_dump A: I found @strickli's answer to be the most helpful, as I didn't want to add any new packages to the machine with the database I was on. However, the db file I was reading was of type btree, not hash, so I had to use bsddb # file foo.db foo.db: Berkeley DB (Btree, version 9, native byte-order) # python >>> import bsddb >>> for k, v in bsddb.btopen("*<db filename here...>*").iteritems(): ... print k,v ... A: As mentioned in the other answers, the db-utils package (db4-utils under RHEL) has some tools. However, db_dump can be unhelpful, since the output is in 'bytevalue' format. For a quick'n'dirty viewer, use Python: me@machine$ python Python 2.7.3 (default, Sep 26 2013, 20:03:06) >>> import dbhash >>> for k, v in dbhash.open( *<db filename here...>* ).iteritems(): print k, v ... Note that dbhash is deprecated since Python 2.6. A: Under Amazon Linux you can install it with: yum install db43-utils A: Python 3 from bsddb3 import db import collections d = db.DB() d.open('./file.dat', 'dbname', db.DB_BTREE, db.DB_THREAD | db.DB_RDONLY) d.keys() collections.OrderedDict((k, d[k]) for k in d.keys())
{ "language": "en", "url": "https://stackoverflow.com/questions/37644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Swapping column values in MySQL I have a MySQL table with coordinates; the column names are X and Y. Now I want to swap the column values in this table, so that X becomes Y and Y becomes X. The most apparent solution would be renaming the columns, but I don't want to make structure changes since I don't necessarily have permissions to do that. Is this possible to do with UPDATE in some way? UPDATE table SET X=Y, Y=X obviously won't do what I want. Edit: Please note that my restriction on permissions, mentioned above, effectively prevents the use of ALTER TABLE or other commands that change the table/database structure. Renaming columns or adding new ones are unfortunately not options. A: Ok, so just for fun, you could do this! (assuming you're swapping string values) mysql> select * from swapper; +------+------+ | foo | bar | +------+------+ | 6 | 1 | | 5 | 2 | | 4 | 3 | +------+------+ 3 rows in set (0.00 sec) mysql> update swapper set -> foo = concat(foo, "###", bar), -> bar = replace(foo, concat("###", bar), ""), -> foo = replace(foo, concat(bar, "###"), ""); Query OK, 3 rows affected (0.00 sec) Rows matched: 3 Changed: 3 Warnings: 0 mysql> select * from swapper; +------+------+ | foo | bar | +------+------+ | 1 | 6 | | 2 | 5 | | 3 | 4 | +------+------+ 3 rows in set (0.00 sec) A nice bit of fun abusing the left-to-right evaluation process in MySQL. Alternatively, just use XOR if they're numbers. You mentioned coordinates, so do you have lovely integer values, or complex strings? Edit: The XOR stuff works like this by the way: update swapper set foo = foo ^ bar, bar = foo ^ bar, foo = foo ^ bar; A: You could take the sum and subtract the opposing value using X and Y UPDATE swaptest SET X=X+Y,Y=X-Y,X=X-Y; Here is a sample test (and it works with negative numbers) mysql> use test Database changed mysql> drop table if exists swaptest; Query OK, 0 rows affected (0.03 sec) mysql> create table swaptest (X int,Y int); Query OK, 0 rows affected (0.12 sec) mysql> INSERT INTO swaptest VALUES (1,2),(3,4),(-5,-8),(-13,27); Query OK, 4 rows affected (0.08 sec) Records: 4 Duplicates: 0 Warnings: 0 mysql> SELECT * FROM swaptest; +------+------+ | X | Y | +------+------+ | 1 | 2 | | 3 | 4 | | -5 | -8 | | -13 | 27 | +------+------+ 4 rows in set (0.00 sec) mysql> Here is the swap being performed mysql> UPDATE swaptest SET X=X+Y,Y=X-Y,X=X-Y; Query OK, 4 rows affected (0.07 sec) Rows matched: 4 Changed: 4 Warnings: 0 mysql> SELECT * FROM swaptest; +------+------+ | X | Y | +------+------+ | 2 | 1 | | 4 | 3 | | -8 | -5 | | 27 | -13 | +------+------+ 4 rows in set (0.00 sec) mysql> Give it a try! A: I believe having an intermediate exchange variable is the best practice, in this way: update z set c1 = @c := c1, c1 = c2, c2 = @c First, it always works; second, it works regardless of data type. Although both update z set c1 = c1 ^ c2, c2 = c1 ^ c2, c1 = c1 ^ c2 and update z set c1 = c1 + c2, c2 = c1 - c2, c1 = c1 - c2 usually work, they do so only for numeric data types, and it is your responsibility to prevent overflow: you cannot use XOR between signed and unsigned values, and you cannot use the sum where it might overflow. Also, update z set c1 = c2, c2 = @c where @c := c1 does not work if c1 is 0, NULL, a zero-length string, or just spaces. We need to change it to update z set c1 = c2, c2 = @c where if((@c := c1), true, true) Here are the scripts: mysql> create table z (c1 int, c2 int) -> ; Query OK, 0 rows affected (0.02 sec) mysql> insert into z values(0, 1), (-1, 1), (pow(2, 31) - 1, pow(2, 31) - 2) -> ; Query OK, 3 rows affected (0.00 sec) Records: 3 Duplicates: 0 Warnings: 0 mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 0 | 1 | | -1 | 1 | | 2147483647 | 2147483646 | +------------+------------+ 3 rows in set (0.02 sec) mysql> update z set c1 = c1 ^ c2, c2 = c1 ^ c2, c1 = c1 ^ c2; ERROR 1264 (22003): Out of range value for column 'c1' at row 2 mysql> update z set c1 = c1 + c2, c2 = c1 - c2, c1 = c1 - c2; ERROR 1264 (22003): Out of range value for column 'c1' at row 3 mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 0 | 1 | | 1 | -1 | | 2147483646 | 2147483647 | +------------+------------+ 3 rows in set (0.02 sec) mysql> update z set c1 = c2, c2 = @c where @c := c1; Query OK, 2 rows affected (0.00 sec) Rows matched: 2 Changed: 2 Warnings: 0 mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 0 | 1 | | -1 | 1 | | 2147483647 | 2147483646 | +------------+------------+ 3 rows in set (0.00 sec) mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 1 | 0 | | 1 | -1 | | 2146483646 | 2147483647 | +------------+------------+ 3 rows in set (0.00 sec) mysql> update z set c1 = @c := c1, c1 = c2, c2 = @c; Query OK, 3 rows affected (0.02 sec) Rows matched: 3 Changed: 3 Warnings: 0 mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 0 | 1 | | -1 | 1 | | 2147483647 | 2147483646 | +------------+------------+ 3 rows in set (0.00 sec) mysql> update z set c1 = c2, c2 = @c where if((@c := c1), true, true); Query OK, 3 rows affected (0.02 sec) Rows matched: 3 Changed: 3 Warnings: 0 mysql> select * from z; +------------+------------+ | c1 | c2 | +------------+------------+ | 1 | 0 | | 1 | -1 | | 2146483646 | 2147483647 | +------------+------------+ 3 rows in set (0.00 sec) A: ALTER TABLE table ADD COLUMN tmp; UPDATE table SET tmp = X; UPDATE table SET X = Y; UPDATE table SET Y = tmp; ALTER TABLE table DROP COLUMN tmp; Something like this? Edit: About Greg's comment: No, this doesn't work: mysql> select * from test; +------+------+ | x | y | +------+------+ | 1 | 2 | | 3 | 4 | +------+------+ 2 rows in set (0.00 sec) mysql> update test set x=y, y=x; Query OK, 2 rows affected (0.00 sec) Rows matched: 2 Changed: 2 Warnings: 0 mysql> select * from test; +------+------+ | x | y | +------+------+ | 2 | 2 | | 4 | 4 | +------+------+ 2 rows in set (0.00 sec) A: The following code works for all scenarios in my quick testing: UPDATE swap_test SET x=(@temp:=x), x = y, y = @temp A: Two alternatives 1. Use a temporary table 2. Investigate the XOR algorithm A: As other answers point out, a simple swap won't work with MySQL because it caches the value of column 1 immediately before processing column 2, resulting in both columns being set to the value of column 2. Given that the order of operations is not guaranteed in MySQL, using a temporary variable is also not reliable. The only safe way to swap two columns without modifying the table structure is with an inner join, which requires a primary key (id in this case). UPDATE mytable t1, mytable t2 SET t1.column1 = t1.column2, t1.column2 = t2.column1 WHERE t1.id = t2.id; This will work without any issues. A: I just had to deal with the same problem and I'll summarize my findings. * *The UPDATE table SET X=Y, Y=X approach obviously doesn't work, as it'll just set both values to Y. *Here's a method that uses a temporary variable. Thanks to Antony from the comments of http://beerpla.net/2009/02/17/swapping-column-values-in-mysql/ for the "IS NOT NULL" tweak. Without it, the query works unpredictably. See the table schema at the end of the post. This method doesn't swap the values if one of them is NULL. Use method #3, which doesn't have this limitation. UPDATE swap_test SET x=y, y=@temp WHERE (@temp:=x) IS NOT NULL; *This method was offered by Dipin in, yet again, the comments of http://beerpla.net/2009/02/17/swapping-column-values-in-mysql/. I think it's the most elegant and clean solution. It works with both NULL and non-NULL values. UPDATE swap_test SET x=(@temp:=x), x = y, y = @temp; *Another approach I came up with that seems to work: UPDATE swap_test s1, swap_test s2 SET s1.x=s1.y, s1.y=s2.x WHERE s1.id=s2.id; Essentially, the 1st table is the one getting updated and the 2nd one is used to pull the old data from. Note that this approach requires a primary key to be present. This is my test schema: CREATE TABLE `swap_test` ( `id` int(11) NOT NULL AUTO_INCREMENT, `x` varchar(255) DEFAULT NULL, `y` varchar(255) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB; INSERT INTO `swap_test` VALUES ('1', 'a', '10'); INSERT INTO `swap_test` VALUES ('2', NULL, '20'); INSERT INTO `swap_test` VALUES ('3', 'c', NULL); A: I've not tried it, but UPDATE tbl SET @temp=X, X=Y, Y=@temp might do it. Mark A: This surely works! I've just needed it to swap Euro and SKK price columns. :) UPDATE tbl SET X=Y, Y=@temp where @temp:=X; The above will not work (ERROR 1064 (42000): You have an error in your SQL syntax) A: In SQL Server, you can use this query: update swaptable set col1 = t2.col2, col2 = t2.col1 from swaptable t2 where id = t2.id A: UPDATE table SET X=Y, Y=X will do precisely what you want (edit: in PostgreSQL, not MySQL, see below). The values are taken from the old row and assigned to a new copy of the same row, then the old row is replaced. You do not have to resort to using a temporary table, a temporary column, or other swap tricks. @D4V360: I see. That is shocking and unexpected. I use PostgreSQL and my answer works correctly there (I tried it). See the PostgreSQL UPDATE docs (under Parameters, expression), where it mentions that expressions on the right hand side of SET clauses explicitly use the old values of columns. I see that the corresponding MySQL UPDATE docs contain the statement "Single-table UPDATE assignments are generally evaluated from left to right", which implies the behaviour you describe. Good to know. A: Assuming you have signed integers in your columns, you may need to use CAST(a ^ b AS SIGNED), since the result of the ^ operator is an unsigned 64-bit integer in MySQL. In case it helps anyone, here's the method I used to swap the same column between two given rows: SELECT BIT_XOR(foo) FROM table WHERE key = $1 OR key = $2 UPDATE table SET foo = CAST(foo ^ $3 AS SIGNED) WHERE key = $1 OR key = $2 where $1 and $2 are the keys of two rows and $3 is the result of the first query. A: You could change the column names, but this is more of a hack. Be cautious of any indexes that may be on these columns. A: The table name is customer; the fields are a and b. To swap a's value into b: UPDATE customer SET a=(@temp:=a), a = b, b = @temp I checked that this works fine. A: You can apply the query below; it worked perfectly for me. Table name: studentnames; the only column available: names. update studentnames set names = case names when "Tanu" then "dipan" when "dipan" then "Tanu" end; or update studentnames set names = case names when "Tanu" then "dipan" else "Tanu" end; A: Swapping column values using a single query UPDATE my_table SET a=@tmp:=a, a=b, b=@tmp; cheers...! A: CREATE TABLE Names ( F_NAME VARCHAR(22), L_NAME VARCHAR(22) ); INSERT INTO Names VALUES('Ashutosh', 'Singh'),('Anshuman','Singh'),('Manu', 'Singh'); UPDATE Names N1 , Names N2 SET N1.F_NAME = N2.L_NAME , N1.L_NAME = N2.F_NAME WHERE N1.F_NAME = N2.F_NAME; SELECT * FROM Names; A: I just had to move a value from one column to the other (like archiving) and reset the value of the original column. The below (reference to #3 from the accepted answer above) worked for me. Update MyTable set X= (@temp:= X), X = 0, Y = @temp WHERE ID= 999; A: This example swaps start_date and end_date for records where the dates are the wrong way round (when performing ETL into a major rewrite, I found some start dates later than their end dates. Down, bad programmers!). In situ, I'm using MEDIUMINTs for performance reasons (like Julian days, but having a 0 root of 1900-01-01), so I was OK doing a condition of WHERE mdu.start_date > mdu.end_date. The PKs were on all 3 columns individually (for operational / indexing reasons). UPDATE monitor_date mdu INNER JOIN monitor_date mdc ON mdu.register_id = mdc.register_id AND mdu.start_date = mdc.start_date AND mdu.end_date = mdc.end_date SET mdu.start_date = mdu.end_date, mdu.end_date = mdc.start_date WHERE mdu.start_date > mdu.end_date; A: Let's say you want to swap the values of first and last name in tb_user. The safest way would be: * *Copy tb_user. So you will have 2 tables: tb_user and tb_user_copy *Use an UPDATE INNER JOIN query UPDATE tb_user a INNER JOIN tb_user_copy b ON a.id = b.id SET a.first_name = b.last_name, a.last_name = b.first_name A: If you want to swap all the values in a column from x to y and y to x, use this query: UPDATE table_name SET column_name = CASE column_name WHEN 'value of col is x' THEN 'swap it to y' ELSE 'swap it to x' END; A: Let's imagine this table and let's try to swap the m and f in the 'sex' table: id name sex salary 1 A m 2500 2 B f 1500 3 C m 5500 4 D f 500 UPDATE sex SET sex = CASE sex WHEN 'm' THEN 'f' ELSE 'm' END; So the updated table becomes: id name sex salary 1 A f 2500 2 B m 1500 3 C f 5500 4 D m 500
{ "language": "en", "url": "https://stackoverflow.com/questions/37649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "155" }
Q: How to implement a file download in asp.net What is the best way to implement, from a web page, a download action using ASP.NET 2.0? Log files for an action are created in a directory called [Application Root]/Logs. I have the full path and want to provide a button that, when clicked, will download the log file from the IIS server to the user's local PC. A: Does this help: http://www.west-wind.com/weblog/posts/76293.aspx Response.ContentType = "application/octet-stream"; Response.AppendHeader("Content-Disposition","attachment; filename=logfile.txt"); Response.TransmitFile( Server.MapPath("~/logfile.txt") ); Response.End(); Response.TransmitFile is the accepted way of sending large files, instead of Response.WriteFile. A: http://forums.asp.net/p/1481083/3457332.aspx string filename = @"Specify the file path in the server over here...."; FileInfo fileInfo = new FileInfo(filename); if (fileInfo.Exists) { Response.Clear(); Response.AddHeader("Content-Disposition", "attachment; filename=" + fileInfo.Name); Response.AddHeader("Content-Length", fileInfo.Length.ToString()); Response.ContentType = "application/octet-stream"; Response.Flush(); Response.TransmitFile(fileInfo.FullName); Response.End(); } Update: The initial code Response.AddHeader("Content-Disposition", "inline;attachment; filename=" + fileInfo.Name); has "inline;attachment", i.e. two values for Content-Disposition. I don't know exactly when it started, but in Firefox the proper file name was not appearing: the file download box appeared with the name of the webpage and its extension (pagename.aspx). After download, if you renamed it back to the actual name, the file opened successfully. As per this page, it operates on a First Come, First Served basis. Changing the value to just "attachment" solved the issue. PS: I am not sure if this is the best practice, but the issue is resolved.
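To wire this up to a button as the question asks, here is a minimal sketch (the control name and log path are made up for illustration): an ASP.NET Button whose server-side click handler streams the file with the same calls shown above.

<%-- markup --%>
<asp:Button ID="btnDownload" runat="server" Text="Download log" OnClick="btnDownload_Click" />

// code-behind
protected void btnDownload_Click(object sender, EventArgs e)
{
    // Hypothetical log location; adjust to your [Application Root]/Logs layout
    string path = Server.MapPath("~/Logs/action.log");
    Response.Clear();
    Response.ContentType = "application/octet-stream";
    Response.AppendHeader("Content-Disposition", "attachment; filename=action.log");
    Response.TransmitFile(path);
    Response.End();
}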
{ "language": "en", "url": "https://stackoverflow.com/questions/37650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Is there an n-ary tree implementation in Perl? I'm writing a Perl script and would like to use an n-ary tree data structure. Is there a good implementation that is available as source code (rather than part of a Perl library)? A: Adding to what Matthew already said, it looks like the following modules would be suitable: Tree::Nary Tree::Simple Tree A: I don't really understand why you want it as "source" rather than as a Perl library, but you can download the source for any CPAN module. I haven't used it, but Tree looks to fill your requirements. A: Depending on what you need a tree structure for, you might not need any pre-built implementation. Perl already supports them using arrays of arrayrefs. For example, a simple representation of this tree t / \ a d / \ / \ b c e f could be represented by the following Perl code: $tree = [ t => [ a => [ b => [], c => [] ], d => [ e => [], f => [] ] ] ]; Here, the tree's representation is as nested pairs: first the element (in this case, the letter), then an anonymous array reference representing the children of that element. Note that => is just a fancy comma in Perl that exempts you from having to put quotes around the token to the left of the comma, provided it is a single word. The above code could also have been written thus: $tree = [ 't', [ 'a' , [ 'b' , [], 'c' , [] ], 'd' , [ 'e' , [], 'f' , [] ] ] ]; Here's a simple depth-first accumulator of all the elements in the tree: sub elements { my $tree = shift; my @elements; my @queue = @$tree; while (@queue) { my $element = shift @queue; my $children = shift @queue; push @elements, $element; unshift @queue, @$children; } return @elements; } @elements = elements($tree); # qw(t a b c d e f) (For breadth first, change the line unshift @queue, @$children to push @queue, @$children, as in the sketch below.)
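For comparison, here is that breadth-first variant spelled out as a complete sub; only the queueing line differs from the depth-first version above, and the expected output assumes the $tree from the answer:

sub elements_bf {
    my $tree = shift;
    my @elements;
    my @queue = @$tree;
    while (@queue) {
        my $element  = shift @queue;
        my $children = shift @queue;
        push @elements, $element;
        push @queue, @$children;   # append children instead of prepending
    }
    return @elements;
}

@elements = elements_bf($tree);  # qw(t a d b c e f)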
{ "language": "en", "url": "https://stackoverflow.com/questions/37662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Auto defines in C editors... Why? When Eclipse creates a new file (.c or .h file) in a C project the editor always auto creates a #define at the top of the file like this: If the file is named 'myCFile.c' there will be a #define at the start of the file like this #ifndef MYCFILE_C_ #define MYCFILE_C_ I have seen other editors do this as well (Codewright and SlikEdit I think). The #defines don't seem to do anything for the editor as I can just delete them without any problem, and I can't think of a reason why I would want to use them. Does anyone know why they are there? A: It's to guard against multiple inclusion, and hence multiple definitions. A: Sometimes people include a whole .c file in other .c files (or even .h files), so it has the exact same purpose of preventing an include file from getting included multiple times and the compiler spitting out multiple definition errors. It is strange, though, that it would be the default behavior of an editor to put this in anything but a .h file. This would be a rarely needed feature. A: A more modern version of this is to use: #pragma once It is quite unusual to see this in a .c file; normally it is in the header files only. A: I think it's a throwback to C include issues, where multiple copies of the source would get included - unless you are meticulous with include chains (one file includes n others). Checking whether a symbol is defined, and including the contents only if it is not yet defined, was a way out of this.
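For reference, a minimal sketch of the complete guard pattern these editors generate; the question's snippet omits the closing #endif, which normally sits at the very end of the file:

/* myCFile.h */
#ifndef MYCFILE_H_
#define MYCFILE_H_

/* Declarations go here; the preprocessor skips them on any
   repeated inclusion because MYCFILE_H_ is already defined. */
void my_function(int x);

#endif /* MYCFILE_H_ */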
{ "language": "en", "url": "https://stackoverflow.com/questions/37665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What tool to use for automatic nightly builds? I have a few Visual Studio Solutions/Projects that are being worked on in my company, which now require a scheme for automatic nightly builds. Such a scheme needs to be able to check the latest versions from SVN, build the solutions, create the appropriate downloadable files (including installers, documentation, etc.), send e-mails to the developers upon errors and all sorts of other nifty things. What tool, or tool-set, should I use for this? I used to use FinalBuilder a few years ago and I liked that a lot but I'm not sure if they support such features as nightly builds and email messages. A: FinalBuilder does support emailing, and just executing FinalBuilder each night will give you nightly builds. You don't really need other software for that if you don't want to. You could also use CCNet to trigger a build when needed and have it execute FinalBuilder for the build. You can then decide if FinalBuilder or CCNet should email. Finally, FinalBuilder has a Server version which is sorta like CCNet in that it is a continuous integration tool using FinalBuilder. See http://www.finalbuilder.com/finalbuilder-server.aspx Of course the biggest advantage of CCNet is that it is free and open source. A: Although it costs, I highly recommend Visual Build. It works with MSBuild, and old tools like Visual Basic. It is scriptable, and can do everything from making installers to simple Continuous Integration. A: We just started using Hudson here at the office. It's free and open-source, and it has a very user-friendly UI. Plus there are tons of options and plugins available. I was up and running in a matter of minutes after installing it. All the other devs here are loving it. All in all, it's a very elegant solution for Continuous Integration or Nightly Builds. A: At my work we use CCNET, but with builds on check-in more than nightly - although it's easily configured for either or both. You can very easily set up unit testing to run on every checkin as well, FxCop testing, and a slew of other products. I would also advise checking out Team City as an option, because it has a free version, and the reporting and setup is reportedly much simpler (it does look nice to me). It does have a limit of somewhere around 20 team members/projects, before it hits a pay-for window. That said, we started with CCNET, and have grown several products too large to look at Team City on the free version, and are very happy with what we have. Features that help with CCNET include: * *XML based configuration - you can usually copy and paste most of what you need. *More or less you'll be able to plug your treesurgeon script in as your build script, and point CCNET at that as an executable task to run the compilation. *Lots of documentation and very easy to set up NUnit, NCover, FxCop, etc. *Taskbar app that will let you know the status of your projects at any time, and it can also fire off an email or keep an RSS feed with the same information. But I'd definitely go with running a CI build on every check-in - for the most part we run the unit tests before checking in, but let the CCNET server handle running any applications/assemblies that have dependencies on the assembly we're checking in, and they get re-built and re-tested on every checkin. Given that CCNET is free and takes very little time to set up, I'd highly recommend just going for it and seeing if it suits you, then expanding from there.
(There's another thread here where I posted pretty much the same/with a few alterations - but some of the other comments may help too! Automated Builds) Edit to add: You can easily set up your own deployment scheme for CCNET, and there are a tonne of blog posts out there to assist, and email notifications can really be set up fairly granularly, either on all successes, all failures, when it changes from success to fail, etc. There's also built in RSS, and you could even set up your own notifiers for other systems. A: I've recently started using CruiseControl.NET (http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET). It works reasonably well, although configuration could be easier. CruiseControl.NET is free and open source, and seems to integrate with most standard tools, although I've personally only used it with CVS, SVN, NUnit and MSBuild. A: Luntbuild Supports a wide variety of source control and build systems. Very customizable. Open Source. Setup takes some time, but it's not too horrible. A: Buildbot is open source and very powerful too. You should take a look at it. A: Cascade supports doing a build on every single change committed to the repository. I would not recommend doing only nightly builds -- that's a pretty long window where a build break can slip in before it's reported.
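Whichever tool you end up with, the core of a nightly build is small enough to sketch. Here is a minimal, hypothetical Windows batch script (paths, solution name, and framework version are made up, and the email step is left to the CI tool) that a scheduler or any of the servers above could run each night:

rem nightly.cmd - rough sketch; adjust paths for your environment
svn update C:\builds\MyProduct\src || exit /b 1
"%WINDIR%\Microsoft.NET\Framework\v3.5\MSBuild.exe" C:\builds\MyProduct\src\MyProduct.sln /p:Configuration=Release
if errorlevel 1 (
    echo Build failed - have the CI tool mail the log to the developers
    exit /b 1
)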
{ "language": "en", "url": "https://stackoverflow.com/questions/37666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Create DB in SQL Server based on Visio Data Model I have created a database model in Visio Professional (2003). I know that the Enterprise version has the ability to create a DB in SQL Server based on the data in Visio. I do not have the option to install Enterprise. Aside from going through the entire thing one table and relationship at a time and creating the whole database from scratch, by hand, can anyone recommend any tool/utility/method for converting the Visio database model into a SQL script that can be used to create a new DB in SQL Server? A: I have not done this, but here goes. * *Convert the Visio file to Visio XML format. *Use Dia for Windows and the Dia VDX plug-in to convert the Visio XML into Dia. *Use tedia2sql to generate SQL. A: For Visio 2010 there is a nice plugin, Visio Forward Engineer: http://forwardengineer.codeplex.com/ A: If you can somehow obtain the type library from the Enterprise version, you can use VBA to get the definitions out. Secondhand Enterprise Architect versions of VS 2002 and VS 2003 can be bought from eBay for a few hundred dollars.
{ "language": "en", "url": "https://stackoverflow.com/questions/37672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to replace plain URLs with links? I am using the function below to match URLs inside a given text and replace them with HTML links. The regular expression is working great, but currently I am only replacing the first match. How can I replace all the URLs? I guess I should be using the exec command, but I did not really figure out how to do it. function replaceURLWithHTMLLinks(text) { var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/i; return text.replace(exp,"<a href='$1'>$1</a>"); } A: /** * Convert URLs in a string to anchor buttons * @param {!string} string * @returns {!string} */ function URLify(string){ var urls = string.match(/(((ftp|https?):\/\/)[\-\w@:%_\+.~#?,&\/\/=]+)/g); if (urls) { urls.forEach(function (url) { string = string.replace(url, '<a target="_blank" href="' + url + '">' + url + "</a>"); }); } return string.replace("(", "<br/>("); } simple example A: Made some optimizations to Travis' Linkify() code above. I also fixed a bug where email addresses with subdomain-type formats would not be matched (i.e. [email protected]). In addition, I changed the implementation to prototype the String class so that items can be matched like so: var text = '[email protected]'; text.linkify(); 'http://stackoverflow.com/'.linkify(); Anyway, here's the script: if(!String.linkify) { String.prototype.linkify = function() { // http://, https://, ftp:// var urlPattern = /\b(?:https?|ftp):\/\/[a-z0-9-+&@#\/%?=~_|!:,.;]*[a-z0-9-+&@#\/%=~_|]/gim; // www. sans http:// or https:// var pseudoUrlPattern = /(^|[^\/])(www\.[\S]+(\b|$))/gim; // Email addresses var emailAddressPattern = /[\w.]+@[a-zA-Z_-]+?(?:\.[a-zA-Z]{2,6})+/gim; return this .replace(urlPattern, '<a href="$&">$&</a>') .replace(pseudoUrlPattern, '$1<a href="http://$2">$2</a>') .replace(emailAddressPattern, '<a href="mailto:$&">$&</a>'); }; } A: The best script to do this: http://benalman.com/projects/javascript-linkify-process-lin/ A: This solution works like many of the others, and in fact uses the same regex as one of them; however, instead of returning an HTML string, this will return a document fragment containing the A element and any applicable text nodes. function make_link(string) { var words = string.split(' '), ret = document.createDocumentFragment(); for (var i = 0, l = words.length; i < l; i++) { if (words[i].match(/[-a-zA-Z0-9@:%_\+.~#?&//=]{2,256}\.[a-z]{2,4}\b(\/[-a-zA-Z0-9@:%_\+.~#?&//=]*)?/gi)) { var elm = document.createElement('a'); elm.href = words[i]; elm.textContent = words[i]; if (ret.childNodes.length > 0) { ret.lastChild.textContent += ' '; } ret.appendChild(elm); } else { if (ret.lastChild && ret.lastChild.nodeType === 3) { ret.lastChild.textContent += ' ' + words[i]; } else { ret.appendChild(document.createTextNode(' ' + words[i])); } } } return ret; } There are some caveats, namely with older IE and textContent support. Here is a demo. A: If you need to show a shorter link (only the domain), but with the same long URL, you can try my modification of Sam Hasler's code version posted above function replaceURLWithHTMLLinks(text) { var exp = /(\b(https?|ftp|file):\/\/([-A-Z0-9+&@#%?=~_|!:,.;]*)([-A-Z0-9+&@#%?\/=~_|!:,.;]*)[-A-Z0-9+&@#\/%=~_|])/ig; return text.replace(exp, "<a href='$1' target='_blank'>$3</a>"); } A: Reg Ex: /(\b((https?|ftp|file):\/\/|(www))[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]*)/ig function UriphiMe(text) { var exp = /(\b((https?|ftp|file):\/\/|(www))[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|]*)/ig; return text.replace(exp,"<a href='$1'>$1</a>"); } Below are some tested strings: * *Find me on to www.google.com *www *Find me on to www.http://www.com *Follow me on : http://www.nishantwork.wordpress.com *http://www.nishantwork.wordpress.com *Follow me on : http://www.nishantwork.wordpress.com *https://stackoverflow.com/users/430803/nishant Note: If you don't want to treat a bare www as valid, just use the regex below: /(\b((https?|ftp|file):\/\/|(www))[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig A: The warnings about URI complexity should be noted, but the simple answer to your question is: To replace every match you need to add the /g flag to the end of the RegEx: /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/gi A: Try the function below: function anchorify(text){ var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig; var text1=text.replace(exp, "<a href='$1'>$1</a>"); var exp2 =/(^|[^\/])(www\.[\S]+(\b|$))/gim; return text1.replace(exp2, '$1<a target="_blank" href="http://$2">$2</a>'); } alert(anchorify("Hola amigo! https://www.sharda.ac.in/academics/")); A: First off, rolling your own regexp to parse URLs is a terrible idea. You must imagine this is a common enough problem that someone has written, debugged and tested a library for it, according to the RFCs. URIs are complex - check out the code for URL parsing in Node.js and the Wikipedia page on URI schemes. There are a ton of edge cases when it comes to parsing URLs: international domain names, actual (.museum) vs. nonexistent (.etc) TLDs, weird punctuation including parentheses, punctuation at the end of the URL, IPv6 hostnames etc. I've looked at a ton of libraries, and there are a few worth using despite some downsides: * *Soapbox's linkify has seen some serious effort put into it, and a major refactor in June 2015 removed the jQuery dependency. It still has issues with IDNs. *AnchorMe is a newcomer that claims to be faster and leaner. Some IDN issues as well. *Autolinker.js lists features very specifically (e.g. "Will properly handle HTML input. The utility will not change the href attribute inside anchor () tags"). I'll throw some tests at it when a demo becomes available. Libraries that I've disqualified quickly for this task: * *Django's urlize didn't handle certain TLDs properly (here is the official list of valid TLDs). No demo. *autolink-js wouldn't detect "www.google.com" without http://, so it's not quite suitable for autolinking "casual URLs" (without a scheme/protocol) found in plain text. *Ben Alman's linkify hasn't been maintained since 2009. If you insist on a regular expression, the most comprehensive is the URL regexp from Component, though by the looks of it, it will falsely detect some non-existent two-letter TLDs. A: Keep it simple! Say what you cannot have, rather than what you can have :) As mentioned above, URLs can be quite complex, especially after the '?', and not all of them start with a 'www.', e.g. maps.bing.com/something?key=!"£$%^*()&lat=65&lon&lon=20 So, rather than have a complex regex that won't meet all edge cases, and will be hard to maintain, how about this much simpler one, which works well for me in practice. Match http(s):// (anything but a space)+ www. (anything but a space)+ Where 'anything' is [^'"<>\s] ... basically a greedy match, carrying on until you meet a space, quote, angle bracket, or end of line Also: Remember to check that it is not already in URL format, e.g. the text contains href="..." or src="..." Add rel=nofollow (if appropriate) This solution isn't as "good" as the libraries mentioned above, but is much simpler, and works well in practice. if (html.match( /(href)|(src)/i )) { return html; // text already has a hyperlink in it } html = html.replace( /\b(https?:\/\/[^\s\(\)\'\"\<\>]+)/ig, "<a rel='nofollow' href='$1'>$1</a>" ); html = html.replace( /\s(www\.[^\s\(\)\'\"\<\>]+)/ig, "<a rel='nofollow' href='http://$1'>$1</a>" ); html = html.replace( /^(www\.[^\s\(\)\'\"\<\>]+)/ig, "<a rel='nofollow' href='http://$1'>$1</a>" ); return html; A: Correct URL detection with international domains & astral characters support is not a trivial thing. The linkify-it library builds its regex from many conditions, and the final size is about 6 kilobytes :). It's more accurate than all the libs currently referenced in the accepted answer. See the linkify-it demo to check live all the edge cases and test your own. If you need to linkify HTML source, you should parse it first, and iterate each text token separately. A: Replacing URLs with links (Answer to the General Problem) The regular expression in the question misses a lot of edge cases. When detecting URLs, it's always better to use a specialized library that handles international domain names, new TLDs like .museum, parentheses and other punctuation within and at the end of the URL, and many other edge cases. See Jeff Atwood's blog post The Problem With URLs for an explanation of some of the other issues. The best summary of URL matching libraries is in Dan Dascalescu's Answer (as of Feb 2014) "Make a regular expression replace more than one match" (Answer to the specific problem) Add a "g" to the end of the regular expression to enable global matching: /ig; But that only fixes the problem in the question where the regular expression was only replacing the first match. Do not use that code. A: Thanks, this was very helpful. I also wanted something that would link things that looked like a URL -- as a basic requirement, it'd link something like www.yahoo.com, even if the http:// protocol prefix was not present. So basically, if "www." is present, it'll link it and assume it's http://. I also wanted emails to turn into mailto: links. EXAMPLE: www.yahoo.com would be converted to www.yahoo.com Here's the code I ended up with (a combination of code from this page, other stuff I found online, and other stuff I did on my own): function Linkify(inputText) { //URLs starting with http://, https://, or ftp:// var replacePattern1 = /(\b(https?|ftp):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/gim; var replacedText = inputText.replace(replacePattern1, '<a href="$1" target="_blank">$1</a>'); //URLs starting with www. (without // before it, or it'd re-link the ones done above) var replacePattern2 = /(^|[^\/])(www\.[\S]+(\b|$))/gim; var replacedText = replacedText.replace(replacePattern2, '$1<a href="http://$2" target="_blank">$2</a>'); //Change email addresses to mailto: links var replacePattern3 = /(\w+@[a-zA-Z_]+?\.[a-zA-Z]{2,6})/gim; var replacedText = replacedText.replace(replacePattern3, '<a href="mailto:$1">$1</a>'); return replacedText; } In the 2nd replace, the (^|[^/]) part is only replacing www.whatever.com if it's not already prefixed by // -- to avoid double-linking if a URL was already linked in the first replace. Also, it's possible that www.whatever.com might be at the beginning of the string, which is the first "or" condition in that part of the regex. This could be integrated as a jQuery plugin as Jesse P illustrated above -- but I specifically wanted a regular function that wasn't acting on an existing DOM element, because I'm taking text I have and then adding it to the DOM, and I want the text to be "linkified" before I add it, so I pass the text through this function. Works great. A: I've written yet another JavaScript library; it might be better for you since it's very sensitive with the fewest possible false positives, fast, and small in size. I'm currently actively maintaining it, so please do test it in the demo page and see how it would work for you. link: https://github.com/alexcorvi/anchorme.js A: Identifying URLs is tricky because they are often surrounded by punctuation marks and because users frequently do not use the full form of the URL. Many JavaScript functions exist for replacing URLs with hyperlinks, but I was unable to find one that works as well as the urlize filter in the Python-based web framework Django. I therefore ported Django's urlize function to JavaScript: https://github.com/ljosa/urlize.js An example: urlize('Go to SO (stackoverflow.com) and ask. <grin>', {nofollow: true, autoescape: true}) => "Go to SO (<a href="http://stackoverflow.com" rel="nofollow">stackoverflow.com</a>) and ask. &lt;grin&gt;" The second argument, if true, causes rel="nofollow" to be inserted. The third argument, if true, escapes characters that have special meaning in HTML. See the README file. A: I've made some small modifications to Travis's code (just to avoid any unnecessary redeclaration - but it's working great for my needs, so nice job!): function linkify(inputText) { var replacedText, replacePattern1, replacePattern2, replacePattern3; //URLs starting with http://, https://, or ftp:// replacePattern1 = /(\b(https?|ftp):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/gim; replacedText = inputText.replace(replacePattern1, '<a href="$1" target="_blank">$1</a>'); //URLs starting with "www." (without // before it, or it'd re-link the ones done above). replacePattern2 = /(^|[^\/])(www\.[\S]+(\b|$))/gim; replacedText = replacedText.replace(replacePattern2, '$1<a href="http://$2" target="_blank">$2</a>'); //Change email addresses to mailto: links. replacePattern3 = /(([a-zA-Z0-9\-\_\.])+@[a-zA-Z\_]+?(\.[a-zA-Z]{2,6})+)/gim; replacedText = replacedText.replace(replacePattern3, '<a href="mailto:$1">$1</a>'); return replacedText; } A: I searched on Google for anything newer and ran across this one: $('p').each(function(){ $(this).html( $(this).html().replace(/((http|https|ftp):\/\/[\w?=&.\/-;#~%-]+(?![\w\s?&.\/;#~%"=-]*>))/g, '<a href="$1">$1</a> ') ); }); demo: http://jsfiddle.net/kachibito/hEgvc/1/ Works really well for normal links. A: I made a change to Roshambo's String.linkify() emailAddressPattern to recognize [email protected] addresses if(!String.linkify) { String.prototype.linkify = function() { // http://, https://, ftp:// var urlPattern = /\b(?:https?|ftp):\/\/[a-z0-9-+&@#\/%?=~_|!:,.;]*[a-z0-9-+&@#\/%=~_|]/gim; // www. sans http:// or https:// var pseudoUrlPattern = /(^|[^\/])(www\.[\S]+(\b|$))/gim; // Email addresses *** here I've changed the expression *** var emailAddressPattern = /(([a-zA-Z0-9_\-\.]+)@[a-zA-Z_]+?(?:\.[a-zA-Z]{2,6}))+/gim; return this .replace(urlPattern, '<a target="_blank" href="$&">$&</a>') .replace(pseudoUrlPattern, '$1<a target="_blank" href="http://$2">$2</a>') .replace(emailAddressPattern, '<a target="_blank" href="mailto:$1">$1</a>'); }; } A: I had to do the opposite, and make HTML links into just the URL, but I modified your regex and it works like a charm, thanks :) var exp = /<a\s.*href=['"](\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])['"].*>.*<\/a>/ig; source = source.replace(exp,"$1"); A: The e-mail detection in Travitron's answer above did not work for me, so I extended/replaced it with the following (C# code). // Change e-mail addresses to mailto: links. const RegexOptions o = RegexOptions.Multiline | RegexOptions.IgnoreCase; const string pat3 = @"([a-zA-Z0-9_\-\.]+)@([a-zA-Z0-9_\-\.]+)\.([a-zA-Z]{2,6})"; const string rep3 = @"<a href=""mailto:$1@$2.$3"">$1@$2.$3</a>"; text = Regex.Replace(text, pat3, rep3, o); This allows for e-mail addresses like "[email protected]". A: After input from several sources I now have a solution that works well. It had to do with writing your own replacement code. Answer. Fiddle. function replaceURLWithHTMLLinks(text) { var re = /(\(.*?)?\b((?:https?|ftp|file):\/\/[-a-z0-9+&@#\/%?=~_()|!:,.;]*[-a-z0-9+&@#\/%=~_()|])/ig; return text.replace(re, function(match, lParens, url) { var rParens = ''; lParens = lParens || ''; // Try to strip the same number of right parens from url // as there are left parens. Here, lParenCounter must be // a RegExp object. You cannot use a literal // while (/\(/g.exec(lParens)) { ... } // because an object is needed to store the lastIndex state. var lParenCounter = /\(/g; while (lParenCounter.exec(lParens)) { var m; // We want m[1] to be greedy, unless a period precedes the // right parenthesis. These tests cannot be simplified as // /(.*)(\.?\).*)/.exec(url) // because if (.*) is greedy then \.? never gets a chance. if (m = /(.*)(\.\).*)/.exec(url) || /(.*)(\).*)/.exec(url)) { url = m[1]; rParens = m[2] + rParens; } } return lParens + "<a href='" + url + "'>" + url + "</a>" + rParens; }); } A: Here's my solution: var content = "Visit https://www.google.com or watch this video: https://www.youtube.com/watch?v=0T4DQYgsazo and news at http://www.bbc.com"; content = replaceUrlsWithLinks(content, "http://"); content = replaceUrlsWithLinks(content, "https://"); function replaceUrlsWithLinks(content, protocol) { var startPos = 0; var s = 0; while (s < content.length) { startPos = content.indexOf(protocol, s); if (startPos < 0) return content; let endPos = content.indexOf(" ", startPos + 1); if (endPos < 0) endPos = content.length; let url = content.substr(startPos, endPos - startPos); if (url.endsWith(".") || url.endsWith("?") || url.endsWith(",")) { url = url.substr(0, url.length - 1); endPos--; } if (validUrl(url)) { let link = "<a href='" + url + "'>" + url + "</a>"; content = content.substr(0, startPos) + link + content.substr(endPos); s = startPos + link.length; } else { s = endPos + 1; } } return content; } function validUrl(url) { try { new URL(url); return true; } catch (e) { return false; } } A: Try the solution below function replaceLinkClickableLink(url = '') { let pattern = new RegExp('^(https?:\\/\\/)?'+ '((([a-z\\d]([a-z\\d-]*[a-z\\d])*)\\.?)+[a-z]{2,}|'+ '((\\d{1,3}\\.){3}\\d{1,3}))'+ '(\\:\\d+)?(\\/[-a-z\\d%_.~+]*)*'+ '(\\?[;&a-z\\d%_.~+=-]*)?'+ '(\\#[-a-z\\d_]*)?$','i'); let isUrl = pattern.test(url); if (isUrl) { return `<a href="${url}" target="_blank">${url}</a>`; } return url; } A: Replace URLs in text with HTML links, ignoring the URLs within a href/pre tag. https://github.com/JimLiu/auto-link A: This worked for me: var urlRegex =/(\b((https?|ftp|file):\/\/)?((([a-z\d]([a-z\d-]*[a-z\d])*)\.)+[a-z]{2,}|((\d{1,3}\.){3}\d{1,3}))(\:\d+)?(\/[-a-z\d%_.~+]*)*(\?[;&a-z\d%_.~+=-]*)?(\#[-a-z\d_]*)?)/ig; return text.replace(urlRegex, function(url) { var newUrl = url.indexOf("http") === -1 ? "http://" + url : url; return '<a href="' + newUrl + '">' + url + '</a>'; });
{ "language": "en", "url": "https://stackoverflow.com/questions/37684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "488" }
Q: Eclipse Plugin Dev: How do I get the paths for the currently selected project? I'm writing a plugin that will parse a bunch of files in a project. But for the moment I'm stuck searching through the Eclipse API for answers. The plugin works like this: Whenever I open a source file I let the plugin parse the source's corresponding build file (this could be further developed with caching the parse result). Getting the file is simple enough: public void showSelection(IWorkbenchPart sourcePart) { // Gets the currently selected file from the editor IFile file = (IFile) sourcePart.getSite().getPage().getActiveEditor() .getEditorInput().getAdapter(IFile.class); if (file != null) { String path = file.getProjectRelativePath().toString(); /** Snipped out: Rip out the source path part * and replace with build path * Then parse it. */ } } The problem I have is I have to use hard-coded strings for the paths where the source files and build files go. Anyone know how to retrieve the build path from Eclipse? (I'm working in CDT by the way.) Also, is there a simple way to determine what the source path is (e.g. one file is under the "src" directory) of a source file? A: You should take a look at ICProject, especially the getOutputEntries and getAllSourceRoots operations. This tutorial has some brief examples too. I work with JDT so that's pretty much what I can do. Hope it helps :)
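Building on that answer, here is a rough, untested sketch of how those two ICProject operations might be used to list the source and output paths; the surrounding types (CoreModel, IOutputEntry, ISourceRoot) and exact signatures should be checked against the CDT version in use:

import org.eclipse.cdt.core.model.CoreModel;
import org.eclipse.cdt.core.model.ICProject;
import org.eclipse.cdt.core.model.IOutputEntry;
import org.eclipse.cdt.core.model.ISourceRoot;
import org.eclipse.core.resources.IProject;

void printProjectPaths(IProject project) throws Exception {
    ICProject cproject = CoreModel.getDefault().create(project);
    // Where the build output goes:
    for (IOutputEntry out : cproject.getOutputEntries()) {
        System.out.println("output: " + out.getPath());
    }
    // Where the source files live (e.g. the "src" directory):
    for (ISourceRoot root : cproject.getAllSourceRoots()) {
        System.out.println("source: " + root.getPath());
    }
}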
{ "language": "en", "url": "https://stackoverflow.com/questions/37692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Concatenate several fields into one with SQL I have three tables tag, page, pagetag With the data below page ID NAME 1 page 1 2 page 2 3 page 3 4 page 4 tag ID NAME 1 tag 1 2 tag 2 3 tag 3 4 tag 4 pagetag ID PAGEID TAGID 1 2 1 2 2 3 3 3 4 4 1 1 5 1 2 6 1 3 I would like to get a string containing the corresponding tag names for each page with SQL in a single query. This is my desired output. ID NAME TAGS 1 page 1 tag 1, tag 2, tag 3 2 page 2 tag 1, tag 3 3 page 3 tag 4 4 page 4 Is this possible with SQL? I am using MySQL. Nonetheless, I would like a database vendor independent solution if possible. A: Yep, you can do it across the 3 tables with something like the below: SELECT page_tag.id, page.name, group_concat(tag.name) FROM tag, page, page_tag WHERE page_tag.page_id = page.page_id AND page_tag.tag_id = tag.id; This has not been tested, and could probably be written a tad more efficiently, but should get you started! Also, MySQL is assumed, so it may not play so nice with MSSQL! And MySQL isn't wild about hyphens in field names, so they are changed to underscores in the above examples. A: Sergio del Amo: However, I am not getting the pages without tags. I guess I need to write my query with left outer joins. SELECT pagetag.id, page.name, group_concat(tag.name) FROM ( page LEFT JOIN pagetag ON page.id = pagetag.pageid ) LEFT JOIN tag ON pagetag.tagid = tag.id GROUP BY page.id; Not a very pretty query, but it should give you what you want - pagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results. A: As far as I'm aware, SQL92 doesn't define how string concatenation should be done. This means that most engines have their own method. If you want a database independent method, you'll have to do it outside of the database. (untested in all but Oracle) Oracle SELECT field1 || ', ' || field2 FROM table; MS SQL SELECT field1 + ', ' + field2 FROM table; MySQL SELECT concat(field1,', ',field2) FROM table; PostgreSQL SELECT field1 || ', ' || field2 FROM table; A: I got a solution playing with joins. The query is: SELECT page.id AS id, page.name AS name, tagstable.tags AS tags FROM page LEFT OUTER JOIN ( SELECT pagetag.pageid, GROUP_CONCAT(distinct tag.name) AS tags FROM tag INNER JOIN pagetag ON tagid = tag.id GROUP BY pagetag.pageid ) AS tagstable ON tagstable.pageid = page.id GROUP BY page.id And this will be the output: id name tags --------------------------- 1 page 1 tag2,tag3,tag1 2 page 2 tag1,tag3 3 page 3 tag4 4 page 4 NULL Is it possible to boost the query speed writing it another way? A: I think you may need to use multiple updates. Something like (not tested): select ID as 'PageId', Name as 'PageName', null as 'Tags' into #temp from [PageTable] declare @lastOp int set @lastOp = 1 while @lastOp > 0 begin update p set p.tags = isnull(tags + ', ', '' ) + t.[Tagid] from #temp p inner join [TagTable] t on p.[PageId] = t.[PageId] where p.tags not like '%' + t.[Tagid] + '%' set @lastOp = @@rowcount end select * from #temp Ugly though. That example's T-SQL, but I think MySql has equivalents to everything used. A: pagetag.id and group_concat(tag.name) will be null for page 4 in the example you've posted above, but the page shall appear in the results. You can use the COALESCE function to remove the Nulls if you need to: select COALESCE(pagetag.id, '') AS id ... It will return the first non-null value from its list of parameters.
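Putting the thread's pieces together, here is a sketch that combines the LEFT JOIN query above with COALESCE, so pages without tags (page 4 here) get an empty string instead of NULL; the column names follow the question's schema:

SELECT page.id,
       page.name,
       COALESCE(GROUP_CONCAT(tag.name), '') AS tags
FROM page
LEFT JOIN pagetag ON pagetag.pageid = page.id
LEFT JOIN tag ON tag.id = pagetag.tagid
GROUP BY page.id, page.name;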
{ "language": "en", "url": "https://stackoverflow.com/questions/37696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: True random number generator Sorry for this not being a "real" question, but some time back I remember seeing a post here about randomizing a randomizer randomly to generate truly random numbers, not just pseudo-random. I don't see it if I search for it. Does anybody know about that article? A: According to Wikipedia, /dev/random, in Unix-like operating systems, is a special file that serves as a true random number generator. The /dev/random driver gathers environmental noise from various non-deterministic sources including, but not limited to, inter-keyboard timings and inter-interrupt timings that occur within the operating system environment. The noise data is sampled and combined with a CRC-like mixing function into a continuously updating "entropy pool". Random bit strings are obtained by taking an MD5 hash of the contents of this pool. The one-way hash function distills the true random bits from pool data and hides the state of the pool from adversaries. The /dev/random routine maintains an estimate of true randomness in the pool and decreases it every time random strings are requested for use. When the estimate goes down to zero, the routine locks and waits for the occurrence of non-deterministic events to refresh the pool. The /dev/random kernel module also provides another interface, /dev/urandom, that does not wait for the entropy pool to re-charge and returns as many bytes as requested. As a result /dev/urandom is considerably faster at generation compared to /dev/random, which is used only when very high quality randomness is desired. A: John von Neumann once said something to the effect of "anyone attempting to generate random numbers via algorithmic means is, of course, living in sin." Not even /dev/random is random, in a mathematician's or a physicist's sense of the word. Not even radioisotope decay measurement is random. (The decay rate is. The measurement isn't. Geiger counters have a small reset time after each detected event, during which time they are unable to detect new events. This leads to subtle biases. There are ways to substantially mitigate this, but not completely eliminate it.) Stop looking for true randomness. A good pseudorandom number generator is really what you're looking for. A: If you believe in a deterministic universe, true randomness doesn't exist. :-) For example, someone has suggested that radioactive decay is truly random, but IMHO, just because scientists haven't yet worked out the pattern, doesn't mean that there isn't a pattern there to be worked out. Usually, when you want "random" numbers, what you need are numbers for encryption that no one else will be able to guess. The closest you can get to random is to measure something natural that no enemy would also be able to measure. Usually you throw away the most significant bits from your measurement, leaving numbers which are more likely to be evenly spread. Hard-core random number users get special hardware that measures radioactive events, but you can get some randomness from the human using the computer from things like keypress intervals and mouse movements, and if the computer doesn't have direct users, from CPU temperature sensors, and from network traffic. You could also use things like web cams and microphones connected to sound cards, but I don't know if anyone does.
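To make the /dev/random interface described above concrete, here is a minimal sketch in C that reads a few bytes from the non-blocking /dev/urandom device on a Unix-like system and prints them in hex:

#include <stdio.h>

int main(void) {
    unsigned char buf[8];
    /* Use "/dev/random" instead if you want to block until the
       kernel's entropy estimate allows fresh output. */
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(buf, 1, sizeof buf, f) != sizeof buf)
        return 1;
    fclose(f);
    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}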
A: To summarize some of what has been said, our working definition of a secure source of randomness is similar to our definition of cryptographically secure: it appears random if smart folks have looked at it and weren't able to show that it isn't completely unpredictable.

There is no system for generating random numbers which couldn't conceivably be predicted, just as there is no cryptographic cipher that couldn't conceivably be cracked. The trusted solutions used for important work are merely those which have proven difficult to defeat so far. If anyone tells you otherwise, they're selling you something.

Cleverness is rarely rewarded in cryptography. Go with tried and true solutions.

A: A computer usually has many readily available physical sources of random noise:

* Microphone (hopefully in a noisy place)
* Compressed video from a webcam (pointed at something variable, like a lava lamp or a street)
* Keyboard & mouse timing
* Network packet content and timing (the whole world contributes)

And sometimes:

* Clock drift based hardware
* Geiger counters and other detectors of rare events
* All sorts of sensors attached to A/D converters

What's difficult is estimating the entropy of these sources, which is in most cases low despite the high data rates and very variable; but entropy can be estimated with conservative assumptions, or at least not wasted, to feed systems like Yarrow or Fortuna.

A: I have to disagree with a lot of the answers to this question. It is possible to collect random data on a computer. SSL, SSH and VPNs would not be secure if you couldn't.

The way software random number generators work is that there is a pool of random data that is gathered from many different places, such as clock drift, interrupt timings, etc.

The trick to these schemes is in correctly estimating the entropy (the posh name for the randomness). It doesn't matter whether the source is biased, as long as you estimate the entropy correctly.

To illustrate this, the chance of me hitting the letter e in this comment is much higher than that of z, so if I were to use key interrupts as a source of entropy it would be biased - but there is still some randomness to be had in that input. You can't predict exactly which sequence of letters will come next in this paragraph. You can extract entropy from this uncertainty and use it as part of a random byte.

Good quality real-random generators like Yarrow have quite sophisticated entropy estimation built into them and will only emit as many bytes as they can reliably say they have in their "randomness pool."

A: It's not possible to obtain 'true' random numbers; a computer is a logical construct that can't possibly create 'truly' random anything, only pseudo-random. There are better and worse pseudo-random algorithms out there, however.

In order to obtain a 'truly' random number you need a physical random source. Some gambling machines actually have these built in - often it's a radioactive source, and the radioactive decay (which as far as I know is truly random) is used to generate the numbers.

A: I believe that was on thedailywtf.com - i.e. not something that you want to do.

It is not possible to get a truly random number from pseudorandom numbers, no matter how many times you call randomize(). You can get "true" random numbers from special hardware. You could also collect entropy from mouse movements and things like that.

A: At the end of the post, I will answer your question of why you might want to use multiple random number generators for "more randomness".
There are philosophical debates about what randomness means. Here, I will mean "indistinguishable in every respect from a uniform(0,1) iid distribution over the samples drawn"; I am totally ignoring philosophical questions of what random is.

Knuth volume 2 has an analysis where he attempts to create a random number generator as you suggest, and then analyzes why it fails, and what true random processes are. Volume 2 examines RNGs in detail.

The others recommend you use random physical processes to generate random numbers. However, as we can see in the Espo/vt interaction, these processes can have subtle periodic elements and other non-random elements, in part due to outside factors with deterministic behavior. In general, it is best never to assume randomness, but always to test for it, and you usually can correct for such artifacts if you are aware of them.

It is possible to create an "infinite" stream of bits that appears completely random, deterministically. Unfortunately, such approaches grow in memory with the number of bits asked for (as they would have to, to avoid repeating cycles), so their scope is limited.

In practice, you are almost always better off using a pseudo-random number generator with known properties. The key numbers to look for are the phase-space dimension (roughly, the offset between samples that you can still count on being uniformly distributed), the bit-width (the number of bits in each sample which are uniformly random with respect to each other), and the cycle size (the number of samples you can take before the distribution starts repeating).

However, since random numbers from a given generator are deterministically in a known sequence, your procedure might be exposed by someone searching through the generator and finding an aligning sequence. Therefore, you can likely avoid your distribution being immediately recognized as coming from a particular random number generator if you maintain two generators. From the first, you sample i, and then map this uniformly over 1 to n, where n is at most the phase dimension. Then, in the second you sample i times, and return the ith result. This will reduce your cycle size to (original cycle size / n) in the worst case, but for that cycle it will still generate uniform random numbers, and do so in a way that makes the search for alignment exponential in n. It will also reduce the independent phase length. Don't use this method unless you understand what reduced cycle and independent phase lengths mean to your application.

A: An algorithm for truly random numbers cannot exist, as the definition of random numbers is:

Having unpredictable outcomes and, in the ideal case, all outcomes equally probable; resulting from such selection; lacking statistical correlation.

There are better or worse pseudorandom number generators (PRNGs), i.e. completely predictable sequences of numbers that are difficult to predict without knowing a piece of information, called the seed. Now, PRNGs for which it is extremely hard to infer the seed are cryptographically secure. You might want to look them up in Google if that is what you seek.

Another way (whether this is truly random or not is a philosophical question) is to use random sources of data. For example, unpredictable physical quantities, such as noise, or measuring radioactive decay. These are still subject to attacks because they can be independently measured, have biases, and so on. So it's really tricky. This is done with custom hardware, which is usually quite expensive.
I have no idea how good /dev/random is, but I would bet it is not good enough for cryptography (most cryptography programs come with their own RNG and Linux also looks for a hardware RNG at start-up).

A: One of the best methods to generate a random number is through clock drift. This primarily works with two oscillators.

An analogy of how this works: imagine a race car on a simple oval circuit with a white line at the start of the lap and also a white line on one of the tyres. When the car completes a lap, a number will be generated based on the difference between the position of the white line on the road and on the tyre.

Very easy to generate and impossible to predict.
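For completeness, a minimal sketch of consuming the kernel entropy pool discussed above, rather than attempting "true" randomness in user code (Python; os.urandom is the portable wrapper around /dev/urandom and its platform equivalents):

import os
import random

# 16 bytes straight from the OS entropy-backed generator
token = os.urandom(16)
print(token.hex())

# random.SystemRandom draws from the same source, whereas plain
# random.random() is a seeded Mersenne Twister and is NOT suitable
# for keys or tokens.
sr = random.SystemRandom()
print(sr.randint(0, 2**32 - 1))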
Q: To use views or not to use views

I seem right now to be embroiled in a debate with another programmer on this project who thinks that views have no merits. He proposes a system where the PHP looks something like this:

$draw = new Draw;
$nav = $draw->wideHeaderBox().
       $draw->left().
       $draw->image().
       Image::get($image,60,array('id'=>'header_image')).
       $draw->imageEnd().
       $draw->leftEnd().
       $draw->left(10).
       '<div id="header_text">'.
       self::defaultSectionText().
       '</div>'.
       $draw->leftEnd().

and so on (this is in the controller, btw).

Now, his arguments for this actually make some sense: he claims that if there is a redesign, all we need to do is change the HTML in one place and it changes everywhere automatically. For some reason, however, this method still rubs me the wrong way. Is there any merit to views over this method? I mean, besides not having to retype HTML by hand.

A: HTML time-savers are useful, but they're only useful when they're intuitive and easy to understand. Having to instantiate a new Draw just doesn't sound very natural. Furthermore, wideHeaderBox and left will only have significance to someone who intimately knows the system.

And what if there is a redesign, like your co-worker muses? What if the wideHeaderBox becomes very narrow? Will you change the markup (and styles, presumably) generated by the PHP method but leave a very inaccurate method name to call the code?

If you guys just have to use HTML generation, you should use it interspersed in view files, and you should use it where it's really necessary/useful, such as something like this:

HTML::link("Wikipedia", "http://en.wikipedia.org");
HTML::bulleted_list(array(
    HTML::list_item("Dogs"),
    HTML::list_item("Cats"),
    HTML::list_item("Armadillos")
));

In the above example, the method names actually make sense to people who aren't familiar with your system. They'll also make more sense to you guys when you go back into a seldom-visited file and wonder what the heck you were doing.

A: The argument he uses is the argument you need to have views. Both result in only changing it in one place. However, in his version, you are mixing view markup with business code.

I would suggest using more of a templated design. Do all your business logic in the PHP, setting up all the variables that are needed by your page. Then just have your page markup reference those variables (and deal with no business logic whatsoever).

Have you looked at Smarty? http://smarty.php.net

A: I've done something like that in the past, and it was a waste of time. For instance, you basically have to write wrappers for everything you can already do with HTML, and you WILL forget some things.

When you need to change something in the layout you will think "Shoot, I forgot about that... now I gotta code another method or add another parameter".

Ultimately, you will have a huge collection of functions/classes that generate HTML which nobody will know or remember how to use months from now. New developers will curse you for using this system, since they will have to learn it before changing anything. In contrast, more people probably know HTML than your abstract HTML drawing classes... and sometimes you just gotta get your hands dirty with pure HTML!

A: It looks pretty verbose and hard to follow, to be honest, and some of the code looks like it is very much layout information. We always try to split the logic from the output as much as possible.
However, it is often the case that the view and data are very tightly linked, with each part dictating how the other should be (eg, in a simple e-commerce site, you may decide you want to start showing stock levels next to each product, which would obviously involve changing the view to add appropriate html for this, and the business logic to go and figure out a value for the stock).

If the thought of maintaining 2 files to do this is too much to handle, try splitting things into a "gather data" part and a "display view" part, getting you most of the benefits without increasing the number of files you need to manage.

A: I always find it much easier to work directly with HTML. There's one less abstraction layer to deal with when you just work in HTML (html -> actual webpage, rather than php function -> html -> actual webpage).

I really think the 'just have to change it in one place' thing won't work in this case. This is because there'll be so many times when you want to change the output of a function, but only in just one place. Sure, you can use arguments, but you'll soon end up with some functions having like a dozen arguments. Yuck.

Bear in mind templating languages / systems often let you include sub-templates, allowing you to have some reusable blocks of html.

The bottom line is, if I had just started at your company and saw code like that everywhere, my first thought would be, 'Damn it! Need a new job again.'
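A hedged sketch of the "gather data, then display" split described above, reusing the names from the question (plain PHP includes, no templating engine; the file layout is illustrative):

<?php
// controller: gather data only, no markup
$headerImage = Image::get($image, 60, array('id' => 'header_image'));
$headerText  = defaultSectionText();
include 'views/header.php';
?>

<!-- views/header.php: markup only, no business logic -->
<div class="wide_header_box">
    <div class="left"><?php echo $headerImage; ?></div>
    <div id="header_text"><?php echo $headerText; ?></div>
</div>

A redesign now only touches views/header.php, which answers the co-worker's "change it in one place" argument without burying HTML inside controller code.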
Q: What is the regex pattern for datetime (2008-09-01 12:35:45 )? What is the RegEx pattern for DateTime (2008-09-01 12:35:45 ) ? I get this error: No ending delimiter '^' found Using: preg_match('(?n:^(?=\d)((?<day>31(?!(.0?[2469]|11))|30(?!.0?2)|29(?(.0?2)(?=.{3,4}(1[6-9]|[2-9]\d)(0[48]|[2468][048]|[13579][26])|(16|[2468][048]|[3579][26])00))|0?[1-9]|1\d|2[0-8])(?<sep>[/.-])(?<month>0?[1-9]|1[012])\2(?<year>(1[6-9]|[2-9]\d)\d{2})(?:(?=\x20\d)\x20|$))?(?<time>((0?[1-9]|1[012])(:[0-5]\d){0,2}(?i:\ [AP]M))|([01]\d|2[0-3])(:[0-5]\d){1,2})?$)', '2008-09-01 12:35:45'); Gives this error: Warning: preg_match() [function.preg-match]: Compilation failed: nothing to repeat at offset 0 in E:\www\index.php on line 19 A: @Espo: I just have to say that regex is incredible. I'd hate to have to write the code that did something useful with the matches, such as if you wanted to actually find out what date and time the user typed. It seems like Tom's solution would be more tenable, as it is about a zillion times simpler and with the addition of some parentheses you can easily get at the values the user typed: (\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) If you're using perl, then you can get the values out with something like this: $year = $1; $month = $2; $day = $3; $hour = $4; $minute = $5; $second = $6; Other languages will have a similar capability. Note that you will need to make some minor mods to the regex if you want to accept values such as single-digit months. A: http://regexlib.com/REDetails.aspx?regexp_id=610 ^(?=\d)(?:(?:31(?!.(?:0?[2469]|11))|(?:30|29)(?!.0?2)|29(?=.0?2.(?:(?:(?:1[6-9]|[2-9]\d)?(?:0[48]|[2468][048]|[13579][26])|(?:(?:16|[2468][048]|[3579][26])00)))(?:\x20|$))|(?:2[0-8]|1\d|0?[1-9]))([-./])(?:1[012]|0?[1-9])\1(?:1[6-9]|[2-9]\d)?\d\d(?:(?=\x20\d)\x20|$))?(((0?[1-9]|1[012])(:[0-5]\d){0,2}(\x20[AP]M))|([01]\d|2[0-3])(:[0-5]\d){1,2})?$ This RE validates both dates and/or times patterns. Days in Feb. are also validated for Leap years. Dates: in dd/mm/yyyy or d/m/yy format between 1/1/1600 - 31/12/9999. Leading zeroes are optional. Date separators can be either matching dashes(-), slashes(/) or periods(.) Times: in the hh:MM:ss AM/PM 12 hour format (12:00 AM - 11:59:59 PM) or hh:MM:ss military time format (00:00:00 - 23:59:59). The 12 hour time format: 1) may have a leading zero for the hour. 2) Minutes and seconds are optional for the 12 hour format 3) AM or PM is required and case sensitive. Military time 1) must have a leading zero for all hours less than 10. 2) Minutes are manditory. 3) seconds are optional. Datetimes: combination of the above formats. A date first then a time separated by a space. ex) dd/mm/yyyy hh:MM:ss Edit: Make sure you copy the RegEx from the regexlib.com website as StackOverflow sometimes removes/destroys special chars. A: $date = "2014-04-01 12:00:00"; preg_match('/(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/',$date, $matches); print_r($matches); $matches will be: Array ( [0] => 2014-04-01 12:00:00 [1] => 2014 [2] => 04 [3] => 01 [4] => 12 [5] => 00 [6] => 00 ) An easy way to break up a datetime formated string. 
A: ^([2][0]\d{2}\/([0]\d|[1][0-2])\/([0-2]\d|[3][0-1]))$|^([2][0]\d{2}\/([0]\d|[1][0-2])\/([0-2]\d|[3][0-1])\s([0-1]\d|[2][0-3])\:[0-5]\d\:[0-5]\d)$

A: A simple version that will work for the format mentioned, but not all the others as per @Espo:

(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})

A: Regarding Imran's answer from Sep 1st 2008 at 12:33: there is a missing : in the pattern. The correct patterns are:

preg_match('/\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/', '2008-09-01 12:35:45', $m1);
print_r( $m1 );

preg_match('/\d{4}-\d{2}-\d{2} \d{1,2}:\d{2}:\d{2}/', '2008-09-01 12:35:45', $m2);
print_r( $m2 );

preg_match('/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$/', '2008-09-01 12:35:45', $m3);
print_r( $m3 );

This returns:

Array ( [0] => 2008-09-01 12:35:45 )
Array ( [0] => 2008-09-01 12:35:45 )
Array ( [0] => 2008-09-01 12:35:45 )

A: Adding to @Greg Hewgill's answer: if you want to be able to match both date-time and only date, you can make the "time" part of the regex optional:

(\d{4})-(\d{2})-(\d{2})( (\d{2}):(\d{2}):(\d{2}))?

This way you will match both 2008-09-01 12:35:42 and 2008-09-01.

A: A simple regex datetime with validation:

^(\d{4})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) ([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])$

But it can't validate the number of days in each month, so it still accepts impossible dates such as 2022-11-31 and 2022-02-30.

A: Here is my solution:

/^(2[0-9]{3})-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01]) (0[0-9]|1[0-9]|2[0123])\:([012345][0-9])\:([012345][0-9])$/u

A: I have modified the regex pattern from http://regexlib.com/REDetails.aspx?regexp_id=610. The following pattern should match your case (YYYY-MM-DD HH:MM:SS):

^(?=\d)(?:(?:1[6-9]|[2-9]\d)?\d\d([-.\/])(?:1[012]|0?[1-9])\1(?:31(?<!.(?:0[2469]|11))|(?:30|29)(?<!.02)|29(?=.0?2.(?:(?:(?:1[6-9]|[2-9]\d)?(?:0[48]|[2468][048]|[13579][26])|(?:(?:16|[2468][048]|[3579][26])00)))(?:\x20|$))|(?:2[0-8]|1\d|0?[1-9]))(?:(?=\x20\d)\x20|$))?(((0?[1-9]|1[012])(:[0-5]\d){0,2}(\x20[AP]M))|([01]\d|2[0-3])(:[0-5]\d){1,2})?$

A: This is my solution:

[1-9][0-9][0-9][0-9]-(0[1-9]|1[0-2])-(0[1-9]|1[0-9]|2[0-9]|3[0-1])

A: Here is a simplified version (originated from Espo's answer). It checks the correctness of the date (even leap years), and hh:mm:ss is optional. Examples that work: 31/12/2003 11:59:59 and 29-2-2004.

^(?=\d)(?:(?:31(?!.(?:0?[2469]|11))|(?:30|29)(?!.0?2)|29(?=.0?2.(?:(?:(?:1[6-9]|[2-9]\d)?(?:0[48]|[2468][048]|[13579][26])|(?:(?:16|[2468][048]|[3579][26])00)))(?:\x20|$))|(?:2[0-8]|1\d|0?[1-9]))([-./])(?:1[012]|0?[1-9])\1(?:1[6-9]|[2-9]\d)?\d\d(?:(?=\x20\d)\x20|$))(|([01]\d|2[0-3])(:[0-5]\d){1,2})?$

A: PHP preg functions need your regex to be wrapped with a delimiter character, which can be any character. You can't use this delimiter character without escaping inside the regex. This should work (here the delimiter character is /):

preg_match('/\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}/', '2008-09-01 12:35:45');

// or this, to allow matching 0:00:00 time too.
preg_match('/\d{4}-\d{2}-\d{2} \d{1,2}:\d{2}:\d{2}/', '2008-09-01 12:35:45');

If you need to match lines that contain only a datetime, add ^ and $ at the beginning and end of the regex:

preg_match('/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}$/', '2008-09-01 12:35:45');

Link to PHP Manual's preg_match()

A: Here is my solution:

[12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]) ([01][0-9]|2[0-3]):[0-5]\d

Debuggex Demo
https://regex101.com/r/lbthaT/4
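Since a regex alone accepts impossible dates like 2022-02-30 (as several answers above note), a hedged PHP sketch that pairs the simple pattern with PHP's built-in checkdate() for calendar validity, instead of encoding month lengths and leap years into the pattern itself:

$input = '2008-09-01 12:35:45';

if (preg_match('/^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/', $input, $m)
    && checkdate((int)$m[2], (int)$m[3], (int)$m[1])  // month, day, year
    && (int)$m[4] < 24 && (int)$m[5] < 60 && (int)$m[6] < 60) {
    echo "valid datetime";
}

The regex keeps the shape check cheap and readable, and checkdate() handles month lengths and leap years correctly with no 40-line pattern to maintain.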
Q: SQL query to get the top "n" scores out of a list

I'd like to find the different ways to solve a real-life problem I had: imagine a contest, or a game, during which the users collect points. You have to build a query to show the list of users with the best "n" scores.

I'm making an example to clarify. Let's say that this is the Users table, with the points earned:

UserId - Points
1 - 100
2 - 75
3 - 50
4 - 50
5 - 50
6 - 25

If I want the top 3 scores, the result will be:

UserId - Points
1 - 100
2 - 75
3 - 50
4 - 50
5 - 50

This can be realized in a view or a stored procedure, as you want. My target db is SQL Server. Actually I solved this, but I think there are different ways to obtain the result... faster or more efficient than mine.

A: Here's one that works - I don't know if it's more efficient, and it's SQL Server 2005+:

with scores as (
    select 1 userid, 100 points
    union select 2, 75
    union select 3, 50
    union select 4, 50
    union select 5, 50
    union select 6, 25
),
results as (
    select userid, points, RANK() over (order by points desc) as ranking
    from scores
)
select userid, points, ranking
from results
where ranking <= 3

Obviously the first "with" is to set up the values, so you can test that the second "with" and the final select work - you could start at "with results as..." if you were querying against an existing table.

A: Untested, but should work:

select * from users
where points in (select distinct top 3 points from users order by points desc)

A: How about:

select top 3 with ties points
from scores
order by points desc

Not sure if "with ties" works on anything other than SQL Server. On SQL Server 2005 and up, you can pass the "top" number as an int parameter:

select top (@n) with ties points
from scores
order by points desc

A: Actually a modification to the WHERE IN, utilizing an INNER JOIN, will be much faster:

SELECT userid, points
FROM users u
INNER JOIN (
    SELECT DISTINCT TOP N points
    FROM users
    ORDER BY points DESC
) AS p ON p.points = u.points

A: @bosnic, I don't think that will work as requested. I'm not that familiar with MS SQL, but I would expect it to return only 3 rows and ignore the fact that 3 users are tied for 3rd place. Something like this should work:

select userid, points
from scores
where points in (select top 3 points from scores order by points desc)
order by points desc

A: @Espo thanks for the reality check - added the sub-select to correct for that.

I think the easiest response is:

select userid, points
from users
where points in (select distinct top N points from users order by points desc)

If you want to put that in a stored proc which takes N as a parameter, then you'll either have to read the SQL into a variable and then execute it, or do the row count trick:

declare @SQL nvarchar(2000)
set @SQL = 'select userID, points from users '
set @SQL = @SQL + ' where points in (select distinct top ' + cast(@N as nvarchar(10))
set @SQL = @SQL + ' points from users order by points desc)'
execute (@SQL)

or

SELECT UserID, Points
FROM (SELECT ROW_NUMBER() OVER (ORDER BY points DESC) AS Row,
             UserID, Points
      FROM Users) AS usersWithPoints
WHERE Row between 0 and @N

Both examples assume SQL Server and haven't been tested.

A: @Rob#37760:

select top N points from users order by points desc

This query will only select 3 rows if N is 3; see the question. "Top 3" should return 5 rows.
A: Crucible got it (assuming SQL 2005 is an option). A: Try this select top N points from users order by points desc A: Hey I found all the other answers bit long and inefficient My answer would be: select * from users order by points desc limit 0,5 this will render top 5 points
Q: How do you prevent the IIS default site web.config file being inherited by virtual directories? I have the following code in a web.config file of the default IIS site. <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> Then when I setup and browse to a virtual directory I get this error Could not load file or assembly 'Charts' or one of its dependencies. The system cannot find the file specified. The virtual directory is inheriting the modules from the default web.config. How do you stop this inheritance? A: I've found the answer. Wrap the HttpModule section in location tags and set the inheritInChildApplications attribute to false. <location path="." inheritInChildApplications="false"> <system.web> <httpModules> <add type="MDL.BexWebControls.Charts.ChartStreamHandler,Charts" name="ChartStreamHandler"/> </httpModules> </system.web> </location> Now any virtual directories will not inherit the settings in this location section. @GateKiller This isn't another website, its a virtual directory so inheritance does occur. @petrich I've had hit and miss results using <remove />. I have to remember to add it to every virtual directory which is a pain. A: Add the following to the virtual directory's web.config file: <httpModules> <remove name="ChartStreamHandler"/> </httpModules> A: According to Microsoft, other websites do not inherit settings from the Default Website. Do you mean you are editing the default web.config which is located in the same folder as the machine.config?
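Another option, if you would rather opt out in each child application than fence off the parent, is to reset the inherited collection in the virtual directory's own web.config before re-adding anything it needs - a sketch:

<system.web>
    <httpModules>
        <clear/>
        <!-- re-add only the modules this application actually needs -->
    </httpModules>
</system.web>

Be aware that <clear/> also drops the framework's default modules (session state, authentication, and so on) registered at the machine level, so only use it if you re-add everything the application relies on; otherwise the targeted <remove/> shown above is the safer choice.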
Q: SugarCRM 5 - Create a sub-panel for invoices in the Account panel

I'm customizing SugarCRM 5, and in my SugarCRM database I have all the invoices which were imported from our ERP. Now, I would like to know if it is possible to create a new sub-panel in the Accounts panel without editing the original SugarCRM files, so that the imported client invoices are visible in that interface.

A: Last time I checked, you could use the Module Builder to extend the interface. From 5.0 (or maybe 4.x) on, Sugar added all those APIs, which should enable you to extend SugarCRM without hacking it in and losing it with the next upgrade. Hope that helps!

A: You can create a new module - Invoices - using Module Builder and then add relations between Accounts and Invoices. The sub-panels will appear for both Accounts and Invoices without any coding. You should just customize the columns again using Module Builder.

A: As stated above, create an Invoices module to hold all your invoices. Before doing the import, create the relationship with Accounts and map the account field when importing, so that each invoice is automatically connected to its account and shown in the sub-panel.

A: Basically, the Account name should be a related field in your new Invoices module (base the module creation on something like Quotes, which has similar fields). Once you create the module (so simple you can almost guess your way through it in the ADMIN section) and the fields you like (using Studio), just add the RELATED field Account Name and the sub-panel will be established in your Accounts module. The invoices will then populate automatically, especially if you re-import them using the import feature from a CSV file (spreadsheet).

A: You can create sub-panels in the Account module's detail view just by defining a relationship between the two modules. Create a one-to-many relationship from the Accounts module to the Invoices module.
Q: Are there any guidelines for designing user interface for mobile devices? I am creating an application for a Windows Mobile computer. The catch is that the device (Motorola MC17) does not have a touch screen or universal keys - there are only six programmable hardware keys. Fitt's law is not applicable here, most Microsoft guidelines are also moot. For now I'm mimicking Nokia's S60 keyboard layout as close as possible, since it's the most popular phone platform among my target audience. Are there any guidelines for creating a simple, discoverable user interface on such a constrained device? What fonts and colours should I use to make my UI readable? How do I measure if the items on-screen are big enough? What conventions should I follow? A: Guidelines for Handheld & Mobile Device User Interface: While there has been much successful work in developing rules to guide the design and implementation of interfaces for desktop machines and their applications, the design of mobile device interfaces is still relatively unexplored and unproven. This paper discusses the characteristics and limitations of current mobile device interfaces, especially compared to the desktop environment. Using existing interface guidelines as a starting point, a set of practical design guidelines for mobile device interface is proposed. A: Microsoft has an official set of Guidelines for getting the "Designed for Windows Mobile" logo. These are a reasonable start as they not only cover one-handed (no touchscreen) operation, they also help your app to maintain familiarity for users. Some other resources discussing the topic: * *The WinMo team blog entry on one-handed navigation *Mark Arteaga's article on stylus-free apps
Q: Can a fixture be changed dynamically between test methods in CakePHP?

Is it possible to have a fixture change between test methods? If so, how can I do this?

My specific case: in the CakePHP framework I am building tests for a behavior that is configured by adding fields to the table. This is intended to work in the same way that adding the "created" and "modified" fields will auto-populate these fields on save. To test this I could create dozens of fixture/model combos to test the different setups, but it would be a hundred times better, faster and easier to just have the fixture change "shape" between test methods.

If you are not familiar with the CakePHP framework, you can maybe still help me, as it uses SimpleTest.

A: I'm not familiar specifically with CakePHP, but this kind of thing seems to happen anywhere with fixtures. There is no built-in way in Rails, at least, for this to happen, and I imagine not in CakePHP or anywhere else either, because the whole idea of a fixture is that it is fixed.

There are 2 'decent' workarounds I'm aware of (see the sketch after this answer):

* Write a changeFixture method, and just before you do your asserts/etc, run it with the parameters of what to change. It should go and update the database or whatever needs to be done.
* Don't use fixtures at all, and use some kind of object factory or object generator to create your objects each time.

A: This is not an answer to my question, but a solution for my example case. Instead of using multiple fixtures or changing the fixtures, I edit the Model::_schema array, removing the fields I want to test without. This has the effect that the model acts as if the fields were not there, but I am unsure if this is a 100% reliable test. I do not think it is for all cases, but it works for my example.
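A minimal sketch of the schema-editing workaround from the last answer, written as a CakePHP 1.x-style test case; the Article model and the modified_by field are hypothetical names, and relying on the internal _schema property is exactly the "not 100% reliable" caveat the answer mentions:

class MyBehaviorTest extends CakeTestCase {
    var $fixtures = array('app.article');

    function testWithTrackingField() {
        $article = new Article();
        // the behavior sees the field and should populate it on save
        $this->assertTrue(array_key_exists('modified_by', $article->_schema));
    }

    function testWithoutTrackingField() {
        $article = new Article();
        // simulate a table without the field: the behavior should no-op
        unset($article->_schema['modified_by']);
        $this->assertFalse(array_key_exists('modified_by', $article->_schema));
    }
}

Because each test method builds its own model instance, the unset only affects that method, which gives the "fixture changes shape between test methods" effect without extra fixture files.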
Q: How do you manage SQL Queries At the moment my code (PHP) has too many SQL queries in it. eg... // not a real example, but you get the idea... $results = $db->GetResults("SELECT * FROM sometable WHERE iUser=$userid"); if ($results) { // Do something } I am looking into using stored procedures to reduce this and make things a little more robust, but I have some concerns.. I have hundreds of different queries in use around the web site, and many of them are quite similar. How should I manage all these queries when they are removed from their context (the code that uses the results) and placed in a stored procedure on the database? A: First up, you should use placeholders in your query instead of interpolating the variables directly. PDO/MySQLi allow you to write your queries like: SELECT * FROM sometable WHERE iUser = ? The API will safely substitute the values into the query. I also prefer to have my queries in the code instead of the database. It's a lot easier to work with an RCS when the queries are with your code. I have a rule of thumb when working with ORM's: if I'm working with one entity at a time, I'll use the interface. If I'm reporting/working with records in aggregate, I typically write SQL queries to do it. This means there's very few queries in my code. A: The best course of action for you will depend on how you are approaching your data access. There are three approaches you can take: * *Use stored procedures *Keep the queries in the code (but put all your queries into functions and fix everything to use PDO for parameters, as mentioned earlier) *Use an ORM tool If you want to pass your own raw SQL to the database engine then stored procedures would be the way to go if all you want to do is get the raw SQL out of your PHP code but keep it relatively unchanged. The stored procedures vs raw SQL debate is a bit of a holy war, but K. Scott Allen makes an excellent point - albeit a throwaway one - in an article about versioning databases: Secondly, stored procedures have fallen out of favor in my eyes. I came from the WinDNA school of indoctrination that said stored procedures should be used all the time. Today, I see stored procedures as an API layer for the database. This is good if you need an API layer at the database level, but I see lots of applications incurring the overhead of creating and maintaining an extra API layer they don't need. In those applications stored procedures are more of a burden than a benefit. I tend to lean towards not using stored procedures. I've worked on projects where the DB has an API exposed through stored procedures, but stored procedures can impose some limitations of their own, and those projects have all, to varying degrees, used dynamically generated raw SQL in code to access the DB. Having an API layer on the DB gives better delineation of responsibilities between the DB team and the Dev team at the expense of some of the flexibility you'd have if the query was kept in the code, however PHP projects are less likely to have sizable enough teams to benefit from this delineation. Conceptually, you should probably have your database versioned. Practically speaking, however, you're far more likely to have just your code versioned than you are to have your database versioned. 
You are likely to be changing your queries when you are making changes to your code, but if you are changing the queries in stored procedures stored against the database then you probably won't be checking those in when you check the code in and you lose many of the benefits of versioning for a significant area of your application. Regardless of whether or not you elect not to use stored procedures though, you should at the very least ensure that each database operation is stored in an independent function rather than being embedded into each of your page's scripts - essentially an API layer for your DB which is maintained and versioned with your code. If you're using stored procedures, this will effectively mean you have two API layers for your DB, one with the code and one with the DB, which you may feel unnecessarily complicates things if your project does not have separate teams. I certainly do. If the issue is one of code neatness, there are ways to make code with SQL jammed in it more presentable, and the UserManager class shown below is a good way to start - the class only contains queries which relate to the 'user' table, each query has its own method in the class and the queries are indented into the prepare statements and formatted as you would format them in a stored procedure. // UserManager.php: class UserManager { function getUsers() { $pdo = new PDO(...); $stmt = $pdo->prepare(' SELECT u.userId as id, u.userName, g.groupId, g.groupName FROM user u INNER JOIN group g ON u.groupId = g.groupId ORDER BY u.userName, g.groupName '); // iterate over result and prepare return value } function getUser($id) { // db code here } } // index.php: require_once("UserManager.php"); $um = new UserManager; $users = $um->getUsers(); foreach ($users as $user) echo $user['name']; However, if your queries are quite similar but you have huge numbers of permutations in your query conditions like complicated paging, sorting, filtering, etc, an Object/Relational mapper tool is probably the way to go, although the process of overhauling your existing code to make use of the tool could be quite complicated. If you decide to investigate ORM tools, you should look at Propel, the ActiveRecord component of Yii, or the king-daddy PHP ORM, Doctrine. Each of these gives you the ability to programmatically build queries to your database with all manner of complicated logic. Doctrine is the most fully featured, allowing you to template your database with things like the Nested Set tree pattern out of the box. In terms of performance, stored procedures are the fastest, but generally not by much over raw sql. ORM tools can have a significant performance impact in a number of ways - inefficient or redundant querying, huge file IO while loading the ORM libraries on each request, dynamic SQL generation on each query... all of these things can have an impact, but the use of an ORM tool can drastically increase the power available to you with a much smaller amount of code than creating your own DB layer with manual queries. Gary Richardson is absolutely right though, if you're going to continue to use SQL in your code you should always be using PDO's prepared statements to handle the parameters regardless of whether you're using a query or a stored procedure. The sanitisation of input is performed for you by PDO. 
// optional $attrs = array(PDO::ATTR_PERSISTENT => true); // create the PDO object $pdo = new PDO("mysql:host=localhost;dbname=test", "user", "pass", $attrs); // also optional, but it makes PDO raise exceptions instead of // PHP errors which are far more useful for debugging $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $stmt = $pdo->prepare('INSERT INTO venue(venueName, regionId) VALUES(:venueName, :regionId)'); $stmt->bindValue(":venueName", "test"); $stmt->bindValue(":regionId", 1); $stmt->execute(); $lastInsertId = $pdo->lastInsertId(); var_dump($lastInsertId); Caveat: assuming that the ID is 1, the above script will output string(1) "1". PDO->lastInsertId() returns the ID as a string regardless of whether the actual column is an integer or not. This will probably never be a problem for you as PHP performs casting of strings to integers automatically. The following will output bool(true): // regular equality test var_dump($lastInsertId == 1); but if you have code that is expecting the value to be an integer, like is_int or PHP's "is really, truly, 100% equal to" operator: var_dump(is_int($lastInsertId)); var_dump($lastInsertId === 1); you could run into some issues. Edit: Some good discussion on stored procedures here A: I had to clean up a project wich many (duplicate/similar) queries riddled with injection vulnerabilities. The first steps I took were using placeholders and label every query with the object/method and source-line the query was created. (Insert the PHP-constants METHOD and LINE into a SQL comment-line) It looked something like this: -- @Line:151 UserClass::getuser(): SELECT * FROM USERS; Logging all queries for a short time supplied me with some starting points on which queries to merge. (And where!) A: I'd move all the SQL to a separate Perl module (.pm) Many queries could reuse the same functions, with slightly different parameters. A common mistake for developers is to dive into ORM libraries, parametrized queries and stored procedures. We then work for months in a row to make the code "better", but it's only "better" in a development kind of way. You're not making any new features! Use complexity in your code only to address customer needs. A: Use a ORM package, any half decent package will allow you to * *Get simple result sets *Keep your complex SQL close to the data model If you have very complex SQL, then views are also nice to making it more presentable to different layers of your application. A: We were in a similar predicament at one time. We queried a specific table in a variety of ways, over 50+. What we ended up doing was creating a single Fetch stored procedure that includes a parameter value for the WhereClause. The WhereClause was constructed in a Provider object, we employed the Facade design pattern, where we could scrub it for any SQL injection attacks. So as far as maintenance goes, it is easy to modify. SQL Server is also quite the chum and caches the execution plans of dynamic queries so the the overall performance is pretty good. You'll have to determine the performance drawbacks based on your own system and needs, but all and all, this works very well for us. A: There are some libraries, such as MDB2 in PEAR that make querying a bit easier and safer. Unfortunately, they can be a bit wordy to set up, and you sometimes have to pass them the same info twice. I've used MDB2 in a couple of projects, and I tended to write a thin veneer around it, especially for specifying the types of fields. 
I generally make an object that knows about a particular table and its columns, and then a helper function in it that fills in field types for me when I call an MDB2 query function. For instance:

function MakeTableTypes($TableName, $FieldNames)
{
    $Types = array();

    foreach ($FieldNames as $FieldName => $FieldValue)
    {
        $Types[] = $this->Tables[$TableName]['schema'][$FieldName]['type'];
    }

    return $Types;
}

Obviously this object has a map of table names -> schemas that it knows about, and just extracts the types of the fields you specify, and returns a matching type array suitable for use with an MDB2 query.

MDB2 (and similar libraries) then handle the parameter substitution for you, so for update/insert queries, you just build a hash/map from column name to value, and use the 'autoExecute' functions to build and execute the relevant query. For example:

function UpdateArticle($Article)
{
    $Types = $this->MakeTableTypes($table_name, $Article);

    $res = $this->MDB2->extended->autoExecute($table_name,
        $Article,
        MDB2_AUTOQUERY_UPDATE,
        'id = '.$this->MDB2->quote($Article['id'], 'integer'),
        $Types);
}

and MDB2 will build the query, escaping everything properly, etc.

I'd recommend measuring performance with MDB2 though, as it pulls in a fair bit of code that might cause you problems if you're not running a PHP accelerator.

As I say, the setup overhead seems daunting at first, but once it's done the queries can be simpler/more symbolic to write and (especially) modify. I think MDB2 should know a bit more about your schema, which would simplify some of the commonly used API calls, but you can reduce the annoyance of this by encapsulating the schema yourself, as I mentioned above, and providing simple accessor functions that generate the arrays MDB2 needs to perform these queries.

Of course you can just do flat SQL queries as a string using the query() function if you want, so you're not forced to switch over to the full 'MDB2 way' - you can try it out piecemeal, and see if you hate it or not.

A: This other question also has some useful links in it...

A: I try to use fairly generic functions and just pass the differences in them. This way you only have one function to handle most of your database SELECTs. Obviously you can create another function to handle all your INSERTs. eg.

function getFromDB($table, $wherefield = null, $whereval = null, $orderby = null)
{
    if ($wherefield != null)
    {
        $q = "SELECT * FROM $table WHERE $wherefield = '$whereval'";
    }
    else
    {
        $q = "SELECT * FROM $table";
    }

    if ($orderby != null)
    {
        $q .= " ORDER BY ".$orderby;
    }

    $result = mysql_query($q) or die("ERROR: ".mysql_error());

    $records = array();
    while ($row = mysql_fetch_assoc($result))
    {
        $records[] = $row;
    }

    return $records;
}

This is just off the top of my head, but you get the idea. To use it, just pass the function the necessary parameters. eg.

$blogposts = getFromDB('myblog', 'author', 'Lewis', 'date DESC');

In this case $blogposts will be an array of arrays which represent each row of the table. Then you can just use a foreach or refer to the array directly:

echo $blogposts[0]['title'];

A: Use an ORM framework like QCodo - you can easily map your existing database.
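The generic-function approach above still interpolates the value straight into the SQL string; a hedged sketch of the same helper rewritten on PDO prepared statements (per the earlier answers), so the value is bound rather than concatenated - $pdo is assumed to be an already-connected PDO instance, and the backtick quoting assumes MySQL:

function getFromDB(PDO $pdo, $table, $whereField = null, $whereVal = null, $orderBy = null)
{
    // NOTE: identifiers ($table, $whereField, $orderBy) cannot be bound as
    // parameters, so they must come from a trusted whitelist, never from
    // user input.
    $q = "SELECT * FROM `$table`";
    if ($whereField !== null) {
        $q .= " WHERE `$whereField` = :val";
    }
    if ($orderBy !== null) {
        $q .= " ORDER BY $orderBy";
    }

    $stmt = $pdo->prepare($q);
    if ($whereField !== null) {
        $stmt->bindValue(':val', $whereVal);
    }
    $stmt->execute();
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

$blogposts = getFromDB($pdo, 'myblog', 'author', 'Lewis', 'date DESC');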
Q: GCOV for multi-threaded apps Is it possible to use gcov for coverage testing of multi-threaded applications? I've set some trivial tests of our code-base up, but it would be nice to have some idea of the coverage we're achieving. If gcov isn't appropriate can anyone recommend an alternative tool (possible oprofile), ideally with some good documentation on getting started. A: We've certainly used gcov to get coverage information on our multi-threaded application. You want to compile with gcc 4.3 which can do coverage on dynamic code. You compile with the -fprofile-arcs -ftest-coverage options, and the code will generate .gcda files which gcov can then process. We do a separate build of our product, and collect coverage on that, running unit tests and regression tests. Finally we use lcov to generate HTML results pages. A: Gcov works fine for multi-threaded apps. The instrumentation architecture is properly serialized so you will get coverage data of good fidelity. I would suggest using gcov in conjunction with lcov. This will give you great reports scoped from full project down to individual source files. lcov also gives you a nicely color coded HTML version of your source so you can quickly evaluate your coverage lapses. A: I have not used gcov for multi-threaded coverage work. However, on MacOS the Shark tool from Apple handles multiple threads. It's primarily a profiler, but can do coverage info too. http://developer.apple.com/tools/sharkoptimize.html
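A sketch of the compile-and-report cycle described above, using the stock gcc/gcov/lcov tools (file names and paths are illustrative):

# build with coverage instrumentation
gcc -fprofile-arcs -ftest-coverage -pthread -o myapp main.c worker.c

# run the multi-threaded test suite; each run writes/updates .gcda files
./myapp --run-tests

# per-file text report
gcov main.c

# or aggregate everything into color-coded HTML with lcov
lcov --capture --directory . --output-file coverage.info
genhtml coverage.info --output-directory coverage-html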
Q: Link to samba shares in html First off if you're unaware, samba or smb == Windows file sharing, \\computer\share etc. I have a bunch of different files on a bunch of different computers. It's mostly media and there is quite a bit of it. I'm looking into various ways of consolidating this into something more manageable. Currently there are a few options I'm looking at, the most insane of which is some kind of samba share indexer that would generate a list of things shared on the various samba servers I tell it about and upload them to a website which could then be searched and browsed. It's a cheap solution, OK? Ignoring the fact that the idea is obviously a couple of methods short of a class, do you chaps know of any way to link to samba file shares in html in a cross-browser way? In windows one does \\computer\share, in linux one does smb://computer/share, neither of which work afaik from browsers that aren't also used as file managers (e.g. any browser that isn't Internet Explorer). Some Clarifications * *The computers used to access this website are a mixture of WIndows (XP) and Linux (Ubuntu) with a mixture of browsers (Opera and Firefox). *In linux entering smb://computer/share only seems to work in Nautilus (and presumably Konqueror / Dolphin for you KDE3.5/4 people). It doesn't work in Firefox or Opera (Firefox does nothing, Opera complains the URL is invalid). *I don't have a Windows box handy atm so I'm unsure if \\computer\share works in anything apart from IE (e.g. Firefox / Opera). *If you have a better idea for consolidating a bunch of random samba shares (it certainly can't get much worse than mine ;-)) it's worth knowing that there is no guarantee that any of the servers I would be wanting to index / consolidate would be up at any particular moment. Moreover, I wouldn't want the knowledge of what they have shared lost or hidden just because they weren't available. I would want to know that they share 'foo' but they are currently down. A: Hmm, protocol handlers look interesting. As Mark said, in Windows protocol handlers can be dealt with at the OS level Protocol handlers can also be done at the browser level (which is preferred, as it is cross platform and doesn't involve installing anything). Summary of how it works in Firefox Summary of how it works in Opera A: I'd probably just setup Apache on the SAMBA servers and let it serve the files via HTTP. That'd give you a nice autoindex default page too, and you could just wget and concatenate each index for your master list. A couple of other thoughts: * *file://server/share/file is the defacto Windows way of doing it *You can register protocol handlers in Windows, so you could register smb and redirect it to file://. I'd suspect GNOME/KDE/etc. would offer the same. A: To make the links work cross platform you could look at the User Agent either in a CGI script or in JavaScript and update your URLs appropriately. Alternatively, if you want to consolidate SMB shares you could try using Microsoft DFS (which also works with Samba). You set up a DFS root and tell it about all the other SMB/Samba shares you have in your environment. Clients then connect to the root and see all the shares as if they were hosted on that single root machine; the root silently redirects clients to the correct system when they open a share. Think of it as like symbolic links or a virtual file system for SMB. It would solve your browsing problem. I'm not sure if it would solve your searching one.
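A sketch of the user-agent link rewriting suggested in the last answer (plain browser JavaScript; the data-host/data-share attributes are an assumed markup convention for this index page, not a standard):

<a class="share" data-host="computer" data-share="media">computer/media</a>

<script>
// Rewrite share links into the platform's native form: Windows browsers
// get a file: UNC form, everything else gets smb:// URLs.
var isWindows = navigator.userAgent.indexOf('Windows') !== -1;
var links = document.getElementsByClassName('share');
for (var i = 0; i < links.length; i++) {
    var host = links[i].getAttribute('data-host');
    var share = links[i].getAttribute('data-share');
    links[i].href = isWindows
        ? 'file://///' + host + '/' + share   // IE resolves this as \\host\share
        : 'smb://' + host + '/' + share;
}
</script>

Whether a given browser actually follows either form still depends on its file-manager integration, so this only smooths over the Windows/Linux URL-scheme split rather than guaranteeing the share opens.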
Q: Filter linq list on property value I have a List<int> and a List<customObject>. The customObject class has an ID property. How can I get a List<customObject> containing only the objects where the ID property is in the List<int> using LINQ? Edit: I accepted Konrads answer because it is easier/more intuitive to read. A: Untested, but it'll be something like this: var matches = from o in objList join i in intList on o.ID equals i select o; @Konrad just tested it, and it does work - I just had a typo where I'd written "i.ID" rather than "i". A: Just for completeness (and maybe it's easier to read?), using a "where" similar to Matt's "join": var matches = from o in customObjectList from i in intList where o.ID == i select o; A: var result = from o in objList where intList.Contains(o.ID) select o A: using System.Linq; objList.Where(x => intList.Contains(x.id)); A: I have had a similar problem just now and used the below solution. If you already have the list of objects you can remove all not found in the int list, leaving just matches in objList. objList.RemoveAll(x => !intList.Contains(x.id)); A: Please note that using the join instead of contains DOES NOT work when the count of items in the list exceeds 49! You will get the error: Some part of your SQL statement is nested too deeply. Rewrite the query or break it up into smaller queries.
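If the list of IDs can get large, a hedged sketch of two refinements: a HashSet to make the in-memory Contains lookup O(1), and chunking to stay under provider limits like the one mentioned in the last answer (the batch size of 40 is arbitrary):

using System.Collections.Generic;
using System.Linq;

// fast path for LINQ-to-Objects: one hash lookup per object
var idSet = new HashSet<int>(intList);
List<customObject> matches = objList.Where(o => idSet.Contains(o.ID)).ToList();

// chunked path for query providers that choke on big IN lists
const int batchSize = 40;
var results = new List<customObject>();
for (int i = 0; i < intList.Count; i += batchSize)
{
    var batch = intList.Skip(i).Take(batchSize).ToList();
    results.AddRange(objList.Where(o => batch.Contains(o.ID)));
}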
Q: Examples of using semantic web technologies in real world applications

Are you working on a (probably commercial) product which uses RDF/OWL/SPARQL technologies? If so, can you please describe your product?

A: O'Reilly's Practical RDF has a chapter titled Commercial Uses of RDF/XML. The table at the left lists the subsections: Chandler, RDF Gateway, Seamark, and Adobe's XMP stuff.

A: Three of Garlik's (www.garlik.com) services - DataPatrol, QDOS and a FOAF viewer - all use RDF and SPARQL extensively. DataPatrol in particular has tens of thousands of users in the UK. The dataset size is around ten billion RDF triples.

A: At Yahoo! Search we use RDF to crawl for semantic data and power our Rich Results. Check out searches for "thai chili" and "paul tarjan facebook". If you want to see all the semantic data we pull out of pages, install the "Structured Data Display" SearchMonkey plugin, and under every result you will see an infobar full of the RDF serialized as RDFa. (I can't post links since I'm new here.)

A: Metatomix uses semantic technologies (RDF, ontologies, etc.) in a few of their applications: www.metatomix.com

A: The Fedora Commons digital repository project uses Dublin Core as a central part of describing the individual objects in the repository. Additionally, they have created an RDFS ontology of the internal relationships between the objects, called RELS-EXT. All this information is accessible through SPARQL or iTQL queries, both programmatically and through a web interface.

A: We are serving up RDF at biodiversity.org.au, and are planning to put a SPARQL engine over it. The bioinformatics community is very interested in RDF in general. See:

http://biodiversity.org.au/name/Dodonaea%20viscosa.rdf

The HTML search interface is at http://biodiversity.org.au/name/

Also see http://rs.tdwg.org/ontology/voc/TaxonConcept

Note that what you see in the web browser is OWL run through a stylesheet. Do a "view source" to see the OWL.

A: The flexibility of the semantic web data model enables lots of applications that are difficult to deliver using traditional, relational technologies. The responses so far on this list tend to focus on Web-centered applications, rather than those in the enterprise, probably for the reason that one can actually link to them, but semantic web technology is quietly taking off behind the firewall as well. My employer, Cambridge Semantics, produces a semantic web platform for enterprise application development, with customers including:

* Johnson & Johnson
* Merck
* GroupM
* Chevron
* Biogen Idec

A: Have a look at the Calais Viewer for a real world application.

A: Ontology-aware search engines:

* GoPubMed (http://gopubmed.com)
* Anatomy Lens (http://services.alphaworks.ibm.com/anatomylens/)

Mobile applications:

* IYOUIT (http://www.iyouit.eu) does OWL reasoning

A: You can find a lot of good use case examples at the W3C's Semantic Web use case site [1], which has links to many write-ups by companies who built actual systems using semantic web technologies.

Cheers, Michael

[1] http://www.w3.org/2001/sw/sweo/public/UseCases/

A: Microsoft Interactive Media Manager is a metadata management system developed on the Microsoft SharePoint platform that heavily leverages RDF, OWL, and SPARQL. It has some big customers in the broadcast space and is an excellent example of enterprise use of these technologies.
Q: How do I generate a Friendly URL in C#?

How can I go about generating a Friendly URL in C#? Currently I simply replace spaces with an underscore, but how would I go about generating URLs like Stack Overflow?

For example how can I convert:

How do I generate a Friendly URL in C#?

Into

how-do-i-generate-a-friendly-url-in-C

A: This gets part of the way there (using a whitelist of valid characters):

new Regex("[^a-zA-Z-_]").Replace(s, "-")

It does, however, give you a string that ends with "--". So perhaps a second regex to trim those from the beginning/end of the string, and maybe replace any internal "--" with "-".

A: There are several things that could be improved in Jeff's solution, though.

if (String.IsNullOrEmpty(title)) return "";

IMHO, not the place to test this. If the function gets passed an empty string, something went seriously wrong anyway. Throw an error or don't react at all.

// remove any leading or trailing spaces left over
… muuuch later:
// remove trailing dash, if there is one

Twice the work. Considering that each operation creates a whole new string, this is bad, even if performance is not an issue.

// replace spaces with single dash
title = Regex.Replace(title, @"\s+", "-");
// if we end up with multiple dashes, collapse to single dash
title = Regex.Replace(title, @"\-{2,}", "-");

Again, basically twice the work: First, use regex to replace multiple spaces at once. Then, use regex again to replace multiple dashes at once. Two expressions to parse, two automata to construct in memory, iterate twice over the string, create two strings: All these operations can be collapsed to a single one.

Off the top of my head, without any testing whatsoever, this would be an equivalent solution:

// make it all lower case
title = title.ToLower();
// remove entities
title = Regex.Replace(title, @"&\w+;", "");
// remove anything that is not letters, numbers, dash, or space
title = Regex.Replace(title, @"[^a-z0-9\-\s]", "");
// replace spaces
title = title.Replace(' ', '-');
// collapse dashes
title = Regex.Replace(title, @"-{2,}", "-");
// trim excessive dashes at the beginning
title = title.TrimStart(new [] {'-'});
// if it's too long, clip it
if (title.Length > 80)
    title = title.Substring(0, 79);
// remove trailing dashes
title = title.TrimEnd(new [] {'-'});
return title;

Notice that this method uses string functions instead of regex functions and char functions instead of string functions whenever possible.

A: Here's how we do it. Note that there are probably more edge conditions than you realize at first glance.

if (String.IsNullOrEmpty(title)) return "";

// remove entities
title = Regex.Replace(title, @"&\w+;", "");
// remove anything that is not letters, numbers, dash, or space
title = Regex.Replace(title, @"[^A-Za-z0-9\-\s]", "");
// remove any leading or trailing spaces left over
title = title.Trim();
// replace spaces with single dash
title = Regex.Replace(title, @"\s+", "-");
// if we end up with multiple dashes, collapse to single dash
title = Regex.Replace(title, @"\-{2,}", "-");
// make it all lower case
title = title.ToLower();
// if it's too long, clip it
if (title.Length > 80)
    title = title.Substring(0, 79);
// remove trailing dash, if there is one
if (title.EndsWith("-"))
    title = title.Substring(0, title.Length - 1);
return title;

A: Here is a simple function which converts a string into a URL slug: just pass in a title (or any string) and it will return a user-friendly URL fragment.
public static string GenerateUrl(string url)
{
    // replace quotes and other special characters with spaces, then trim
    string withoutSpecials = Regex.Replace(url, @"&quot;|['"",&?%\.!()@$^_+=*:#/\\-]", " ").Trim();
    // collapse runs of whitespace into a single space
    string singleSpaced = Regex.Replace(withoutSpecials, @"\s+", " ");
    // turn the remaining spaces into dashes
    string dashed = singleSpaced.Replace(" ", "-");
    // collapse any doubled dashes
    string deduped = dashed.Replace("--", "-");
    return deduped.ToLower();
}
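As a quick sanity check, here is how the function above behaves on the question's own example. The console harness is mine, not part of the original answer; it assumes GenerateUrl is a static member in scope, with System and System.Text.RegularExpressions imported:

using System;
using System.Text.RegularExpressions;

class SlugDemo
{
    static void Main()
    {
        // prints "how-do-i-generate-a-friendly-url-in-c"
        Console.WriteLine(GenerateUrl("How do I generate a Friendly URL in C#?"));
    }

    // GenerateUrl as defined above would live here as a static member
}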
{ "language": "en", "url": "https://stackoverflow.com/questions/37809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }

Q: MSMQ monitoring Is there anything that can help with MSMQ monitoring? I'd like to get some event or notification when a message appears in a queue, and likewise when it leaves.

A: Check out the Windows Management performance counters. If you look in your Administrative Tools and find "Performance Counters", you will be able to dig through there and find detailed metrics on what is happening on each message queue. This also works for remote servers. Should you wish to create some sort of automation around the monitoring, check out the .NET libraries for reading these performance counters. There is a very rich and comprehensive API which should give you everything you need!

A: You can achieve this by using MSMQ triggers.
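If you go the .NET route, arrival notifications can also be had directly from System.Messaging without polling counters. A minimal sketch with a hypothetical queue path; note this only covers the "message appears" half, and detecting departures would still need counters or triggers:

using System;
using System.Messaging;

class QueueWatcher
{
    static void Main()
    {
        // hypothetical local private queue -- substitute your own path
        MessageQueue queue = new MessageQueue(@".\private$\orders");

        queue.PeekCompleted += delegate(object sender, PeekCompletedEventArgs e)
        {
            Console.WriteLine("Message arrived at {0}", DateTime.Now);
            queue.EndPeek(e.AsyncResult); // finish the peek; the message stays in the queue
            queue.BeginPeek();            // re-arm for the next arrival
        };

        queue.BeginPeek(); // start watching asynchronously
        Console.ReadLine();
    }
}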
{ "language": "en", "url": "https://stackoverflow.com/questions/37812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }

Q: Viewing event log via a web interface I'd like to be able to view the event log for a series of ASP.NET websites running on IIS. Can I do this externally, for example, through a web interface?

A: No, but there are two solutions I would recommend:

* Adiscon EventLogger is a third-party product that will send your Windows event log to a SQL database. You can either send all events or create filters. Of course, once the events are in a SQL database, you can use any of the usual tools to create a web interface.
* You can use ASP.NET's HealthMonitoring configuration section to configure .NET to send all ASP.NET-related events directly to a SQL database. This covers exceptions, heartbeats, and a host of other event types. The SqlWebEventProvider is a cinch to set up.

A: Do you want to know if you can home-roll something, or are you looking for an app you can get off the shelf? I'm not a Windows guy, but I think Microsoft's MOM/SCOM solution will probably let you view the event log over a web UI - probably really heavy and expensive if that's all you need, though. A quick Google found http://www.codeproject.com/KB/XML/Event_Logger.aspx which shows that you can get at the data if you want to roll your own; there is also an MS tool on MSDN. Sorry I can't be more help.
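If you do roll your own, the core is small. A minimal sketch of reading entries with System.Diagnostics.EventLog, which an ASP.NET page could loop over and render (error handling, filtering and paging omitted):

using System;
using System.Diagnostics;

class EventLogDump
{
    static void Main()
    {
        // "Application" is where most ASP.NET events land;
        // pass a machine name as a second constructor argument to read a remote server
        using (EventLog log = new EventLog("Application"))
        {
            foreach (EventLogEntry entry in log.Entries)
            {
                Console.WriteLine("{0} [{1}] {2}",
                    entry.TimeGenerated, entry.EntryType, entry.Message);
            }
        }
    }
}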
{ "language": "en", "url": "https://stackoverflow.com/questions/37821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }

Q: Is it just the iPhone simulator that is restricted to Intel-only Macs? I have read that the iPhone SDK (part of Xcode 3) is restricted to Macs with the Intel chipset. Does this restriction apply only to the simulator part of the SDK, or to the complete shebang? I have a PowerBook G4 running Leopard and would very much like to do dev on it rather than fork out for a new machine. It is also worth clarifying that I am interested in development for personal reasons, and therefore accept that I would need a certified platform to create a submission for the App Store.

A: As things have moved on since the original post on 3by9.com, here are the steps that I had to follow to get the environment working on my PowerBook G4. BTW, I would like to say that I realise this is not a supported environment, and I share this for purely pedagogic reasons.

* Download and install the iPhone SDK (final version)
* After the install finishes, navigate to the Packages directory in the mounted DMG
* Install all of the pkgs that start with iPhone
* Copy the contents of /Platforms to /Developer/Platforms (should be two folders starting with iPhone)
* Locate 'iPhone Simulator Architectures.xcspec' in /Developer/Platforms/iPhoneSimulator.platform/Developer/Library/Xcode/Specifications and open it in a text editor
* Change line 12 to: Name = "Standard (iPhone Simulator: i386 ppc)";
* Change line 16 to: RealArchitectures = ( i386, ppc );
* Add the following at line 40 onwards:

// PowerPC
{
    Type = Architecture;
    Identifier = ppc;
    Name = "PowerPC";
    Description = "32-bit PowerPC";
    PerArchBuildSettingName = "PowerPC";
    ByteOrder = big;
    ListInEnum = NO;
    SortNumber = 106;
},

* Save the file and start Xcode
* You should now see, under the New Project folder, the ability to create iPhone applications
* To get an app to work in the simulator (using the WhichWayIsUp example), open Edit Project Settings under the Project menu
* On the Build tab, change the Architectures to: Standard (iPhone Simulator: i386 ppc)
* Change Base SDK to Simulator - iPhone OS 2.0
* Build and Go should now build the app and run it in the simulator

A: The iPhone SDK is documented to require an Intel-based Mac. Even if some people have been able to get it to run on other hardware, that doesn't mean it will run correctly, that Apple will fix bugs you report, or that it is a supported environment.

A: "I have a PowerBook G4 running Leopard and would very much like to do dev on it" - not sure what sort of application you are developing, but if you jailbreak your iPhone, you can:

* develop applications using Ruby/Python/Java, which won't require compiling at all
* compile on the phone(!), as there is a GCC/toolchain install in Cydia - although I've no idea how long that'll take, or if you can simply take a regular iPhone SDK project, SSH it to the phone, and run xcodebuild

You should be able to compile iPhone applications from a PPC machine, as you can compile PPC applications from an Intel Mac and vice versa; there shouldn't be any reason you can't compile an ARM binary from PPC. Whether or not Apple include the necessary stuff with Xcode to allow this is a different matter. The steps that Ingmar posted seem to imply you can..?

A: If you actually want to run your binary on the device, not just the simulator, you need the advice from the following page: http://discussions.apple.com/thread.jspa?messageID=7958611 It involves a Perl script that does a bit of 'magic' to get the code signing to work on PowerPC.
Also, you need to install the Developer Disk Image from the SDK packages. When all is said and done, you can use a G4 to develop on the real device, and even the debugger works. But I think Instruments doesn't work.
{ "language": "en", "url": "https://stackoverflow.com/questions/37822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }

Q: Good reasons NOT to use a relational database? Can you please point to alternative data storage tools and give good reasons to use them instead of good old relational databases? In my opinion, most applications rarely use the full power of SQL; it would be interesting to see how to build an SQL-free application.

A: Try Prevayler: http://www.prevayler.org/wiki/ Prevayler is an alternative to an RDBMS. The site has more info.

A: Custom (hand-written) storage engine / potentially very high performance in the required use cases. http://www.hdfgroup.org/ If you have enormous data sets, instead of rolling your own, you might use HDF, the Hierarchical Data Format. http://en.wikipedia.org/wiki/Hierarchical_Data_Format: HDF supports several different data models, including multidimensional arrays, raster images, and tables. It's also hierarchical like a file system, but the data is stored in one magic binary file. HDF5 is a suite that makes possible the management of extremely large and complex data collections. Think petabytes of NASA/JPL remote-sensing data.

A: If you don't need ACID, you probably don't need the overhead of an RDBMS. So, determine whether you need that first. Most of the non-RDBMS answers provided here do not provide ACID.

A: G'day, one case that I can think of is when the data you are modelling cannot be easily represented in a relational database. One such example is the database used by mobile phone operators to monitor and control base stations for mobile telephone networks. In almost all of these cases, an OO DB is used, either a commercial product or a self-rolled system that allows hierarchies of objects. I've worked on a 3G monitoring application for a large company who will remain nameless, but whose logo is a red wine stain (-: , and they used such an OO DB to keep track of all the various attributes for individual cells within the network. Interrogation of such DBs is done using proprietary techniques that are, usually, completely free of SQL. HTH. cheers, Rob

A: Object databases are not relational databases. They can be really handy if you just want to stuff some objects in a database. They also support versioning and modifying classes for objects that already exist in the database. db4o is the first one that comes to mind.

A: In some cases (financial market data and process control, for example) you might need to use a real-time database rather than an RDBMS. See the wiki link.

A: There was a RAD tool called JADE written a few years ago that has a built-in OODBMS. Earlier incarnations of the DB engine also supported Digitalk Smalltalk. If you want to sample application building using a non-RDBMS paradigm, this might be a start. Other OODBMS products include Objectivity and GemStone (you will need to get VisualWorks Smalltalk to run the Smalltalk version, but there is also a Java version). There were also some open-source research projects in this space - EXODUS and its descendant SHORE come to mind. Sadly, the concept seemed to die a death, probably due to the lack of a clearly visible standard and relatively poor ad-hoc query capability compared to SQL-based RDBMS systems. An OODBMS is most suitable for applications with core data structures that are best represented as a graph of interconnected nodes. I used to say that the quintessential OODBMS application was a Multi-User Dungeon (MUD), where rooms would contain players' avatars and other objects.
A: Matt Sheppard's answer is great (mod up), but I would take these factors into account when thinking about a spindle:

* Structure: does it obviously break into pieces, or are you making tradeoffs?
* Usage: how will the data be analyzed/retrieved/grokked?
* Lifetime: how long is the data useful?
* Size: how much data is there?

One particular advantage of CSV files over RDBMSes is that they can be easy to condense and move around to practically any other machine. We do large data transfers, and everything's simple enough that we just use one big CSV file, which is easy to script using tools like rsync. To reduce repetition in big CSV files, you could use something like YAML. I'm not sure I'd store anything like JSON or XML, unless you had significant relationship requirements. As far as not-mentioned alternatives, don't discount Hadoop, which is an open-source implementation of MapReduce. This should work well if you have a TON of loosely structured data that needs to be analyzed, and you want to be in a scenario where you can just add 10 more machines to handle data processing. For example, I started trying to analyze performance data that was essentially all timing numbers of different functions logged across around 20 machines. After trying to stick everything in an RDBMS, I realized that I really don't need to query the data again once I've aggregated it, and it's only useful in its aggregated format to me. So, I keep the log files around, compressed, and leave the aggregated data in a DB. Note I'm more used to thinking in "big" sizes.

A: Plain text files in a filesystem

* Very simple to create and edit
* Easy for users to manipulate with simple tools (i.e. text editors, grep etc)
* Efficient storage of binary documents

XML or JSON files on disk

* As above, but with a bit more ability to validate the structure

Spreadsheet / CSV file

* Very easy model for business users to understand

Subversion (or a similar disk-based version control system)

* Very good support for versioning of data

Berkeley DB (basically, a disk-based hashtable)

* Very simple conceptually (just untyped key/value)
* Quite fast
* No administration overhead
* Supports transactions, I believe

Amazon's SimpleDB

* Much like Berkeley DB, I believe, but hosted

Google's App Engine Datastore

* Hosted and highly scalable
* Per-document key-value storage (i.e. flexible data model)

CouchDB

* Document focus
* Simple storage of semi-structured / document-based data

Native language collections (stored in memory or serialised on disk)

* Very tight language integration

Custom (hand-written) storage engine

* Potentially very high performance in the required use cases

I can't claim to know much about them, but you might also like to look into object database systems.

A: The filesystem's pretty handy for storing binary data, which never works amazingly well in relational databases.

A: You can go a long way just using files stored in the file system. RDBMSs are getting better at handling blobs, but this can be a natural way to handle image data and the like, particularly if the queries are simple (enumerating and selecting individual items). Other things that don't fit very well in an RDBMS are hierarchical data structures, and I'm guessing geospatial data and 3D models aren't that easy to work with either. Services like Amazon S3 provide simpler storage models (key->value) that don't support SQL. Scalability is the key there.
Excel files can be useful too, particularly if users need to be able to manipulate the data in a familiar environment and building a full application to do that isn't feasible.

A: There are a large number of ways to store data - even "relational database" covers a range of alternatives, from a simple library of code that manipulates a local file (or files) as if it were a relational database on a single-user basis, through file-based systems that can handle multiple users, to a generous selection of serious "server"-based systems. We use XML files a lot - you get well-structured data, nice tools for querying it, and the ability to do edits if appropriate; something that's human-readable; and you don't then have to worry about the db engine working (or the workings of the db engine). This works well for stuff that's essentially read-only (in our case more often than not generated from a db elsewhere) and also for single-user systems where you can just load the data in and save it out as required - but you're creating opportunities for problems if you want multi-user editing, at least of a single file. For us that's about it - we're either going to use something that will do SQL (MS offer a set of tools that run from a .DLL to do single-user stuff all the way through to enterprise server, and they all speak the same SQL (with limitations at the lower end)) or we're going to use XML as a format because (for us) the verbosity is seldom an issue. We don't currently have to manipulate binary data in our apps, so that question doesn't arise. Murph

A: One might want to consider the use of an LDAP server in place of a traditional SQL database if the application data is heavily key/value oriented and hierarchical in nature.

A: BTree files are often much faster than relational databases. SQLite contains within it a BTree library which is in the public domain (as in genuinely "public domain", not using the term loosely). Frankly though, if I wanted a multi-user system I would need a lot of persuading not to use a decent server relational database.

A: Full-text databases, which can be queried with proximity operators such as "within 10 words of", etc. Relational databases are an ideal business tool for many purposes - easy enough to understand and design, fast enough, adequate even when they aren't designed and optimized by a genius who could "use the full power", etc. But some business purposes require full-text indexing, which relational engines either don't provide or tack on as an afterthought. In particular, the legal and medical fields have large swaths of unstructured text to store and wade through.

A: Also:

* Embedded scenarios - where it is usually required to use something smaller than a full-fledged RDBMS. db4o is an ODB that can be easily used in such cases.
* Rapid or proof-of-concept development - where you wish to focus on the business and not worry about the persistence layer.

A: K.I.S.S: Keep It Small and Simple

A: The CAP theorem explains it succinctly. SQL mainly provides "strong consistency: all clients see the same view, even in the presence of updates".

A: I would offer an RDBMS :) If you do not want the trouble of setup/administration, go for SQLite: a built-in RDBMS with full SQL support. It even allows you to store any type of data in any column. Its main advantage over, for example, a log file: if you have a huge one, how are you going to search in it? With a SQL engine you just create an index and speed up operations dramatically. About full-text search: SQLite has modules for full-text search too.
Just enjoy a nice standard interface to your data :)

A: One good reason not to use a relational database would be when you have a massive data set and want to do massively parallel and distributed processing on the data. The Google web index would be a perfect example of such a case. Hadoop also has an implementation of the Google File System called the Hadoop Distributed File System.

A: I would strongly recommend Lua as an alternative to SQLite-style data storage, because:

* The language was designed as a data-description language to begin with
* The syntax is human-readable (XML is not)
* One can compile Lua chunks to binary for added performance

This is the "native language collection" option of the accepted answer. If you're using C/C++ at the application level, it is perfectly reasonable to throw in the Lua engine (100 kB of binary) just for the sake of reading configs/data or writing them out.
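To make the SQLite suggestion above concrete, a minimal sketch using the System.Data.SQLite ADO.NET provider; the database file and table here are hypothetical, and the provider itself is a third-party download, not part of the framework:

using System;
using System.Data.SQLite;

class SQLiteDemo
{
    static void Main()
    {
        using (SQLiteConnection conn = new SQLiteConnection("Data Source=app.db"))
        {
            conn.Open();
            using (SQLiteCommand cmd = conn.CreateCommand())
            {
                // create, populate and query a throwaway table
                cmd.CommandText = "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO notes (body) VALUES ('hello')";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "SELECT id, body FROM notes";
                using (SQLiteDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader.GetInt64(0), reader.GetString(1));
                }
            }
        }
    }
}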
{ "language": "en", "url": "https://stackoverflow.com/questions/37823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "139" }

Q: How do I implement a chromeless window with WPF? I want to show a chromeless modal window with a close button in the upper right corner. Is this possible?

A: You'll pretty much have to roll your own Close button, but you can hide the window chrome completely using the WindowStyle attribute, like this:

<Window WindowStyle="None">

That will still have a resize border. If you want to make the window non-resizable then add ResizeMode="NoResize" to the declaration.

A: Check out this blog post on kirupa.

A:

<Window x:Class="WpfApplication1.Window1"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Window1" Height="300" Width="300"
        WindowStyle="None" ResizeMode="NoResize">
    <Button HorizontalAlignment="Right" Name="button1" VerticalAlignment="Top">Close</Button>
</Window>
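To complete the last example, the rolled-your-own Close button still needs a click handler in code-behind. A minimal sketch; the class and button names simply match the hypothetical WpfApplication1 XAML above:

using System.Windows;

namespace WpfApplication1
{
    public partial class Window1 : Window
    {
        public Window1()
        {
            InitializeComponent();
            // wire the Close button to actually close the (modal) window
            button1.Click += delegate { this.Close(); };
        }
    }
}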
{ "language": "en", "url": "https://stackoverflow.com/questions/37830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }

Q: UI and event testing So I know that unit testing is a must. I get the idea that TDD is the way to go when adding new modules, even if, in practice, I don't actually do it. A bit like commenting code, really. The real thing is, I'm struggling to get my head around how to unit-test the UI and, more generally, objects that generate events: user controls, asynchronous database operations, etc. So much of my code relates to UI events that I can't quite see how to even start the unit testing. There must be some primers and starter docs out there? Some hints and tips? I'm generally working in C# (2.0 and 3.5), but I'm not sure that this is strictly relevant to the question.

A: The thing to remember is that unit testing is about testing the units of code you write. Your unit tests shouldn't test that clicking a button raises an event, but that the code being executed by that click event does what it's supposed to. What you're really wanting to do is test that the underlying code does what it should, so that your UI layers can execute that code with confidence.

A: Read this if you're struggling with UI testing. Manually test UI stuff where the benefit of automating it is minimal relative to the cost. Test everything under the UI skin ruthlessly. Use Humble Dialog, MVC or variants to keep logic and UI distinct and loosely coupled.

A: You should separate logic and presentation. Using the MVP (Model-View-Presenter) / MVC (Model-View-Controller) patterns, you can unit test your logic without relying on UI events. Also, you can use the White framework to simulate user input. I would highly recommend you visit Microsoft's Patterns & Practices developer center; especially take a look at the Composite Application Block and Prism - you can get a lot of information on test-driven design.

A: The parts of your application that talk to the outside world (i.e. UI, database, etc.) are always a problem when unit testing. The way around this is actually not to test those layers but to make them as thin as possible. For the UI you can use a humble dialog or a view that doesn't do anything worth testing, and then put all the logic in a controller or presenter class. You can then use a mocking framework, or write your own mock objects, to make fake versions of the views to test the logic in the presenters or controllers. On the database side you can do something similar. Testing events is not impossible. You can, for example, subscribe an anonymous method to the event that throws an exception if the event is raised, or that counts the number of times the event is raised.
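A minimal sketch of that last idea - counting how many times an event is raised - written NUnit-style. The presenter class here is hypothetical and stands in for whatever logic class sits behind your view:

using System;
using NUnit.Framework;

// hypothetical presenter under test -- in a real project this lives in production code
public class FooPresenter
{
    public event EventHandler Saved;

    public void Save()
    {
        // ... do the actual work, then announce it
        if (Saved != null) Saved(this, EventArgs.Empty);
    }
}

[TestFixture]
public class FooPresenterTests
{
    [Test]
    public void Save_RaisesSavedExactlyOnce()
    {
        FooPresenter presenter = new FooPresenter();
        int raised = 0;
        presenter.Saved += delegate { raised++; }; // anonymous handler just counts invocations

        presenter.Save();

        Assert.AreEqual(1, raised);
    }
}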
{ "language": "en", "url": "https://stackoverflow.com/questions/37832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }

Q: What exactly is WPF? I have seen lots of questions recently about WPF...

* What is it?
* What does it stand for?
* How can I begin programming WPF?

A: WPF is the next frontier in Windows UIs.

* Built on top of DirectX, it opens up hardware-acceleration support for your .NET 3.0+ user interfaces.
* Emphasis on vector graphics - UIs scale and render better.
* Composable UIs. You could nest animated buttons in combo boxes... the world's your oyster.
* It is a rewrite with only minimal core components written in unmanaged code, versus the GDI/User DLL-based WinForms approach, which is a thin managed layer over largely unmanaged code.
* Declarative approach to UI programming: user interfaces are largely specified in an XML variant called XAML (eXtensible Application Markup Language), pronounced "Zammel". This opens up WPF to designer folks, who can use specialized tools to craft UIs that the developers can then code up. No translation losses between wireframes and the final product.
* MS 'allegedly' will not provide any future updates to WinForms; they are heavily invested in WPF as the way forward.
* Oh yeah, before I forget: works best on Vista :)

You can get either Adam Nathan's WPF Unleashed book or Chris Sells' Programming WPF... those seem to be the way to go. I just read the first chapter of Adam's (lead for WPF at MS) book. Hence the WPF praise fountains :)

A: WPF is a new technology that will supersede Windows Forms. WPF stands for Windows Presentation Foundation. Here are some useful topics on SO:

* What WPF books would you recommend
* What real-world WPF applications are out there

From my practice I can say that WPF is a truly amazing technology; however, it takes some time to get used to because it's totally different from WinForms. I would recommend you take a look at this demo.

A: Take a look here http://windowsclient.net/ and here Windows Presentation Foundation (WPF). Basically, WPF was created to make Windows GUIs easier to design: because of the use of XAML, designers can work on the design and programmers on the underlying code.

A: WPF is the Windows Presentation Foundation. It is Microsoft's newest API for building applications with user interfaces (UIs), working for both standalone and web-based applications. Unsurprisingly, there is a very detailed but not all that helpful Windows Presentation Foundation page at Wikipedia. The WPF Getting Started page at the Microsoft MSDN site is probably a better place to start.

A: It's the new Windows GUI system. I don't believe its aim is to make development easier per se, but more to address fundamental issues with WinForms, such as transparency and scaling, neither of which WinForms can effectively address. Furthermore, it seeks to address the "one resolution only" paradigm of WinForms by mapping sizes to real pixel sizes and making flow layout easier and more fundamental. It's also based on an XML derivative, making it easier to change the UI and forcing a separation of the UI and the core code (although technically you can still badly hack it together in this manner). This separation also drives a desire to be able to divide the work into two camps: the designers taking charge of the XAML and layout, and the programmers taking care of developing the objects used in the XAML.

A: Check out Eric Sink's Twelve Days of WPF 3D.

A: Windows Presentation Foundation. It's basically Microsoft's latest attempt to make development easier and provide a whole heap of nice functionality out of the box. I'm not sure where to start, but googling "WPF 101" should throw up a few useful links.
A: WPF is part of the .NET 3.0 stack. It's Microsoft's next-generation graphical user interface system. All the information you need can be found on Wikipedia and MSDN's WPF site. To get started programming, I guess check out the essential downloads on Windows Client.
{ "language": "en", "url": "https://stackoverflow.com/questions/37843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }

Q: Application Control Scripts on Unix I'm looking for some software that allows me to control a server-based application; that is, there are a bunch of interdependent processes that I'd like to be able to start up, shut down and monitor in a controlled manner. I've come across programs like Autosys, but that's expensive and very much over the top for what I want. I've also seen AppCtl, but that seems not to handle dependencies. Maybe it would be possible to repurpose the init scripts? Oh, and as an added complication, it should be able to run on a Solaris 10 or Linux box without installing any new binaries. On the boxes I've seen recently, that means shell scripts and Perl but not Python. Do any such programs exist, or do I need to dust off my copy of Programming Perl?

A: G'day, have a look in /etc/init.d for something similar and use that as a basis. See also crontab, or maybe at, to run things on a regular basis. cheers, Rob

A: Try supervise, which is what qmail uses to keep track of its services/startup applications: http://cr.yp.to/daemontools/supervise.html

A: Solaris-only as far as I know, but wouldn't Solaris 10's SMF do what you want?

A: Try GNU Batch. It looks like it supports what you need. http://www.gnu.org/software/gnubatch/
{ "language": "en", "url": "https://stackoverflow.com/questions/37851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }

Q: Roaming settings with LocalFileSettingsProvider On my way through whipping up a Windows Forms application, I thought it might be a good idea to use the settings file to store miscellaneous application options (instead of the registry) and user parameters (window positions, column orderings, etc.). Out of the box, quick and simple, or so I thought. All works as per MSDN using the default SettingsProvider (LocalFileSettingsProvider), but I do have concerns about where it gets stored, and hopefully somebody can provide a solution. Essentially the file ends up in the local application data, in an unsavoury sub-directory structure (AppData / Local / company / namespace_StrongName_gibberish / version). Is there a way to tell the LocalFileSettingsProvider to store the configuration file so the data will roam, and perhaps in a less crazy folder structure? (Or maybe an implementation of SettingsProvider that already does this?)

A: You can use the SettingsManageabilityAttribute to store settings in the roaming directory:

[SettingsManageability(SettingsManageability.Roaming)]
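Put together, a settings class might look like the following minimal sketch; the property name and default value are mine, for illustration. Note that this only moves the file from AppData\Local to AppData\Roaming: the company/strong-name sub-directory gibberish remains.

using System.Configuration;
using System.Drawing;

public sealed class UiSettings : ApplicationSettingsBase
{
    [UserScopedSetting]
    [SettingsManageability(SettingsManageability.Roaming)]
    [DefaultSettingValue("10, 10")]
    public Point WindowPosition
    {
        get { return (Point)this["WindowPosition"]; }
        set { this["WindowPosition"] = value; }
    }
}

Usage is then new UiSettings(), assign WindowPosition, and call Save() when the form closes.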
{ "language": "en", "url": "https://stackoverflow.com/questions/37871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }

Q: How can you easily reorder columns in LINQ to SQL designer? When designing LINQ classes using the LINQ to SQL designer, I've sometimes needed to reorder the classes for the purposes of having the resultant columns in a DataGridView appear in a different order. Unfortunately this seems to be exceedingly difficult; you need to cut and paste properties about, or delete them and re-insert them manually. I know you can reorder columns fairly easily in a DataGridView; however, that would result in a lot of hardcoding, and I want the designer to match up with the grid. Does anyone know of an easier way of achieving this, or is cutting/pasting the only available method? I tried manually editing the .designer.cs file, but reordering properties there doesn't appear to do anything! Edit: Just to make it clear - I want to reorder what's in the LINQ to SQL designer, not what's in the table. I haven't made an error in ordering requiring a reversion to the original table layout; rather, I have a table which I want to possess a different ordering in Visual Studio than in SQL Server.

A: Open the [DataClasses].dbml file in your favorite XML editor and reorder the [Column] elements for the table. Save, and reopen (or reload) the designer in Visual Studio. The order of the columns displayed in the designer will be fixed.

A: Using LINQ to SQL, you can have columns in the DataGridView appear differently than in the original table by:

* In your LINQ query, extracting the columns that you want, in the order that you want, and storing them in a var. The autogenerated columns should then show them in that order in the DataGridView (as shown in the fragment after this list).
* Using template columns in your DataGridView.
* Not using drag-and-drop on the LINQ to SQL design surface to create your entities; rather, creating them by hand and associating them with the database table using table and column properties.

As far as I know, there is no drag-and-drop column reorder in the designer itself.

A: If you are in the scenario where you have reordered the columns in the database, and you now want this new order to be reflected in the designer, I think you have to delete the table from the designer and then put it in again. Or, if you use SqlMetal to generate your LINQ to SQL classes, rerun it on your database and use the newly generated file.
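For reference, this is the kind of fragment the first answer has you reordering inside the .dbml file. It is a hypothetical sketch: the element layout follows the usual DBML shape, but the table, type and column names are invented, so check against your own file:

<Table Name="dbo.Products" Member="Products">
  <Type Name="Product">
    <!-- reorder these Column elements to change the designer (and grid) order -->
    <Column Name="ProductId" Type="System.Int32" IsPrimaryKey="true" CanBeNull="false" />
    <Column Name="Name" Type="System.String" CanBeNull="false" />
    <Column Name="Price" Type="System.Decimal" CanBeNull="true" />
  </Type>
</Table>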
{ "language": "en", "url": "https://stackoverflow.com/questions/37882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }

Q: How do you update your web application on the server? I am aware of Capistrano, but it is a bit too heavyweight for me. Personally, I set up two Mercurial repositories: one on the production server and another on my local dev machine. Regularly, when a new feature is ready, I push changes from the repository on my local machine to the repository on the server, then update on the server. This is a pretty simple and quick way to keep files in sync on several computers, but it does not help to update databases. What is your solution to the problem?

A: I used to use git push to publish to my web server, but lately I've just been using rsync. I try to make my site as agnostic as possible about where it's running (using relative paths, etc), and so far it's worked pretty well. The only challenge is keeping databases in sync, and for that I usually use the production database as the master and make regular backups and imports into my testing database.

A: Or Fabric, if you prefer Python.

A: What's heavyweight about Capistrano? If you want to sync files, then sure, rsync is great. But if you're then going to need to do DB updates, maybe Cap isn't so bad?

A: @Andrew: to use git push to deploy your site, you will need to first set up a remote server in your .git/config file to push to. Then you need to configure a hook that will basically perform a git reset --hard to copy the code you just pushed to the repository into the working directory. I know this is a little vague, but I actually deleted the server-side .git folder once I switched to rsync, so I don't have the exact scripts that I used to make the magic happen. That might be a good candidate for a full question though, so you might get more responses that way. Edit: I know it's been a while, but I eventually found what I was using again: Deploy a project using Git push

A: I'm assuming you're speaking of Ruby on Rails. Check out the HowTo wiki: http://wiki.rubyonrails.com/rails/pages/Howtos#deployment
{ "language": "en", "url": "https://stackoverflow.com/questions/37887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }

Q: How do you use WebServiceMessageDrivenBean in Spring-WS? How do you use the org.springframework.ws.transport.jms.WebServiceMessageDrivenBean class from the Java Spring Framework Spring-WS project? There is very little documentation or examples available on the web.

A: From what I gather from reading the javadocs, it looks like this allows a Spring WebServiceMessageReceiver to be invoked using a JMS client instead of a web services client. Hopefully that's right, because the rest of this is based on that assumption. The basics of it should match how you create a regular Spring message-driven bean. There is a little bit of documentation on how to do that in the Spring Reference Manual. Also see the AbstractEnterpriseBean Javadoc for some additional information about how the Spring context is retrieved. The extra configuration required for a WebServiceMessageDrivenBean appears to be a ConnectionFactory, a WebServiceMessageFactory, and your WebServiceMessageReceiver. These need to use the bean names specified in the Javadoc for the WebServiceMessageDrivenBean: "connectionFactory", "messageFactory", and "messageReceiver" respectively.

A: Using the WebServiceMessageDrivenBean is very similar to the Spring support for message-driven beans (MDBs). First you create an MDB:

public class HelloWorldMessageDrivenBean extends WebServiceMessageDrivenBean
{
    private static final long serialVersionUID = -2905491432314736668L;
}

That is it as far as the MDB goes! Next you configure the MDB by adding the following to the MDB definition in ejb-jar.xml:

<env-entry>
    <description></description>
    <env-entry-name>ejb/BeanFactoryPath</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>
        application-context.xml
    </env-entry-value>
</env-entry>

This tells the Spring MDB support classes where to pick up your Spring configuration file. You can now configure your endpoints either in the application-context.xml file or, in addition, using the annotation support.
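For completeness, the three beans named in the first answer would be declared in application-context.xml along these lines. This is a sketch only: the concrete classes shown are plausible Spring-WS/JNDI choices, not taken from the original answers, so verify them against your Spring-WS version:

<!-- JMS ConnectionFactory looked up from the container's JNDI tree (JNDI name assumed) -->
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="jms/ConnectionFactory"/>
</bean>

<!-- builds SOAP messages from the incoming JMS payloads -->
<bean id="messageFactory" class="org.springframework.ws.soap.saaj.SaajSoapMessageFactory"/>

<!-- dispatches incoming messages to your endpoints -->
<bean id="messageReceiver" class="org.springframework.ws.soap.server.SoapMessageDispatcher"/>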
{ "language": "en", "url": "https://stackoverflow.com/questions/37912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }

Q: How do you display a dialog from a hidden window application? I have developed a COM component (DLL) that implements an Edit() method displaying a WTL modal dialog. The complete interface to this COM component corresponds to a software standard used in the chemical process industry (CAPE-OPEN), and as a result this COM component is supposed to be usable by a range of 3rd-party executables that are out of my control. My component works as expected in many of these EXEs, but for one in particular the Edit() method just hangs without the dialog appearing. However, if I make a call to ::MessageBox() immediately before DoModal(), the dialog displays and behaves correctly after first showing the MessageBox. I have a suspicion that the problem may be something to do with this particular EXE running as a 'hidden window application'. I have tried using both NULL and the return value from ::GetConsoleWindow() as the dialog's parent; neither has worked. The dialog itself is an ATL/WTL CPropertySheetImpl. The parent application (EXE) in question is out of my control, as it is developed by a (mildly hostile) 3rd party. I do know that I can successfully call ::MessageBox() or display the standard Windows file dialog from my COM component, and that after doing so I am then able to display my custom dialog. I'm just unable to display my custom dialog without first displaying a 'standard' dialog. Can anyone suggest how I might get it to display the dialog without first showing an unnecessary MessageBox? I know it is possible, because I've seen this EXE display the dialogs from other COM components corresponding to the same interface.

A: Are you using a parent for the dialog? E.g.:

MyDialog dialog(pParent);
dialog.DoModal();

If you are, try removing the parent, especially if the parent is the desktop window.

A: Depending on how the "hidden window" application works, it might not be able to display a window. For example, services don't have a "main message loop", and thus are not able to process messages sent to windows in the process. I.e., the application displaying the window should have something like this in WinMain:

while(GetMessage(&msg, NULL, 0, 0))
{
    if(!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

A: This isn't supposed to be reliable, but try ::GetDesktopWindow() as the parent (it returns an HWND). Be warned: if your app crashes, it will bring down the desktop with it. But I'd be interested to see if it works.

A: It turns out I was mistaken:

* If I create my dialog with a NULL parent, then it is not displayed, and it hangs the parent application.
* However, if I create my dialog with ::GetConsoleWindow() as the parent, then the dialog is displayed; it just fooled me because it was displayed behind the window of the application that launched the parent application.

So now I just have to find out how to bring my dialog to the front. Thanks for the answers ;-)

A: Whatever you do, do not use the desktop window as the parent for your modal dialog box. See here for an explanation: http://blogs.msdn.com/b/oldnewthing/archive/2004/02/24/79212.aspx To quote the rationale: "Put this together: if the owner of a modal dialog is the desktop, then the desktop becomes disabled, which disables all of its descendants. In other words, it disables every window in the system. Even the one you're trying to display!"
{ "language": "en", "url": "https://stackoverflow.com/questions/37920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }

Q: How to test a Java application for performance bottlenecks? I am reviewing a big Java application to see if there are any performance bottlenecks. The real problem is that I cannot pinpoint the performance issues to any single module; the whole application is slow as such. Is there some tool/technique I can use to help me out with this?

A: Try using a profiler on your running code. It should help you identify the bottlenecks. Try JProfiler or the NetBeans profiler.

A: I'm often happy enough using java -Xprof. This gives you a sorted list of the functions your code spends most of its time in.

A: If you are running on Java 6, you can use the supplied monitoring tools.

A: For testing/development purposes, you can download Oracle JRockit Mission Control for free from this site (requires login, but accounts can be set up with any email address). Docs here. It will allow you to find hotspots, memory leaks and much more.

A: YourKit is an excellent Java profiler (not free).

A: As we see from "How can I profile C++ code running in Linux?", the most statistically significant approach is to use a stack profiler. Well, Java runs in the JVM, so a stack profiler aimed at C code won't be useful for us (it'll get the JVM stuff, not your code). Fortunately, Java has jstack! http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jstack.html It'll give you a bunch of threads, like the garbage collector's. Don't worry about those; just look at where your own threads are.
{ "language": "en", "url": "https://stackoverflow.com/questions/37929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }

Q: What's the use of value types in .Net? The official guidelines suggest that there can be very few practical uses for these. Does anyone have examples of where they've put them to good use?

A: Au contraire... you'll find C/C++ people flocking to structs, a.k.a. value types. An example would be data packets. If you have a large number of data packets to transfer/transmit, you'd use value structs to model them. Reason: turning something into a class adds object-header overhead (approximately 8-16 bytes, I forget exactly) in addition to the instance data. In scenarios where this is unacceptable, value types are your safest bet. Another use would be situations where you need value-type semantics: once you create-initialize an object, it is readonly/immutable and can be passed around to n clients.

A: For the most part, it's good to emulate the behaviour of the framework. Many elementary data types such as ints are value types. If you have types that have similar properties, use value types. For example, when writing a Complex data type or a BigInteger, value types are the logical solution. The same goes for the other cases where the framework uses value types: DateTime, Point, etc. When in doubt, use a reference type instead.

A: Enums are first-class citizens of the .NET world. As for structures, I have found that in most cases classes can be used; however, for memory-intensive scenarios, consider using structures. As a practical example, I used structures as the data structures for OSCAR (ICQ) protocol primitives.

A: I tend to use enums to avoid magic numbers. This could also be done with consts, I guess, but an enum allows you to group them up, i.e.:

enum MyWeirdType { TypeA, TypeB, TypeC };

switch (value)
{
    case MyWeirdType.TypeA:
        ...
}

A: You should use a value type whenever:

* The use of a class isn't necessary (no need for inheritance)
* You want to make sure there's no need to initialize the type
* You have a reason to want the type to be allocated in stack space
* You want the type to be a complete, independent entity on assignment, instead of a "link" to the instance as it is with reference types

A: Exactly what most other people use them for: fast and light data/value access. They are also ideal for grouping properties into an object (where it makes sense, of course). For example:

* Display/data value differences, such as string pairs of image names and a path for a control (or whatever). You want the path for the work under the hood, but the name to be visible to the user.
* Obvious grouping of values for the metrics of objects. We all know Size etc., but there may be plenty of situations where the base "metric" types are not enough for you.
* "Typing" of enum values: being more than a fixed enum, but less than a full-blown class (this has already been mentioned; I just want to advocate it).

It's important to remember the differences between value and reference types. Used properly, they can really improve the efficiency of your code as well as make the object model more robust.

A: Value types, specifically structs and enums, have proper uses in object-oriented programming. Enums are, as aku said, first-class citizens in .NET, which can be used for all sorts of things from colors to DialogBox options to various types of flags. Structs, as far as my experience goes, are great as Data Transfer Objects: logic-less containers of data, especially when they consist mostly of primitive types.
And of course, primitive types are all value types, which ultimately derive from System.Object (unlike in Java, where primitive types aren't related to structs and need some sort of wrapper).

A: Actually, prior to .NET 3.5 SP1 there was a performance issue with the intensive use of value types, as mentioned in Vance Morrison's blog. As far as I can see, the vast majority of the time you should be using classes, and the JITter should guarantee a good level of performance. structs have "value type semantics", so they will be passed by value rather than by reference. We can see this difference in behaviour in the following example:

using System;

namespace StructClassTest
{
    struct A
    {
        public string Foobar { get; set; }
    }

    class B
    {
        public string Foobar { get; set; }
    }

    class Program
    {
        static void Main()
        {
            A a = new A();
            a.Foobar = "hi";
            B b = new B();
            b.Foobar = "hi";

            StructTest(a);
            ClassTest(b);

            Console.WriteLine("a.Foobar={0}, b.Foobar={1}", a.Foobar, b.Foobar);
            Console.ReadKey(true);
        }

        static void StructTest(A a)
        {
            a.Foobar = "hello";
        }

        static void ClassTest(B b)
        {
            b.Foobar = "hello";
        }
    }
}

The struct will be passed by value, so StructTest() will get its own copy of the A struct, and when it changes a.Foobar it only changes the Foobar of that copy. ClassTest() will receive a reference to b, and thus the .Foobar property of b will be changed. Thus we'd obtain the following output:

a.Foobar=hi, b.Foobar=hello

So if you desire value-type semantics, then that would be another reason to declare something as a struct. Note, interestingly, that the DateTime type in .NET is a value type, so the .NET architects decided that it was appropriate to assign it as such; it'd be interesting to determine why they did that :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/37931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }

Q: XML in C# - Read from Resources, Manipulate and Display I'd like to do the following and can't find an elegant way:

* Read an XML template into a System.Xml.XmlDocument
* Populate it with data from my UI
* Transform it with an XSLT I've written
* Apply a CSS stylesheet
* Render it to a WebBrowser control

I'm currently reading it from a file on disk, populating it, then saving it back out to disk. I reference the XSLT in the template, and the CSS in the XSLT, and then use the WebBrowser.Navigate([filename]) method to display the XML file. Obviously, when I come to deploy this app, it'll break horribly, as the file won't exist on disk and I won't be able to reference the XSLT and CSS files in the XML file, as they'll be resources. I'm planning to include the template as a resource, but can't find a neat way to proceed from there. Any help much appreciated.

A: Quick question: why do you need an XML template? If you already know the schema beforehand, then simply generate the complete XML in your code. There shouldn't be a need to load a template file.

A: Thanks for the link Keith, I'll have a look at that out of interest, as LINQ is on my list of things to learn. Unfortunately, I need to target .NET 2.0 for this app, so I think (please correct me if I'm wrong!) that LINQ is out of the question. I've now included the CSS in the header of the XSLT, and I've found a way to use a System.Xml.Xsl.XslCompiledTransform object to transform the XML in memory. I'm currently using the WebBrowser.DocumentText property to pass the formatted XML into the WebBrowser component, and this seems to work. I can't help thinking that this isn't the best way of doing things, so any comments on better ways would be appreciated. In particular, if I were using LINQ, would I need a schema to bind to, and also, should I have a schema full stop? I'm not great with XML, but I like Vaibhav's idea of generating straight from a schema rather than using a template. I'm just not sure where to start, so any pointers appreciated!

A: Check out LINQ to XML - it's a really good way to write and read XML-based data, and easier than the System.Xml.XmlDocument mechanisms. Given that you are supplying the XSLT and the CSS, why not build the page in XHTML and inline the CSS? Alternatively, just add the XSLT and CSS files as content in your installer. Compiled help actually does something like what you're describing - you can copy a link from a CHM file and visit it with a normal browser. I suppose you could embed your display resources in a help file, depending on how they're used.

A: Because you are using the WebBrowser control and not WPF, you must depend on the disk for loading the CSS file you mention in step 4. The only way around this is to use a tool like Resource Hacker to load "unmanaged" resources and use the res:// protocol (http://msdn.microsoft.com/en-us/library/aa767740(VS.85).aspx) for some real Microsoft 1990s nostalgia. You will still need to get your data into HTML elements.
The code sample that follows might help to answer something:

void WireUpBrowserEvents()
{
    HtmlElement table = this._browser.Document.GetElementById("UnitFormsTable");
    if (table != null)
    {
        HtmlElementCollection thead = table.GetElementsByTagName("thead");
        if ((thead != null) && (thead.Count == 1))
        {
            HtmlElementCollection links = thead[0].GetElementsByTagName("a");
            if ((links != null) && (links.Count > 0))
            {
                foreach (HtmlElement a in links)
                {
                    a.Click += new HtmlElementEventHandler(XslSort_Click);
                }
            }
        }
    }
}

void XslSort_Click(object sender, HtmlElementEventArgs e)
{
    e.ReturnValue = false;
    if (this._xslSortWorker.IsBusy) return;
    if (sender is HtmlElement)
    {
        HtmlElement a = sender as HtmlElement;
        this._browser.Hide();
        this._browserMessage.Visible = true;
        this._browserMessage.Refresh();
        this._xslSortWorker.RunWorkerAsync(a.Id);
    }
}

You may already be aware that HtmlElement and HtmlElementCollection are in the System.Windows.Forms namespace. These remarks may not be helpful, but I tried :)
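Tying the in-memory approach from the earlier answer together, here is a minimal sketch: load the XML template and the XSLT from embedded resources, transform entirely in memory, and hand the result to the WebBrowser via DocumentText. The resource names are hypothetical and must match your project's default namespace:

using System.IO;
using System.Reflection;
using System.Xml;
using System.Xml.Xsl;

static string TransformTemplate()
{
    Assembly asm = Assembly.GetExecutingAssembly();

    // load the XSLT from an embedded resource (resource name is an assumption)
    XslCompiledTransform xslt = new XslCompiledTransform();
    using (Stream s = asm.GetManifestResourceStream("MyApp.Report.xslt"))
    using (XmlReader r = XmlReader.Create(s))
        xslt.Load(r);

    // load the XML template; populate it with UI data before transforming
    XmlDocument doc = new XmlDocument();
    using (Stream s = asm.GetManifestResourceStream("MyApp.Template.xml"))
        doc.Load(s);

    // transform in memory, no disk round-trip
    using (StringWriter writer = new StringWriter())
    {
        xslt.Transform(doc, null, writer);
        return writer.ToString(); // assign this to webBrowser.DocumentText
    }
}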
{ "language": "en", "url": "https://stackoverflow.com/questions/37932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }

Q: Handling XSD Dataset ConstraintExceptions Does anyone have any tips for dealing with ConstraintExceptions thrown by XSD datasets? This is the exception with the cryptic message:

System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

A: A couple of tips that I've found lately.

* It's much better to use the TableAdapter FillByDataXXXX() methods instead of the GetDataByXXXX() methods, because the DataTable passed into the Fill method can be interrogated for clues:
  * DataTable.GetErrors() returns an array of DataRow instances in error
  * DataRow.RowError contains a description of the row error
  * DataRow.GetColumnsInError() returns an array of DataColumn instances in error
* Recently, I wrapped up some interrogation code into a subclass of ConstraintException that's turned out to be a useful starting point for debugging.

C# example usage:

Example.DataSet.fooDataTable table = new DataSet.fooDataTable();

try
{
    tableAdapter.Fill(table);
}
catch (ConstraintException ex)
{
    // pass the DataTable to DetailedConstraintException to get a more detailed Message property
    throw new DetailedConstraintException("error filling table", table, ex);
}

Output:

DetailedConstraintException : table fill failed
Errors reported for ConstraintExceptionHelper.DataSet+fooDataTable [foo]
Columns in error: [1]
    [PRODUCT_ID] - total rows affected: 1085
Row errors: [4]
    [Column 'PRODUCT_ID' is constrained to be unique. Value '1' is already present.] - total rows affected: 1009
    [Column 'PRODUCT_ID' is constrained to be unique. Value '2' is already present.] - total rows affected: 20
    [Column 'PRODUCT_ID' is constrained to be unique. Value '4' is already present.] - total rows affected: 34
    [Column 'PRODUCT_ID' is constrained to be unique. Value '6' is already present.] - total rows affected: 22
----> System.Data.ConstraintException : Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

I don't know if this is too much code to include in a Stack Overflow answer, but here's the C# class in full. Disclaimer: this works for me; please feel free to use/modify as appropriate.
using System;
using System.Collections.Generic;
using System.Text;
using System.Data;

namespace ConstraintExceptionHelper
{
    /// <summary>
    /// Subclass of ConstraintException that explains row and column errors in the Message property
    /// </summary>
    public class DetailedConstraintException : ConstraintException
    {
        private const int InitialCountValue = 1;

        /// <summary>
        /// Initialises a new instance of DetailedConstraintException with the specified string and DataTable
        /// </summary>
        /// <param name="message">exception message</param>
        /// <param name="erroredTable">DataTable in error</param>
        public DetailedConstraintException(string message, DataTable erroredTable)
            : base(message)
        {
            ErroredTable = erroredTable;
        }

        /// <summary>
        /// Initialises a new instance of DetailedConstraintException with the specified string, DataTable and inner Exception
        /// </summary>
        /// <param name="message">exception message</param>
        /// <param name="erroredTable">DataTable in error</param>
        /// <param name="inner">the original exception</param>
        public DetailedConstraintException(string message, DataTable erroredTable, Exception inner)
            : base(message, inner)
        {
            ErroredTable = erroredTable;
        }

        private string buildErrorSummaryMessage()
        {
            if (null == ErroredTable)
            {
                return "No errored DataTable specified";
            }
            if (!ErroredTable.HasErrors)
            {
                return "No Row Errors reported in DataTable=[" + ErroredTable.TableName + "]";
            }

            // start from a clean slate so repeated reads of Message don't double-count
            _erroredColumns.Clear();
            _rowErrors.Clear();

            foreach (DataRow row in ErroredTable.GetErrors())
            {
                recordColumnsInError(row);
                recordRowsInError(row);
            }

            StringBuilder sb = new StringBuilder();
            appendSummaryIntro(sb);
            appendErroredColumns(sb);
            appendRowErrors(sb);
            return sb.ToString();
        }

        private void recordColumnsInError(DataRow row)
        {
            foreach (DataColumn column in row.GetColumnsInError())
            {
                if (_erroredColumns.ContainsKey(column.ColumnName))
                {
                    _erroredColumns[column.ColumnName]++;
                    continue;
                }
                _erroredColumns.Add(column.ColumnName, InitialCountValue);
            }
        }

        private void recordRowsInError(DataRow row)
        {
            if (_rowErrors.ContainsKey(row.RowError))
            {
                _rowErrors[row.RowError]++;
                return;
            }
            _rowErrors.Add(row.RowError, InitialCountValue);
        }

        private void appendSummaryIntro(StringBuilder sb)
        {
            sb.AppendFormat("Errors reported for {1} [{2}]{0}",
                Environment.NewLine, ErroredTable.GetType().FullName, ErroredTable.TableName);
        }

        private void appendErroredColumns(StringBuilder sb)
        {
            sb.AppendFormat("Columns in error: [{1}]{0}", Environment.NewLine, _erroredColumns.Count);
            foreach (string columnName in _erroredColumns.Keys)
            {
                sb.AppendFormat("\t[{1}] - rows affected: {2}{0}",
                    Environment.NewLine, columnName, _erroredColumns[columnName]);
            }
        }

        private void appendRowErrors(StringBuilder sb)
        {
            sb.AppendFormat("Row errors: [{1}]{0}", Environment.NewLine, _rowErrors.Count);
            foreach (string rowError in _rowErrors.Keys)
            {
                sb.AppendFormat("\t[{1}] - rows affected: {2}{0}",
                    Environment.NewLine, rowError, _rowErrors[rowError]);
            }
        }

        /// <summary>
        /// Get the DataTable in error
        /// </summary>
        public DataTable ErroredTable
        {
            get { return _erroredTable; }
            private set { _erroredTable = value; }
        }

        /// <summary>
        /// Get the original ConstraintException message with extra error information
        /// </summary>
        public override string Message
        {
            get { return base.Message + Environment.NewLine + buildErrorSummaryMessage(); }
        }

        private readonly SortedDictionary<string, int> _rowErrors = new SortedDictionary<string, int>();
        private readonly SortedDictionary<string, int> _erroredColumns = new SortedDictionary<string, int>();
        private DataTable _erroredTable;
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/37936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }

Q: How popular is WPF as a technology? I had a discussion with some colleagues and mentioned that not many of the projects we do make use of WPF for creating the UI of a Windows application (we almost always use Windows Forms instead).

* Are your experiences the same - i.e. there is not too much adoption of this technology?
* Why do you think that is? And will we see a time when there is much more use of WPF?

A: WPF rocks in terms of what the technology can do. It's simply the best UI-building technology on the planet (my opinion). But there is a huge initial and long-term investment in learning it and getting your head around it. Also, from a tooling perspective it has barriers: the unusable Cider designer, having to get Blend to do styling, etc. I am sure it will become popular, but it will take time. Right now it's not so popular.

* Check this thread on WPF
* Check this conversation on the MSDN forums about WPF for LOB apps

A: Absolutely - the adoption is tiny. There was lots of hype, but it does not seem to have materialized. I used WPF for one project and I can certainly say it left a very unfinished taste in my mouth. It was far too difficult to achieve simple things, and the whole thing had very many rough edges - the reasons behind them certainly existed, but were not obvious or even visible at all. The Visual Studio designer completely bugged out for most of my pages and I never did figure out why... All in all, I'd say WPF is barely beta-quality from a developer-friendliness standpoint.

A: WPF has a steep learning curve, and the development tools for it (Expression Studio/Web) are expensive, so I'm not surprised that the industry has not jumped on it. However, in terms of Windows programming, it is much more robust and powerful than Windows Forms, so I would like to see its popularity grow over the next few years as Microsoft makes improvements to WPF, WCF, and .NET in general. If MS would decrease the price of its Expression products, I would expect to see the framework's popularity grow much faster. Another troubling thing about WPF is the total lack of good online tutorials for the framework. I'm trying to learn WPF at home, and I have found it to be a pain in the neck. I had to fork over a ton of cash for the development tools, and then I had to pay more money for a good book, because there just isn't enough online to really get me into the framework and its languages. I can learn quite a bit about Java for free just by visiting the Sun website, but for WPF, I have to get a book. There is also a terrible lack of reference materials, in my experience, for WPF. To me, it is reminiscent of programming in VB6. Unless these things are remedied, I wouldn't expect to see any rapid growth in the framework. I do believe the main driving force behind the industry's adoption of the WPF and WCF technologies is MS support.

A: Have a look at this survey; it was done by a Windows Forms control vendor in Australia. Personally, I have worked on two commercial projects in the last year that were using WPF to varying degrees. The adoption of WPF is on the rise. Microsoft, I believe, is putting all their eggs into the WPF basket.

A: Though WPF was introduced a few years ago, it was too raw to use in real-world apps. The major problem stopping wide WPF adoption is the lack of RAD tools and out-of-the-box components. Currently we have Blend and a more-or-less working Cider, but usable versions of these tools arrived only recently.
Another reason is a completely different architecture, which leads to longer development time as compared with WinForms due to a prolonged learning/adoption period for developers. I think we will see the rise of WPF in the next few years. A: People usually jump on the technology bandwagon when there is a real productivity gain. Something to compensate for all the productivity loss that normally occurs when you adopt a new platform. WPF is just not there yet. It still takes more effort and more time to build a WPF app than a Forms app, and by a long shot. Combine this with less documentation on the net about WPF than Windows Forms, fewer people with WPF experience, fewer blogs on WPF, fewer books on WPF, fewer tips/tricks, etc. And don't get me started on XAML. Is it XML? Is it a script? Is it code? Why did they decide that a hyperlink is just a label property? A lot of things still need to be ironed out there. I cannot afford to build my next project in WPF; it will cost me a lot more to do it (in manpower and time), with nothing to show for it in return. At the moment all we do in WPF is pure-research-inhouse-hobby projects. A: I'm currently working on a WPF project - my first one. The learning curve has been incredibly steep, but in the end I think WPF is a great technology. The potential is fantastic, especially for advancing the state of data visualisation. I really like the data binding features, and the potential of styling. But it really does take a while to get your head around this. I think that Silverlight adoption will eventually drive WPF adoption back on the desktop - or maybe there won't be a desktop, as much of what can be accomplished with Silverlight will replace many previously desktop applications. A: I am playing around with WPF and I must say I am not impressed. I seek a technology which will help me be productive in creating business applications. I remember building my first classic ASP website and being disgusted at the spaghetti code required to build a simple app. Viewing a single page I found HTML and JavaScript mixed with VBScript, with include files and calls to COM objects--in short, a bloody illogical mess. In my view, it is important to have a simple and VISUAL development model with standards. I built many VB6 and .NET Windows apps and they have a simple metaphor for development, making them easy to debug and modify by developers who did not write the original app. Forms encapsulate presentation logic; modules and classes in referenced assemblies encapsulate business logic and data logic. ADO.NET and other tools make data access robust, scalable, dynamic and customizable. Resizing windows, controls and graphics to suit monitor resolution or client preference is easily done with Win Forms. It may be that WPF has many advanced features in graphics, but for most business apps, form should follow function--in other words, I am not putting goofy animated graphics on my banking Windows app. One of the reasons I have not liked web development is the wide variety of ever-changing and complex technologies required for relatively simple applications which don't deliver enough significant change in actual functional results. Oh well, that's my two cents. ' ) A: We deployed a pretty major WPF application for a large investment bank I worked for. It turned out extremely successful, involving 3D visualization of OLAP data that allowed quicker trend analysis. It's being used extensively.
{ "language": "en", "url": "https://stackoverflow.com/questions/37944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: C++ : What's the easiest library to open video file I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me. I've tried to use DirectShow with the SampleGrabber filter (using this sample http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong. I've pasted a part of my code (mainly a modified copy/paste from the msdn example); unfortunately it doesn't grab the first 25 frames as expected...
[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);
pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.
// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);
for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.
    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);
    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;
    [...]
}
[...]
Is there somebody with video software experience who can advise me about the code, or about another, simpler library? Thanks Edit: Msdn links seem not to work (see the bug) A: Currently these are the most popular video frameworks available on Win32 platforms:
* Video for Windows: old Windows framework coming from the age of Win95, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VFW codec has been installed.
* DirectShow: standard WinXP framework; it can basically load all formats you can play with Windows Media Player. Rather difficult to use.
* Ffmpeg: more precisely libavcodec and libavformat, which come with the Ffmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay that ships with it, or by other implementations in open-source software. Anyway I think it's still much easier to use than DS (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at this moment the link is down, hope not dead).
* QuickTime: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed and also the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually only for QuickTime). Shouldn't be too difficult to implement.
* Gstreamer: latest open source framework. I don't know much about it; I guess it wraps over some of the other systems (but I'm not sure).
All of these frameworks have been implemented as backends in OpenCV Highgui, except for DirectShow. The default framework for Win32 OpenCV is VFW (and thus it is able only to open some AVI files); if you want to use the others you must download the CVS version instead of the official release and still do some hacking on the code, and it's anyway not too complete; for example, the FFMPEG backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV this can help you. A: I have used OpenCV to load video files and process them. It's also handy for many types of video processing, including those useful for computer vision. A: Using the "Callback" model of SampleGrabber may give you better results. See the example in Samples\C++\DirectShow\Editing\GrabBitmaps. There's also a lot of info in Samples\C++\DirectShow\Filters\Grabber2\grabber_text.txt and readme.txt. A: I know it is very tempting in C++ to get a proper breakdown of the video files and just do it yourself. But although the information is out there, it is such a long-winded process building classes to handle each file format, and to make it easily alterable to take future structure changes into account, that frankly it just is not worth the effort. Instead I recommend ffmpeg. It got a mention above that says it is difficult, but it isn't difficult. There are a lot more options than most people would need, which makes it look more difficult than it is. For the majority of operations you can just let ffmpeg work it out for itself. For example, a file conversion:
ffmpeg -i inputFile.mp4 outputFile.avi
Decide right from the start that you will have ffmpeg operations run in a thread, or more precisely a thread library. But have your own thread class wrap it so that you can have your own EventArgs and methods of checking the thread is finished. Something like:
class ThreadLibManager
{
    private List<MyThread> listOfActiveThreads = new List<MyThread>();
    public void AddThread(MyThread thread) { listOfActiveThreads.Add(thread); }
}
Your thread class is something like:
class MyThread
{
    public Thread threadForThisInstance { get; set; }
    public MyFFMpegTools mpegTools { get; set; }
}
MyFFMpegTools performs many different video operations, so you want your own event args to tell your parent code precisely what type of operation has just raised an event.
class MyFfmpegArgs : EventArgs
{
    public int thisThreadID { get; set; } // Set as a new MyThread is added to the List<>
    public MyFfmpegType operationType { get; set; }
    // output paths etc. that the parent handler will need to find output files
}
enum MyFfmpegType
{
    FF_CONVERTFILE = 0,
    FF_CREATETHUMBNAIL,
    FF_EXTRACTFRAMES
    // ...
}
Here is a small snippet of my ffmpeg tool class; this part collects information about a video. I put FFmpeg in a particular location, and at the start of the software running it makes sure that it is there. For this version I have moved it to the Desktop. I am fairly sure I have written the path correctly for you (I really hate MS's special folders system, so I ignore it as much as I can). Anyway, it is an example of using windowless ffmpeg.
public string GetVideoInfo(FileInfo fi)
{
    outputBuilder.Clear(); // outputBuilder is a List<string> field on this class
    string strCommand = string.Concat(" -i \"", fi.FullName, "\"");
    string ffPath = System.Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\ffmpeg.exe";
    string oStr = "";
    try
    {
        Process build = new Process();
        //build.StartInfo.WorkingDirectory = @"dir";
        build.StartInfo.Arguments = strCommand;
        build.StartInfo.FileName = ffPath;
        build.StartInfo.UseShellExecute = false;
        build.StartInfo.RedirectStandardOutput = true;
        build.StartInfo.RedirectStandardError = true;
        build.StartInfo.CreateNoWindow = true;
        build.ErrorDataReceived += build_ErrorDataReceived;
        build.OutputDataReceived += build_ErrorDataReceived;
        build.EnableRaisingEvents = true;
        build.Start();
        build.BeginOutputReadLine();
        build.BeginErrorReadLine();
        build.WaitForExit();
        string findThis = "start";
        foreach (string str in outputBuilder)
        {
            if (str.Contains("Duration"))
            {
                int offset = str.IndexOf(findThis);
                if (offset >= 0) // guard: "start" may not be present in this line
                    oStr = str.Substring(0, offset);
            }
        }
    }
    catch
    {
        oStr = "Error collecting file information";
    }
    return oStr;
}

private void build_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    string strMessage = e.Data;
    if (outputBuilder != null && strMessage != null)
    {
        outputBuilder.Add(string.Concat(strMessage, "\n"));
    }
}
A: Try using the OpenCV library. It definitely has the capabilities you require. This guide has a section about accessing frames from a video file. A: If it's for AVI files I'd read the data from the AVI file myself and extract the frames. Then use the Video Compression Manager to decompress them. The AVI file format is very simple, see: http://msdn.microsoft.com/en-us/library/dd318187(VS.85).aspx (and use google). Once you have the file open you just extract each frame and pass it to ICDecompress() to decompress it. It seems like a lot of work but it's the most reliable way. If that's too much work, or if you want more than AVI files, then use ffmpeg. A: OpenCV is the best solution if your video only needs to lead to a sequence of pictures. If you're willing to do real video processing - so ViDeo equals "Visual Audio" - you need to keep track of the frameworks listed by "martjno". Newer Windows solutions (also for Win7) additionally include three new possibilities:
* Windows Media Foundation: successor of DirectShow; cleaned-up interface
* Windows Media Encoder 9: it does not only include the program, it also ships libraries for coding
* Windows Expression 4: successor of 2.
The last two are commercial-only solutions, but the first one is free. To code against WMF, you need to install the Windows SDK. A: I would recommend FFMPEG or GStreamer. Try and stay away from OpenCV unless you plan to utilize some functionality other than just streaming video. The library is a beefy build and a pain to build from source when configuring the FFMPEG/GStreamer options.
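To give a feel for what the libavformat/libavcodec route involves, here is a minimal decode-loop sketch in C++. It is written against the modern FFmpeg API (4.x and later), so the function names differ from what was available when this question was asked; treat it as an outline under those assumptions, not a drop-in answer.
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2) { std::fprintf(stderr, "usage: %s <video file>\n", argv[0]); return 1; }

    // Open the container and locate the first video stream.
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
    avformat_find_stream_info(fmt, nullptr);
    int vs = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
    if (vs < 0) return 1;

    // Open a decoder for that stream; the library picks the right codec.
    const AVCodec* codec = avcodec_find_decoder(fmt->streams[vs]->codecpar->codec_id);
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vs]->codecpar);
    avcodec_open2(ctx, codec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    AVFrame* frame = av_frame_alloc();
    long frames = 0;

    // Demux packets, feed the decoder, and visit every decoded frame.
    // frame->data[] holds the raw picture; a custom filter would go there.
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vs && avcodec_send_packet(ctx, pkt) == 0)
            while (avcodec_receive_frame(ctx, frame) == 0)
                ++frames;
        av_packet_unref(pkt);
    }
    // A real implementation should also flush the decoder with a NULL packet.

    std::fprintf(stderr, "decoded %ld frames\n", frames);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
Unlike the SampleGrabber approach in the question, this loop hands you every decoded frame exactly once, which is why it is a better fit for per-frame filtering.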
{ "language": "en", "url": "https://stackoverflow.com/questions/37956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Tool for posting test messages onto a JMS queue? Can anyone recommend a tool for quickly posting test messages onto a JMS queue? Description:
* The tool should allow the user to enter some data, perhaps an XML payload, and then submit it to a queue.
* I should be able to test the consumer without the producer.
A: Apache JMeter is a tool (written for the Java platform) which allows:
* sending messages to a queue (point to point)
* publishing/subscribing to a topic
* sending both persistent and non-persistent messages
* sending text, map and object messages
Apache ActiveMQ includes ProducerTool and ConsumerTool example sources (Java) with many command-line configuration options. As it is based on the JMS API, using it with other message brokers should be easy with minor modifications. A: IBM provides a free, powerful command line tool called perfharness. Although aimed at benchmarking JMS providers, it's really good at generating (and consuming) test messages. You can use data either generated randomly or taken from a file. The power features include sending and consuming messages at a fixed rate, using a specific number of threads, using either JMS or native MQ, etc. It generates statistics telling you exactly how fast your queue is performing (hence the name). The only downside is that it's not super intuitive, given the number of operations it supports. A: I recommend the approach of @Will and using the Web Console of ActiveMQ, which lets you post messages and browse queues or delete messages easily. Another approach I often use is to use a directory of files as sample data and use a Camel route to move the messages from the directory to a JMS queue - or to take them from a queue and save them to disk, etc. E.g.
from("file://someDirectory").to("activemq:MyQueue");
This would move all the files from someDirectory and send them to an ActiveMQ queue called MyQueue. If you'd rather leave the files in place you can use the URI "file://someDirectory?noop=true". For more details see
* the file endpoint in Camel
* a sample Camel example routing from files to JMS
* the various enterprise integration patterns Camel supports
A: Also, if the JMS broker supports JMX like ActiveMQ does, you can use JConsole to post messages and do a lot more. A: This answer doesn't apply to all JMS brokers, but if you happen to be using Apache ActiveMQ, the web-based admin console (by default at http://localhost:8161/admin) allows you to manually send text messages to topics or queues. It's handy for debugging. A: HermesJMS seems to be a rather powerful client for interacting with JMS providers. In my opinion, it is pretty unintuitive and hard to set up, though. (At least I'm mostly failing at it...) Other, more user-friendly clients are often vendor-specific. Sonic Message Manager is a very nice and simple-to-use open-source JMS client for SonicMQ. It would be great to have a client like that working with different providers. A: ActiveMQ's web-based admin console has a big deficiency - one cannot specify any headers / custom properties when posting a message. I came across a neat FOSS tool that can post a message and also specify headers/properties: http://sourceforge.net/projects/activemqbrowser/ HTH A: ActiveMQ has a web console for sending test messages (as mentioned above), but if your provider doesn't have this, it might be easiest to just write a console app/web page to post test messages. Sending a message in JMS isn't too hard; you might get the most benefit just writing your own test client.
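For the write-your-own route, a bare-bones producer is only a few lines. This sketch uses the plain JMS 1.1 API with ActiveMQ's connection factory as an example; the broker URL and queue name are placeholders to adapt to your broker:
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TestMessagePoster {
    public static void main(String[] args) throws JMSException {
        // Any JMS-compliant ConnectionFactory works here; ActiveMQ is just an example.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("TEST.QUEUE"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("<test>payload</test>");
            producer.send(message); // the consumer can now be exercised without a real producer
        } finally {
            connection.close();
        }
    }
}
Wrap the payload in a small form or read it from a file and you have the tool the question asks for.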
If you can use Spring in Java, it has some really powerful utilities; check out the JmsTemplate. A: I'm not aware of a simple client. I remember looking for one a long time ago when I researched different queue systems and tried JMS; I couldn't find one then, and I can't find one now. One thing though - there are a ton of tutorials that get you started, and you could do a simple form to achieve that. Sorry I can't be more helpful. A: I have built a GUI tool for administering open source JMS servers (currently ActiveMQ and HornetQ). It can send and receive messages and most of the usual stuff, as well as aggregate queues and topics into logical "groups". It's a commercial product but the beta is free and fully functional. Try it out at http://www.rockeyesoftware.com/ A: For ActiveMQ the examples directory holds scripts. For Rubyists, look at example/ruby/stompcat.rb and catstomp.rb for subscribing and publishing. A: I'm a Brazilian developer and I made a Java program for posting HTTP and JMS messages; it's available for download at: https://sites.google.com/site/felipeglino/softwares/posttool On that page you can find English instructions.
{ "language": "en", "url": "https://stackoverflow.com/questions/37969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How do I change XML indentation in IntelliJ IDEA? By default IntelliJ IDEA 7.0.4 seems to use 4 spaces for indentation in XML files. The project I'm working on uses 2 spaces as indentation in all its XML. Is there a way to configure the indentation in IntelliJ's editor? A: In IntelliJ IDEA 10.0.3 it's File > Settings > Code Style > General A: Sure there is. This is all you need to do:
* Go to File -> Settings -> Global Code Style -> General
* Disable the checkbox next to 'Use same settings for all file types'
* The 'XML' tab should become enabled. Click it and set the 'tab' (and probably 'indent') size to 2.
A: Note: make sure not to use the same file in two projects, or your settings might revert to the default (4 spaces) instead of the custom XML tab indent size. See bug IDEA-130878 for the latest IntelliJ IDEA 14 (Oct 2014).
{ "language": "en", "url": "https://stackoverflow.com/questions/37976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Recommendations for implementations of ActiveRecord Does anyone have any recommendations for implementations of ActiveRecord in PHP? I've been using CBL ActiveRecord, but I was wondering if there were any viable alternatives. A: I realize this is old, but there is an absolutely fabulous PHP ActiveRecord library called, appropriately, PHP ActiveRecord. I've used it for several months and it blows away the other libraries. Check it out: http://www.phpactiverecord.org/ A: Depends! ;) For example there is ADODB's Active Record implementation, then there is Zend_Db_DataTable and Doctrine. Those are the ones I know of; I am sure there are more implementations. Out of those three I'd recommend Doctrine. Last time I checked, Adodb carried a lot of extra weight for PHP4, and Zend_Db_* is generally not known to be the best in terms of completeness and performance (most likely due to its young age). Doctrine, aside from Active Table and the general database abstraction thing (aka DBAL), has so many things (e.g. migrations) which make it worth checking out; so if you haven't set your mind on a DBAL yet, you need to check it out. A: I found a few examples of other implementations: Luke Baker has one he is calling Active Record in PHP. Flinn has a post about why it is not possible in PHP because in Ruby everything is an object, with a followup here. I know a few people who have looked at ZF; have you tried that? Or CakePHP? A: This is more of a how-to-implement tip, but I started dabbling with creating an ActiveRecord/DataMapper implementation in PHP and quickly ran into many hurdles with array-like access. Eventually I found the SPL extensions to PHP, and particularly ArrayObject and ArrayIterator. These began to make my life a lot easier. Unfortunately I haven't had much time to devote to it, but anyone who tries something like this should check those out. A: Whilst not strictly ActiveRecord, Zend_Db_Table is pretty good.
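To give a flavour of the php-activerecord style recommended above, here is a minimal sketch. The connection string, model, and column names are hypothetical placeholders; the library maps a Book model to a books table by convention:
<?php
require_once 'php-activerecord/ActiveRecord.php';

// Point the library at your database (connection details are placeholders).
ActiveRecord\Config::initialize(function ($cfg) {
    $cfg->set_connections(array(
        'development' => 'mysql://user:pass@localhost/mydb'
    ));
});

// By convention this model maps to a `books` table.
class Book extends ActiveRecord\Model
{
}

$book = new Book();
$book->title = 'The Pragmatic Programmer';
$book->save();                                             // INSERT

$found = Book::find_by_title('The Pragmatic Programmer');  // dynamic finder
$found->title = 'Updated title';
$found->save();                                            // UPDATE
?>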
{ "language": "en", "url": "https://stackoverflow.com/questions/37979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using MS Access & ODBC to connect to a remote PostgreSQL I currently have an MS Access application that connects to a PostgreSQL database via ODBC. This successfully runs on a LAN with 20 users (each running their own version of Access). Now I am thinking through some disaster recovery scenarios, and it seems that a quick and easy method of protecting the data is to use log shipping to create a warm-standby. This lead me to think about putting this warm-standby at a remote location, but then I have the question: Is Access connecting to a remote database via ODBC usable? I.e. the remote database is maybe in the same country with ok ping times and I have a 1mbit SDSL line. A: onnodb, The PostgreSQL ODBC driver is actively developed and an Access front-end combined with PostgreSQL server, in my opinion makes a great option on a LAN for rapid development. I have been involved in a reasonably big system (100+ PostgreSQL tables, 200+ Access forms, 1000+ Access queries & reports) and it has run excellently for a few years, with ~20 users. Any queries running slow because Access is doing something stupid can generally just be solved by using views, and any really data-intensive code can easily be moved into PostgreSQL functions and then called from Access. The only main ODBC-related issue we have is that there is no way to kill a slow running query from Access, so we do often get users just killing Access and then massive queries are just left executing on the server. A: Yes. I don't have any experience using Access to hit PostgreSQL from a remote location but I have successfully used Access as a front-end to SQL Server & DB2 from a remote location with success. Ironically, what you don't want to do is use Access to front-end an Access database (mdb) from a remote location over a high-latency link. Since hitting the MDB uses file-based operations it's pretty easy to end up with a corrupt database if you have anything more than a trivial db. A: It depends a lot on the database you're using as a back-end. I've had rather terrible experiences with MySQL as a back-end. Make sure the ODBC link you're using is actively developed, stable and complete --- this was definitely not the case for MySQL. You may also want to check for any compatibility issues between Access and Postgre. And, of course, it won't hurt to test extensively. Oh, and I think it'd be absolutely great if you could post back here later with your experiences! A: PostgreSQL works great as a backend for MS Access, there are a couple of support functions you should use to make things easier. See here for more info on this: http://www.amsoftwaredesign.com/smf/index.php?board=8.0
{ "language": "en", "url": "https://stackoverflow.com/questions/37991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to Get attachments Associated with artifacts in SourceForge Enterprise Edition We are using SourceForge Enterprise Edition 4.4 in one of our projects. My question is: in CollabNet SFEE (SourceForge Enterprise Edition 4.4), how do we get the attachments associated with an artifact using the SFEE SOAP API? We have made our own .NET 2.0 client. We are not using the .NET SDK provided by CollabNet. A: If you commit with a message you can add "[artf1000]" (where artf1000 is your artifact number) to the beginning or end of your commit message. Then it will associate with that artifact. You can also do this with documents using doc1000. To get the ID of an item you can use the URL; it is whatever comes after http://sfeeserver/sf/go/. Documents and artifacts are the only items I have used this for, so I am not sure about other types of links, but I would imagine anything that has a /go/ID could be referenced by the ID. E.g.:
* http://sfeeserver/sf/go/artf1000
* http://sfeeserver/sf/go/doc1000
Edited to add: I have seemingly successfully tried this with releases, tasks, and discussions as well. A: You can cheat a little bit and have a look at the scripts from SFEE. Log into your SFEE via SSH and take a look at the following script: /usr/local/sourceforge/sourceforge_home/integration/post-commit.py Maybe it helps...
{ "language": "en", "url": "https://stackoverflow.com/questions/37996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's the best way to deliver TFS build status notifications to the team? I like the status email sent by TFS's alerts mechanism when a build breaks. However I would like to send such an email to the entire team and not rely on the team to subscribe to the alert... I'm having a hard time producing a nice and detailed enough message by myself when overriding the "OnBuildBreak" and using the MSBuildCommunityTasks EMail task. Is there a way to force a TFS alert to be sent to a list of emails? Can you recommend a way to send such status notifications? A: You could try Brian the Build Bunny :-) A: The Team Build Tray Notification tool that is included in the TFS 2008 Power Tools is very useful for this. See Buck Hodges' blog for screenshots and more information. A: I don't want to dig up an old topic, but for those that stumble upon it two years late (like me), this is built into TFS 2010 now. A: Set up an email alias for the team on the mail server, and enter this when subscribing to the mail. Try the Team Foundation Server Event Subscription Tool. This allows you to send emails to any address when any TFS event occurs. A: Brian the Build Bunny is nice, but the Nabaztagtag WiFi Rabbit bunny is pretty expensive and is currently out of stock. The Team Build Tray Notification that comes with TFS is OK, but:
* It's damn slow and polling is not configurable
* It's too easy to miss the build being broken for projects you care about
* Doesn't support different actions for different projects (e.g. show a modal dialog for project #1, but just show a short tray alert for project #2)
* Doesn't support different triggers for different people (e.g. show notifications for just me on project #1 or anyone on project #2)
* No information on what broke the build (e.g. compiler error, unit test, integration test)
* No audible notification if the system's on mute
* No last build times
So there's an open source project that runs in the tray, available on Google Code: http://code.google.com/p/siren-of-shame/. That project can work independently, but it's designed to work with a USB siren that is available for sale. A: I generally like the TFS Build Status Tray by Rob Aquila. Be sure to get the 1.0.1 Beta, as this lets you easily specify the projects to watch using a bit of GUI, and it also has a notify icon that changes color, so you only need to open the actual build status list when the icon turns red. The 1.0 version had a fixed icon and only showed notification toasts in the corner of your screen. There is also a version of the same tool that is meant to be shown full screen on a wall-mounted display, for instance. A: The July release of TFS 2008 PowerTools adds an "Alert Editor" to Team Explorer. Adding alerts is a breeze. It has a query tool similar to the work item query tool. A: In my mind, an open source project named 'Web Deployment Projects' can do this. You can search for it.
{ "language": "en", "url": "https://stackoverflow.com/questions/37997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to get your network support team behind click-once? I'm trying to make the case for click-once and smart client development but my network support team wants to keep with web development for everything. What is the best way to convince them that click-once and smart client development have a place in the business? A: Here are a couple of ideas that may help:
* Long-running processes: they are not ASP.NET's best friend.
* Scaling: using client-side processing, as compared to bigger or more servers, reduces cost, etc.
A: We use ClickOnce where I work; in comparison to a web release I would base the case around the need to provide users with a rich client app, otherwise it might well actually be better to use web applications. In terms of releasing a rich client app ClickOnce is fantastic; you can set it up to enforce updates on startup, thus enforcing a version throughout the network. You can make the case that ClickOnce gives you the same benefit of having a single deployment point that web deployment possesses. Personally I've found ClickOnce to be unbelievably useful. If you're developing rich client .NET apps (in Windows, though let's face it, the vast majority of real .NET development is in Windows) and want to deploy them across a network, nothing else compares. A: They have a place in the Windows environment but not in any other environment, so if you intend on writing applications for external clients, then you're probably best sticking with web-based development. I heard this "Write Once, Run Many" before from Microsoft when ASP.NET 1.1 was released; it never happened in practice. A: @Mark scaling, using client side processing as compared to bigger or more servers reduces cost etc. I'm not sure I would entirely agree with this. It would seem to cost less to buy 1 powerful server and 1,000s of "dumb terminals" than an average powerful server and 1,000 powerful desktop computers. A: @GateKiller, when I speak of scaling I was talking about the cost of buying more servers, not clients. Most workstations in an organization barely use 50% of their computing power right through the day. If I were to use a ClickOnce-deployed application I would be using the grunt of existing workstations, therefore not adding any further cost to the organization.
{ "language": "en", "url": "https://stackoverflow.com/questions/38002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to use LINQ To SQL in an N-Tier Solution? Now that LINQ to SQL is a little more mature, I'd like to know of any techniques people are using to create an n-tiered solution using the technology, because it does not seem that obvious to me. A: Hm, Rockford Lhotka said that LINQ to SQL is a wonderful technology for fetching data from the database. He suggests that afterwards the results must be bound to "rich domain objects" (aka CSLA objects). Seriously speaking, LINQ to SQL has its support for n-tier architecture; see the DataContext.Update method. A: You might want to look into the ADO.NET Entity Framework as an alternative to LINQ to SQL, although it does support LINQ as well. I believe LINQ to SQL is designed to be fairly lightweight and simple, whereas the Entity Framework is more heavy duty and probably more suitable in large enterprise applications. A: LINQ to SQL doesn't really have an n-tier story that I've seen; since the objects that it creates are generated alongside the rest of your code, you don't really have an assembly that you can nicely reference through something like Web Services, etc. The only way I'd really consider it is using the DataContext to fetch data, then filling an intermediary data model, passing that through, referencing it on both sides, and using that in your client side - then passing them back and pushing the data back into a new DataContext, or intelligently updating rows after you refetch them. That's if I'm understanding what you're trying to get at :\ I asked ScottGu the same question on his blog when I first started looking at it - but I haven't seen a single scenario or app in the wild that uses LINQ to SQL in this way. Websites like Rob Connery's Storefront are closer to the provider. A: OK, I am going to give myself one possible solution. Inserts/Updates were never an issue; you can wrap the business logic in a Save/Update method; e.g.
public class EmployeesDAL
{
    // ...
    public void SaveEmployee(Employee employee)
    {
        //data formatting
        employee.FirstName = employee.FirstName.Trim();
        employee.LastName = employee.LastName.Trim();
        //business rules
        if(employee.FirstName.Length > 0 && employee.LastName.Length > 0)
        {
            MyCompanyContext context = new MyCompanyContext();
            //insert
            if(employee.empid == 0)
                context.Employees.InsertOnSubmit(employee);
            else
            {
                //update goes here
            }
            context.SubmitChanges();
        }
        else
            throw new BusinessRuleException("Employees must have first and last names");
    }
}
For fetching data, or at least the fetching of data that comes from more than one table, you can use stored procedures or views, because the results will not be anonymous, so you can return them from an outside method. For instance, using a stored proc:
public ISingleResult<GetEmployeesAndManagersResult> LoadEmployeesAndManagers()
{
    MyCompanyContext context = new MyCompanyContext();
    var emps = context.GetEmployeesAndManagers();
    return emps;
}
A: Seriously speaking, LINQ to SQL has its support for n-tier architecture; see the DataContext.Update method. Some of what I've read suggests that the business logic wraps the DataContext - in other words you wrap the update in the way that you suggest. The way I traditionally write business objects, I usually encapsulate the "Load methods" in the BO as well; so I might have a method named LoadEmployeesAndManagers that returns a list of employees and their immediate managers (this is a contrived example). Maybe it's just me, but in my front end I'd rather see e.LoadEmployeesAndManagers() than some long LINQ statement.
Anyway, using LINQ it would probably look something like this (not checked for syntax correctness):
var emps = from e in Employees
           join m in Employees on e.ManagerEmpID equals m.EmpID
           select new { e, m.FullName };
Now if I understand things correctly, if I put this in, say, a class library and call it from my front end, the only way I can return this is as an IEnumerable, so I lose my strong typed goodness. The only way I'd be able to return a strongly typed object would be to create my own Employees class (plus a string field for manager name) and fill it from the results of my LINQ to SQL statement and then return that. But this seems counterintuitive... what exactly did LINQ to SQL buy me if I have to do all that? I think that I might be looking at things the wrong way; any enlightenment would be appreciated. A: "the only way I can return this is as an IEnumerable, so I lose my strong typed goodness" That is incorrect. In fact your query is strongly typed; it is just an anonymous type. I think the query you want is more like:
var emps = from e in Employees
           join m in Employees on e.ManagerEmpID equals m.EmpID
           select new Employee { e, m.FullName };
Which will return IEnumerable. Here is an article I wrote on the topic. Linq-to-sql is an ORM. It does not affect the way that you design an N-tiered application. You use it the same way you would use any other ORM.
{ "language": "en", "url": "https://stackoverflow.com/questions/38005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: C# string concatenation and string interning When performing string concatentation of an existing string in the intern pool, is a new string entered into the intern pool or is a reference returned to the existing string in the intern pool? According to this article, String.Concat and StringBuilder will insert new string instances into the intern pool? http://community.bartdesmet.net/blogs/bart/archive/2006/09/27/4472.aspx Can anyone explain how concatenation works with the intern pool? A: If you create new strings, they will not automatically be put into the intern pool, unless you concatenate constants compile-time, in which case the compiler will create one string result and intern that as part of the JIT process. A: You can see whether a string has been interned by calling String.IsInterned. The call will return a new string that is either a reference to an interned string equal to the string that was passed as an argument, or null if the string was not interned.
{ "language": "en", "url": "https://stackoverflow.com/questions/38010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Referencing same table name in different schemas I am facing problem with an Oracle Query in a .net 2.0 based windows application. I am using System.Data.OracleClient to connect to oracle database. Name of database is myDB. Below the the connection string I am using: Data Source=(DESCRIPTION =(ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP) (HOST = 172.16.0.24)(PORT = 1522)))(CONNECT_DATA =(SERVICE_NAME = ORCL))); User ID=myDB;Password=myDB;Unicode=True If I run the below query then it will given me wrong result (here wrong result means incorrect data. The data doesn't belongs to myDB): SELECT ID, NAME FROM MyTempTable WHERE ID IN (10780, 10760, 11890) But if I append the database name along with it the it is giving correct result: SELECT ID, NAME FROM "myDB".MyTempTable WHERE ID IN (10780, 10760, 11890) My limitation is that I cannot append the database name as this is a generic application and can run with any database on run time. Please help. A: This looks like an issue with name resolution, try creating a public synonym on the table: CREATE PUBLIC SYNONYM MyTempTable for MyTempTable; Also, what exactly do you mean by wrong result, incorrect data, error message? Edit: What is the name of the schema that the required table belongs to? It sounds like the table that you are trying to select from is in a different schema to the one that belongs to the user you are connecting as. A: Upon connecting to the database issue am ALTER SESSION SET CURRENT_SCHEMA=abc; where abc is the user that owns the tables. A: For starters, I would suggest that you use the .net data providers from Oracle - if at all possible. If you are starting off in a project it will be the best way to save yourself pain further down the line. You can get them from here A: To expand on what stjohnroe has said it looks like the reason you are getting different results is because two different tables with the same name exist on different schemas. By adding the myDB username to the front of the query you now access the table with the data you are expecting. (Since you say the data doesn't belong on "myDB" this probably means the app/proc that is writing the data is writing to the wrong table too). The resolution is: 1. If the table really doesn't belong on "myDB" then drop it for tidyness sake (now you may get 904 table not found errors when you run your code) 2. Create a synonym to the schema and table you really want to access (eg CREATE SYNONYM myTable FOR aschema.myTable;) 3. Don't forget to grant access rights from the schema that owns the table (eg: GRANT SELECT,INSERT,DELETE ON myTable TO myDB; (here myDB refers to the user/schema)) A: Try adding CONNECT_DATA=(SID=myDB)(SERVICE_NAME=ORCL) in the connection string.
{ "language": "en", "url": "https://stackoverflow.com/questions/38014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What's the best approach to naming classes? Coming up with good, precise names for classes is notoriously difficult. Done right, it makes code more self-documenting and provides a vocabulary for reasoning about code at a higher level of abstraction. Classes which implement a particular design pattern might be given a name based on the well known pattern name (e.g. FooFactory, FooFacade), and classes which directly model domain concepts can take their names from the problem domain, but what about other classes? Is there anything like a programmer's thesaurus that I can turn to when I'm lacking inspiration, and want to avoid using generic class names (like FooHandler, FooProcessor, FooUtils, and FooManager)? A: I'll cite some passages from Implementation Patterns by Kent Beck: Simple Superclass Name "[...] The names should be short and punchy. However, to make the names precise sometimes seems to require several words. A way out of this dilemma is picking a strong metaphor for the computation. With a metaphor in mind, even single words bring with them a rich web of associations, connections, and implications. For example, in the HotDraw drawing framework, my first name for an object in a drawing was DrawingObject. Ward Cunningham came along with the typography metaphor: a drawing is like a printed, laid-out page. Graphical items on a page are figures, so the class became Figure. In the context of the metaphor, Figure is simultaneously shorter, richer, and more precise than DrawingObject." Qualified Subclass Name "The names of subclasses have two jobs. They need to communicate what class they are like and how they are different. [...] Unlike the names at the roots of hierarchies, subclass names aren’t used nearly as often in conversation, so they can be expressive at the cost of being concise. [...] Give subclasses that serve as the roots of hierarchies their own simple names. For example, HotDraw has a class Handle which presents figure- editing operations when a figure is selected. It is called, simply, Handle in spite of extending Figure. There is a whole family of handles and they most appropriately have names like StretchyHandle and TransparencyHandle. Because Handle is the root of its own hierarchy, it deserves a simple superclass name more than a qualified subclass name. Another wrinkle in subclass naming is multiple-level hierarchies. [...] Rather than blindly prepend the modifiers to the immediate superclass, think about the name from the reader’s perspective. What class does he need to know this class is like? Use that superclass as the basis for the subclass name." Interface Two styles of naming interfaces depend on how you are thinking of the interfaces. Interfaces as classes without implementations should be named as if they were classes (Simple Superclass Name, Qualified Subclass Name). One problem with this style of naming is that the good names are used up before you get to naming classes. An interface called File needs an implementation class called something like ActualFile, ConcreteFile, or (yuck!) FileImpl (both a suffix and an abbreviation). In general, communicating whether one is dealing with a concrete or abstract object is important, whether the abstract object is implemented as an interface or a superclass is less important. Deferring the distinction between interfaces and superclasses is well >supported by this style of naming, leaving you free to change your mind later if that >becomes necessary. 
Sometimes, naming concrete classes simply is more important to communication than hiding the use of interfaces. In this case, prefix interface names with “I”. If the interface is called IFile, the class can be simply called File. For more detailed discussion, buy the book! It's worth it! :) A: Always go for MyClassA, MyClassB - It allows for a nice alpha sort.. I'm kidding! This is a good question, and something I experienced not too long ago. I was reorganising my codebase at work and was having problems of where to put what, and what to call it.. The real problem? I had classes doing too much. If you try to adhere to the single responsibility principle it will make everything all come together much nicer.. Rather than one monolithic PrintHandler class, you could break it down into PageHandler , PageFormatter (and so on) and then have a master Printer class which brings it all together. In my re-org, it took me time, but I ended up binning a lot of duplicate code, got my codebase much more logical and learned a hell of a lot when it comes to thinking before throwing an extra method in a class :D I would not however recommend putting things like pattern names into the class name. The classes interface should make that obvious (like hiding the constructor for a singleton). There is nothing wrong with the generic name, if the class is serving a generic purpose. Good luck! A: If your "FooProcessor" really does process foos, then don't be reluctant to give it that name just because you already have a BarProcessor, BazProcessor, etc. When in doubt, obvious is best. The other developers who have to read your code may not be using the same thesaurus you are. That said, more specificity wouldn't hurt for this particular example. "Process" is a pretty broad word. Is it really a "FooUpdateProcessor" (which might become "FooUpdater"), for example? You don't have to get too "creative" about the naming, but if you wrote the code you probably have a fairly good idea of what it does and doesn't do. Finally, remember that the bare class name isn't all that you and the readers of your code have to go on - there are usually namespaces in play as well. Those can often give readers enough context to see clearly what your class if really for, even if its bare name is fairly generic. A: Josh Bloch's excellent talk about good API design has a few good bits of advice: * *Classes should do one thing and do it well. *If a class is hard to name or explain then it's probably not following the advice in the previous bullet point. *A class name should instantly communicate what the class is. *Good names drive good designs. If your problem is what to name exposed internal classes, maybe you should consolidate them into a larger class. If your problem is naming a class that is doing a lot of different stuff, you should consider breaking it into multiple classes. If that's good advice for a public API then it can't hurt for any other class. A: If you're stuck with a name, sometimes just giving it any half-sensible name with commitment to revising it later is a good strategy. Don't get naming paralysis. Yes, names are very important but they're not important enough to waste huge amounts of time on. If you can't think up a good name in 10 minutes, move on. A: If a good name doesn't spring to mind, I would probably question whether there is a deeper problem - is the class serving a good purpose? If it is, naming it should be pretty straightforward.
{ "language": "en", "url": "https://stackoverflow.com/questions/38019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: How do I find the authoritative name-server for a domain name? How can I find the origins of conflicting DNS records? A: The term you should be googling is "authoritative," not "definitive". On Linux or Mac you can use the commands whois, dig, host, nslookup or several others. nslookup might also work on Windows. An example: $ whois stackoverflow.com [...] Domain servers in listed order: NS51.DOMAINCONTROL.COM NS52.DOMAINCONTROL.COM As for the extra credit: Yes, it is possible. aryeh is definitely wrong, as his suggestion usually will only give you the IP address for the hostname. If you use dig, you have to look for NS records, like so: dig ns stackoverflow.com Keep in mind that this may ask your local DNS server and thus may give wrong or out-of-date answers that it has in its cache. A: We've built a dns lookup tool that gives you the domain's authoritative nameservers and its common dns records in one request. Example: https://www.misk.com/tools/#dns/stackoverflow.com Our tool finds the authoritative nameservers by performing a realtime (uncached) dns lookup at the root nameservers and then following the nameserver referrals until we reach the authoritative nameservers. This is the same logic that dns resolvers use to obtain authoritative answers. A random authoritative nameserver is selected (and identified) on each query allowing you to find conflicting dns records by performing multiple requests. You can also view the nameserver delegation path by clicking on "Authoritative Nameservers" at the bottom of the dns lookup results from the example above. Example: https://www.misk.com/tools/#dns/[email protected] A: You'll want the SOA (Start of Authority) record for a given domain name, and this is how you accomplish it using the universally available nslookup command line tool: command line> nslookup > set querytype=soa > stackoverflow.com Server: 217.30.180.230 Address: 217.30.180.230#53 Non-authoritative answer: stackoverflow.com origin = ns51.domaincontrol.com # ("primary name server" on Windows) mail addr = dns.jomax.net # ("responsible mail addr" on Windows) serial = 2008041300 refresh = 28800 retry = 7200 expire = 604800 minimum = 86400 Authoritative answers can be found from: stackoverflow.com nameserver = ns52.domaincontrol.com. stackoverflow.com nameserver = ns51.domaincontrol.com. The origin (or primary name server on Windows) line tells you that ns51.domaincontrol is the main name server for stackoverflow.com. At the end of output all authoritative servers, including backup servers for the given domain, are listed. A: On *nix: $ dig -t ns <domain name> A: You could find out the nameservers for a domain with the "host" command: [davidp@supernova:~]$ host -t ns stackoverflow.com stackoverflow.com name server ns51.domaincontrol.com. stackoverflow.com name server ns52.domaincontrol.com. A: You can use the whois service. On a UNIX like operating system you would execute the following command. Alternatively you can do it on the web at http://www.internic.net/whois.html. whois stackoverflow.com You would get the following response. ...text removed here... Domain servers in listed order: NS51.DOMAINCONTROL.COM NS52.DOMAINCONTROL.COM You can use nslookup or dig to find out more information about records for a given domain. This might help you resolve the conflicts you have described. A: I have found that for some domains, the above answers do not work. The quickest way I have found is to first check for an NS record. If that doesn't exist, check for an SOA record. 
If that doesn't exist, recursively resolve the name using dig and take the last NS record returned. An example that fits this is analyticsdcs.ccs.mcafee.com. * *Check for an NS record host -t NS analyticsdcs.ccs.mcafee.com. *If no NS found, check for an SOA record host -t SOA analyticsdcs.ccs.mcafee.com. *If neither NS or SOA, do full recursive and take the last NS returned dig +trace analyticsdcs.ccs.mcafee.com. | grep -w 'IN[[:space:]]*NS' | tail -1 *Test that the name server returned works host analyticsdcs.ccs.mcafee.com. gtm2.mcafee.com. A: You used the singular in your question but there are typically several authoritative name servers, the RFC 1034 recommends at least two. Unless you mean "primary name server" and not "authoritative name server". The secondary name servers are authoritative. To find out the name servers of a domain on Unix: % dig +short NS stackoverflow.com ns52.domaincontrol.com. ns51.domaincontrol.com. To find out the server listed as primary (the notion of "primary" is quite fuzzy these days and typically has no good answer): % dig +short SOA stackoverflow.com | cut -d' ' -f1 ns51.domaincontrol.com. To check discrepencies between name servers, my preference goes to the old check_soa tool, described in Liu & Albitz "DNS & BIND" book (O'Reilly editor). The source code is available in http://examples.oreilly.com/dns5/ % check_soa stackoverflow.com ns51.domaincontrol.com has serial number 2008041300 ns52.domaincontrol.com has serial number 2008041300 Here, the two authoritative name servers have the same serial number. Good. A: I found that the best way it to add always the +trace option: dig SOA +trace stackoverflow.com It works also with recursive CNAME hosted in different provider. +trace trace imply +norecurse so the result is just for the domain you specify. A: An easy way is to use an online domain tool. My favorite is Domain Tools (formerly whois.sc). I'm not sure if they can resolve conflicting DNS records though. As an example, the DNS servers for stackoverflow.com are NS51.DOMAINCONTROL.COM NS52.DOMAINCONTROL.COM A: SOA records are present on all servers further up the hierarchy, over which the domain owner has NO control, and they all in effect point to the one authoritative name server under control of the domain owner. The SOA record on the authoritative server itself is, on the other hand, not strictly needed for resolving that domain, and can contain bogus info (or hidden primary, or otherwise restricted servers) and should not be relied on to determine the authoritative name server for a given domain. You need to query the server that is authoritative for the top level domain to obtain reliable SOA information for a given child domain. (The information about which server is authoritative for which TLD can be queried from the root name servers). When you have reliable information about the SOA from the TLD authoritative server, you can then query the primary name server itself authoritative (the one thats in the SOA record on the gTLD nameserver!) for any other NS records, and then proceed with checking all those name servers you've got from querying the NS records, to see if there is any inconsistency for any other particular record, on any of those servers. This all works much better/reliable with linux and dig than with nslookup/windows. A: Unfortunately, most of these tools only return the NS record as provided by the actual name server itself. 
To be more accurate in determining which name servers are actually responsible for a domain, you'd have to either use "whois" and check the domains listed there OR use "dig [domain] NS @[root name server]" and run that recursively until you get the name server listings... I wish there were a simple command line that you could run to get THAT result dependably and in a consistent format, not just the result that is given from the name server itself. The purpose of this for me is to be able to query about 330 domain names that I manage so I can determine exactly which name server each domain is pointing to (as per their registrar settings). Anyone know of a command using "dig" or "host" or something else on *nix?
{ "language": "en", "url": "https://stackoverflow.com/questions/38021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "354" }
Q: How can I improve my support of Novell networks when I don't have a Novell network? I work for a .NET/MSSQL shop that has trouble supporting customers running Novell, partially because we don't have Novell (or the money for it) and partially because we have no one with Novell experience. This question could easily be expanded to "How can we improve our support of technology X when we don't have technology X?" Obviously, I expect someone to say "it is easy to acquire the technology or someone with that kind of experience," but keep in mind I'm just a developer, not the development manager or someone with power over the purse strings. I looked for a Novell server virtual appliance (though I'm not sure "Novell server" is what I should be looking for) but didn't find much on VMware's website. A: There used to be a relatively cheap developer network which we used to belong to, before the Novell questions all withered and died away (famous last words; now I bet we will get one tomorrow). There is never any substitute for having the software/hardware; the only alternative is to write a test program and get your user to run it. I am intrigued, though, as to what problem you are getting; the main ones we got were file locks with Jet databases. A: When you say running Novell, you need to consider what that means. Most likely you mean it either uses NetWare servers or uses eDirectory for authentication. With the release of Open Enterprise Server, Novell ported the core functionality of most of the NetWare stack to run on SLES (SUSE Linux Enterprise Server). Thus OES runs on a NetWare or Linux kernel. Services are much the same on both (there are some subtle differences that are probably outside the scope of this issue). If you mean NetWare servers (or even OES Linux servers providing file shares), then it becomes an issue of how you access the file system. If it is simple reads and writes from a network drive, then there are two approaches. 1) Install the Novell Client on the box that needs the file system access so it can make an NCP (Novell Core Protocols) connection to volumes and data hosted there. 2) Get the Novell server admin to enable CIFS/Samba on the server. (On NetWare kernels, it is CIFS, a not-ported-from-Samba implementation of CIFS. On OES Linux it is currently Samba using eDirectory for credentials. On the soon-to-be-released OES 2 SP1 on the Linux kernel, the NetWare CIFS stack has been ported to Linux, since it is much more performant and scalable compared to just Samba.) More likely, however, you mean eDirectory is used for authentication. If that is the case, just pretend it is an LDAP directory and you should be pretty much fine. eDirectory is cross-platform, and runs on Windows, NetWare, Linux (SLES and Red Hat at least), Solaris, AIX, and HPUX. It is practically indistinguishable which platform it is running on, so whatever core OS you have in-house expertise on, install an eDirectory instance on it, set up a test tree, and you can test your authentication code against it fairly easily. There should still be a free 250,000-user license available for just eDirectory for developers. (In fact I do not think they even bother asking for licenses for just eDirectory. The add-on products require licensing. OES requires a license to access file shares or shared printers. Identity Manager (IDM) is bundled with almost any other Novell product license, but only for use with the bundled drivers (AD, eDir, Notes, Exchange); the rest require licensing.) Hope that helps.
If you are looking for pointers and beginners guides, there is a huge amount of content at Novell Cool Solutions. If you have a specific problem let me know and I will see if I can help out. A: There is a 60 day evaluation trial of Open Enterprise Server 2 available (requires free registration). If you install it in a VM, there's nothing stopping you from reinstalling it after 60 days (well except licence). But you will need someone good with Linux to handle this (and preferably good with this precise technology). In a MS shop this might be a problem. The easiest solution would be to outsource this - have some external techs test your software for compatibility. If you find out you are paying too much - hire someone who knows this software stack. You can't support something if you don't test against it. And you can't test against something you don't know.
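To illustrate the "treat eDirectory as LDAP" suggestion above, here is a minimal .NET authentication smoke test. This is a sketch only: the host name, port, bind DN and password are made-up placeholders for your own test tree, and it assumes the System.DirectoryServices.Protocols assembly is available.

using System;
using System.DirectoryServices.Protocols;
using System.Net;

class EDirAuthTest
{
    static void Main()
    {
        // Placeholder values - replace with your own eDirectory test tree.
        var server = new LdapDirectoryIdentifier("edir-test.example.com", 389);
        var credential = new NetworkCredential(
            "cn=testuser,ou=users,o=acme",  // hypothetical bind DN
            "secret");                       // hypothetical password

        using (var connection = new LdapConnection(server))
        {
            connection.AuthType = AuthType.Basic;
            try
            {
                // A successful Bind means the directory accepted the credentials.
                connection.Bind(credential);
                Console.WriteLine("Authenticated against eDirectory.");
            }
            catch (LdapException ex)
            {
                Console.WriteLine("Bind failed: " + ex.Message);
            }
        }
    }
}

If this works against a test tree, the same code should work unchanged against a customer's eDirectory, which is the point of the advice above.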
Q: Why does Splint (the C code checker) give an error when comparing a float to an int? Both are mathematical values; however, the float does have more precision. Is that the only reason for the error - the difference in precision? Or is there another potential (and more serious) problem?

A: It's because the set of integer values does not equal the set of float values for the 'int' and 'float' types. For example, the float value 0.5 has no equal in the integer set, and the integer value 4519245367 might not exist in the set of values a float can store. So the checker flags this as an issue to be checked by the programmer.

A: Because it probably isn't a very good idea. Not all floats can be truncated to ints; not all ints can be converted to floats.

A: When doing the comparison, the integer value will get "promoted" to a floating point value. At that point you are doing an exact equality comparison between two floating point numbers, which is almost always a bad thing. You should generally have some sort of "epsilon ball", or range of acceptable values, and consider the two values equal only if they are close enough to each other. You need a function roughly like this:

int double_equals(double a, double b, double epsilon)
{
    return ( a > ( b - epsilon ) && a < ( b + epsilon ) );
}

If your application doesn't have an obvious choice of epsilon, then use DBL_EPSILON.

A: Because floats can't store an exact int value, so if you have two variables, int i and float f, even if you assign "i = f;", the comparison "if (i == f)" probably won't return true.

A: Assuming signed integers and IEEE floating point format, the magnitudes of integers that can be represented are:

short  -> 15 bits
float  -> 23 bits
long   -> 31 bits
double -> 52 bits

Therefore a float can represent any short, and a double can represent any long.

A: If you need to get around this (you have a legitimate reason and are happy that none of the issues mentioned in the other answers apply to you), then just cast from one type to another.
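To make the danger concrete, here is a small self-contained sketch. The particular values (2^24 + 1) and the epsilon (1e-9) are chosen just for the example; double_equals is the helper from the answer above.

#include <stdio.h>

/* The epsilon comparison suggested in the answer above. */
static int double_equals(double a, double b, double epsilon)
{
    return a > (b - epsilon) && a < (b + epsilon);
}

int main(void)
{
    int   i = 16777217;      /* 2^24 + 1: too large to represent exactly in a float */
    float f = 16777216.0f;   /* the nearest float to 2^24 + 1 */

    /* Splint flags this comparison: i is implicitly converted to float,
       rounding it to 16777216.0f, so == reports "equal" for two values
       that are actually different. */
    if (i == f)
        printf("direct comparison says equal\n");

    /* Comparing as doubles with an explicit tolerance reveals the difference. */
    if (!double_equals((double)i, (double)f, 1e-9))
        printf("epsilon comparison says not equal\n");

    return 0;
}

Both lines print, which is exactly the kind of silent precision loss the checker is warning about.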
Q: Selecting X words from a text field in MySQL I'm building basic search functionality using LIKE (I'd be using fulltext but can't at the moment), and I'm wondering if MySQL can, on searching for a keyword (e.g. WHERE field LIKE '%word%'), return 20 words either side of the keyword as well?

A: You can do it all in the query using SUBSTRING_INDEX:

CONCAT_WS(
    ' ',
    -- 20 words before
    TRIM(
        SUBSTRING_INDEX(
            SUBSTRING(field, 1, INSTR(field, 'word') - 1),
            ' ',
            -20
        )
    ),
    -- your word
    'word',
    -- 20 words after
    TRIM(
        SUBSTRING_INDEX(
            SUBSTRING(field, INSTR(field, 'word') + LENGTH('word')),
            ' ',
            20
        )
    )
)

A: Use the INSTR() function to find the position of the word in the string, and then use the SUBSTRING() function to select a portion of characters before and after that position. You'd have to watch that your SUBSTRING instruction doesn't use negative values, or you'll get weird results. Try that, and report back.

A: I don't think it's possible to limit the number of words returned; however, to limit the number of chars returned you could do something like:

SELECT SUBSTRING(field_name, LOCATE('keyword', field_name) - chars_before, total_chars)
FROM table_name
WHERE field_name LIKE "%keyword%"

* chars_before - the number of chars you wish to select before the keyword(s)
* total_chars - the total number of chars you wish to select

i.e. the following example would return 30 chars of data starting from 15 chars before the keyword:

SUBSTRING(field_name, LOCATE('keyword', field_name) - 15, 30)

Note: as aryeh pointed out, any negative values in SUBSTRING() bugger things up considerably - for example, if the keyword is found within the first [chars_before] chars of the field, then the last [chars_before] chars of data in the field are returned.

A: I think your best bet is to get the result via SQL query and programmatically apply a regular expression that retrieves a group of words before and after the searched word. I can't test it now, but the regular expression should be something like:

.*(\w+)\s*WORD\s*(\w+).*

where you replace WORD with the searched word and use regex group 1 as the before-words and group 2 as the after-words. I will test it later when I can ask my RegexBuddy if it will work :) and I will post it here
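For what it's worth, the first answer dropped into a complete statement looks like this. The table and column names (articles, id, body) are made up for the example; note that the WHERE clause keeps INSTR() from returning 0 for rows that don't contain the keyword at all:

SELECT id,
       CONCAT_WS(' ',
           TRIM(SUBSTRING_INDEX(SUBSTRING(body, 1, INSTR(body, 'word') - 1), ' ', -20)),
           'word',
           TRIM(SUBSTRING_INDEX(SUBSTRING(body, INSTR(body, 'word') + LENGTH('word')), ' ', 20))
       ) AS excerpt
FROM articles
WHERE body LIKE '%word%';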
Q: C++: How to extract a string from RapidXml In my C++ program I want to parse a small piece of XML, insert some nodes, then extract the new XML (preferably as a std::string). RapidXml has been recommended to me, but I can't see how to retrieve the XML back as a text string. (I could iterate over the nodes and attributes and build it myself, but surely there's a built-in function that I am missing.) Thank you.

A: Use the print function (found in the rapidxml_print.hpp utility header) to print the XML node contents to a stringstream.

A: rapidxml::print requires an output iterator to generate the output, so a character array works with it. But this is risky, because I cannot know whether an array with a fixed length (like 2048 bytes) is long enough to hold all the content of the XML. The right way to do this is to pass in an output iterator of a string stream, so the buffer can be expanded as the XML is dumped into it. My code is like below:

std::stringstream stream;
std::ostream_iterator<char> iter(stream);
rapidxml::print(iter, doc, rapidxml::print_no_indenting);
printf("%s\n", stream.str().c_str());
printf("len = %d\n", stream.str().size());

A: If you do build XML yourself, don't forget to escape the special characters. This tends to be overlooked, but can cause some serious headaches if it is not implemented (a minimal helper is sketched at the end of this thread):

* <  becomes  &lt;
* >  becomes  &gt;
* &  becomes  &amp;
* "  becomes  &quot;
* '  becomes  &apos;

A: Here's how to print a node to a string, straight from the RapidXML manual:

xml_document<> doc;    // character type defaults to char
// ... some code to fill the document

// Print to stream using operator <<
std::cout << doc;

// Print to stream using print function, specifying printing flags
print(std::cout, doc, 0);   // 0 means default printing flags

// Print to string using output iterator
std::string s;
print(std::back_inserter(s), doc, 0);

// Print to memory buffer using output iterator
char buffer[4096];                  // You are responsible for making the buffer large enough!
char *end = print(buffer, doc, 0);  // end contains pointer to character after last printed character
*end = 0;                           // Add string terminator after XML

A: Although the documentation is poor on this topic, I managed to get some working code by looking at the source, although it is missing the XML header, which normally contains important information. Here is a small example program that does what you are looking for using rapidxml:

#include <iostream>
#include <sstream>
#include "rapidxml/rapidxml.hpp"
#include "rapidxml/rapidxml_print.hpp"

int main(int argc, char* argv[])
{
    char xml[] = "<?xml version=\"1.0\" encoding=\"latin-1\"?>"
                 "<book>"
                 "</book>";

    // Parse the original document
    rapidxml::xml_document<> doc;
    doc.parse<0>(xml);
    std::cout << "Name of my first node is: " << doc.first_node()->name() << "\n";

    // Insert something
    rapidxml::xml_node<> *node = doc.allocate_node(rapidxml::node_element, "author", "John Doe");
    doc.first_node()->append_node(node);

    std::stringstream ss;
    ss << *doc.first_node();
    std::string result_xml = ss.str();
    std::cout << result_xml << std::endl;
    return 0;
}

A: If you aren't yet committed to RapidXml, I can recommend some alternative libraries:

* Xerces - This is probably the de facto C++ implementation.
* XMLite - I've had some luck with this minimal XML implementation. See the article at http://www.codeproject.com/KB/recipes/xmlite.aspx

A: Use static_cast<>. Ex:

rapidxml::xml_document<> doc;
rapidxml::xml_node<> *root_node = doc.first_node();
std::string strBuff;
doc.parse<0>(xml);
. . .
strBuff = static_cast<std::string>(root_node->first_attribute("attribute_name")->value());

A: The following is very easy:

std::string s;
print(back_inserter(s), doc, 0);
cout << s;

You only need to include the "rapidxml_print.hpp" header in your source code.
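As a sketch of the escaping advice above (the function name escape_xml is made up for this example, and it handles only the five characters listed):

#include <string>

// Hypothetical helper: replaces the five XML special characters
// with their entity references, leaving everything else untouched.
std::string escape_xml(const std::string &in)
{
    std::string out;
    out.reserve(in.size());
    for (char c : in) {
        switch (c) {
            case '<':  out += "&lt;";   break;
            case '>':  out += "&gt;";   break;
            case '&':  out += "&amp;";  break;
            case '"':  out += "&quot;"; break;
            case '\'': out += "&apos;"; break;
            default:   out += c;        break;
        }
    }
    return out;
}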
Q: How can I get the DateTime for the start of the week? How do I find the start of the week (both Sunday and Monday) knowing just the current time in C#? Something like:

DateTime.Now.StartWeek(Monday);

A: Use an extension method:

public static class DateTimeExtensions
{
    public static DateTime StartOfWeek(this DateTime dt, DayOfWeek startOfWeek)
    {
        int diff = (7 + (dt.DayOfWeek - startOfWeek)) % 7;
        return dt.AddDays(-1 * diff).Date;
    }
}

Which can be used as follows:

DateTime dt = DateTime.Now.StartOfWeek(DayOfWeek.Monday);
DateTime dt = DateTime.Now.StartOfWeek(DayOfWeek.Sunday);

A: A little more verbose and culture-aware:

System.Globalization.CultureInfo ci = System.Threading.Thread.CurrentThread.CurrentCulture;
DayOfWeek fdow = ci.DateTimeFormat.FirstDayOfWeek;
DayOfWeek today = DateTime.Now.DayOfWeek;
DateTime sow = DateTime.Now.AddDays(-(today - fdow)).Date;

A:

using System;
using System.Globalization;

namespace MySpace
{
    public static class DateTimeExtention
    {
        // ToDo: Need to provide culturally neutral versions.

        public static DateTime GetStartOfWeek(this DateTime dt)
        {
            DateTime ndt = dt.Subtract(TimeSpan.FromDays((int)dt.DayOfWeek));
            return new DateTime(ndt.Year, ndt.Month, ndt.Day, 0, 0, 0, 0);
        }

        public static DateTime GetEndOfWeek(this DateTime dt)
        {
            DateTime ndt = dt.GetStartOfWeek().AddDays(6);
            return new DateTime(ndt.Year, ndt.Month, ndt.Day, 23, 59, 59, 999);
        }

        public static DateTime GetStartOfWeek(this DateTime dt, int year, int week)
        {
            DateTime dayInWeek = new DateTime(year, 1, 1).AddDays((week - 1) * 7);
            return dayInWeek.GetStartOfWeek();
        }

        public static DateTime GetEndOfWeek(this DateTime dt, int year, int week)
        {
            DateTime dayInWeek = new DateTime(year, 1, 1).AddDays((week - 1) * 7);
            return dayInWeek.GetEndOfWeek();
        }
    }
}

A: Putting it all together, with globalization and allowing the first day of the week to be specified as part of the call, we have:

public static DateTime StartOfWeek(this DateTime dt, DayOfWeek? firstDayOfWeek)
{
    DayOfWeek fdow;
    if (firstDayOfWeek.HasValue)
    {
        fdow = firstDayOfWeek.Value;
    }
    else
    {
        System.Globalization.CultureInfo ci = System.Threading.Thread.CurrentThread.CurrentCulture;
        fdow = ci.DateTimeFormat.FirstDayOfWeek;
    }
    int diff = dt.DayOfWeek - fdow;
    if (diff < 0)
    {
        diff += 7;
    }
    return dt.AddDays(-1 * diff).Date;
}

A: Step 1: Create a static class:

public static class TIMEE
{
    public static DateTime StartOfWeek(this DateTime dt, DayOfWeek startOfWeek)
    {
        int diff = (7 + (dt.DayOfWeek - startOfWeek)) % 7;
        return dt.AddDays(-1 * diff).Date;
    }

    public static DateTime EndOfWeek(this DateTime dt, DayOfWeek startOfWeek)
    {
        int diff = (7 - (dt.DayOfWeek - startOfWeek)) % 7;
        return dt.AddDays(1 * diff).Date;
    }
}

Step 2: Use this class to get both the start and end day of the week:

DateTime dt = TIMEE.StartOfWeek(DateTime.Now, DayOfWeek.Monday);
DateTime dt1 = TIMEE.EndOfWeek(DateTime.Now, DayOfWeek.Sunday);

A: Using Fluent DateTime:

var monday = DateTime.Now.Previous(DayOfWeek.Monday);
var sunday = DateTime.Now.Previous(DayOfWeek.Sunday);

A: Ugly, but it at least gives the right dates back.

With the start of week set by the system:

public static DateTime FirstDateInWeek(this DateTime dt)
{
    while (dt.DayOfWeek != System.Threading.Thread.CurrentThread.CurrentCulture.DateTimeFormat.FirstDayOfWeek)
        dt = dt.AddDays(-1);
    return dt;
}

Without:

public static DateTime FirstDateInWeek(this DateTime dt, DayOfWeek weekStartDay)
{
    while (dt.DayOfWeek != weekStartDay)
        dt = dt.AddDays(-1);
    return dt;
}

A:

var now = System.DateTime.Now;
var result = now.AddDays(-((now.DayOfWeek - System.Threading.Thread.CurrentThread.CurrentCulture.DateTimeFormat.FirstDayOfWeek + 7) % 7)).Date;

A: This would give you midnight on the first Sunday of the week:

DateTime t = DateTime.Now;
t -= new TimeSpan((int)t.DayOfWeek, t.Hour, t.Minute, t.Second);

This gives you the first Monday at midnight:

DateTime t = DateTime.Now;
t -= new TimeSpan((int)t.DayOfWeek - 1, t.Hour, t.Minute, t.Second);

A: Try this in C#. With this code you can get both the first date and the last date of a given week. Here Sunday is the first day and Saturday is the last day, but you can set both days according to your culture.

DateTime firstDate = GetFirstDateOfWeek(DateTime.Parse("05/09/2012").Date, DayOfWeek.Sunday);
DateTime lastDate = GetLastDateOfWeek(DateTime.Parse("05/09/2012").Date, DayOfWeek.Saturday);

public static DateTime GetFirstDateOfWeek(DateTime dayInWeek, DayOfWeek firstDay)
{
    DateTime firstDayInWeek = dayInWeek.Date;
    while (firstDayInWeek.DayOfWeek != firstDay)
        firstDayInWeek = firstDayInWeek.AddDays(-1);
    return firstDayInWeek;
}

public static DateTime GetLastDateOfWeek(DateTime dayInWeek, DayOfWeek firstDay)
{
    DateTime lastDayInWeek = dayInWeek.Date;
    while (lastDayInWeek.DayOfWeek != firstDay)
        lastDayInWeek = lastDayInWeek.AddDays(1);
    return lastDayInWeek;
}

A: I tried several, but none solved the issue of a week starting on a Monday: on a Sunday they gave me the coming Monday. So I modified it a bit and got it working with this code:

int delta = DayOfWeek.Monday - DateTime.Now.DayOfWeek;
DateTime monday = DateTime.Now.AddDays(delta == 1 ? -6 : delta);
return monday;

A: The same for the end of the week (in the style of Compile This's answer):

public static DateTime EndOfWeek(this DateTime dt)
{
    int diff = 7 - (int)dt.DayOfWeek;
    diff = diff == 7 ? 0 : diff;
    DateTime eow = dt.AddDays(diff).Date;
    return new DateTime(eow.Year, eow.Month, eow.Day, 23, 59, 59, 999) { };
}

A: Thanks for the examples. I needed to always use the "CurrentCulture" first day of the week, and for an array I needed to know the exact day number, so here are my first extensions:

public static class DateTimeExtensions
{
    // http://stackoverflow.com/questions/38039/how-can-i-get-the-datetime-for-the-start-of-the-week
    // http://stackoverflow.com/questions/1788508/calculate-date-with-monday-as-dayofweek1

    public static DateTime StartOfWeek(this DateTime dt)
    {
        // Difference in days: sunday=always0, monday=always1, etc.
        int diff = (int)dt.DayOfWeek - (int)CultureInfo.CurrentCulture.DateTimeFormat.FirstDayOfWeek;
        // As a result we need to have day 0,1,2,3,4,5,6
        if (diff < 0)
        {
            diff += 7;
        }
        return dt.AddDays(-1 * diff).Date;
    }

    public static int DayNoOfWeek(this DateTime dt)
    {
        // Difference in days: sunday=always0, monday=always1, etc.
        int diff = (int)dt.DayOfWeek - (int)CultureInfo.CurrentCulture.DateTimeFormat.FirstDayOfWeek;
        // As a result we need to have day 0,1,2,3,4,5,6
        if (diff < 0)
        {
            diff += 7;
        }
        return diff + 1; // Make it 1..7
    }
}

A: Here is a correct solution. The following code works regardless of whether the first day of the week is a Monday, a Sunday, or something else.

public static class DateTimeExtension
{
    public static DateTime GetFirstDayOfThisWeek(this DateTime d)
    {
        CultureInfo ci = System.Threading.Thread.CurrentThread.CurrentCulture;
        var first = (int)ci.DateTimeFormat.FirstDayOfWeek;
        var current = (int)d.DayOfWeek;
        var result = first <= current ?
            d.AddDays(-1 * (current - first)) :
            d.AddDays(first - current - 7);
        return result;
    }
}

class Program
{
    static void Main()
    {
        System.Threading.Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("en-US");
        Console.WriteLine("Current culture set to en-US");
        RunTests();
        Console.WriteLine();
        System.Threading.Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("da-DK");
        Console.WriteLine("Current culture set to da-DK");
        RunTests();
        Console.ReadLine();
    }

    static void RunTests()
    {
        Console.WriteLine("Today {1}: {0}", DateTime.Today.Date.GetFirstDayOfThisWeek(), DateTime.Today.Date.ToString("yyyy-MM-dd"));
        Console.WriteLine("Saturday 2013-03-02: {0}", new DateTime(2013, 3, 2).GetFirstDayOfThisWeek());
        Console.WriteLine("Sunday 2013-03-03: {0}", new DateTime(2013, 3, 3).GetFirstDayOfThisWeek());
        Console.WriteLine("Monday 2013-03-04: {0}", new DateTime(2013, 3, 4).GetFirstDayOfThisWeek());
    }
}

A: Modulo in C# behaves badly for -1 mod 7 (it should be 6, but C# returns -1), so... a "one-liner" solution to this will look like this :)

private static DateTime GetFirstDayOfWeek(DateTime date)
{
    return date.AddDays(
        -(((int)date.DayOfWeek - 1) -
          (int)Math.Floor((double)((int)date.DayOfWeek - 1) / 7) * 7));
}

A: I did it for Monday, but with similar logic for Sunday.

public static DateTime GetStartOfWeekDate()
{
    // Get today's date
    DateTime today = DateTime.Today;
    // Get the value for today. DayOfWeek is an enum with 0 being Sunday, 1 Monday, etc.
    var todayDayOfWeek = (int)today.DayOfWeek;

    var dateStartOfWeek = today;
    // If today is not Monday, then get the date for Monday
    if (todayDayOfWeek != 1)
    {
        // How many days to get back to Monday from today
        var daysToStartOfWeek = (todayDayOfWeek - 1);
        // Subtract from today's date the number of days to get to Monday
        dateStartOfWeek = today.AddDays(-daysToStartOfWeek);
    }
    return dateStartOfWeek;
}

A: Let's combine the culture-safe answer and the extension method answer:

public static class DateTimeExtensions
{
    public static DateTime StartOfWeek(this DateTime dt)
    {
        System.Globalization.CultureInfo ci = System.Threading.Thread.CurrentThread.CurrentCulture;
        DayOfWeek fdow = ci.DateTimeFormat.FirstDayOfWeek;
        int diff = (7 + (dt.DayOfWeek - fdow)) % 7;
        return dt.AddDays(-1 * diff).Date;
    }
}

A: This would give you the preceding Sunday (I think):

DateTime t = DateTime.Now;
t -= new TimeSpan((int)t.DayOfWeek, 0, 0, 0);

A: For Monday:

DateTime startAtMonday = DateTime.Now.AddDays(DayOfWeek.Monday - DateTime.Now.DayOfWeek);

For Sunday:

DateTime startAtSunday = DateTime.Now.AddDays(DayOfWeek.Sunday - DateTime.Now.DayOfWeek);

A: The quickest way I can come up with is:

var sunday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek);

If you would like any other day of the week to be your start date, all you need to do is add the DayOfWeek value to the end:

var monday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Monday);
var tuesday = DateTime.Today.AddDays(-(int)DateTime.Today.DayOfWeek + (int)DayOfWeek.Tuesday);

A: This may be a bit of a hack, but you can cast the .DayOfWeek property to an int (it's an enum, and since its underlying data type has not been changed it defaults to int) and use that to determine the previous start of the week.

It appears the week specified in the DayOfWeek enum starts on Sunday, so if we subtract 1 from this value that will be equal to how many days the Monday is before the current date. We also need to map Sunday (0) to equal 7, so that given 1 - 7 = -6 the Sunday will map to the previous Monday:

DateTime now = DateTime.Now;
int dayOfWeek = (int)now.DayOfWeek;
dayOfWeek = dayOfWeek == 0 ? 7 : dayOfWeek;
DateTime startOfWeek = now.AddDays(1 - dayOfWeek);

The code for the previous Sunday is simpler, as we don't have to make this adjustment:

DateTime now = DateTime.Now;
int dayOfWeek = (int)now.DayOfWeek;
DateTime startOfWeek = now.AddDays(-dayOfWeek);

A: The following method should return the DateTime that you want. Pass in true for Sunday being the first day of the week, false for Monday:

private DateTime getStartOfWeek(bool useSunday)
{
    DateTime now = DateTime.Now;
    int dayOfWeek = (int)now.DayOfWeek;
    if (!useSunday)
        dayOfWeek--;
    if (dayOfWeek < 0)
    {
        // Day of week is Sunday and we want to use Monday as the start of the week;
        // Sunday is now the seventh day of the week
        dayOfWeek = 6;
    }
    return now.AddDays(-1 * (double)dayOfWeek);
}

A: You could use the excellent Umbrella library:

using nVentive.Umbrella.Extensions.Calendar;
DateTime beginning = DateTime.Now.BeginningOfWeek();

However, they do seem to have stored Monday as the first day of the week (see the property nVentive.Umbrella.Extensions.Calendar.DefaultDateTimeCalendarExtensions.WeekBeginsOn), so the previous localized solution is a bit better. Unfortunate.

Edit: looking closer at the question, it looks like Umbrella might actually work for that too:

// Or DateTime.Now.PreviousDay(DayOfWeek.Monday)
DateTime monday = DateTime.Now.PreviousMonday();
DateTime sunday = DateTime.Now.PreviousSunday();

Although it's worth noting that if you ask for the previous Monday on a Monday, it'll give you seven days back. But this is also true if you use BeginningOfWeek, which seems like a bug :(.

A: Following on from Compile This's answer, use the following method to obtain the date for any day of the week:

public static DateTime GetDayOfWeek(DateTime dateTime, DayOfWeek dayOfWeek)
{
    var monday = dateTime.Date.AddDays(-((7 + (dateTime.DayOfWeek - DayOfWeek.Monday)) % 7));

    var diff = dayOfWeek - DayOfWeek.Monday;
    if (diff == -1)
    {
        diff = 6;
    }
    return monday.AddDays(diff);
}

A: This will return both the beginning-of-week and end-of-week dates:

private string[] GetWeekRange(DateTime dateToCheck)
{
    string[] result = new string[2];
    DateTime dateRangeBegin = dateToCheck.AddDays(-(int)dateToCheck.DayOfWeek);
    DateTime dateRangeEnd = dateToCheck.AddDays(6 - (int)dateToCheck.DayOfWeek);
    result[0] = dateRangeBegin.Date.ToString();
    result[1] = dateRangeEnd.Date.ToString();
    return result;
}

I have posted the complete code for calculating the begin/end of week, month, quarter and year on my blog ZamirsBlog.

A: Here is a combination of a few of the answers. It uses an extension method that allows the culture to be passed in; if one is not passed in, the current culture is used. This gives it maximum flexibility and reuse.

/// <summary>
/// Gets the date of the first day of the week for the date.
/// </summary>
/// <param name="date">The date to be used</param>
/// <param name="cultureInfo">If none is provided, the current culture is used</param>
/// <returns>The date of the beginning of the week based on the culture specified</returns>
public static DateTime StartOfWeek(this DateTime date, CultureInfo cultureInfo = null) =>
    date.AddDays(-1 * (7 + (date.DayOfWeek - (cultureInfo ?? CultureInfo.CurrentCulture).DateTimeFormat.FirstDayOfWeek)) % 7).Date;

Example usage:

public static void TestFirstDayOfWeekExtension()
{
    DateTime date = DateTime.Now;
    foreach (System.Globalization.CultureInfo culture in CultureInfo.GetCultures(CultureTypes.UserCustomCulture | CultureTypes.SpecificCultures))
    {
        Console.WriteLine($"{culture.EnglishName}: {date.ToShortDateString()} First Day of week: {date.StartOfWeek(culture).ToShortDateString()}");
    }
}

A: If you want Saturday or Sunday or any day of the week, but not exceeding the current week (Sat-Sun), I've got you covered with this piece of code:

public static DateTime GetDateInCurrentWeek(this DateTime date, DayOfWeek day)
{
    var temp = date;
    var limit = (int)date.DayOfWeek;
    var returnDate = DateTime.MinValue;

    if (date.DayOfWeek == day)
        return date;

    for (int i = limit; i < 6; i++)
    {
        temp = temp.AddDays(1);
        if (day == temp.DayOfWeek)
        {
            returnDate = temp;
            break;
        }
    }
    if (returnDate == DateTime.MinValue)
    {
        for (int i = limit; i > -1; i--)
        {
            date = date.AddDays(-1);
            if (day == date.DayOfWeek)
            {
                returnDate = date;
                break;
            }
        }
    }
    return returnDate;
}

A: We like one-liners: get the difference between the current culture's first day of the week and the current day, and then subtract that number of days from the current day:

var now = DateTime.Now;
var weekStartDate = now.AddDays(-((int)now.DayOfWeek - (int)DateTimeFormatInfo.CurrentInfo.FirstDayOfWeek)).Date;

A: Calculating this way lets you choose which day of the week indicates the start of a new week (in the example I chose Monday). Note that doing this calculation for a day that is a Monday will give the current Monday and not the previous one.

// Replace with whatever input date you want
DateTime inputDate = DateTime.Now;

// For this example, weeks start on Monday
int startOfWeek = (int)DayOfWeek.Monday;

// Calculate the number of days it has been since the start of the week
int daysSinceStartOfWeek = ((int)inputDate.DayOfWeek + 7 - startOfWeek) % 7;

DateTime previousStartOfWeek = inputDate.AddDays(-daysSinceStartOfWeek);

A: I work with a lot of schools, so correctly using Monday as the first day of the week is important here. A lot of the most terse answers here don't work on Sunday - we often end up returning tomorrow's date on a Sunday, which is no good for running a report on last week's activities. Here's my solution, which returns last Monday on a Sunday, and today on a Monday:

// Adding 7 so the remainder is always positive; otherwise % returns -1 on Sunday.
var daysToSubtract = (7 + (int)today.DayOfWeek - (int)DayOfWeek.Monday) % 7;
var monday = today
    .AddDays(-daysToSubtract)
    .Date;

Remember to use a method parameter for "today" so it's unit-testable!

A:

public static System.DateTime getstartweek()
{
    System.DateTime dt = System.DateTime.Now;
    System.DayOfWeek dmon = System.DayOfWeek.Monday;
    int span = dt.DayOfWeek - dmon;
    dt = dt.AddDays(-span);
    return dt;
}

A:

DateTime d = DateTime.Now;
int dayofweek = (int)d.DayOfWeek;
if (dayofweek != 0)
{
    d = d.AddDays(1 - dayofweek);
}
else
{
    d = d.AddDays(-6);
}

A:

namespace DateTimeExample
{
    using System;

    public static class DateTimeExtension
    {
        public static DateTime GetMonday(this DateTime time)
        {
            if (time.DayOfWeek != DayOfWeek.Monday)
                return GetMonday(time.AddDays(-1)); // Recursive call

            return time;
        }
    }

    internal class Program
    {
        private static void Main()
        {
            Console.WriteLine(DateTime.Now.GetMonday());
            Console.ReadLine();
        }
    }
}

A: I did it like this:

DateTime.Now.Date.AddDays(-(DateTime.Now.Date.DayOfWeek == 0 ? 7 : (int)DateTime.Now.Date.DayOfWeek) + 1)

All this code does is subtract a number of days from the given DateTime: if the day of the week is 0 (Sunday), it subtracts 7, else it subtracts the day of the week. It then adds 1 day to the result, which gives you the Monday of that date. This way you can play around with the number (1) at the end to get the desired day:

private static DateTime GetDay(DateTime date, int daysAmount = 1)
{
    return date.Date.AddDays(-(date.Date.DayOfWeek == 0 ? 7 : (int)date.Date.DayOfWeek) + daysAmount);
}

If you really want to use the DayOfWeek enum, then something like this can be used, though I personally prefer the one above, as I can add or subtract any amount of days:

private static DateTime GetDayOfWeek(DateTime date, DayOfWeek dayOfWeek = DayOfWeek.Monday)
{
    return date.Date.AddDays(-(date.Date.DayOfWeek == 0 ? 7 : (int)date.Date.DayOfWeek) + (dayOfWeek == 0 ? 7 : (int)dayOfWeek));
}
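One way to sanity-check whichever variant you adopt is a quick console loop like the following. This is a sketch: it assumes the StartOfWeek(DayOfWeek) extension method from the first answer is in scope, and the sample date (Monday, 1 September 2008) is arbitrary. Every day in that week should print the same week start.

// Assumes the StartOfWeek(DayOfWeek) extension from the first answer.
var start = new DateTime(2008, 9, 1); // a Monday
for (int i = 0; i < 7; i++)
{
    var day = start.AddDays(i);
    Console.WriteLine("{0:ddd yyyy-MM-dd} -> week starts {1:ddd yyyy-MM-dd}",
        day, day.StartOfWeek(DayOfWeek.Monday));
}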
Q: How to check if a process is still running using Python on Linux? The only nice way I've found is:

import sys
import os

try:
    os.kill(int(sys.argv[1]), 0)
    print "Running"
except:
    print "Not running"

(Source)

But is this reliable? Does it work with every process and every distribution?

A: Here's the solution that solved it for me:

import os
import subprocess
import re

def findThisProcess(process_name):
    ps = subprocess.Popen("ps -eaf | grep " + process_name, shell=True, stdout=subprocess.PIPE)
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()
    return output

# This is the function you can use
def isThisRunning(process_name):
    output = findThisProcess(process_name)
    if re.search('path/of/process' + process_name, output) is None:
        return False
    else:
        return True

# Example of how to use
if isThisRunning('some_process') == False:
    print("Not running")
else:
    print("Running!")

I'm a Python + Linux newbie, so this might not be optimal. It solved my problem, and hopefully will help other people as well.

A: But is this reliable? Does it work with every process and every distribution? Yes, it should work on any Linux distribution. Be aware that /proc is not easily available on other Unix-based systems, though (FreeBSD, OS X).

A: Mark's answer is the way to go; after all, that's why the /proc file system is there. For something a little more copy/pasteable:

>>> import os.path
>>> os.path.exists("/proc/0")
False
>>> os.path.exists("/proc/12")
True

A: Seems to me a PID-based solution is too vulnerable. If the process you're trying to check the status of has been terminated, its PID can be reused by a new process. So, IMO, ShaChris23 the Python + Linux newbie gave the best solution to the problem. Even then, it only works if the process in question is uniquely identifiable by its command string, or you are sure there would be only one running at a time.

A: I had problems with the versions above (for example, the function also matched part of a longer string, and such things...), so I wrote my own, modified version of Maksym Kozlenko's:

# proc -> name/id of the process
# id = 1 -> search for pid
# id = 0 -> search for name (default)
def process_exists(proc, id=0):
    ps = subprocess.Popen("ps -A", shell=True, stdout=subprocess.PIPE)
    ps_pid = ps.pid
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()

    for line in output.split("\n"):
        if line != "" and line != None:
            fields = line.split()
            pid = fields[0]
            pname = fields[3]
            if id == 0:
                if pname == proc:
                    return True
            else:
                if pid == proc:
                    return True
    return False

I think it's more reliable, easier to read, and you have the option of checking for process ids or names.

A: On Linux, you can look in the directory /proc/$PID to get information about that process. In fact, if the directory exists, the process is running.

A: It should work on any POSIX system (although looking at the /proc filesystem, as others have suggested, is easier if you know it's going to be there). However, os.kill may also fail if you don't have permission to signal the process. You would need to do something like:

import sys
import os
import errno

try:
    os.kill(int(sys.argv[1]), 0)
except OSError, err:
    if err.errno == errno.ESRCH:
        print "Not running"
    elif err.errno == errno.EPERM:
        print "No permission to signal this process!"
    else:
        print "Unknown error"
else:
    print "Running"

A: I use this to get the processes, and the count of processes of the specified name:

import os

processname = 'somprocessname'
tmp = os.popen("ps -Af").read()
proccount = tmp.count(processname)

if proccount > 0:
    print(proccount, ' processes running of ', processname, 'type')

A: Slightly modified version of ShaChris23's script. Checks if the proc_name value is found within the process args string (for example, a Python script executed with python):

def process_exists(proc_name):
    ps = subprocess.Popen("ps ax -o pid= -o args= ", shell=True, stdout=subprocess.PIPE)
    ps_pid = ps.pid
    output = ps.stdout.read()
    ps.stdout.close()
    ps.wait()

    for line in output.split("\n"):
        res = re.findall("(\d+) (.*)", line)
        if res:
            pid = int(res[0][0])
            if proc_name in res[0][1] and pid != os.getpid() and pid != ps_pid:
                return True
    return False
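As an aside not covered in the answers above: if a third-party package is acceptable, the psutil library (installed separately, e.g. via pip) wraps both the PID check and the name search portably. A minimal sketch, where the PID and process name are made-up examples:

import psutil

pid = 1234  # hypothetical PID to check

# True if a process with this PID exists (uses /proc on Linux internally)
print(psutil.pid_exists(pid))

# Or search by name, which sidesteps the PID-reuse concern raised above
running = any(p.info['name'] == 'some_process'
              for p in psutil.process_iter(['name']))
print("Running!" if running else "Not running")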