Q: Destructor for pointer to fixed size array Suppose I have a C++ class with two private variables. A fixed size array, data, and a pointer to that array, pnt. class MyClass { private: double *pnt; double data[2]; public: MyClass(); virtual ~MyClass(); double* getPnt() const; void setPnt(double* input); }; MyClass::MyClass() { double * input; data[0] = 1; data[1] = 2; input = data; setPnt(input); } MyClass::~MyClass() { delete this->pnt; // This throws a runtime error } void MyClass::setPnt(double * input) { pnt = input; } double * MyClass::getPnt() const { return pnt; } int main() { MyClass spam; // Construct object delete spam; // Error C2440: 'delete' cannot convert from 'MyClass' to 'void*' } There are two problems with this code. First, if I try to call delete on the object, I get: Error C2440: 'delete' cannot convert from 'MyClass' to 'void*' Secondly, if I comment out the delete statement, I get a runtime error stating a debug assertion failed and this: Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse) My question is then: For a class with a pointer to a private fixed size array, how do I properly free memory, write/call the destructor? P.S. I can't use vector or nice containers like that (hence this question). A: I see no static array. I see a fixed-size array. Also, memory for data is allocated as part of the object. You must not explicitly delete a member of the class: the delete operator will take care of that IFF the instance was dynamically allocated. { MyClass x; // auto variable } // x destructor run, no delete operator vs. { MyClass* x = new MyClass(); // heap allocation variable delete x; // x destructor run, ::delete de-allocates from heap } A: data is a subobject, it will be deallocated when the MyClass instance goes away. The compiler will insert any necessary code into the MyClass destructor to call destructors for all subobjects before the memory is freed.
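To make the accepted explanation concrete, here is a minimal corrected sketch of the class from the question (the member layout is kept; the variable names in main are only for illustration): since data is a plain member array and pnt merely points into it, the destructor deletes nothing, and delete is only used on objects created with new.

class MyClass {
private:
    double *pnt;
    double data[2];
public:
    MyClass() { data[0] = 1; data[1] = 2; pnt = data; }
    virtual ~MyClass() {}               // nothing to delete: data lives inside the object
    double* getPnt() const { return pnt; }
    void setPnt(double* input) { pnt = input; }
};

int main() {
    MyClass spam;                       // automatic storage: destructor runs at scope exit
    MyClass* eggs = new MyClass();      // heap allocation must be paired with delete
    delete eggs;                        // runs the destructor, then frees the memory
    return 0;
}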
{ "language": "en", "url": "https://stackoverflow.com/questions/7612574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Is operator && strict in Haskell? For example, I have an operation fnB :: a -> Bool that makes no sense until fnA :: Bool returns False. In C I may compose these two operations in one if block: if( fnA && fnB(a) ){ doSomething; } and C will guarantee that fnB will not execute until fnA returns false. But Haskell is lazy, and, generally, there is no guarantee what operation will execute first, until we don't use seq, $!, or something else to make our code strict. Generally, this is what we need to be happy. But using && operator, I would expect that fnB will not be evaluated until fnA returns its result. Does Haskell provide such a guarantee with &&? And will Haskell evaluate fnB even when fnA returns False? A: As noted by others, naturally (&&) is strict in one of its arguments. By the standard definition it's strict in its first argument. You can use flip to flip the semantics. As an additional note: Note that the arguments to (&&) cannot have side effects, so there are only two reasons why you would want to care whether x && y is strict in y: * *Performance: If y takes a long time to compute. *Semantics: If you expect that y can be bottom. A: The function (&&) is strict in its second argument only if its first argument is True. It is always strict in its first argument. This strictness / laziness is what guarantees the order of evaluation. So it behaves exactly like C. The difference is that in Haskell, (&&) is an ordinary function. In C, this would be impossible. But Haskell is lazy, and, generally, there are no guarantee what operation will execute first, until we don't use seq, $!, or something else to make our code strict. This is not correct. The truth is deeper. Crash course in strictness: We know (&&) is strict in its first parameter because: ⊥ && x = ⊥ Here, ⊥ is something like undefined or an infinite loop (⊥ is pronounced "bottom"). We also know that (False &&) is non-strict in its second argument: False && ⊥ = False It can't possibly evaluate its second argument, because its second argument is ⊥ which can't be evaluated. However, the function (True &&) is strict in its second argument, because: True && ⊥ = ⊥ So, we say that (&&) is always strict in its first argument, and strict in its second argument only when the first argument is True. Order of evaluation: For (&&), its strictness properties are enough to guarantee order of execution. That is not always the case. For example, (+) :: Int -> Int -> Int is always strict in both arguments, so either argument can be evaluated first. However, you can only tell the difference by catching exceptions in the IO monad, or if you use an unsafe function. A: Haskell is lazy, and, generally, there is no guarantee what operation will execute first Not quite. Haskell is pure (except for unsafePerformIO and the implementation of IO), and there is no way to observe which operation will execute first (except for unsafePerformIO and the implementation of IO). The order of execution simply does not matter for the result. && has a 9-value truth table, including the cases where one or both arguments are undefined, and it defines the operation exactly: a b a && b True True True True False False True undefined undefined False True False False False False False undefined False undefined True undefined undefined False undefined undefined undefined undefined As long as the implementation follows that table, it's allowed to execute things in any order it wants. 
(If you study the table, you'll notice that there's no way for a sequential implementation to follow it unless it executes a first, then b iff a is True. But Haskell implementations are not required to be sequential! An implementation is always allowed to kick off execution of b whenever it wants; your only guarantee is that, according to the table, the result of executing b can only impact your program when a is True.) (Note that 'laziness' is the only way to write a function with a truth table like the above at all; in a language like C or ML, all five of the lines with undefined in either argument would be forced to have undefined as the result, whereas in Haskell (and in C, because && is built into the C language) one of the lines can have False as the result instead.) A: I believe it works the way you expect; evaluate the RHS iff the LHS evaluates to True. However, assuming the RHS has no side-effects, how would you know (or care)? Edit: I guess the RHS could be undefined, and then you would care...
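For reference, the behaviour described above follows directly from the way (&&) is defined; the sketch below reproduces the standard Prelude definition (hidden from the Prelude so the file compiles on its own) and demonstrates the strictness with undefined.

import Prelude hiding ((&&))

-- Standard definition: the first argument is always forced,
-- the second only when the first is True.
(&&) :: Bool -> Bool -> Bool
True  && x = x
False && _ = False

main :: IO ()
main = do
  print (False && undefined)  -- prints False: the right-hand side is never evaluated
  print (True  && undefined)  -- throws Prelude.undefined: now the right-hand side is forced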
{ "language": "en", "url": "https://stackoverflow.com/questions/7612580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Macros for Build Commands and Properties in Visual Studio 11 Three-part question first: * *Are the build-time macros fully fleshed out in Visual Studio 11? *How can I edit & define them in the IDE? *How can I let the macros still have their meaning during the Debugging session? Note that I'm referring to the build-time macros such as $(ProjectDir), not the IDE macros used to record a series of keystrokes -- they took those away, but I can live without them. I have been defining my own build-time macros via the property pages in Visual Studio 10. For example, I might create a macro to define where Boost is installed on my local computer named $(BoostDir) in x64 builds via View>Property Manager>Microsoft.Cpp.x64.user: I use this functionality because I might have (for example) Boost installed in different locations on different computers. These properties are machine-specific, and not checked in to source control. I can then use these macros in the project settings which are checked in to source control, and it should work on every machine I compile this project on so long as I have defined the macros for that machine. In trying to compile & run my VS10 projects in the Visual Studio 11 Developer Preview, I'm running in to a couple of issues with respect to these macros. First, in VS11 there does not appear to be an interface to define these user-defined macros. Where they were in VS10, there is now just nothing in VS11: It appears that these macros do exist at some level within the Compiler/IDE, because I am able to compile my project without errors, which I wouldn't be able to do if these macros didn't exist somewhere. Second, while the macros seem to exist during the build process, they don't appear to exist during the Debugging session. Macros I used to resolve the directories where certain DLLs exist in VS10 now don't resolve those directories in VS11, and I get run-time errors: The directory where this DLL is located is defined here in VS10 and VS11: ...but my macros don't seem to mean anything during the debug session. How can I get all the macro-related functionality I depended upon in VS10 back in VS11? A: If you haven't already figured out how to do this, you need to use your own property sheets. Make a new property sheet, and use the section in Common Properties > User Macros. For example, I have a macro called BOOST_VER in a property sheet called IncludeBoost.props that I set everything in. User Macros Name Value BOOST_VER 1.49.0 C/C++ > General > Additional Include Directories > "$(environment variable)\3rdParty\Boost_$(BOOST_VER)\Boost";%(AdditionalIncludeDirectories) C/C++ > Preprocessor > Preprocessor Definitions > BOOST_ALL_DYN_LINK;%(PreprocessorDefinitions) Linker > General $(env var)\3rdParty\Boost_$(BOOST_VER)\Boost\lib\$(Platform);%(AdditionalLibraryDirectories) Note that the proper platform is picked too (Win32 or x64). BTW, the Property Manager supports Ctrl+click multiple selection. HTH. A: My Microsoft.Cpp.x64.user.props, which I think is what you edited, sits in my %blah%\AppData\Local\Microsoft\MSBuild\v4.0. My guess is that VS11 doesn't use this one for build settings. What you really should do is creating your own property sheet to use with your projects.
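For illustration only (the file contents below are a sketch, not taken from the original posts): a user-macro property sheet is plain MSBuild XML, so a minimal IncludeBoost.props along the lines the answer describes might look roughly like this, with BOOST_VER and the Boost path being example values.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- User macro usable as $(BOOST_VER) in other project settings -->
  <PropertyGroup Label="UserMacros">
    <BOOST_VER>1.49.0</BOOST_VER>
  </PropertyGroup>
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>C:\3rdParty\Boost_$(BOOST_VER);%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
  <!-- Makes the macro visible in the property pages UI -->
  <ItemGroup>
    <BuildMacro Include="BOOST_VER">
      <Value>$(BOOST_VER)</Value>
    </BuildMacro>
  </ItemGroup>
</Project>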
{ "language": "en", "url": "https://stackoverflow.com/questions/7612584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How will the "fb app support" on the upcoming facebook mobile app update work? What changes need to be made to fb apps to support it? As outlined here: http://www.cnbc.com/id/44726592 It looks like we'll be able to use FB apps using the Facebook mobile app. Will applications need to be modified to support this? A: Anything at this point would just be speculation until there is an official announcement from Facebook. But yes, there will probably need to be several changes to work on mobile devices that have smaller screens and don't support Flash.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: JFrame and Nimbus Look And Feel I use Nimbus Look and Feel in a project. However, although every GUI JComponent have a Look and Feel of Nimbus, JFrame always have Windows Look and Feel. How can JFrame have Nimbus Look And Feel? Edit: Operating System : Windows XP A: Try using this: JFrame.setDefaultLookAndFeelDecorated(true); //before creating JFrames For more info., see How to Set the Look and Feel in the tutorial. import javax.swing.*; class FrameLook { public static void showFrame(String plaf) { try { UIManager.setLookAndFeel(plaf); } catch(Exception e) { e.printStackTrace(); } JFrame f = new JFrame(plaf); f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); f.setSize(400,100); f.setLocationByPlatform(true); f.setDefaultLookAndFeelDecorated(true); f.setVisible(true); } public static void main(String[] args) { showFrame(UIManager.getSystemLookAndFeelClassName()); showFrame(UIManager.getCrossPlatformLookAndFeelClassName()); showFrame("com.sun.java.swing.plaf.nimbus.NimbusLookAndFeel"); } } A: Confirming @Andrew's suspicion, setDefaultLookAndFeelDecorated() says that, when supported, "newly created JFrames will have their Window decorations provided by the current LookAndFeel." I changed the size to see the whole title. import javax.swing.*; class FrameLook { public static void showFrame(String plaf) { try { UIManager.setLookAndFeel(plaf); } catch (Exception e) { e.printStackTrace(System.out); } JFrame f = new JFrame(plaf); f.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE); f.setSize(500, 100); f.setLocationByPlatform(true); JFrame.setDefaultLookAndFeelDecorated(true); f.setVisible(true); } public static void main(String[] args) { showFrame(UIManager.getSystemLookAndFeelClassName()); showFrame(UIManager.getCrossPlatformLookAndFeelClassName()); showFrame("com.sun.java.swing.plaf.nimbus.NimbusLookAndFeel"); } } A: And confirming based on the Windows Classic UI for XP. A: You can't do it directly since Nimbus doesn't support window decorations, that's why you always get a system window, even with the given answers. Try this very simple code: import javax.swing.LookAndFeel; import javax.swing.UIManager; import javax.swing.UIManager.LookAndFeelInfo; public class DoesNimbusSupportWindowDecorations { @SuppressWarnings("unchecked") public static void main(String... args) { LookAndFeel nimbus = null; for (LookAndFeelInfo lafInfo : UIManager.getInstalledLookAndFeels()) { if (lafInfo.getName() == "Nimbus") { try { nimbus = ((Class<LookAndFeel>) Class.forName( lafInfo.getClassName())).newInstance(); } catch (Exception e) { System.err.println("Unexpected exception."); } } } if (nimbus != null) { System.out.println("Nimbus supports window decorations...? " + (nimbus.getSupportsWindowDecorations() ? "YES" : "NO")); } else { System.err.println("Your system does not support Nimbus, you can't" + " run this test."); } } } or simply inside your code with the proper import: System.out.println(new NimbusLookAndFeel().getSupportsWindowDecorations()); What's beyond my understanding is why Sun decided such a thing since the decorations do exist for internal frames and have a custom decoration. I'll be investigating if it's possible to use these decorations by extending NimbusLookAndFeel or playing with defaults, since Nimbus is based on Synth, unsure about the best way. A: This is how I do. A copy paste from my eclipse project. 
import javax.swing.UIManager.LookAndFeelInfo; import java.awt.EventQueue; import java.awt.BorderLayout; import javax.swing.*; public class Frame1 { private JFrame frame; public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { try { for (LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) { if ("Nimbus".equals(info.getName())) { UIManager.setLookAndFeel(info.getClassName()); break; } } Frame1 window = new Frame1(); window.frame.setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); }
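The listing above is cut off before the constructor. Assuming the usual Eclipse WindowBuilder template the author appears to be pasting from, the missing remainder would look roughly like this (the frame bounds are placeholder values):

    public Frame1() {
        initialize();
    }

    private void initialize() {
        frame = new JFrame();
        frame.setBounds(100, 100, 450, 300);                  // placeholder size/position
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(new BorderLayout());
    }
}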
{ "language": "en", "url": "https://stackoverflow.com/questions/7612592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: NSInteger Variable at SQLite Query Can anyone help me with SQLite query in my iOS app? I want to make query, which will depend on NSInteger varialbe at WHERE clause, but i dont know how to add variable into query. I tried something like this: MySqlite *sqlConnect = [[MySqlite alloc] init]; sqlite3_stmt *statement = [sqlConnect makeSQL:"select * from table where id = %i", myIntegerVariable - 1]; But of course it doesn't work. Thanks. EDIT MySqlite.h #import <Foundation/Foundation.h> #import <sqlite3.h> @interface MySqlite : NSObject { } - (sqlite3_stmt *)makeSQL:(char *)sql; @end MySqlite.m #import "MySqlite.h" @implementation MySqlite NSString *dbFileName = @"data.sqlite"; - (void)createEditableCopyOfDatabaseIfNeeded { NSFileManager *fileManager = [NSFileManager defaultManager]; NSError *error; NSString *writableDBPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:dbFileName]; if ([fileManager fileExistsAtPath:writableDBPath]){ return; } NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:dbFileName]; if (! [fileManager copyItemAtPath:defaultDBPath toPath:writableDBPath error:&error] ) { NSLog(@"Fail edit db"); } } - (sqlite3 *) getDBConnection{ [self createEditableCopyOfDatabaseIfNeeded]; sqlite3 *DBConnection; NSString *path = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:dbFileName]; if( sqlite3_open([path UTF8String], &DBConnection) != SQLITE_OK) { NSLog(@"Fail open db"); return FALSE; } return DBConnection; } - (sqlite3_stmt *)makeSQL:(char *)sql{ NSLog(@"Executing query: %s", sql); sqlite3_stmt *statement = nil; sqlite3 *db = [self getDBConnection]; if ( db ) { if (sqlite3_prepare_v2(db, sql, -1, &statement, NULL) != SQLITE_OK) { NSLog(@"Error at query: %s", sqlite3_errmsg(db)); } } else { NSLog(@"Error connect to db"); } return statement; } @end A: If I understood you correctly, you want to inject an integer variable into the SQL Query. If so, something like below should do the job: MySqlite *sqlConnect = [[MySqlite alloc] init]; sqlite3_stmt *statement = [sqlConnect makeSQL:[[NSString stringWithFormat:@"select * from table where id = %d", myIntegerVariable - 1] UTF8String]]; %d is the modifier for an Integer. That is assuming that the makeSQL selector takes in an NSString SQL String
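As an aside not taken from the original answer: instead of formatting the integer into the SQL string, SQLite's own parameter binding can be used, which avoids quoting problems entirely. A rough sketch against an already-open sqlite3 handle (the db variable and the loop body are illustrative):

sqlite3_stmt *statement = nil;
const char *sql = "select * from table where id = ?";
if (sqlite3_prepare_v2(db, sql, -1, &statement, NULL) == SQLITE_OK) {
    // Parameter indexes are 1-based.
    sqlite3_bind_int(statement, 1, (int)(myIntegerVariable - 1));
    while (sqlite3_step(statement) == SQLITE_ROW) {
        // read columns with the sqlite3_column_* functions here
    }
    sqlite3_finalize(statement);
}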
{ "language": "en", "url": "https://stackoverflow.com/questions/7612593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Use PHP to generate an image; the background is always the wrong color The function is: when a user uploads a picture, the program generates a new square image with a white background and puts the user's picture in the center of this image. But the problem is that although I set the background to white, it always shows black. The code is $capture = imagecreatetruecolor($width, $height); $rgb = explode(",", $this->background); $white = imagecolorallocate($capture, $rgb[0], $rgb[1], $rgb[2]); imagefill($capture, 0, 0, $white); and the code that controls the color is protected $background = "255,255,255"; I have tried changing $white = imagecolorallocate($capture, $rgb[0], $rgb[1], $rgb[2]); to $white = imagecolorallocate($capture, 255, 255, 255);. But the background still shows black. Thanks for any answer. A: From the manual: imagecreatetruecolor() returns an image identifier representing a black image of the specified size. The first call to imagecolorallocate() sets the background color for palette-based images, but not for true color images. The way I set the background color on true color images is to just fill it with a solid rectangle. imagefilledrectangle($capture, 0, 0, $width, $height, $white); A: What about imagefill($im, x, y, $color)? $red = imagecolorallocate($im, 255, 0, 0); imagefill($im, 0, 0, $red); This will fill the entire image with red, and the x,y parameters are the starting point of the fill.
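Putting the accepted suggestion together, a minimal self-contained sketch of the whole task (the file names and canvas size are made up for illustration; it assumes a JPEG upload no larger than the canvas):

<?php
// Create a square truecolor canvas (it starts out black) and paint it white.
$size    = 400;
$capture = imagecreatetruecolor($size, $size);
$white   = imagecolorallocate($capture, 255, 255, 255);
imagefilledrectangle($capture, 0, 0, $size - 1, $size - 1, $white);

// Load the uploaded picture and copy it into the center of the canvas.
$upload = imagecreatefromjpeg('upload.jpg');
$w = imagesx($upload);
$h = imagesy($upload);
imagecopy($capture, $upload, (int)(($size - $w) / 2), (int)(($size - $h) / 2), 0, 0, $w, $h);

imagejpeg($capture, 'output.jpg');
imagedestroy($capture);
imagedestroy($upload);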
{ "language": "en", "url": "https://stackoverflow.com/questions/7612594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Issue using Select Named LOV in Oracle Application Express I am using Application Express 3.2.1 and I have an application that when run pulls these columns from a database: Display Name, Email Address, Phone Number, Home Network, Country I want to make a way to filter the rows by Country, so I created a Select Named LOV with this values definition: select name d, name v from ( select distinct(country) name from hh_carriers ) When I click "Run" for my application, all my data is displayed correctly and my select list populates correctly. However, when I select a "Country" from my select list, the data doesn't change. The select list also goes back to the default value. Is there something else I have to do with the Select Named POV? What do I have to do to make the filter work? Thanks. This is what my SQL statement looks like to generate my page: select * from hh_carriers where country like :P5_COUNTRY Display Extra Values: No Source Used: Only when current value in session state is null Source Type: Static assignment (value equals source attribute) Edit: Changed some things for clarity A: You need to reference the select list in the WHERE clause of the report query something like this: where home_network = :p123_home_network (p123_home_network being the name of the select list) A: How is your page set up basically? Does it boil down to: a page item on top of type select list, based on your lov, and a report region under that, with the query where you filter based on your page item? If so, then it shouldn't be that hard to get it to work. Like Wolf commented, you will need to submit the page after you changed the value in your select list. Its value needs to be submitted into session state, since that is what your query filter is referencing. What i don't get is your issue with the select list going back to default value. At what point does this happen? A: My "Expression 1" in my "Edit Page Computation" needed to be set to P5_COUNTRY.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it safe to end process emulator-arm.exe (Android emulator) instead of closing it by the close button I have noticed that closing the Android emulator with the close button (the top right button) takes a lot of time. Ending the process emulator-arm.exe (through Task Manager) closes it instantly. When I open the Android emulator the next time, there seems to be no data loss. So is it safe to end the process emulator-arm.exe all the time? A: If you don't have any data loss that you care about, the answer is yes, you can do that. It should be noted that I don't do that; I close it normally using the close button or by closing the emulator window. A: It probably takes long to close because it's saving a snapshot to speed up the next startup. If you are not using the snapshots, you can run the emulator with the -no-snapstorage option to disable them.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why can't I use the 'await' operator within the body of a lock statement? The await keyword in C# (.NET Async CTP) is not allowed from within a lock statement. From MSDN: An await expression cannot be used in a synchronous function, in a query expression, in the catch or finally block of an exception handling statement, in the block of a lock statement, or in an unsafe context. I assume this is either difficult or impossible for the compiler team to implement for some reason. I attempted a work around with the using statement: class Async { public static async Task<IDisposable> Lock(object obj) { while (!Monitor.TryEnter(obj)) await TaskEx.Yield(); return new ExitDisposable(obj); } private class ExitDisposable : IDisposable { private readonly object obj; public ExitDisposable(object obj) { this.obj = obj; } public void Dispose() { Monitor.Exit(this.obj); } } } // example usage using (await Async.Lock(padlock)) { await SomethingAsync(); } However this does not work as expected. The call to Monitor.Exit within ExitDisposable.Dispose seems to block indefinitely (most of the time) causing deadlocks as other threads attempt to acquire the lock. I suspect the unreliability of my work around and the reason await statements are not allowed in lock statement are somehow related. Does anyone know why await isn't allowed within the body of a lock statement? A: Basically it would be the wrong thing to do. There are two ways this could be implemented: * *Keep hold of the lock, only releasing it at the end of the block. This is a really bad idea as you don't know how long the asynchronous operation is going to take. You should only hold locks for minimal amounts of time. It's also potentially impossible, as a thread owns a lock, not a method - and you may not even execute the rest of the asynchronous method on the same thread (depending on the task scheduler). *Release the lock in the await, and reacquire it when the await returns This violates the principle of least astonishment IMO, where the asynchronous method should behave as closely as possible like the equivalent synchronous code - unless you use Monitor.Wait in a lock block, you expect to own the lock for the duration of the block. So basically there are two competing requirements here - you shouldn't be trying to do the first here, and if you want to take the second approach you can make the code much clearer by having two separated lock blocks separated by the await expression: // Now it's clear where the locks will be acquired and released lock (foo) { } var result = await something; lock (foo) { } So by prohibiting you from awaiting in the lock block itself, the language is forcing you to think about what you really want to do, and making that choice clearer in the code that you write. A: This is just an extension to this answer by user1639030. 
Basic Version using System; using System.Threading; using System.Threading.Tasks; public class SemaphoreLocker { private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1); public async Task LockAsync(Func<Task> worker) { await _semaphore.WaitAsync(); try { await worker(); } finally { _semaphore.Release(); } } // overloading variant for non-void methods with return type (generic T) public async Task<T> LockAsync<T>(Func<Task<T>> worker) { await _semaphore.WaitAsync(); try { return await worker(); } finally { _semaphore.Release(); } } } Usage: public class Test { private static readonly SemaphoreLocker _locker = new SemaphoreLocker(); public async Task DoTest() { await _locker.LockAsync(async () => { // [async] calls can be used within this block // to handle a resource by one thread. }); // OR var result = await _locker.LockAsync(async () => { // [async] calls can be used within this block // to handle a resource by one thread. }); } } Extended Version A version of the LockAsync method that claims to be completely deadlock-safe (from the 4th revision suggested by Jez). using System; using System.Threading; using System.Threading.Tasks; public class SemaphoreLocker { private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1, 1); public async Task LockAsync(Func<Task> worker) { var isTaken = false; try { do { try { } finally { isTaken = await _semaphore.WaitAsync(TimeSpan.FromSeconds(1)); } } while (!isTaken); await worker(); } finally { if (isTaken) { _semaphore.Release(); } } } // overloading variant for non-void methods with return type (generic T) public async Task<T> LockAsync<T>(Func<Task<T>> worker) { var isTaken = false; try { do { try { } finally { isTaken = await _semaphore.WaitAsync(TimeSpan.FromSeconds(1)); } } while (!isTaken); return await worker(); } finally { if (isTaken) { _semaphore.Release(); } } } } Usage: public class Test { private static readonly SemaphoreLocker _locker = new SemaphoreLocker(); public async Task DoTest() { await _locker.LockAsync(async () => { // [async] calls can be used within this block // to handle a resource by one thread. }); // OR var result = await _locker.LockAsync(async () => { // [async] calls can be used within this block // to handle a resource by one thread. }); } } A: I assume this is either difficult or impossible for the compiler team to implement for some reason. No, it is not at all difficult or impossible to implement -- the fact that you implemented it yourself is a testament to that fact. Rather, it is an incredibly bad idea and so we don't allow it, so as to protect you from making this mistake. call to Monitor.Exit within ExitDisposable.Dispose seems to block indefinitely (most of the time) causing deadlocks as other threads attempt to acquire the lock. I suspect the unreliability of my work around and the reason await statements are not allowed in lock statement are somehow related. Correct, you have discovered why we made it illegal. Awaiting inside a lock is a recipe for producing deadlocks. I'm sure you can see why: arbitrary code runs between the time the await returns control to the caller and the method resumes. That arbitrary code could be taking out locks that produce lock ordering inversions, and therefore deadlocks. Worse, the code could resume on another thread (in advanced scenarios; normally you pick up again on the thread that did the await, but not necessarily) in which case the unlock would be unlocking a lock on a different thread than the thread that took out the lock. Is that a good idea? No. 
I note that it is also a "worst practice" to do a yield return inside a lock, for the same reason. It is legal to do so, but I wish we had made it illegal. We're not going to make the same mistake for "await". A: Use the SemaphoreSlim.WaitAsync method. await mySemaphoreSlim.WaitAsync(); try { await Stuff(); } finally { mySemaphoreSlim.Release(); } A: This referes to Building Async Coordination Primitives, Part 6: AsyncLock , http://winrtstoragehelper.codeplex.com/ , Windows 8 app store and .net 4.5 Here is my angle on this: The async/await language feature makes many things fairly easy but it also introduces a scenario that was rarely encounter before it was so easy to use async calls: reentrance. This is especially true for event handlers, because for many events you don't have any clue about whats happening after you return from the event handler. One thing that might actually happen is, that the async method you are awaiting in the first event handler, gets called from another event handler still on the same thread. Here is a real scenario I came across in a windows 8 App store app: My app has two frames: coming into and leaving from a frame I want to load/safe some data to file/storage. OnNavigatedTo/From events are used for the saving and loading. The saving and loading is done by some async utility function (like http://winrtstoragehelper.codeplex.com/). When navigating from frame 1 to frame 2 or in the other direction, the async load and safe operations are called and awaited. The event handlers become async returning void => they cant be awaited. However, the first file open operation (lets says: inside a save function) of the utility is async too and so the first await returns control to the framework, which sometime later calls the other utility (load) via the second event handler. The load now tries to open the same file and if the file is open by now for the save operation, fails with an ACCESSDENIED exception. A minimum solution for me is to secure the file access via a using and an AsyncLock. private static readonly AsyncLock m_lock = new AsyncLock(); ... using (await m_lock.LockAsync()) { file = await folder.GetFileAsync(fileName); IRandomAccessStream readStream = await file.OpenAsync(FileAccessMode.Read); using (Stream inStream = Task.Run(() => readStream.AsStreamForRead()).Result) { return (T)serializer.Deserialize(inStream); } } Please note that his lock basically locks down all file operation for the utility with just one lock, which is unnecessarily strong but works fine for my scenario. Here is my test project: a windows 8 app store app with some test calls for the original version from http://winrtstoragehelper.codeplex.com/ and my modified version that uses the AsyncLock from Stephen Toub. May I also suggest this link: http://www.hanselman.com/blog/ComparingTwoTechniquesInNETAsynchronousCoordinationPrimitives.aspx A: Stephen Taub has implemented a solution to this question, see Building Async Coordination Primitives, Part 7: AsyncReaderWriterLock. Stephen Taub is highly regarded in the industry, so anything he writes is likely to be solid. I won't reproduce the code that he posted on his blog, but I will show you how to use it: /// <summary> /// Demo class for reader/writer lock that supports async/await. /// For source, see Stephen Taub's brilliant article, "Building Async Coordination /// Primitives, Part 7: AsyncReaderWriterLock". 
/// </summary> public class AsyncReaderWriterLockDemo { private readonly IAsyncReaderWriterLock _lock = new AsyncReaderWriterLock(); public async void DemoCode() { using(var releaser = await _lock.ReaderLockAsync()) { // Insert reads here. // Multiple readers can access the lock simultaneously. } using (var releaser = await _lock.WriterLockAsync()) { // Insert writes here. // If a writer is in progress, then readers are blocked. } } } If you want a method that's baked into the .NET framework, use SemaphoreSlim.WaitAsync instead. You won't get a reader/writer lock, but you will get tried and tested implementation. A: Hmm, looks ugly, seems to work. static class Async { public static Task<IDisposable> Lock(object obj) { return TaskEx.Run(() => { var resetEvent = ResetEventFor(obj); resetEvent.WaitOne(); resetEvent.Reset(); return new ExitDisposable(obj) as IDisposable; }); } private static readonly IDictionary<object, WeakReference> ResetEventMap = new Dictionary<object, WeakReference>(); private static ManualResetEvent ResetEventFor(object @lock) { if (!ResetEventMap.ContainsKey(@lock) || !ResetEventMap[@lock].IsAlive) { ResetEventMap[@lock] = new WeakReference(new ManualResetEvent(true)); } return ResetEventMap[@lock].Target as ManualResetEvent; } private static void CleanUp() { ResetEventMap.Where(kv => !kv.Value.IsAlive) .ToList() .ForEach(kv => ResetEventMap.Remove(kv)); } private class ExitDisposable : IDisposable { private readonly object _lock; public ExitDisposable(object @lock) { _lock = @lock; } public void Dispose() { ResetEventFor(_lock).Set(); } ~ExitDisposable() { CleanUp(); } } } A: I created a MutexAsyncable class, inspired by Stephen Toub's AsyncLock implementation (discussion at this blog post), which can be used as a drop-in replacement for a lock statement in either sync or async code: using System; using System.Threading; using System.Threading.Tasks; namespace UtilsCommon.Lib; /// <summary> /// Class that provides (optionally async-safe) locking using an internal semaphore. /// Use this in place of a lock() {...} construction. /// Bear in mind that all code executed inside the worker must finish before the next /// thread is able to start executing it, so long-running code should be avoided inside /// the worker if at all possible. /// /// Example usage for sync: /// using (mutex.LockSync()) { /// // ... code here which is synchronous and handles a shared resource ... /// return[ result]; /// } /// /// ... or for async: /// using (await mutex.LockAsync()) { /// // ... code here which can use await calls and handle a shared resource ... 
/// return[ result]; /// } /// </summary> public sealed class MutexAsyncable { #region Internal classes private sealed class Releaser : IDisposable { private readonly MutexAsyncable _toRelease; internal Releaser(MutexAsyncable toRelease) { _toRelease = toRelease; } public void Dispose() { _toRelease._semaphore.Release(); } } #endregion private readonly SemaphoreSlim _semaphore = new(1, 1); private readonly Task<IDisposable> _releaser; public MutexAsyncable() { _releaser = Task.FromResult((IDisposable)new Releaser(this)); } public IDisposable LockSync() { _semaphore.Wait(); return _releaser.Result; } public Task<IDisposable> LockAsync() { var wait = _semaphore.WaitAsync(); if (wait.IsCompleted) { return _releaser; } else { // Return Task<IDisposable> which completes once WaitAsync does return wait.ContinueWith( (_, state) => (IDisposable)state!, _releaser.Result, CancellationToken.None, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default ); } } } It's safe to use the above if you're using .NET 5+ because that won't ever throw ThreadAbortException. I also created an extended SemaphoreLocker class, inspired by this answer, which can be a general-purpose replacement for lock, usable either synchronously or asynchronously. It is less efficient than the above MutexAsyncable and allocates more resources, although it has the benefit of forcing the worker code to release the lock once it's finished (technically, the IDisposable returned by the MutexAsyncable could not get disposed by calling code and cause deadlock). It also has extra try/finally code to deal with the possibility of ThreadAbortException, so should be usable in earlier .NET versions: using System; using System.Threading; using System.Threading.Tasks; namespace UtilsCommon.Lib; /// <summary> /// Class that provides (optionally async-safe) locking using an internal semaphore. /// Use this in place of a lock() {...} construction. /// Bear in mind that all code executed inside the worker must finish before the next thread is able to /// start executing it, so long-running code should be avoided inside the worker if at all possible. /// /// Example usage: /// [var result = ]await _locker.LockAsync(async () => { /// // ... code here which can use await calls and handle a shared resource one-thread-at-a-time ... /// return[ result]; /// }); /// /// ... or for sync: /// [var result = ]_locker.LockSync(() => { /// // ... code here which is synchronous and handles a shared resource one-thread-at-a-time ... /// return[ result]; /// }); /// </summary> public sealed class SemaphoreLocker : IDisposable { private readonly SemaphoreSlim _semaphore = new(1, 1); /// <summary> /// Runs the worker lambda in a locked context. /// </summary> /// <typeparam name="T">The type of the worker lambda's return value.</typeparam> /// <param name="worker">The worker lambda to be executed.</param> public T LockSync<T>(Func<T> worker) { var isTaken = false; try { do { try { } finally { isTaken = _semaphore.Wait(TimeSpan.FromSeconds(1)); } } while (!isTaken); return worker(); } finally { if (isTaken) { _semaphore.Release(); } } } /// <inheritdoc cref="LockSync{T}(Func{T})" /> public void LockSync(Action worker) { var isTaken = false; try { do { try { } finally { isTaken = _semaphore.Wait(TimeSpan.FromSeconds(1)); } } while (!isTaken); worker(); } finally { if (isTaken) { _semaphore.Release(); } } } /// <summary> /// Runs the worker lambda in an async-safe locked context. 
/// </summary> /// <typeparam name="T">The type of the worker lambda's return value.</typeparam> /// <param name="worker">The worker lambda to be executed.</param> public async Task<T> LockAsync<T>(Func<Task<T>> worker) { var isTaken = false; try { do { try { } finally { isTaken = await _semaphore.WaitAsync(TimeSpan.FromSeconds(1)); } } while (!isTaken); return await worker(); } finally { if (isTaken) { _semaphore.Release(); } } } /// <inheritdoc cref="LockAsync{T}(Func{Task{T}})" /> public async Task LockAsync(Func<Task> worker) { var isTaken = false; try { do { try { } finally { isTaken = await _semaphore.WaitAsync(TimeSpan.FromSeconds(1)); } } while (!isTaken); await worker(); } finally { if (isTaken) { _semaphore.Release(); } } } /// <summary> /// Releases all resources used by the current instance of the SemaphoreLocker class. /// </summary> public void Dispose() { _semaphore.Dispose(); } } A: I did try using a Monitor (code below) which appears to work but has a GOTCHA... when you have multiple threads it will give... System.Threading.SynchronizationLockException Object synchronization method was called from an unsynchronized block of code. using System; using System.Threading; using System.Threading.Tasks; namespace MyNamespace { public class ThreadsafeFooModifier : { private readonly object _lockObject; public async Task<FooResponse> ModifyFooAsync() { FooResponse result; Monitor.Enter(_lockObject); try { result = await SomeFunctionToModifyFooAsync(); } finally { Monitor.Exit(_lockObject); } return result; } } } Prior to this I was simply doing this, but it was in an ASP.NET controller so it resulted in a deadlock. public async Task<FooResponse> ModifyFooAsync() { lock(lockObject) { return SomeFunctionToModifyFooAsync.Result; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7612602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "474" }
Q: How to redirect console output to file and STILL get it in the console? I want to run an ANT script which prompts the user for input, so it needs to be interactive through the console. At the same time I want to log the console content to a log file. I know I can use ant >build.log 2>&1, which will redirect to the file but leave the console empty. So, how can that be done? Needed on Windows and Unix. A: Use tee. ant 2>&1|tee build.log tee.exe is also available for Windows from http://unxutils.sourceforge.net/ A: You can use tee. Example: $ echo "Hello, world" | tee /tmp/outfile Hello, world $ cat /tmp/outfile Hello, world tee writes its stdin both to stdout and to one or more files given on the command line.
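For the Windows part of the question, a similar effect is possible without extra downloads when PowerShell is available (this is an addition, not from the original answers):

# Shows ant's output in the console and writes it to build.log at the same time.
ant 2>&1 | Tee-Object -FilePath build.log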
{ "language": "en", "url": "https://stackoverflow.com/questions/7612608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How to Create Function Object with attributes in one step This is an academic question, to clarify my understanding of JavaScript. A function object can be assigned attributes once it is created. However, is it possible to create a function object with attributes in one step? For example, function someFunction(){ } var someFunctionObject=someFunction; someFunctionObject.attr1="attr 1"; alert(someFunctionObject.attr1); /* "attr 1" Here the attribute is being added in the second step, after the object has been created. Is it possible to achieve this in one step? A: The best way to do so is to assign a named function to a variable as follows: var someFunctionObject = (function someFunction() { if (!someFunction.attr1) { someFunction.attr1 = "attr1"; return someFunction; } // extra processing }()); alert(someFunctionObject.attr1); This has the added advantage that if someone deletes the attr1 property, it'll be reinitialized later as follows: delete someFunctionObject.attr1; // deleted someFunctionObject(); // reinitialized alert(someFunctionObject.attr1); // back to square one A better way to write the above construct would be: var someFunctionObject = (function someFunction() { if (someFunction.attr1) { // extra processing } else { someFunction.attr1 = "attr1"; return someFunction; } }()); This is both faster and better because: * *We don't use the logical NOT operator. This saves an instruction to execute when interpreted. Logical NOT doesn't coerce, so using it has no benefits. *The function object is usually only initialized once, so logically the initialization should come last as it'll be rarely called. A: function someFunction(attr1){ this.attr1 = attr1; } var someFunctionObject = new someFunction('attr 1'); console.log( someFunctionObject.attr1 ); A: var someFunc = new function(attr1){ this.attr1 = attr1; }("blah"); alert(someFunc.attr1); That's one step, but not a very useful construct. A: The short answer is no. Read below for the explanation. The end result of your code is something like: var myFunctionObject = { prop1: 'foo', prop2: 'bar', construct: "alert('function body')" }; myFunctionObject(); // should alert myFunctionObject.prop1; // 'foo' A function definition/declaration actually does something very similar - it creates an object with the [[Construct]] property set to the actualy body of the function. But, this is just an implementation detail and you cannot access or set that [[Construct]] property separately, so you cannot define other properties in one-go (it would require additional syntax and there is none available). But, you can set properties from inside the function, if that's what you need. For example, this would be useful for caching: function getSomething() { if (getSomething.prop) return getSomething.prop; return getSomething.prop = fetchResource(); } A: You're only a tiny (but important) bit off in your phrasing. What you did in your sample code was make a function, then assign that function as a property on another object (someFunctionObject). What you did in literal terms looks like this: var someFunctionObject = { someFunction : function(){}, attr1 : "attr 1" } This results in one object (someFunctionObject, which is poorly named at this point, as it's just a regular object) with two properties assigned to it: one string, and one function.
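As a side note not present in the original answers: in later JavaScript versions (ES2015+), the literal one-step form the question asks about can be written with Object.assign.

// Creates the function and attaches the attribute in a single expression.
var someFunction = Object.assign(function () {}, { attr1: "attr 1" });
console.log(someFunction.attr1); // "attr 1"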
{ "language": "en", "url": "https://stackoverflow.com/questions/7612609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Lucene.net: Querying and using a filter to limit results As usual I turn to the massive brain power that is the Stackoverflow user base to help solve a Lucene.NET problem I am battling with. First off, I am a complete noob when it comes to Lucene and Lucene.NET and by using the scattered tutorials and code snippets online, I have cobbled together the follow solution for my scenario. The Scenario I have an index of the following structure: --------------------------------------------------------- | id | date | security | text | --------------------------------------------------------- | 1 | 2011-01-01 | -1-12-4- | some analyzed text here | --------------------------------------------------------- | 2 | 2011-01-01 | -11-3- | some analyzed text here | --------------------------------------------------------- | 3 | 2011-01-01 | -1- | some analyzed text here | --------------------------------------------------------- I need to be able to query the text field, but restrict the results to users that have specific roleId's. What I came up with to accomplish this (after many, many trips to Google) is to use a "security field" and a Lucene filter to restrict the result set as outlined below: class SecurityFilter : Lucene.Net.Search.Filter { public override System.Collections.BitArray Bits(Lucene.Net.Index.IndexReader indexReader) { BitArray bitarray = new BitArray(indexReader.MaxDoc()); for (int i = 0; i < bitarray.Length; i++) { if (indexReader.Document(i).Get("security").Contains("-1-")) { bitarray.Set(i, true); } } return bitarray; } } ... and then ... Lucene.Net.Search.Sort sort = new Lucene.Net.Search.Sort(new Lucene.Net.Search.SortField("date", true)); Lucene.Net.Analysis.Standard.StandardAnalyzer analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29); Lucene.Net.Search.IndexSearcher searcher = new Lucene.Net.Search.IndexSearcher(Lucene.Net.Store.FSDirectory.Open(indexDirectory), true); Lucene.Net.QueryParsers.QueryParser parser = new Lucene.Net.QueryParsers.QueryParser(Lucene.Net.Util.Version.LUCENE_29, "text", analyzer); Lucene.Net.Search.Query query = parser.Parse("some search phrase"); SecurityFilter filter = new SecurityFilter(); Lucene.Net.Search.Hits hits = searcher.Search(query, filter, sort); This works as expected and would only return documents with the id's of 1 and 3. The problem is that on large indexes this process becomes very slow. Finally, my question... Does anyone out there have any tips on how to speed it up, or have an alternate solution that would be more efficient than the one I have presented here? A: If you index your security field as analyzed (such that it splits your security string as 1 12 4 ...) you can create a filter like this Filter filter = new QueryFilter(new TermQuery(new Term("security ", "1"))); or form a query like some text +security:1 A: I changed my answer with a simple example that explain what I meant in my previous answer. I made this quickly and doesnt respect best practices, but it should give you the idea. Note that the security field will need to be tokenized so that each ID in it are separate tokens, using the WhitespaceAnalyzer for example. 
using System; using System.Collections.Generic; using System.Linq; using System.Text; using Lucene.Net.Search; using Lucene.Net.Documents; using Lucene.Net.Index; using Lucene.Net.Analysis.Standard; using System.IO; namespace ConsoleApplication1 { class Program { public class RoleFilterCache { static public Dictionary<string, Filter> Cache = new Dictionary<string,Filter>(); static public Filter Get(string role) { Filter cached = null; if (!Cache.TryGetValue(role, out cached)) { return null; } return cached; } static public void Put(string role, Filter filter) { if (role != null) { Cache[role] = filter; } } } public class User { public string Username; public List<string> Roles; } public static Filter GetFilterForUser(User u) { BooleanFilter userFilter = new BooleanFilter(); foreach (string rolename in u.Roles) { // call GetFilterForRole and add to the BooleanFilter userFilter.Add( new BooleanFilterClause(GetFilterForRole(rolename), BooleanClause.Occur.SHOULD) ); } return userFilter; } public static Filter GetFilterForRole(string role) { Filter roleFilter = RoleFilterCache.Get(role); if (roleFilter == null) { roleFilter = // the caching wrapper filter makes it cache the BitSet per segmentreader new CachingWrapperFilter( // builds the filter from the index and not from iterating // stored doc content which is much faster new QueryWrapperFilter( new TermQuery( new Term("security", role) ) ) ); // put in cache RoleFilterCache.Put(role, roleFilter); } return roleFilter; } static void Main(string[] args) { IndexWriter iw = new IndexWriter(new FileInfo("C:\\example\\"), new StandardAnalyzer(), true); Document d = new Document(); Field aField = new Field("content", "", Field.Store.YES, Field.Index.ANALYZED); Field securityField = new Field("security", "", Field.Store.NO, Field.Index.ANALYZED); d.Add(aField); d.Add(securityField); aField.SetValue("Only one can see."); securityField.SetValue("1"); iw.AddDocument(d); aField.SetValue("One and two can see."); securityField.SetValue("1 2"); iw.AddDocument(d); aField.SetValue("One and two can see."); securityField.SetValue("1 2"); iw.AddDocument(d); aField.SetValue("Only two can see."); securityField.SetValue("2"); iw.AddDocument(d); iw.Close(); User userone = new User() { Username = "User one", Roles = new List<string>() }; userone.Roles.Add("1"); User usertwo = new User() { Username = "User two", Roles = new List<string>() }; usertwo.Roles.Add("2"); User userthree = new User() { Username = "User three", Roles = new List<string>() }; userthree.Roles.Add("1"); userthree.Roles.Add("2"); PhraseQuery phraseQuery = new PhraseQuery(); phraseQuery.Add(new Term("content", "can")); phraseQuery.Add(new Term("content", "see")); IndexSearcher searcher = new IndexSearcher("C:\\example\\", true); Filter securityFilter = GetFilterForUser(userone); TopDocs results = searcher.Search(phraseQuery, securityFilter,25); Console.WriteLine("User One Results:"); foreach (var aResult in results.ScoreDocs) { Console.WriteLine( searcher.Doc(aResult.doc). Get("content") ); } Console.WriteLine("\n\n"); securityFilter = GetFilterForUser(usertwo); results = searcher.Search(phraseQuery, securityFilter, 25); Console.WriteLine("User two Results:"); foreach (var aResult in results.ScoreDocs) { Console.WriteLine( searcher.Doc(aResult.doc). 
Get("content") ); } Console.WriteLine("\n\n"); securityFilter = GetFilterForUser(userthree); results = searcher.Search(phraseQuery, securityFilter, 25); Console.WriteLine("User three Results (should see everything):"); foreach (var aResult in results.ScoreDocs) { Console.WriteLine( searcher.Doc(aResult.doc). Get("content") ); } Console.WriteLine("\n\n"); Console.ReadKey(); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7612618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can Someone Explain This Snippet (Why Are These Braces Here)? I apologize for this overly simplistic question, but I can't seem to figure out this example in the book I'm reading: void f5() { int x; { int y; } } What are the braces surrounding int y for? Can you put braces wherever you want? If so, when and why would you do so, or is this just an error in the book? A: It's a matter of scoping variables, e.g.: void f5() { int x = 1; { int y = 3; y = y + x; // works x = x + y; // works } y = y + x; // fails x = x + y; // fails } A: It's defining scope. The variable y is not accessible outside the braces. A: The braces denote scope; the variable x will be visible in the scope of the inner brace, but y will not be visible outside of its brace scope. A: The braces define a scope level. Outside of the braces, y will not be available. A: At scope exit the inner objects are destroyed. You can, for example, enclose a critical section in braces and construct a lock object there. Then you don't have to worry about forgetting to unlock it - the destructor is called automatically when exiting the scope - either normally or because of an exception. A: Braces like that indicate that the code inside the braces is now in a different scope. If you tried to access y outside of the braces, you would receive an error. A: That looks like an error (not knowing the context). Doing that, you have boxed the variable y inside those braces, and as such it is NOT available outside them. Of course, if they are trying to explain scope, that could be valid code.
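To make the lock-object answer concrete, a small C++11 sketch of that idiom (the mutex and the work inside the function are placeholders):

#include <mutex>

std::mutex m;

void update_shared_state() {
    {
        std::lock_guard<std::mutex> lock(m);   // constructor acquires the mutex
        // critical section: touch the shared data here
    }                                          // destructor releases the mutex here,
                                               // even if the critical section throws
    // ... work that does not need the lock ...
}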
{ "language": "en", "url": "https://stackoverflow.com/questions/7612628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What is the best way to end a session? Possible Duplicate: How do I kill a PHP session? When ending a session, session_destroy() is supposed to destroy the session and the session variables registered with the session. But I've seen lots of code that unsets all the registered session variables before destroying the session. Which is the best practice? A: session_destroy erases all of the session data, so it won't be available at the next request. It does not unset $_SESSION however, and if your application relies on specific values in the $_SESSION array, it may behave in the wrong way. But you don't need to unset each session variable; you can end the session in this way: $_SESSION = array(); session_destroy(); A: $_SESSION = array(); session_destroy(); setcookie('PHPSESSID', '', time()-3600,'/', '', 0, 0); Also unset the PHPSESSID cookie on the client side. http://php.net/session_destroy A: Destroying the session will unset all of the variables. I am not aware of any reason to unset them before destroying. You MAY however want to unset the variables but KEEP the session. This can be done by calling session_unset();
{ "language": "en", "url": "https://stackoverflow.com/questions/7612629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to delete XML nodes based on an internal value comparison I have an XML file with the following structure: <tu> <tuv xml:lang="EN"> <seg>XXX</seg> </tuv> <tuv xml:lang="FR"> <seg>YYY</seg> </tuv> </tu> <tu> <tuv xml:lang="EN"> <seg>XXX</seg> </tuv> <tuv xml:lang="FR"> <seg>YYY</seg> </tuv> </tu> ... And I would like to delete the node <tu> when <seg>XXX</seg> is equal to <seg>YYY</seg> from a C# application. I have tried with LINQ and in some other ways, but I wasn't able to compare those internal values and then delete the parent node if necessary. Thanks a lot in advance! A: First of all, your XML is invalid, so I added a <root> node - then this worked for me: XDocument doc = XDocument.Load("test.xml"); var nodesWithMatchingElements = doc.Root.Elements("tu") .GroupBy(e => e) .Select(g => new { Element = g.Key, Count = g.Descendants("seg").Select(x => x.Value).Distinct().Count() }) .Where(x => x.Count == 1); foreach (var node in nodesWithMatchingElements) node.Element.Remove();
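A slightly more direct variant of the same idea (an illustration, not part of the original answer) filters the <tu> elements whose <seg> children all hold the same value and removes them in one pass:

using System.Linq;
using System.Xml.Linq;

var doc = XDocument.Load("test.xml");

// Keep only the <tu> nodes whose <seg> values collapse to a single distinct value,
// i.e. all segments are equal, and remove them from the tree.
doc.Root.Elements("tu")
   .Where(tu => tu.Descendants("seg").Select(seg => (string)seg).Distinct().Count() == 1)
   .Remove();

doc.Save("test.xml");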
{ "language": "en", "url": "https://stackoverflow.com/questions/7612631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to start developing iPhone web apps using its web viewer? How do I develop an iPhone app using its web viewer? Like in Android, we can develop an app using a web viewer. It will help me in developing a mobile app using JavaScript. Please NOTE:- I want to keep the core logic JS FILE of the APP "inside" the phone only. I don't want to host my application and redirect the user anywhere. EDIT:- I want to develop a mobile application using HTML, CSS and JavaScript. I want to keep my APP LOGIC inside the phone only. A: Yes, in iOS you can use UIWebView to display any HTML content, HTTP links etc. You could also invoke JavaScript from your Objective-C code or vice versa... HTML data - NSString *html = @"<html><head><title>Should be half</title></head><body>I wish the answer were just 42</body></html>"; [webView loadHTMLString:html baseURL:nil]; To evaluate JavaScript from Objective-C code - NSString *title = [webView stringByEvaluatingJavaScriptFromString:@"document.title"]; This technique is not limited to one-liners, or accessing simple properties. Here's an example of two lines of JavaScript code executed in order, as you would expect: [webView stringByEvaluatingJavaScriptFromString:@"var field = document.getElementById('field_2');" "field.value='Multiple statements - OK';"]; You can also call JavaScript functions this way. And if you want to call a JavaScript function that does not already exist in the web page that you're downloading, you can "inject" it yourself with this technique: [webView stringByEvaluatingJavaScriptFromString:@"var script = document.createElement('script');" "script.type = 'text/javascript';" "script.text = \"function myFunction() { " "var field = document.getElementById('field_3');" "field.value='Calling function - OK';" "}\";" "document.getElementsByTagName('head')[0].appendChild(script);"]; [webView stringByEvaluatingJavaScriptFromString:@"myFunction();"]; Check out - Javascript in UIWebView A: To add on to what @RustyRyan said, you can also use frameworks like PhoneGap. They are an excellent way to create apps using web technologies. http://www.phonegap.com
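On the "keep everything inside the phone" part of the question: UIWebView can also load HTML/CSS/JS files shipped in the app bundle instead of a hosted page. A minimal sketch (the file name index.html is illustrative):

// Load index.html (and any CSS/JS it references) from the app bundle,
// so no hosting or network access is required.
NSString *htmlPath = [[NSBundle mainBundle] pathForResource:@"index" ofType:@"html"];
NSURL *baseURL = [NSURL fileURLWithPath:[htmlPath stringByDeletingLastPathComponent]];
NSString *html = [NSString stringWithContentsOfFile:htmlPath
                                           encoding:NSUTF8StringEncoding
                                              error:nil];
[webView loadHTMLString:html baseURL:baseURL];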
{ "language": "en", "url": "https://stackoverflow.com/questions/7612638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Creating a hierarical tree and retuning as xml using SQL Server stored procedure I have a following table. CategoryId Category_name ParentId 1 Products 0 2 Property 0 3 Electronics 1 4 Computer 3 5 Camera 3 6 Books 1 How can I return a hierarchical xml from this, using a SQL Server CTE ? thanks in advance. A: If you try this, you can achieve some degree of what you're looking for: DECLARE @test TABLE (CategoryID INT, CatName VARCHAR(50), ParentID INT) INSERT INTO @test VALUES(1, 'Products', 0), (2, 'Property', 0), (3, 'Electronics', 1), (4, 'Computer', 3), (5, 'Camera', 3), (6, 'Books', 1), (7, 'Beach-front villa', 2), (8, 'DVD', 1) ;WITH Hierarchy AS ( -- anchor SELECT CategoryID, CatName, ParentID, 1 AS 'Level' FROM @test WHERE ParentID = 0 UNION ALL -- recurse SELECT t.CategoryID, t.CatName, t.ParentID, h.Level + 1 AS 'Level' FROM Hierarchy h INNER JOIN @test t ON t.ParentID = h.CategoryID ) SELECT h.CategoryID AS '@ID', h.CatName, (SELECT h2.CategoryID AS '@ID', h2.CatName AS '@Name', (SELECT h3.CategoryID AS '@ID', h3.CatName AS '@Name' FROM Hierarchy h3 WHERE h3.ParentID = h2.CategoryID FOR XML PATH('BottomLevelItem'), TYPE) AS 'SubItems' FROM Hierarchy h2 WHERE h2.ParentID = h.CategoryID FOR XML PATH('Item'), TYPE) AS 'Items' FROM Hierarchy h WHERE h.Level = 1 FOR XML PATH('TopLevelItem'), ROOT('AllItems') This gives you a result something like this: <AllItems> <TopLevelItem ID="1"> <CatName>Products</CatName> <Items> <Item ID="3" Name="Electronics"> <SubItems> <BottomLevelItem ID="4" Name="Computer" /> <BottomLevelItem ID="5" Name="Camera" /> </SubItems> </Item> <Item ID="6" Name="Books" /> <Item ID="8" Name="DVD" /> </Items> </TopLevelItem> <TopLevelItem ID="2"> <CatName>Property</CatName> <Items> <Item ID="7" Name="Beach-front villa" /> </Items> </TopLevelItem> </AllItems> With the FOR XML PATH() syntax in SQL Server 2005 and up, you can achieve a lot of flexibility - you can define certain columns to show up as attributes on the XML nodes, and other can be used as XML elements. What I haven't found a solution for as of yet is having an infinitely deep nesting of the various entries - here in my example, you have three levels deep - no more, no less. You can make that four or five levels - but I don't know how you could extend that to any depth of nesting.... A: I had to do the same for locations for our application and I did it by iterating through elements. I wonder if there is an easy way.
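For arbitrary nesting depth, one option worth sketching is a recursive scalar function that emits a node's children as XML and calls itself for each child. This is only a sketch: it assumes the data lives in a permanent table named dbo.Categories with the same columns (a user-defined function cannot read the table variable above), and SQL Server caps this kind of nested function recursion at 32 levels:

CREATE FUNCTION dbo.CategoryXml (@ParentID INT)
RETURNS XML
AS
BEGIN
    RETURN
    (
        SELECT c.CategoryID AS '@ID',
               c.CatName    AS '@Name',
               dbo.CategoryXml(c.CategoryID)   -- recurse into this node's children
        FROM dbo.Categories c
        WHERE c.ParentID = @ParentID
        FOR XML PATH('Item'), TYPE
    );
END
GO

-- Wrap the whole tree in a root element:
SELECT dbo.CategoryXml(0) FOR XML PATH('AllItems'), TYPE;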
{ "language": "en", "url": "https://stackoverflow.com/questions/7612640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Spring MVC Request Life Cycle
This seems like a pretty basic question but I unfortunately don't know the answer and can't seem to find it.
I'm trying to understand the lifecycle of an initial POST/GET to a Java 6 server running Spring 2.5 on Tomcat 6. I noticed that for an HttpServletRequest, request.getRequestURL() in a controller returns the original request to which it is mapped (ex. "http://localhost:8080/computers/accessories.html"). That same invocation in the corresponding JSP returns the path to the JSP itself (ex. "http://localhost:8080/WEB-INF/jsp/category.jsp"). I was expecting to see the original HTML request! Am I missing something?

A: This behavior is specified in the API doc:
If this request has been forwarded using RequestDispatcher.forward(javax.servlet.ServletRequest, javax.servlet.ServletResponse), the server path in the reconstructed URL must reflect the path used to obtain the RequestDispatcher, and not the server path specified by the client.
Look at getRequestURI.
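If the original, client-requested path is needed inside the forwarded-to JSP, the forward attributes standardized in Servlet 2.4 expose it. A small sketch (the attribute name comes from the spec; the rest is illustrative):

// Inside the JSP (or any code running after the forward):
String originalUri = (String) request.getAttribute("javax.servlet.forward.request_uri");
// e.g. "/computers/accessories.html", while request.getRequestURI() now reports
// the path of the JSP that is actually executing.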
{ "language": "en", "url": "https://stackoverflow.com/questions/7612644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: str_replace. replacing numbers from a string
Hi all, I have this code here:
$pathinfo = pathinfo($fullpath);
$tags = $shortpath;
$tags = str_replace("/", " ", $tags);
$tags = str_replace("__", " ", $tags);
$tags = str_replace(".png", "", $tags);
$tags = str_replace(".jpg", "", $tags);
$tags = str_replace(".jpeg", "", $tags);
$tags = str_replace(".gif", "", $tags);
Everything works fine with the above, but I also need it to replace some numbers at the start of the files I am adding. An example of a file would be:
247991 - my_small_house.jpg
It's the numbers before the "-" I need gone. Can this be done? Thanks

A: You can use a regex with preg_replace() or preg_split(), but I think explode() would be better:
$chunks = explode('-',$shortpath); // you just keep the part after the dash
$tags = str_replace(array('/','__'),' ', $chunks[1]);
$tags = str_replace(array('.png','.jpg','.jpeg','.gif'),'',$tags);
/* used array to avoid code repetition */

A: Is the number you have to delete made of a fixed number of digits? If so, you could just do:
$tags = substr($tags, 9);
Else, if you're sure that every number ends with " - ", you could do:
$tags = substr($tags, strrpos($tags," - ") + 3);

A: Try this:
preg_replace('/^[0-9]+(.+)/', '$1', $tags);
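If the goal is simply to drop the leading number together with the " - " separator before the other replacements run, a single regular expression can do it. A small sketch, assuming the prefix is always digits followed by a dash:

// "247991 - my_small_house.jpg"  ->  "my_small_house.jpg"
$tags = preg_replace('/^\d+\s*-\s*/', '', $tags);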
{ "language": "en", "url": "https://stackoverflow.com/questions/7612647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Heap memory keeps increasing
When starting the application, the application slowly consumes more and more memory. I am trying to figure out why this is happening and haven't been very successful yet.
Our WPF client gets the data pushed in from the server. The backend is in C++ and the data gets pushed into our C# model and wired down through the ViewModels up to the DataGrid.
After a while of inactivity though, I can see that the heap memory and the Large Heap Size keep increasing for no reason. Well, the data is pushed in, so maybe this is the reason, but after 2 hours and 15 minutes the unnecessarily increased memory is freed up again, just to go slowly up again.
On the right side of the graph (after over 24 hours), I have loaded even more tabs and more data, hence the massive increase, but from then on there is no freeing up of memory anymore.
The graph is showing that System.Windows.EffectiveValueEntry[] is taking the most memory. From my understanding this class is related to the WPF dependency objects. But I have no idea what could be causing this.
I am not expecting the memory to go down, as I am not closing anything. But why is it going up like this? What could be the cause?
Many Thanks,
{ "language": "en", "url": "https://stackoverflow.com/questions/7612648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: OSX/Cocoa : get Absolute position of docktile icon in screen
I am wondering if there is a way to find the absolute position of a docktile on the screen without using the accessibility API? Thanks in advance.

A: Have you looked at Apple's DockTile example code?
{ "language": "en", "url": "https://stackoverflow.com/questions/7612657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is an HTTP header element?
According to this Apache documentation "Some HTTP headers (such as the set-cookie header) have values that can be decomposed into multiple elements". I can't make much sense of this.
For example, when I use the getElements() method on a "Set-Cookie" Header object that has a value of:
SESSIONID=abcdefg01234; Path=/; Expires=Wed, 09 Jun 2021 10:18:14 GMT
I get back an array of two HeaderElements, one header element is:
SESSIONID=abcdefg01234; Path=/; Expires=Wed
and the other one is:
09 Jun 2021 10:18:14 GMT
How useful is this? On these HeaderElements I can invoke methods like getName(), getValue(), getParameterByName() but what will be the value or parameters of 09 Jun 2021 10:18:14 GMT ???
Also why is the valid parameter of the header Expires=Wed, 09 Jun 2021 10:18:14 GMT split into two? This seems wrong. Yet, when I do call header.getElements() on the header:
Set-Cookie: SESSIONID=abcdefg01234; Path=/; Expires=Wed, 09 Jun 2021 10:18:14 GMT
It becomes split into two header elements, as these are supposed to be comma separated...
Still, I fail to find any better explanation of the concept of a header element than is mentioned here. What, then, are these header elements? Could anyone explain, please?

A: What you're getting there is the header called "Cookie", which is one of the headers sent by the servers for previously set cookies. The format of this header's value is "cookie1name=cookie1value;cookie2name=cookie2value;" and so on for each of the cookies previously set. The actual value of the "Cookie" header is all that concatenated chain of cookie name/values separated by ";". Once you retrieve the value of that header, you can split it by ";" to get the name/value of each cookie.
Now, that being said, apparently Apache's HttpClient library's parsing of header values makes a known blunder here, and it wrongly splits by "," instead of ";". As the Apache guys said on this forum thread, for them it's normal behaviour; if you want a different one, make your own parser: https://issues.apache.org/jira/browse/HTTPCLIENT-810

A: What you are running into is a problem with Set-Cookie; it uses the delimiter "," in the wrong way. A better example would be "Allow" or "Accept". See http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-16.html#rfc.section.3.2.p.7 for more information.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: C# Transactions - best performance My webapplication has a class WorkItem with a RecordID (Guid as Primary Key) and a FriendlyID (string) that consists of Type-Date-RandomNumbers. If I create a new WorkItem, I create a new FriendlyID as well. The format of the FriendlyID cannot be changed (client specification) and is like <Type (one char)>-<Current Date (yyymmdd)>-<6 random numbers>. private string GenerateFriendlyID() { string res = String.Empty; // code omited // ... // IT'S NOT THE QUESTION HOW TO PROGRAM THIS METHOD! // It's about the fastest and best way/design to make // sure the generated ID is unique! (see below) return res; // sth like "K-20110930-158349" } public override void Create() { if (String.IsNullOrEmpty(friendlyID)) { GenerateFriendlyID(); } base.Create(); } This code does fails under heavy load, so I get the same FriendlyIDs multiple times. What is the best way to make sure that my friendly ID is unique? * *Make a UNIQUE-Constraint on FriendlyID in the DB. * *Begin a transaction, generate a FriendlyID, insert and commit *Rollback and try again if I get a SQLException. *Just create it. * *Select all WorkItems with this.FriendlyID. *If selection is > 1, repeat until it's == 1 I'm sure there is another way, but I guess #1 should be the preferred. Are there any ways I'm missing or is #1 the way to go? I hate to use Exceptions for my workflow though and I know that they're really slow. A: As your RecordID is already based on a GUID, I'd parse that to create the friendly ID. Guid.ToByteArray() may be a useful place to start. A: My suggestion is in any case, whatever kind of id you want to generate, do this in the SQL in a stored procedure and not from .NET client code. It is always better to have an atomic entry point which takes some parameters and does the job, so you can call the stored and get your record saved and the id back to you as out parameter, even more than one, like a unique code and a guid. in this way, then, you move concurrency issues from the .NET client code to the Database Server and db servers are designed to handle concurrency well. A: Use the KeyGenerator pattern from PoEAA by M.Fowler. Here is a sample for a file system solution and it uses a mutex for cross process locking. In a case of MS SQL you can use a transaction instead of mutex. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; using System.IO; using System.Runtime.CompilerServices; namespace ConsoleApplication1 { public class KeyGenerator { private string FileName; private long IncrementedBy; private long NextId; private long MaxId; public KeyGenerator(string filename, long incrementedby) { FileName = filename; IncrementedBy = incrementedby; NextId = MaxId = 0; } //[MethodImpl(MethodImplOptions.Synchronized)] public long NextID() { if (NextId == MaxId) { reserveIds(); } return NextId++; } private void reserveIds() { Mutex m = new Mutex(false, "Mutex " + FileName.Replace(Path.DirectorySeparatorChar, '_')); try { m.WaitOne(); string s = File.ReadAllText(FileName); long newNextId = long.Parse(s); long newMaxId = newNextId + IncrementedBy; File.WriteAllText(FileName, newMaxId.ToString()); NextId = newNextId; MaxId = newMaxId; // Simulate some work. Thread.Sleep(500); } finally { m.ReleaseMutex(); } } } }
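For option #1, note that the exception path only costs anything on an actual collision, which should be rare with six random digits per day and type. A sketch of the insert-and-retry loop is below; it assumes ADO.NET/SqlClient, InsertWorkItem is a placeholder for your own INSERT, and 2627/2601 are the SQL Server error numbers for unique-constraint and unique-index violations:

const int maxAttempts = 5;
for (int attempt = 0; attempt < maxAttempts; attempt++)
{
    string friendlyId = GenerateFriendlyID();
    try
    {
        InsertWorkItem(recordId, friendlyId);   // plain INSERT; the UNIQUE constraint guards FriendlyID
        return friendlyId;                      // success
    }
    catch (SqlException ex)
    {
        if (ex.Number != 2627 && ex.Number != 2601)
            throw;                              // not a duplicate-key error, rethrow
        // Duplicate FriendlyID generated under load: loop and try again with a new one.
    }
}
throw new InvalidOperationException("Could not generate a unique FriendlyID.");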
{ "language": "en", "url": "https://stackoverflow.com/questions/7612669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: nditer: possible to manually handle dimensions with different lengths?
Using numpy's nditer, is it possible to manually handle dimensions with different lengths?
For example, let's say I had an array A[5, 100] and I wanted to sample every 10 along the second axis so I would end up with an array B[5,10]. Is it possible to do this with nditer, handling the iteration over the second axis manually of course (probably in cython)?
Another way to ask this would be, is it possible to ask nditer to allow me to manually iterate over a set of dimensions I provide?
I want to be able to do something like this (modified from this example)
@cython.boundscheck(False)
def sum_squares_cy(arr):
    cdef np.ndarray[double] x
    cdef np.ndarray[double] y
    cdef int size
    cdef double value
    cdef int j

    axeslist = list(arr.shape)
    axeslist[1] = -1

    out = zeros((arr.shape[0], 10))
    it = np.nditer([arr, out], flags=['reduce_ok', 'external_loop',
                                      'buffered', 'delay_bufalloc'],
                   op_flags=[['readonly'], ['readwrite', 'no_broadcast']],
                   op_axes=[None, axeslist],
                   op_dtypes=['float64', 'float64'])
    it.operands[1][...] = 0
    it.reset()
    for xarr, yarr in it:
        x = xarr
        y = yarr
        size = x.shape[0]
        j = 0
        for i in range(size):
            #some magic here involving indexing into x[i] and y[j]
    return it.operands[1]
Does this make sense? Is it possible to do?

A: a = numpy.arange(500.0).reshape((5,100))
# strides are given in bytes: one full row between result rows, ten elements between samples
numpy.lib.stride_tricks.as_strided(a, (5, 10), (a.strides[0], a.strides[1] * 10))
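For this particular sampling pattern it may be worth noting that basic slicing already returns a strided view, with no nditer or stride tricks needed. A quick illustration:

import numpy as np

a = np.arange(500.0).reshape(5, 100)
b = a[:, ::10]                 # shape (5, 10), a view onto a, no copy made
print(b.shape, b.strides)      # (5, 10) (800, 80) for a C-contiguous float64 array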
{ "language": "en", "url": "https://stackoverflow.com/questions/7612672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Cucumber - And I should see "first" before "last" - Rails 3.1 [Note: Using Rails 3.1.] I am trying to test that multiple models are being displayed in the proper order on my page, in my case, by desc date with the most recent on top. I know I can do the following to check if something exists on the page: And I should see "Some content on my page." But I want to do something along the lines of: And I should see "Most Recent" before "Really old" How would I go about writing steps to do that? I believe "And I should see" just scans the entire page for the specified argument, just not sure how to approach correct order. Thanks in advance! A: Use String#index on page.body to find the position of each, and assert first < second, e.g. And /^I should see "([^"]*)" before "([^"]*)"$/ do |phrase_1, phrase_2| first_position = page.body.index(phrase_1) second_position = page.body.index(phrase_2) first_position.should < second_position end A: A more flat version, with MiniTest assertion: And /^I should see "([^"]*)" before "([^"]*)"$/ do |phrase_1, phrase_2| assert page.body.index(phrase_1) < page.body.index(phrase_2) end
{ "language": "en", "url": "https://stackoverflow.com/questions/7612675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Rails 3 , Jquery.load with no layout My Ajax call is almost working, but i can't figure why my layout is still rendered? here is the context: I have a contact page on my site that use the linkedIn API. I want to load the LinkedIn info from an ajax call when my page is loaded. This work fine. But the result of my .load() is rendered with the layout of my site. I only want to render the html in my ajax_contact.html.erb.. contact.js $(document).ready(function() { $(".info_perso").load( '/contact/ajax_contact', {name:"benoit"}, function(data){ $(".info_perso").html( data );}) }); ajax_contact.html.erb: <%if [email protected]?%> <%=image_tag(@profile.picture_url)%><br /> <%[email protected]_name %> <%[email protected]_name %><br> <%[email protected]%><br> <%[email protected]%><br> <br /> <%[email protected]%><br> <label class="linkedin_title">Education</label><br> <% @profile.educations.all.each {|edu| %> Degre : <%=edu.degree%><br> Années : <%=edu.start_date.year%> - <%=edu.end_date.year%><br> Concentration : <%=edu.field_of_study%><br> Universite : <%=edu.school_name%><br> <%}%> <label class="linkedin_title">Experience</label><br> <% @profile.positions.all.each {|pos| %> <label class="linkedin_subtitle"> Company:</label> <%=pos.company.name%><br><br> <label class="linkedin_subtitle">Titre :</label> <%=pos.title%><br> <label class="linkedin_subtitle">Années :</label> <%=pos.start_date.month%>/<%=pos.start_date.year%> - <%= pos.end_date? ? pos.end_date.month.to_s + "/" + pos.end_date.year.to_s : "now"%><br> <label class="linkedin_subtitle">Description :</label> <%=pos.summary%><br/><br/> <%}%> <%else%> <%=link_to "Activer linkedIn", :controller=>"linkedinAuth", :action=>"benoit", :name=>"benoit", :callback=>"/linkedinauth/callbackbenoit"%> <%end%> contact_controller.rb: def ajax_contact name = params[:name] linkedinInfo = LinkedinApiInfo.find(1) linkedinCred = LinkedinCredential.find_by_name(name) if !linkedinCred.nil? client = LinkedIn::Client.new(linkedinInfo.apiKey, linkedinInfo.secretKey) client.authorize_from_access(linkedinCred.acctoken, linkedinCred.accsecret) # Pick some fields fields = ['first-name', 'last-name', 'headline', 'industry', 'num-connections','educations', 'num-recommenders','recommendations-received', 'summary', 'positions','picture-url'] @profile = client.profile :fields => fields @profile.recommendations_received.all.each {|rec| puts rec.recommendation_text} puts @profile end rescue SocketError puts "Unable to connect" render "ajax_content", :content_type=>"text/html", :layout=>false end Even with the :layout=>false, my layout is still showned!!! anyone can see the problem? A: add render 'ajax_contact', :layout=>false to your action, at the end, also partial views name should start with _, so your view should be named '_ajax_contact'
{ "language": "en", "url": "https://stackoverflow.com/questions/7612677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Unable to play sound (wav) on event
I need to play a short sound. This is the code I wrote.
In my header I have:
#import <AVFoundation/AVFoundation.h>
@interface AddItemController : UIViewController < ... , AVAudioPlayerDelegate> {
    ...
    AVAudioPlayer *audio;
}
...
@property (nonatomic, retain) IBOutlet AVAudioPlayer *audio;
@end
In the .m file:
- (void)playSound {
    NSURL *url;
    if (error) {
        url = [NSURL fileURLWithPath:[NSHomeDirectory() stringByAppendingPathComponent:@"Resources/error.wav"]];
    } else {
        url = [NSURL fileURLWithPath:[NSHomeDirectory() stringByAppendingPathComponent:@"Resources/finished.wav"]];
    }
    audio = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
    audio.delegate = self;
    [audio play];
}

-(IBAction)addItem:(id)sender {
    if ( ... ) {
        error = true;
        [self playSound];
        return;
    }
    ...
    error = false;
    [self playSound];
}
No warning nor issue with the code. All the other functions are working. I just can't hear any sound.
#
Ok! I found the solution by myself. Sorry if I bothered you. Hope my solution will be useful for somebody.
- (void)playSound {
    NSURL *url;
    NSString *path = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"error.aif"];
    if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
        url = [NSURL fileURLWithPath:path];
        audio = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
        [audio prepareToPlay];
    }
    [audio play];
}

A: You may want to take a look at the AudioServicesCreateSystemSoundID and AudioServicesPlaySystemSound functions, from the AudioToolbox framework.
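For short UI sounds like these, the System Sound Services route mentioned in the answer is also a good fit. A minimal sketch (requires linking AudioToolbox.framework; drop the __bridge cast if the project does not use ARC):

#import <AudioToolbox/AudioToolbox.h>

// Create the sound once (e.g. in viewDidLoad) and reuse it.
NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"error" withExtension:@"aif"];
SystemSoundID errorSound;
AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &errorSound);

// Play it whenever needed; this call returns immediately.
AudioServicesPlaySystemSound(errorSound);

// When it is no longer needed:
// AudioServicesDisposeSystemSoundID(errorSound);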
{ "language": "en", "url": "https://stackoverflow.com/questions/7612679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Servlet redirect
resp.sendRedirect("/myurl");
req.getSession().setAttribute("foo", "bar");
In this case, do I have access to the foo attribute after the redirect?
Generally speaking, is a servlet completely executed before the redirect is made, or does it stop its execution after the redirect line? Thanks

A: It continues execution. It's not a return, it just adds information to the response.

A: After redirecting to that particular page, the control goes to that page, comes back to the old page and executes the req.getSession().setAttribute("foo", "bar"); as well. This is the sendRedirect() behaviour.

A: I found out a more generic approach that works for jsp files as well as for servlets.
String url = "http://google.com";
response.reset();
response.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT);
response.setHeader("Location",url);
response.getWriter().flush();
response.getWriter().close();
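Because sendRedirect() does not stop the method, a common defensive pattern is to return right after issuing the redirect so no further session or response work happens for that request. A small sketch (isLoggedIn is a placeholder for whatever check applies):

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    if (!isLoggedIn(req)) {
        resp.sendRedirect("/myurl");
        return;                     // nothing below this line runs for redirected requests
    }
    req.getSession().setAttribute("foo", "bar");
    // ... normal processing ...
}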
{ "language": "en", "url": "https://stackoverflow.com/questions/7612681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Function that returns, dynamically, how much code has been executed?
I want to do a 'loading screen' that reflects exactly what percentage of my code has executed.
I know one way to do it: distributing 'flags' throughout the code, where each flag reports a percentage value. But that means I would have to populate my code with 'flags', and worse, in a static manner.
Is there any way to know 'how much' of my code has been processed?

A: No, not in JavaScript. And it wouldn't make sense to do it line-by-line anyways. Some lines of code (ex. AJAX calls, alerts) can take far longer than others to complete. I would recommend using flags.

A: Check out JSCoverage. It's a little old, but it looks like it might give you an idea of how to go about doing something like that.

A: Nope, unless it's instrumented in some way.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to deploy Visual C++ redistributables in WiX using Burn We just migrated our installer from WiX 2.x to WiX 3.6 and started using Burn. Previously, we were installing the Visual C++ redistributable by including the .msm files from C:\Program Files\Common Files\Merge Modules to our MSI. Those files are always in sync with the one we use to build our product (they are updated frequently by Microsoft to include security fixes). Now, we would like to have the Visual C++ redistributable downloaded only if required by using the Burn framework. However, Burn does not define a MsmPackage element to place inside Chain. What is the best approach for deploying Visual C++ redistributable using Burn? A: Merge modules can only be merged into an .msi; they can't be installed independently. You can use ExePackage to install the appropriate vcredist*.exe. A: This is what you should do: * *Create an MSI project that only includes the merge modules you need. *Clamp the MSI package version number, product code and upgrade code. *Use the MSI in your bundle. Now 2) will ensure that in upgrade scenarious the MSI won't be installed, or if it is an external payload, it won't be downloaded. The problem with packaging vcredist*.exe is that some user might think that it is an independent install and uninstall it and break your application.
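A sketch of what the ExePackage route can look like inside the Burn chain is below. The detect condition, download URL and file names are illustrative only and have to match the exact redistributable being shipped; the detect condition is typically paired with a registry search that sets the variable it tests:

<Chain>
  <ExePackage Id="vcredist_x86"
              SourceFile="redist\vcredist_x86.exe"
              InstallCommand="/q /norestart"
              Permanent="yes"
              Vital="yes"
              Compressed="no"
              DownloadUrl="http://example.com/redist/vcredist_x86.exe"
              DetectCondition="VCREDIST_X86_INSTALLED" />
  <MsiPackage SourceFile="MyProduct.msi" />
</Chain>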
{ "language": "en", "url": "https://stackoverflow.com/questions/7612688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Tomorrow in NSDateComponents isolate day
I'm trying to retrieve tomorrow's date based on the current date. I get the correct string, but how do I isolate the day?
NSDateComponents *comps = [[NSDateComponents alloc] init];
[comps setDay:1];
NSDate *tomorrow = [cal dateByAddingComponents:comps toDate:[NSDate date] options:0];
[comps release];
NSLog(@"%@", tomorrow);
Returns a string of: 2011-10-01 15:07:07 +0000

A: You can get the day like this,
NSDateComponents *dcs = [cal components:NSDayCalendarUnit fromDate:tomorrow];
int day = [dcs day];
{ "language": "en", "url": "https://stackoverflow.com/questions/7612689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I compare 2 files without downloading them, using PHP?
I have 1 image (source) stored on one server (this server is only a data server, without PHP or anything). Using GD I create another image on a PHP server, with the source as the base for the creation, so I have one generated image and one source file. For performance I created some kind of a "cache" script that makes a copy of my generated image on my PHP server. The question is: how can I check whether the source image has been updated, so I can update my cache? (Without using a database, just file handling; I need speed and low bandwidth use.)
The "cache simple code" is:
<?php
if (!file_exists('cache_image.png')) {
    $img = file_get_contents('image_generator.png');
    file_put_contents('cache_image.png',$img);
}else{
    //I need to test if the source image has been updated
}
?>

A: I suggest creating an MD5 hash of each file and comparing the hashes.
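Since the source sits on a plain data server, one low-bandwidth option is to ask only for the HTTP response headers and compare the Last-Modified timestamp (or Content-Length) against the local cache file, instead of transferring and hashing full contents. A sketch, assuming the data server answers HTTP HEAD requests and sends a Last-Modified header (the URL is a placeholder):

<?php
$sourceUrl = 'http://dataserver/image_generator.png'; // placeholder URL
$cacheFile = 'cache_image.png';

// Ask the remote server for headers only; no body is transferred.
stream_context_set_default(array('http' => array('method' => 'HEAD')));
$headers = get_headers($sourceUrl, 1);

$remoteTime = isset($headers['Last-Modified']) ? strtotime($headers['Last-Modified']) : false;
$localTime  = file_exists($cacheFile) ? filemtime($cacheFile) : 0;

if ($remoteTime === false || $remoteTime > $localTime) {
    // Source is newer (or we cannot tell): refresh the cache.
    file_put_contents($cacheFile, file_get_contents($sourceUrl));
}
?>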
{ "language": "en", "url": "https://stackoverflow.com/questions/7612691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to query an xml element with xpath with an existing namespace
<?xml version="1.0" encoding="utf-8"?>
<Report p1:schemaLocation="TemplateXXX http://localhost?language=en" Name="TemplateXXX" xmlns:p1="http://www.w3.org/2001/XMLSchema-instance" xmlns="TemplateXXX">
  <HEADER attr1="one" attr2="two" />
  <Table filename="12345.pdf">
    <left>
      <details>
        <item/>
        <item/>
      </details>
    </left>
    <right>
      <details>
        <item/>
        <item/>
      </details>
    </right>
  </Table>
</Report>
I'm running into a strange problem when trying to query elements and attributes in an XML document where a namespace is in the XML. When I try to query the document to get the header element with the following XPath query I consistently get null results:
XDocument root = XDocument.Load(filePath);
var element = root.XPathSelectElement("/Report/HEADER");
This always returns null; however, the moment I remove the namespace from the document the query returns the expected element. What is it that I'm getting wrong? I'm getting somewhat frustrated.
edit: updated xml to valid xml

A: I would personally recommend that you didn't do this with XPath, but you can if you really want. Here's a short but complete program which works with your sample XML, after I'd fixed it up (it isn't valid XML at the moment... please try to provide working examples next time):
using System;
using System.Xml;
using System.Xml.Linq;
using System.Xml.XPath;

class Test
{
    static void Main()
    {
        var doc = XDocument.Load("test.xml");
        var manager = new XmlNamespaceManager(new NameTable());
        manager.AddNamespace("foo", "TemplateXXX");
        var query = doc.XPathSelectElement("/foo:Report/foo:HEADER", manager);
        Console.WriteLine(query);
    }
}
In a normal LINQ to XML query you'd just use:
XNamespace ns = "TemplateXXX";
XElement header = doc.Root.Element(ns + "HEADER");
No need for a namespace manager etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I select a schema based on a variable?
Consider:
SET @PREFIX='DEV_';
SET @REFRESHDB=CONCAT(@PREFIX,'Refresh');

CREATE TABLE @REFRESHDB.`Metadata` (
    `Key` VARCHAR(30) NOT NULL,
    `Value` VARCHAR(30) NOT NULL,
    PRIMARY KEY (`Key`)
) ENGINE = InnoDB;

INSERT INTO @REFRESHDB.`Metadata` (`Key`, `Value`) VALUES ("Version", "0");
This does not seem to be valid: mysql comes back with:
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@REFRESHDB.`Metadata`
As far as I can tell I've done things correctly according to the documentation. Yet MySQL says it's not allowed. Is this some limitation of MySQL (not allowing use of variables as identifiers) or something else?

A: You will have to use prepared statements/dynamic SQL to do this.
This article goes over both of these in excellent detail: http://rpbouman.blogspot.com/2005/11/mysql-5-prepared-statement-syntax-and.html
Try this:
SET @PREFIX='DEV_';
SET @REFRESHDB=CONCAT(@PREFIX,'Refresh');

SET @s = CONCAT('CREATE TABLE ', @REFRESHDB, '.`Metadata` (
    `Key` VARCHAR(30) NOT NULL,
    `Value` VARCHAR(30) NOT NULL,
    PRIMARY KEY (`Key`)
) ENGINE = InnoDB');
PREPARE tStmt FROM @s;
EXECUTE tStmt;

SET @s = CONCAT('INSERT INTO ', @REFRESHDB, '.`Metadata` (`Key`, `Value`) VALUES ("Version", "0")');
PREPARE stmt FROM @s;
EXECUTE stmt;

A: Documentation states: "User variables can be assigned a value from a limited set of data types: integer, decimal, floating-point, binary or nonbinary string, or NULL value"
You are trying to use the variable as an object. This is not supported.

A: I would suggest you write a stored procedure:
DELIMITER $$
CREATE PROCEDURE InsertMetadata (IN DBname varchar(255), IN AKey varchar(255), IN AValue varchar(255))
BEGIN
    DECLARE query VARCHAR(1000);
    -- First check the DBName against a list of allowed DBnames,
    -- to prevent SQL-injection with dynamic table names.
    DECLARE NameAllowed BOOLEAN;
    SELECT 1 INTO NameAllowed FROM DUAL WHERE DBname IN ('validDB1','validDB2');
    IF (NameAllowed = 1) THEN
    BEGIN
        -- DBName is in the whitelist, it's safe to continue.
        SET query = CONCAT('INSERT INTO ', DBname, '.Metadata (`key`,`value`) VALUES (?,?)');
        -- note the use of parameter placeholders, to prevent SQL-injection.
        SET @query = query;
        SET @k = AKey;
        SET @v = AValue;
        PREPARE stmt FROM @query;
        EXECUTE stmt USING @k, @v;
        DEALLOCATE PREPARE stmt; -- clears the query and its result from the cache.
    END;
    END IF;
END $$
DELIMITER ;
{ "language": "en", "url": "https://stackoverflow.com/questions/7612693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to get the length of the I/O queues which access disks in Linux kernel
How can I get the length of the I/O queue (for all I/O that accesses disks) in the Linux kernel? Is there a function or a variable which can get the length?
{ "language": "en", "url": "https://stackoverflow.com/questions/7612694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting a Border Control to Have Sunken Border
I want my Border control to have a sunken border like a textbox. How do I do this? Is there a way to get the ControlTemplate to mimic the parent border?

A: There is no theme for you to use, but you can work around it like this:
Using this MSDN model (http://i.msdn.microsoft.com/dynimg/IC84967.gif):
Here's my recommendation: (sunken inner) Just change the height/width of the outside border and you can use this block of XAML like a TextBox. Reverse the two border tags if you want an outer border instead. Should be easy for you.
<Border Width="100" Height="200" BorderBrush="Gainsboro" BorderThickness="0,0,5,5">
    <Border BorderBrush="Gray" BorderThickness="5,5,0,0">
        <TextBox Text="Hello World" BorderThickness="0" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" />
    </Border>
</Border>
Special thanks to: Style a border with a different brush color for each corner
Should look like this:

A: You can try something like this
<Border Margin="20" BorderThickness="0.5" BorderBrush="Gray">
    <Border BorderThickness="1,1,0,0" BorderBrush="DarkGray">
        <ContentPresenter />
    </Border>
</Border>
You might need to play with the colours though.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Difference between db.collectionX.save and db.collectionX.insert
> itemsA = { attrA : "vA", attrB : "vB" }
{ "attrA" : "vA", "attrB" : "vB" }
> db.collectionA.insert(itemsA)
> db.collectionA.find()
{ "_id" : ObjectId("4e85de174808245ad59cc83f"), "attrA" : "vA", "attrB" : "vB" }
> itemsA
{ "attrA" : "vA", "attrB" : "vB" }
> itemsB = { attrC : "vC", attrD : "vD" }
{ "attrC" : "vC", "attrD" : "vD" }
> db.collectionB.save(itemsB)
> db.collectionB.find()
{ "_id" : ObjectId("4e85de474808245ad59cc840"), "attrC" : "vC", "attrD" : "vD" }
> itemsB
{ "attrC" : "vC", "attrD" : "vD", "_id" : ObjectId("4e85de474808245ad59cc840") }
Here is my observation: after being inserted into collectionA, the value of itemsA is not modified. In contrast, after being stored into collectionB, the value of itemsB is changed!
Is there any rule that guides those modifications so that I know what should be expected after a value is either inserted or saved into a collection?
Thank you

A: For save, if you provide _id, it will update. If you don't, it will insert.
One neat feature of Mongo's JavaScript shell is that if you call a function without the executing parentheses, it will return the implementation. This is useful if you are curious about any of MongoDB's methods and don't feel like sorting through all of the source code!
In the JS shell:
> db.test.save
function (obj) {
    if (obj == null || typeof obj == "undefined") {
        throw "can't save a null";
    }
    if (typeof obj == "number" || typeof obj == "string") {
        throw "can't save a number or string";
    }
    if (typeof obj._id == "undefined") {
        obj._id = new ObjectId;
        return this.insert(obj);
    } else {
        return this.update({_id:obj._id}, obj, true);
    }
}
> db.test.insert
function (obj, _allow_dot) {
    if (!obj) {
        throw "no object passed to insert!";
    }
    if (!_allow_dot) {
        this._validateForStorage(obj);
    }
    if (typeof obj._id == "undefined") {
        var tmp = obj;
        obj = {_id:new ObjectId};
        for (var key in tmp) {
            obj[key] = tmp[key];
        }
    }
    this._mongo.insert(this._fullName, obj);
    this._lastID = obj._id;
}
From this, we can see that save is a wrapper for update and insert. Functionally, save and insert are very similar, especially if no _id value is passed. However, if an _id key is passed, save() will update the document, while insert() will throw a duplicate key error.

A: For save, if you provide _id, it will update. If you don't, it will insert; that's the difference.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: make my application start when boot complete on/off
Hello everyone, I'm new to Android development and I'm developing an Android application for my graduation project. My application must start when the device boots up, so I put these lines in the AndroidManifest file:
<!--this to make app run at start up-->
<receiver android:name="MyIntentReceiver">
    <intent-filter>
        <action android:name="android.intent.action.BOOT_COMPLETED" />
        <category android:name="android.intent.category.HOME" />
    </intent-filter>
</receiver>
So my program runs automatically when boot completes. My question is: how can the user stop this? I want to put an on/off toggle button for this option, so the user can choose whether the app starts automatically in the background or manually.
Thanks in advance

A: This sounds pretty straight forward. Basically, when the phone starts, the Receiver class "MyIntentReceiver" will run. Inside this receiver you can put code based on user preferences to either start the application or do nothing. The toggle would be a CheckBoxPreference in the user preferences. Let me know if you have any questions.

A: So my program runs automatically when boot completes.
I would say no to that. It is rather that your receiver gets notified when boot is completed. From that point on, your program has to decide whether to fire up your activity/service in the onReceive() method of your receiver. Thus, you will need to save a preference to give the option to the user. When your receiver gets notified, check the pref setting set by the user.
For more information about saving preferences, you can refer to http://developer.android.com/guide/topics/data/data-storage.html#pref
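A sketch of what the receiver side can look like, following both answers. The preference key "start_on_boot" and the service class are placeholders that have to match your own settings screen (e.g. a CheckBoxPreference with the same key) and your own background component:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

public class MyIntentReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
        boolean startOnBoot = prefs.getBoolean("start_on_boot", true);
        if (startOnBoot) {
            // Start the background service only if the user left the toggle on.
            context.startService(new Intent(context, MyBackgroundService.class));
        }
        // Otherwise do nothing: the app will only run when launched manually.
    }
}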
{ "language": "en", "url": "https://stackoverflow.com/questions/7612715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: PHP shell_exec() result in txt file?
I'd like to try to see the output of two shell_exec() calls in a .txt file. So I tried this:
$data_server = shell_exec('./c5.0demo -f $username -r');
$errorFile = "error.txt";
$fileopen = fopen($errorfile, 'w') or die ("can't open file");
fwrite($fileopen, $data_server);
$data_server2 = shell_exec('./predictBatch -f $username -r > $username.result');
$fileopen = fopen($errorfile, 'w') or die ("can't open file");
fwrite($fileopen, $data_server2);
The executables "c5.0demo" and "predictBatch" are in the same directory as this PHP script. The variable $username is retrieved by the POST method:
$user = $_POST['username'];
Being an array, I put the value inside another variable with this:
foreach($user as $val)
    $username .= $val;
I think this is correct, but I don't have "error.txt" inside my directory. Where am I wrong? Thanks for all your support!

A: Try the following:
$data_server = shell_exec("./c5.0demo -f $username -r");
$data_server2 = shell_exec("./predictBatch -f $username -r > $username.result");
file_put_contents("/path/to/log/error.txt","{$data_server} : {$data_server2}\n",FILE_APPEND);
Obviously $username needs to be defined - also, what is the datatype of $username? I ask, because $username.result looks very wrong.
You should also take very serious note of @mobius warning.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Override libc functions called from another libc function with LD_PRELOAD I've a project aiming to run php-cgi chrooted for mass virtual hosting (more than 10k virtual host), with each virtual host having their own chroot, under Ubuntu Lucid x86_64. I would like to avoid creating the necessary environment inside each chroot for things like /dev/null, /dev/zero, locales, icons... and whatever which could be needed by php modules thinking that they run outside chroot. The goal is to make php-cgi run inside a chroot, but allowing him access to files outside the chroot as long as those files are (for most of them) opened in read-only mode, and on an allowed list (/dev/log, /dev/zero, /dev/null, path to the locales...) The obvious way seems to create (or use if it exists) a kernel module, which could hook and redirect trusted open() paths, outside of the chroot. But I don't think it's the easiest way: * *I've never done a kernel module, so I do not correctly estimate the difficulty. *There seems to be multiple syscall to hook file "open" (open, connect, mmap...), but I guess there is a common kernel function for everything related to file opening. I do want to minimize the number of patchs to php or it's module, to minimize the amount of work needed each time I will update our platform to the latest stable PHP release (and so update from upstream PHP releases more often and quickly), so I find better to patch the behavior of PHP from the outside (because we have a particular setup, so patching PHP and propose patch to upstream is not relevant). Instead, I'm currently trying an userland solution : hook libc functions with LD_PRELOAD, which works well in most cases and is really quick to implement, but I've encountered a problem which I'm unable to resolve alone. (The idea is to talk to a daemon running outside the chroot, and get file descriptor from it using ioctl SENDFD and RECVFD). When I call syslog() (without openlog() first), syslog() calls connect() to open a file. Example: folays@phenix:~/ldpreload$ strace logger test 2>&1 | grep connect connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory) connect(1, {sa_family=AF_FILE, path="/dev/log"}, 110) = 0 So far so good, I've tried to hook the connect() function of libc, without success. I've also tried to put some flags to dlopen() inside the _init() function of my preload library to test if some of them could make this work, without success Here is the relevant code of my preload library: void __attribute__((constructor)) my_init(void) { printf("INIT preloadz %s\n", __progname); dlopen(getenv("LD_PRELOAD"), RTLD_NOLOAD | RTLD_DEEPBIND | RTLD_GLOBAL | RTLD_NOW); } int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen) { printf("HOOKED connect\n"); int (*f)() = dlsym(RTLD_NEXT, "connect"); int ret = f(sockfd, addr, addrlen); return ret; } int __connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen) { printf("HOOKED __connect\n"); int (*f)() = dlsym(RTLD_NEXT, "connect"); int ret = f(sockfd, addr, addrlen); return ret; } But the connect() function of the libc still takes precedence over mine: folays@phenix:~/ldpreload$ LD_PRELOAD=./lib-preload.so logger test INIT preloadz logger [...] no lines with "HOOKED connect..." [...] 
folays@phenix:~/ldpreload$ Looking at the code of syslog() (apt-get source libc6 , glibc-2.13/misc/syslog.c), it seems to call openlog_internal, which in turn call __connect(), at misc/syslog.c line 386: if (LogFile != -1 && !connected) { int old_errno = errno; if (__connect(LogFile, &SyslogAddr, sizeof(SyslogAddr)) == -1) { Well, objdump shows me connect and __connect in the dynamic symbol table of libc: folays@phenix:~/ldpreload$ objdump -T /lib/x86_64-linux-gnu/libc.so.6 |grep -i connec 00000000000e6d00 w DF .text 000000000000005e GLIBC_2.2.5 connect 00000000000e6d00 w DF .text 000000000000005e GLIBC_2.2.5 __connect But no connect symbol in the dynamic relocation entries, so I guess that it explains why I cannot successfully override the connect() used by openlog_internal(), it probably does not use dynamic symbol relocation, and probably has the address of the __connect() function in hard (a relative -fPIC offset?). folays@phenix:~/ldpreload$ objdump -R /lib/x86_64-linux-gnu/libc.so.6 |grep -i connec folays@phenix:~/ldpreload$ connect is a weak alias to __connect: eglibc-2.13/socket/connect.c:weak_alias (__connect, connect) gdb is still able to breakpoint on the libc connect symbol of the libc: folays@phenix:~/ldpreload$ gdb logger (gdb) b connect Breakpoint 1 at 0x400dc8 (gdb) r test Starting program: /usr/bin/logger Breakpoint 1, connect () at ../sysdeps/unix/syscall-template.S:82 82 ../sysdeps/unix/syscall-template.S: No such file or directory. in ../sysdeps/unix/syscall-template.S (gdb) c 2 Will ignore next crossing of breakpoint 1. Continuing. Breakpoint 1, connect () at ../sysdeps/unix/syscall-template.S:82 82 in ../sysdeps/unix/syscall-template.S (gdb) bt #0 connect () at ../sysdeps/unix/syscall-template.S:82 #1 0x00007ffff7b28974 in openlog_internal (ident=<value optimized out>, logstat=<value optimized out>, logfac=<value optimized out>) at ../misc/syslog.c:386 #2 0x00007ffff7b29187 in __vsyslog_chk (pri=<value optimized out>, flag=1, fmt=0x40198e "%s", ap=0x7fffffffdd40) at ../misc/syslog.c:274 #3 0x00007ffff7b293af in __syslog_chk (pri=<value optimized out>, flag=<value optimized out>, fmt=<value optimized out>) at ../misc/syslog.c:131 Of course, I could completely skip this particular problem by doing an openlog() myself, but I guess that I will encounter the same type of problem with some others functions. I don't really understand why openlog_internal does not use dynamic symbol relocation to call __connect(), and if it's even possible to hook this __connect() call by using simple LD_PRELOAD mechanism. The others way I see how it could be done: * *Load libc.so from an LD_PRELOAD with dlopen, get the address of the libc's __connect with dlsym() and then patch the function (ASM wise) to get the hook working. It seems really overkill and error prone. *Use a modified custom libc for PHP to fix those problems directly at the source (open / connect / mmap functions...) *Code a LKM, to redirect file access where I want. Pros : no need of ioctl(SENDFD) and no daemon outside the chroot. I would really appreciate to learn, if it is ever possible, how I could still hook the call to __connect() issued by openlog_internal, suggestions, or links to kernel documentation related to syscall hooking and redirection. My google searches related to "hook syscalls" found lot of references to LSM, but it seems to only allow ACLs answering "yes" or "no", but no redirection of open() paths. Thanks for reading. 
A: It's definitely not possible with LD_PRELOAD without building your own heavily-modified libc, in which case you might as well just put the redirection hacks directly inside. There are not necessarily calls to open, connect, etc. whatsoever. Instead there may be calls to a similar hidden function bound at library-creation time (not dynamically rebindable) or even inline syscalls, and this can of course change unpredictably with the version. Your options are either a kernel module, or perhaps using ptrace on everything inside the "chroot" and modifying the arguments to syscalls whenever the tracing process encounters one that needs patching up. Neither sounds easy... Or you could just accept that you need a minimal set of critical device nodes and files to exist inside a chroot for it to work. Using a different libc in place of glibc, if possible, would help you minimize the number of additional files needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: python subprocess and unicode execv() arg 2 must contain only strings
I have a Django site where I need to call a script using subprocess. The subprocess call works when I'm using ASCII characters, but when I try to issue arguments that are UTF-8 encoded, I get an error: execv() arg 2 must contain only strings.
The string u'Wiadomo\u015b\u0107' is coming from a Postgres DB. This example is using Polish words. When I run it using English words, I have no issues. The call looks like this:
subprocess.Popen(['/usr/lib/p3web2/src/post_n_campaigns.py',
                  '-c', u'bm01', '-1', u'Twoja', '-2', u'Wiadomo\u015b\u0107',
                  '-3', u'', '-4', u'', '-5', u'', '-6', u'',
                  '-m', u'pl', '-p', 'yes'])
I'm not sure how to handle the strings in this case. The odd thing is that this works fine when I run it through the Python interpreter.

A: You should encode the Unicode strings in the encoding your program expects. If you know the program expects UTF-8:
u'Wiadomo\u015b\u0107'.encode('utf8')
If you don't know what encoding you need, you could try your platform's default encoding:
u'Wiadomo\u015b\u0107'.encode()
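A sketch of applying that to the whole argument list under Python 2, where str and unicode are distinct types, so any unicode value coming from the database gets encoded before Popen sees it:

# -*- coding: utf-8 -*-
import subprocess

args = ['/usr/lib/p3web2/src/post_n_campaigns.py',
        '-c', u'bm01', '-2', u'Wiadomo\u015b\u0107', '-m', u'pl']

# execv() only accepts byte strings, so encode every unicode argument first.
encoded = [a.encode('utf-8') if isinstance(a, unicode) else a for a in args]

subprocess.Popen(encoded)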
{ "language": "en", "url": "https://stackoverflow.com/questions/7612727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: how to perform tokenization and stopword removal in C#?
Basically I want to tokenise each word of the paragraph and then perform stopword removal. This will be the preprocessed data for my algorithm.

A: You can remove all punctuation and split the string on whitespace.
string s = "This is, a sentence.";
s = s.Replace(",", "").Replace(".", "");
string[] words = s.Split(' ');

A: If you read from a text file (or any text) you can:
char[] dele = { ' ', ',', '.', '\t', ';', '#', '!' };
List<string> allLinesText = File.ReadAllText(fileName).Split(dele).ToList();
Then you can convert the stop-words to a dictionary and save your document to a list, then:
foreach (KeyValuePair<string, string> word in StopWords)
{
    if (list.Contains(word.Key))
        list.RemoveAll(s => s == word.Key);
}

A: You can store all separation symbols and stopwords in constants or a db:
public static readonly char[] WordsSeparators = { ' ', '\t', '\n', '\r', '\u0085' };
public static readonly string[] StopWords = { "stop", "word", "is", "here" };
Remove all punctuation. Split text and filter:
var words = new List<string>();
var stopWords = new HashSet<string>(TextOperationConstants.StopWords);
foreach (var term in text.Split(TextOperationConstants.WordsSeparators))
{
    if (String.IsNullOrWhiteSpace(term)) continue;
    if (stopWords.Contains(term)) continue;
    words.Add(term);
}
{ "language": "en", "url": "https://stackoverflow.com/questions/7612732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Loading images into TImage, via array I'm very new to delphi, doing a project for my A level. When I run my code the images just don't show, I've looked everywhere and my teacher can't help me. Can anyone tell me what I'm missing? const Animal : array[0..6] of string = ('Bears','Dogs','Cats','Chickens','Horses','Cows','Monkeys'); ImagePaths : array [0..6] of string = ('img0.JPG', 'img1.JPG', 'img2.JPG', 'img3.JPG', 'img4.JPG', 'img5.JPG', 'img6.JPG'); var i:integer; Images : array [0..11] of TImage; procedure LoadImages; var k,l:integer; begin Randomize; k:=Random(11); for l:= 0 to k do begin Images[l] := TImage.Create(nil); Images[l].Picture.LoadFromFile(ImagePaths[i]) end end; procedure TForm4.FormCreate(Sender: TObject); begin randomize; i:=random(6); QuestionLbl.Caption:=Format('How many %s are there?',[Animal[i]]); LoadImages; end; The idea is that a random number of images of the same randomly selected animal is displayed for a child to then count and input, if that helps. Much appreciate any help. edit. as this is only a prototype I have copied it all to a new application and this is all the code I didn't include: unit Unit1; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls,jpeg, ExtCtrls; type TForm1 = class(TForm) QuestionLbl: TLabel; procedure FormCreate(Sender: TObject); private { Private declarations } public { Public declarations } end; The same error is occurring and I'm afraid I'm too ignorant to follow what I'm sure were very clear instructions. A: What appears to be missing is that you need to tell the image which control is its parent so that it can appear on screen. Do that like this: Images[l].Parent := TheForm; Obviously your form variable will have a different name, but I'm sure you know what it's called. When you do this you will find that they all end up on top of each other. Assign to the Top and Left properties to position then. Finally you will likely want to set the Height and Width properties of the images to match the dimensions of the images, Images[l].Picture.Height and Images[l].Picture.Width. I can't imagine why your code produces an access violation but it's presumably unrelated to the question you asked. The following code proves that what I say above is correct: procedure TMyForm.FormCreate(Sender: TObject); var Image: TImage; begin Image := TImage.Create(Self); Image.Parent := Self; Image.Picture.LoadFromFile('C:\desktop\image.jpg'); Image.Top := 0; Image.Left := 0; Image.Height := Image.Picture.Height; Image.Width := Image.Picture.Width; end; Without your full code I cannot debug your AV. A: Why you just don't put your TImages on the form, and just LoadFromFile the ones you want to show ? Appear to me that would be easier. But: what you trying to accomplish? From the code, I can imagine you were trying to show a number of images to people count them and answer the question... So, if you add (and position) the 11 empty(no image) TImages in the form, you can do: // Any trouble in copying your FormCreate header, David? 
;-)
procedure TMyForm.FormCreate(Sender: TObject);
begin
    Images[0] := Image_N1; // First TImage
    Images[1] := Image_N2;
    Images[2] := Image_N3;
    // Do that until the 12 slots are filled
    // As an exercise for Danny Robinson (the OP), you can do that in a for..do using
    // the Form.Components array property to automate it instead of
    // doing one-at-a-line
end;

procedure ClearImages;
var
    I: Integer;
begin
    for I := Low(Images) to High(Images) do
    begin
        Images[I].Picture.Graphic := nil;
    end;
end;

procedure LoadImages;
var
    k, l: integer;
begin
    ClearImages;
    Randomize;
    k := Random(11);
    for l := 0 to k do
    begin
        Images[l].Picture.LoadFromFile(ImagePaths[i])
    end;
end;
If you still need to create the TImages on the fly, just create the 12 TImages once in FormCreate (as in David's answer) and keep calling LoadImages.
EDIT: Some ideas, since you are learning. Creating visual controls on-the-fly is a very boring (in my opinion, of course) task that involves:
* Creating the object, obviously
* Assigning it to a parent control (forms don't need this step)
* Sizing it according to your visual planning
* Positioning it in the place of the parent control you want it to be
* Setting its anchors, for it to reposition and/or resize when the parent control is resized (if needed)
* Only after all this, making it do what you want it to do (in this case, showing images).
David Heffernan's answer shows the code for almost all of those steps. But, unless you really need a dynamic layout, doing all this at design-time is more practical ;-)

A: You need to set the Parent property of each TImage in order to see them onscreen. You can't use the global Form pointer variable, though, because it has not been assigned yet when the OnCreate event is triggered. So pass in the form's Self pointer as a parameter of LoadImages() instead.
You have another bug - you declared a 12-element TImage array but declared a 7-element String array for the image paths. The way you are using Random(), if it generates a value above 6, you will go out of bounds of the String array.
Try this instead:
const
    ...
    ImagePaths : array [0..6] of string = ('img0.JPG', 'img1.JPG', 'img2.JPG', 'img3.JPG', 'img4.JPG', 'img5.JPG', 'img6.JPG');

var
    i: integer;
    Images : array [0..6] of TImage;

procedure LoadImages(AParent: TWinControl);
var
    i, k: integer;
begin
    Randomize;
    k := Random(7);
    for i := 0 to k do
    begin
        Images[i] := TImage.Create(nil);
        Images[i].Parent := AParent;
        // set other properties, like Left/Top...
        Images[i].Picture.LoadFromFile(ImagePaths[i]);
    end;
end;

procedure TForm4.FormCreate(Sender: TObject);
begin
    ...
    LoadImages(Self);
end;

A: The TImage component must be painted on the screen; to do so, make a 1-pixel panel the parent used to load the graphics, like in a loading screen, so the images can be used with the default TImage procedures.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can we have row addition to a grid in a web application using Microsoft technology
The GridView in ASP.NET does not allow directly adding a row to the grid. For editing a row, the user has to click the 'Edit' button, make changes, then click 'Update'. To overcome this, people use third party grids from Telerik and others.
Is there any way around this issue if we use Silverlight based grids? Does the 'Dynamic Data Framework' or other frameworks from Microsoft help us overcome this issue?
Thanks and regards,
Chak.

A: You'd probably be better off using GridView combined with jQuery instead of having a Silverlight island in your page.

A: You can add the row to the datasource and rebind or use a DataGrid control.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Maven : What should be the url value under the distributionManagement
I am forced by Maven to specify a url under the <distributionManagement> tag inside the POM.xml file:
<distributionManagement>
    <repository>
        <id>nexus</id>
        <name>Nexus Staging Repo</name>
        <url>scp://home/maven2/html</url>
    </repository>
</distributionManagement>
I am running mvn deploy to deploy the war file under Tomcat web-apps. I don't have any domain. What should be provided here by default, and what username and password, so that Maven deploys my war into Tomcat?

A: The deploy phase in the maven lifecycle refers to deploying artifacts to a maven repository, not deploying artifacts to an application server. If you want to deploy your webapp I suggest you have a look at the maven cargo plugin.
Edit: Just to be extra clear: Deploying webapps to tomcat is not what "mvn deploy" is supposed to do.
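If the actual goal is to push the war onto a running Tomcat rather than publish it to a Maven repository, a cargo plugin configuration along the following lines can do it with mvn cargo:redeploy. This is only a sketch from memory: the manager URL and the exact cargo.* property names should be checked against the Cargo documentation for your Tomcat version, and the credentials are placeholders:

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.cargo</groupId>
      <artifactId>cargo-maven2-plugin</artifactId>
      <configuration>
        <container>
          <containerId>tomcat6x</containerId>
          <type>remote</type>
        </container>
        <configuration>
          <type>runtime</type>
          <properties>
            <cargo.remote.uri>http://localhost:8080/manager</cargo.remote.uri>
            <cargo.remote.username>admin</cargo.remote.username>
            <cargo.remote.password>secret</cargo.remote.password>
          </properties>
        </configuration>
      </configuration>
    </plugin>
  </plugins>
</build>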
{ "language": "en", "url": "https://stackoverflow.com/questions/7612748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I hide the Next anchor in my custom Nivo nav if it has less than six images? I'm working on a custom navigation thumbnail slider for the Nivo Slider Jquery plugin. I'm trying to hide the Next anchor when the thumbnail slider contains less than or equal to 6 thumbnails. .nivo-control is the anchor the thumbnail images are children of, and they are all children of .items. I've already tried: if ($('.items').children('.nivo-control') <= 6) { $('a.next').css('display', 'none !important'); } else { // Do something } A: Use if ($('.items').children('.nivo-control').length <= 6) { $('a.next').css('display', 'none !important'); } else { // Do something } A: Try this: if ($('.items').children('.nivo-control').length <= 6) { $('a.next').css('display', 'none !important'); } else { // Do something }
{ "language": "en", "url": "https://stackoverflow.com/questions/7612751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PHP arrays - a 'set where key=?' type function? Is there a built in php function that allows me to set a value of an array based on a matching key? Maybe i've been writing too much SQL lately, but i wish I could perform the following logic without writing out a nested foreach like the following: foreach($array1 AS $k1 => $a1) { foreach($array2 AS $a2) { if($a1['id'] == $a2['id']) { $array1[$k1]['new_key'] = $a2['value']; } } } Is there a better way to do this? In SQL logic, it would be "SET array1.new_key = x WHERE array1.id = array2.id". Again, i've been writing too much SQL lately :S A: When I need to do this, I use a function to first map the values of one array by id: function convertArrayToMap(&$list, $attribute='id') { $result = array(); foreach ($list as &$item) { if (is_array($item) && array_key_exists($attribute, $item)) { $result[$item[$attribute]] = &$item; } } return $result; } $map = convertArrayToMap($array1); Then iterate through the other array and assign the values: foreach ($array2 AS $a2) { $id = $a2['id']; $map[$id]['new_key'] = $a2['value']; } This is fewer loops overall, even for one pass, and it's convenient for further operations in the future. A: This one is fine and correct: foreach($array1 AS &$a1) { foreach($array2 AS $a2) { if($a1['id'] == $a2['id']) { $a1['new_key'] = $a2['value']; } } }
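If PHP 5.5+ is available (an assumption, since array_column only exists from 5.5 onwards), the id-to-value map from the first answer can be built in one call; a minimal sketch:

<?php
// Build an id => value lookup from $array2, then apply it to $array1 in a single pass
$map = array_column($array2, 'value', 'id');

foreach ($array1 as &$row) {
    if (isset($map[$row['id']])) {
        $row['new_key'] = $map[$row['id']];
    }
}
unset($row); // break the reference after the loop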
{ "language": "en", "url": "https://stackoverflow.com/questions/7612754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is there any issue if I used sync connections in iPhone? I know that the pattern on iPhone is to use async connection calls (using the informal protocols that are implemented by the current class). In my case, I've created a utility class to do the networking stuff and then return the data to the ViewController. I find it inadequate to implement the connection model as async in a utility class, because then I would write a block of code in the ViewController such as the following (which IMHO is bad): MyUtilityConnection* utilConn = .... while (true) { if ([utilConn checkUnderlyingAsyncConnectionFinishedLoading]) break; } NSData* dataFromUrl = [utilConn dataFromUnderlayingConn]; So, the question is: could using a sync connection model on iPhone cause problems? And what are the solutions? (What about the drawing? Will it still hang until the data comes?) A: AVOID synchronous connections by all means! This will obviously freeze your UI (and it gets worse if you don't have good bandwidth, of course). What you could do is to use the blocks syntax to write more readable code when you need to download data. Create a class that implements the NSURLConnection delegate methods, and then call the block when the data is done. See my OHURLLoader class on github for example that does exactly that (and that's only one solution). Usage example: NSURL* url = ... NSURLRequest* req = [NSURLRequest requestWithURL:url]; OHURLLoader* loader = [OHURLLoader URLLoaderWithRequest:req]; [loader startRequestWithCompletion:^(NSData* receivedData, NSInteger httpStatusCode) { NSLog(@"Download of %@ done (statusCode:%d)",url,httpStatusCode); outputTextView.text = loader.receivedString; } errorHandler:^(NSError *error) { NSLog(@"Error while downloading %@: %@",url,error); outputTextView.text = [error localizedDescription]; }]; A: During sync methods (sendSynchronousRequest:returningResponse:error:) the UI is non-responsive (assuming that the sync method is called on the main thread). But they are fine on background threads; the easiest way to accomplish sync calls on a background thread is with GCD.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: mod_rewrite... renaming urls and getting them to work properly Here is the url that I have that works: example.com/php/statePage.php?region_slug=new-york-colleges Here is what I am trying to get the url to appear as: example.com/new-york-colleges.html I have tried several things so far and I think I am getting close but this is my first PHP / MySQL site and first time using .htaccess. Here is my .htaccess so far: RewriteEngine on RewriteRule ^/([^/.]+).html?$ php/statePage.php?region_slug=$1 [L] Any help would be greatly appreciated. If you could explain what I was missing or what I did wrong, that would help me more... I am trying to learn how to do this and simply giving me the cut and paste method will only help me today. THANK YOU, this means a lot to me! A: RewriteEngine on RewriteRule ^([^/\.]+)\.html/?$ php/statePage.php?region_slug=$1 [L] Two things to point out: When . is used in rewrite syntax, it represents any character. So your current rule says replace anything that's NOT any character, followed by '.html'. To work around this, escape your dot like so \.. There are several special characters in rewrite syntax such as this. It would benefit you to pay attention to them. Second, I'm not sure if this is rewrite canon or not, but I've never personally used forward slashes at the start of a rewrite rule before, and that may or may not impact the rule itself. Any user with a clarification on this is encouraged to reply. EDIT: The following characters are marked as special characters in rewrite syntax (and all regular expressions, for that matter), and as such, need to be escaped with a backslash \ in order to be used literally - [\^$.|?*+() Also, you may find this reference useful in further understanding the correct syntax for regular expressions.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: post form values to a phpgd script without refreshing the page and without button I am NOT good with server-side technologies (have a hard time wrapping my mind around them at points). I am pretty decent with PHP. I have a form that offers color options (now in drop-down format, but in future will be an image click). There are multiple choices in the form, for instance you can choose a frame color in one select menu, then choose a top color in another select menu in this illustration. Depending on which page you are on, there can be up to 12 of these choices, all named a,b,c,d...through l. I have an image that is being created by phpgd library. Here is the current setup for the php gd: $a = $_POST['a'];//1 $b = $_POST['b'];//2 $c = $_POST['c'];//3 $d = $_POST['d'];//4 $e = $_POST['e'];//5 $f = $_POST['f'];//6 $g = $_POST['g'];//7 $h = $_POST['h'];//8 $i = $_POST['i'];//9 $j = $_POST['j'];//10 $k = $_POST['k'];//1 $l = $_POST['l'];//12 $default = imagecreatefrompng('../configurator-testing/11ta-503/default.png'); $defaulta = imagecreatefrompng('../configurator-testing/11ta-503/black_a.png'); $defaultb = imagecreatefrompng('../configurator-testing/11ta-503/black_b.png'); header('Content-Type: image/png'); $x = imagesx($default); $y = imagesy($default); imagecopy($default, $defaulta,0, 0, 0, 0, $x, $y); imagecopy($default, $defaultb, 0, 0, 0, 0, $x, $y); imagepng($default); imagedestroy($default); imagedestroy($defaulta); imagedestroy($defaultb); Right now, it only posts a "default" image, which has a black frame, black top. What I want it to do is take the form input, and without refreshing the page or using a submit button, use the submitted values to change what image is created. File names are formatted according to the submitted value (ex. submitted black will correlate to files black_a.php and black_b.php, etc). Here is the form I am testing with: <img src="config-gd.php"/> <form id="configform" name="configform"> <label>Frame Color</label> <select name="a" id="a"> <option value="black">black</option> <option value="red">red</option> <option value="yellow">yellow</option> <option value="blue">blue</option> </select> <label>Top Color</label> <select name="b"> <option value="black">black</option> <option value="red">red</option> <option value="yellow">yellow</option> <option value="blue">blue</option> </select> Notice that it is pulling my image (phpgd) file in the first line above the form, so I want the choices to be processed through my phpgd script, and throw out the new color choices in the image above the form. I can figure out how to get it to process in the phpgd script, no problem. I am having problems with the posting with no refresh and without a button part. Anyone know some jquery/ajax and willing to help? I've been trying to put something together off of tutorials on the web, i'm having an awful time. I had found some simple functions to model after, and here is what I came up with, but I can't figure out how best to implement, and so far my efforts have not turned out: function postImg(layer1, layer2){ $.ajax({ url: "config-gd.php", data: { id: layer1, rate: layer2 }, type: "POST", success: function(){ alert('Done!'); } }); } A: If I am right you want an image created by php to load on an event without page reload. I think it is simple. 
==================edited part code is replaced================================== Okay, I see you need to send form data to php; that is trivial with Ajax, and it's also easy to create the image. However, what seems difficult to me is how to turn the raw data sent by php into a real image using js. So I don't include the ajax here; however, you can still send the form data dynamically, without a reload of course, and get the desired image by using the src attribute of the image. Check this code, maybe it can be of some help if customized to your needs. image.php <?php header('Content-Type: image/png'); // tell the browser this is a PNG if($_GET['color']=='red') {$red=223; $green=32; $blue=3;} else {$red=0; $green=0; $blue=0;} $im = @imagecreate(110, 20) or die("Cannot Initialize new GD image stream"); $background_color = imagecolorallocate($im, $red, $green, $blue); $text_color = imagecolorallocate($im, 233, 233, 231); imagestring($im, 11, 5, 5, $_GET['txt'], $text_color); imagepng($im); imagedestroy($im); ?> index.htm <html> <head> <script type="text/javascript"> function updateImage(color) { var txt = document.getElementById("myTxtId").value; document.getElementById("imgElement").src='image.php?txt='+txt+'&color='+color; } </script> </head> <body > Write something here and I will get it back as an image created by php.<br> <form > <input type="text" name="txt" id="myTxtId"> <input type=submit value="Get it in black" onclick="updateImage('black');return false;" > <input type=submit value="Get it in red" onclick="updateImage('red');return false;" > </form> <img src="" border=24 color=gold height="111" width="222" id="imgElement"> </body> </html>
{ "language": "en", "url": "https://stackoverflow.com/questions/7612758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MongoDB connectivity with PHP dropping after idle time I'm building a webapp with MongoDB/PHP, and everything's going great... except one thing. My database connection is flaky. After X amount of time, when I refresh the page I get errors because queries are failing. I check mongod.exe and what I see is "Connection accepted from 127.0.0.1" - then I go back, refresh again, and everything's running all well and good. What could be causing this? Database connectivity issues are something I never had to deal with in MySQL - but that's a whole different beast. A: I would highly recommend you do your development with mongodb in a unix environment as they update the code the most often and you won't have to worry about strange bugs. Long ago i decided that doing dev in windows was much too inconvenient and moved my work environment to linux. If this sounds daunting, you might look into running a virtual machine with a local mount via samba such that you can run directly on a linux server on your local machine. Then you will have an environment similar to your production env. Hope this is helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Column_match([[1],[1,1]]) <--- how to make dimensions match with NA values? Any flag for this? Please, see the intended. >>> numpy.column_stack([[1], [1,2]]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/pymodules/python2.7/numpy/lib/shape_base.py", line 296, in column_stack return _nx.concatenate(arrays,1) ValueError: array dimensions must agree except for d_0 Input [[1],[1,2]] Intended Output [[NA,1], [1,2]] In general [[1],[2,2],[3,3,3],...,[n,n,n,n,n...,n]] to [[NA, NA, NA,..., NA,1], [NA, NA, ..., 2, 2], ...[n,n,n,n,n]] where the columns may be a triangluar zero matrix initially. Yes you can understand the term NA as None. I got the triangular matrix almost below. >>> a=[[1],[2,2],[3,3,3]] >>> a [[1], [2, 2], [3, 3, 3]] >>> len(a) 3 >>> [aa+['']*(N-len(aa)) for ... KeyboardInterrupt >>> N=len(a) >>> [aa+['']*(N-len(aa)) for aa in a] [[1, '', ''], [2, 2, ''], [3, 3, 3]] >>> transpose([aa+['']*(N-len(aa)) for aa in a]) array([['1', '2', '3'], ['', '2', '3'], ['', '', '3']], dtype='|S4') A: a pure numpy solution: >>> lili = [[1],[2,2],[3,3,3],[4,4,4,4]] >>> y = np.nan*np.ones((4,4)) >>> y[np.tril_indices(4)] = np.concatenate(lili) >>> y array([[ 1., nan, nan, nan], [ 2., 2., nan, nan], [ 3., 3., 3., nan], [ 4., 4., 4., 4.]]) >>> y[:,::-1] array([[ nan, nan, nan, 1.], [ nan, nan, 2., 2.], [ nan, 3., 3., 3.], [ 4., 4., 4., 4.]]) I'm not sure which triangular array you want, there is also np.triu_indices (maybe not always faster, but easy to read) A: column_stack adds a column to an array. That column is supposed to be a smalled (1D) array. When I try : from numpy import * x = array([0]) z = array([1, 2]) if you do this : r = column_stack ((x,z)) You'll get this : >>> array([0,1,2]) So, in order to add a column to your first array, maybe this : n = array([9]) arr = ([column_stack((n, x))], z) It shows up this : >>> arr ([array([[9, 0]])], array([[1, 2]])) It has the same look as your "intended output" Hope this was helpful !
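A plain-Python sketch of the same padding idea without tril_indices, assuming NaN is an acceptable stand-in for NA and left-padding every row to the longest length:

import numpy as np

def pad_left_with_nan(rows):
    # Pad each row on the left with NaN so every row reaches the longest length
    n = max(len(r) for r in rows)
    return np.array([[np.nan] * (n - len(r)) + list(r) for r in rows])

print(pad_left_with_nan([[1], [2, 2], [3, 3, 3]]))
# [[nan nan  1.]
#  [nan  2.  2.]
#  [ 3.  3.  3.]]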
{ "language": "en", "url": "https://stackoverflow.com/questions/7612767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: c# outline numbering I am trying to develop an algorithm in C# that can take an array list of URL's and output them in an outline numbered list. As you can imagine I need some help. Does anyone have any suggestions on the logic to use to generate this list? Example Output: 1 - http://www.example.com/aboutus 1.2 - http://www.example.com/aboutus/page1 1.3 - http://www.example.com/aboutus/page2 1.3.1 - http://www.example.com/aboutus/page2/page3 1.3.1.1 - http://www.example.com/aboutus/page2/page3/page4 1.3.2 - http://www.example.com/aboutus/page5/page6 1.3.2.1 - http://www.example.com/aboutus/page5/page7/page9 1.3.2.2 - http://www.example.com/aboutus/page5/page8/page10 1.4 - http://www.example.com/aboutus/page10 1.4.1 - http://www.example.com/aboutus/page10/page11 1.4.2 - http://www.example.com/aboutus/page10/page12 1.1.5 - http://www.example.com/aboutus/page13 1.1.6 - http://www.example.com/aboutus/page14 1.1.6.1 - http://www.example.com/aboutus/page14/page15 1.1.6.2 - http://www.example.com/aboutus/page14/page16 1.1.6.3 - http://www.example.com/aboutus/page14/page17 ... and so on A: Take a look at the System.URI class. It should have some methods and properites that should be usefull, like the segments property that splilts the uri into its segmented parts (split by slash bascially). You could create a list of the segment arrays, sort the list, then simply iterate the list adjusting the numbers depending on wether the current list index segments match the previous list index segments. A: You'll probably have to strip protocol and query string parameters, so +1 to the advise to use System.URI class to handle that. As for the printing it in tree-shape - a direct approach is to use a Dictionary<string, string> to keep association of child (key) to a parent (value). Another way is to take advantage of List<T>.Sort, e.g. like this: public static void Print(List<string> list) { var path = new Stack<string>(); var count = new Stack<int>(); path.Push(""); count.Push(0); list.Sort(new Comparison<string>(UrlComparison)); foreach (var x in list) { while (!x.StartsWith(path.Peek())) { path.Pop(); count.Pop(); } count.Push(count.Pop() + 1); foreach(var n in count.Reverse()) Console.Write("{0}.", n); Console.WriteLine(" {0}", x); path.Push(x); count.Push(0); } } Unfortunately, p.campbell is right, a custom comparison is actually required here, which makes this implementation still pretty performant, but more bulky (?:-abuse warning): public static int UrlComparison(string x, string y) { if (x == null && y == null) return 0; if (x == null) return -1; if (y == null) return 1; for(int n = 0; n < Math.Min(x.Length, y.Length); n++) { char cx = x[n], cy = y[n]; if(cx == cy) continue; return (cx == '/' || cx == '.' || cx == '?') ? -1 : (cy == '/' || cy == '.' || cy == '?') ? 1 : (cx > cy) ? 1 : -1; } return (x.Length == y.Length) ? 0 : (x.Length > y.Length) ? 1 : -1; } PS: Just to put a disclaimer, I feel that Stacks logic is consize, but a bit more complex to understand. In a long-term project, I'd stick with a child-parent dictionary. A: May be this Generic Tree Collection will be helpful. There are few Tree Collections on the Code Project: one, two. A: I think you need to implement some sort of tree collection to handle the order. Because if you added a new link called http://www.example.com, that would become 1 instead of http://www.example.com/aboutus. Then you can print the in-order traversal of the tree and it will be exceedingly simple.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Get values with Perl So I have a reporting tool that spits out job scheduling statistics in an HTML file, and I'm looking to consume this data using Perl. I don't know how to step through a HTML table though. I know how to do this with jQuery using $.find('<tr>').each(function(){ variable = $(this).find('<td>').text }); But I don't know how to do this same logic with Perl. What should I do? Below is a sample of the HTML output. Each table row includes the three same stats: object name, status, and return code. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN"> <HTML> <HEAD> <meta name="GENERATOR" content="UC4 Reporting Tool V8.00A"> <Title></Title> <style type="text/css"> th,td { font-family: arial; font-size: 0.8em; } th { background: rgb(77,148,255); color: white; } td { border: 1px solid rgb(208,213,217); } table { border: 1px solid grey; background: white; } body { background: rgb(208,213,217); } </style> </HEAD> <BODY> <table> <tr> <th>Object name</th> <th>Status</th> <th>Return code</th> </tr> <tr> <td>JOBS.UNIX.S_SITEVIEW.WF_M_SITEVIEW_CHK_FACILITIES_REGISTRY</td> <td>ENDED_OK - ended normally</td> <td>0</td> </tr> <tr> <td>JOBS.UNIX.ADMIN.INFA_CHK_REP_SERVICE</td> <td>ENDED_OK - ended normally</td> <td>0</td> </tr> <tr> <td>JOBS.UNIX.S_SITEVIEW.WF_M_SITEVIEW_CHK_FACILITIES_REGISTRY</td> <td>ENDED_OK - ended normally</td> <td>0</td> </tr> A: You could use a RegExp but Perl already has modules built for this specific task. Check out HTML::TableContentParser You would probably do something like this: use HTML::TableContentParser; $tcp = HTML::TableContentParser->new; $tables = $tcp->parse($HTML); foreach $table (@$tables) { foreach $row (@{ $tables->{rows} }) { foreach $col (@{ $row->{cols} }) { # each <td> $data = $col->{data}; } } } A: Here I use the HTML::Parser, is a little verbose, but guaranteed to work. I am using the diamond operator so, you can use it as a filter. If you call this Perl source extractTd, here are a couple of ways to call it. $ extractTd test.html or $ extractTd < test.html will both work, output will go on standard output and you can redirect it to a file. #!/usr/bin/perl -w use strict; package ExtractTd; use 5.010; use base "HTML::Parser"; my $td_flag = 0; sub start { my ($self, $tag, $attr, $attrseq, $origtext) = @_; if ($tag =~ /^td$/i) { $td_flag = 1; } } sub end { my ($self, $tag, $origtext) = @_; if ($tag =~ /^td$/i) { $td_flag = 0; } } sub text { my ($self, $text) = @_; if ($td_flag) { say $text; } } my $extractTd = new ExtractTd; while (<>) { $extractTd->parse($_); } $extractTd->eof; A: Have you tried looking at cpan for HTML libraries? This seems to do what your wanting http://search.cpan.org/~msisk/HTML-TableExtract-2.11/lib/HTML/TableExtract.pm Also here is a whole page of different HTML related libraries to use http://search.cpan.org/search?m=all&q=html+&s=1&n=100 A: Perl CPAN module HTML::TreeBuilder. I use it extensively to parse a lot of HTML documents. The concept is that you get an HTML::Element (the root node by example). From it, you can look for other nodes: * *Get a list of children nodes with ->content_list() *Get the parent node with ->parent() Disclaimer: The following code has not been tested, but it's the idea. 
my $root = HTML::TreeBuilder->new; $root->utf8_mode(1); $root->parse($content); $root->eof(); # This gets you an HTML::Element for the root of the document $root->elementify(); my @td = $root->look_down("_tag", "td"); foreach my $td_elem (@td) { printf "-> %s\n", $td_elem->as_trimmed_text(); } If your table is more complex than that, you could first find the TABLE element, then iterate over its TR children, and for each TR child, iterate over the TD elements... http://metacpan.org/pod/HTML::TreeBuilder A: The HTML::Query module is a wrapper around the HTML parser that provides a querying interface that is familiar to jQuery users. So you could write something like use HTML::Query qw(Query); my $docName = "test.html"; my $doc = Query(file => $docName); for my $tr ($doc->query("tr")) { for my $td (Query($tr)->query("td")) { # $td is now an HTML::Element object for the td element print $td->as_text, "\n"; } } Read the HTML::Query documentation to get a better idea of how to use it - the above is hardly the prettiest example.
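For completeness, a minimal sketch of the HTML::TableExtract approach linked in the first answer, keyed on the column headers from the sample HTML; the file name report.html is just a placeholder:

use strict;
use warnings;
use HTML::TableExtract;

my $te = HTML::TableExtract->new(
    headers => [ 'Object name', 'Status', 'Return code' ],
);
$te->parse_file('report.html');   # or $te->parse($html_string)

for my $ts ( $te->tables ) {
    for my $row ( $ts->rows ) {
        my ( $object, $status, $rc ) = @$row;
        print "$object\t$status\t$rc\n";
    }
}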
{ "language": "en", "url": "https://stackoverflow.com/questions/7612778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Access: Multi-value field I am trying to design a form where the user can search records to filter a report. The user must be able to select many values from a particular field (multivalued field). I understand I can use a list box, but the field has a total of 3,000 records and cycling through is too much. I just want to know what other ways I can let the user insert multiple values? I have these ideas, but maybe you guys have another better way: Creating multiple combo boxes and keep them hidden until the user hits an “add” button, but this limits me to the amount of values I can have. If I have 10 hidden combo boxes I can only enter a total of 11 (10 hidden plus the original visible) values. Is it possible to have a temporary data grid where the user just enters the values. Then comes the problem of getting this into the SQL Record Source. I am thinking of the SQL IN clause. Any help or ideas, will be greatly appreciated. A: I think that you should create Comboboxes where values from next combo are dynamically populated when value in previous Combo has been changed so that way you can create hierarchy of values to select. A: I've done something similar for a few different applications in slightly different ways. Basically, I present the user with a table, allow them to right-click > filter (the same could be accomplished by providing a filter textbox for each corresponding field in the table you want to allow filtering on... in your case it sounds like you only need one). The filter box allows them to use 'and' and 'or' along with the actual text of what they're looking for. Then they click a button that opens the report and fills the report's filter field with whatever filter they had applied. Of course, this assumes the user is familiar with the data they're filtering, and requires a bit of training, but for me it was a simpler alternative than displaying a list with a bajillion entries in it. Your mileage of course may vary :)
{ "language": "en", "url": "https://stackoverflow.com/questions/7612779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Best way to read oracle number from stored procedure into a variable in c#? I have a number of stored procedures, and I've been trying to find the best way to get an int in C# out of a stored procedure out parameter. So does anyone have advice on the best way to do this? Also, in some of the procedures the returned value can be null, so using nullable ints for them would be preferred. I have been doing it like this, but is there a better way? (Also, this is how I'm dealing with nulls currently, since 0 isn't a valid result for those procedures) int sequence = 0; int.TryParse(comm.Parameters["osequence"].Value.ToString(), out sequence); Basically I am wondering if there is a way to cast without having to parse. I had been trying but eventually gave up and settled with this, since it seems to work. A: I am pretty sure you are doing it the best way. I don't know if using something like Dapper or Massive would gain you the "auto" mapping to the data type correctly or not, but you may want to give it a try.
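A minimal sketch of reading the output parameter without string parsing, assuming the ODP.NET provider (Oracle.DataAccess); the procedure name MY_PKG.MY_PROC is a placeholder, and with System.Data.OracleClient the provider types differ:

using System;
using System.Data;
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

static int? GetSequence(string connectionString)
{
    using (var conn = new OracleConnection(connectionString))
    using (var comm = new OracleCommand("MY_PKG.MY_PROC", conn))   // placeholder procedure name
    {
        comm.CommandType = CommandType.StoredProcedure;
        var p = new OracleParameter("osequence", OracleDbType.Decimal);
        p.Direction = ParameterDirection.Output;
        comm.Parameters.Add(p);

        conn.Open();
        comm.ExecuteNonQuery();

        object raw = p.Value;
        if (raw == null || raw == DBNull.Value)
            return null;

        var dec = (OracleDecimal)raw;          // ODP.NET returns its provider type here
        return dec.IsNull ? (int?)null : Convert.ToInt32(dec.Value);
    }
}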
{ "language": "en", "url": "https://stackoverflow.com/questions/7612786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Installing Passenger when Nginx is already installed; Possible? Rather a simple question I believe, is it possible to install passenger when nginx is already installed on your webserver? If the answer is Yes, I already performed these actions: At this very moment I already have nginx installed (for my PHP applications) and next I did a checkout of the passenger's git repository: mkdir /repositories cd /repositories/ git clone https://github.com/FooBarWidget/passenger.git cd passenger/ and then add this snippet to /etc/nginx/conf/nginx.conf http { ... passenger_root /repositories/passenger; passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.2-p290/ruby; ... } However when I want to restart nginx I get the following error: * Starting Web Server nginx nginx: [emerg] unknown directive "passenger_root" in /etc/nginx/nginx.conf:19 Which concludes me to say that there is still some config I need to set, for nginx to be aware that we're using passenger. My server block server { listen 80; server_name rails.kreatude.com; root /srv/www/my_test_app; passenger_enabled on; } A: In Passenger docs the chapter "Generic installation, upgrade and downgrade method: via RubyGems" discusses this. Basically, once the Passenger gem is installed, nginx needs to be recompiled (and then used instead of the yum/apt-get-installed nginx if one exists). Passenger's compilation/configuration utility "passenger-install-nginx-module" does it for you (it's part of the Passenger gem), and it automatically includes the necessary switches for Passenger. It also gives you the option to add your own switches (such as for extra modules, or to enable/disable NGiNX's built-in features). A: I think your problem is that the passenger module is not present in nginx. All the passenger dependent directives you've described (passenger_root, passenger_ruby, passenger_enabled) are available only when the passenger module is attached to nginx. This is why you have to compile nginx with --add-module='/path/to/passenger-3.0.9/ext/nginx'. Unfortunately, I don't know of any method to enable passenger module without re-installing nginx. But, according to http://wiki.nginx.org/Modules, "Nginx modules must be selected at compile-time.", so there could be a chance that there isn't a way to do that. A: With rvm, you could do this simply by running rvmsudo passenger-install-nginx-module. For more detail: https://www.digitalocean.com/community/tutorials/how-to-install-rails-and-nginx-with-passenger-on-ubuntu. A: I confirm ion-br's answer, I'm facing the same kind of problems and PhusionPassenger's site states: Before you begin, you should know that installing Passenger in its Nginx integration mode involves extending Nginx with code from Passenger. However, Nginx does not support loadable modules. This means that in order to install Passenger's Nginx integration mode, it is necessary to recompile Nginx from source. The only solution is thus to properly reinstall Nginx, if your machine is an AWS AMI instance the solution lies here. A: There is a way install nginx passenger module without reinstalling/recompiling nginx https://www.phusionpassenger.com/library/walkthroughs/deploy/ruby/ownserver/nginx/oss/bionic/install_passenger.html A: passenger_enabled on; in server, http, or location block. http://modrails.com/documentation/Users%20guide%20Nginx.html#_important_deployment_options
{ "language": "en", "url": "https://stackoverflow.com/questions/7612787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to serialize to json on C# application project (no Asp.net)? I wrote an application project in c#. Is there a way to serialize a collection to Json string format? If I use C#4? C#3? Another q: within Visual Studio 2010 Ultimate I remember I could search and download dlls from the web: "Web Downloader" or something like that. I cannot find it again. Does anyone know it? TIA A: JavaScriptSerializer is one way to go: MyType[] collection = ... string json = new JavaScriptSerializer().Serialize(collection); A: Take a look at Json.net for the Json serialising/deserialising. For downloading the .dlls you probably saw NuGet. Once it's installed you can right click on the references folder in the solution explorer and select Manage Packages. A: One way is to use the DataContractJsonSerializer. http://msdn.microsoft.com/en-us/library/bb410770.aspx A: Install NuGet http://nuget.org/ using the Visual Studio extension manager, look for json.net in NuGet (right click on your project and select Manage NuGet Packages), add it to your project, and you can serialize using this library without adding a dependency on System.Web. A: Create a new JsonResult or take a look at Json Serialization
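The DataContractJsonSerializer mentioned above lives in System.Runtime.Serialization, so it works in a plain application project without referencing System.Web; a minimal sketch with a placeholder List<int>:

using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

var collection = new List<int> { 1, 2, 3 };            // placeholder collection
var serializer = new DataContractJsonSerializer(typeof(List<int>));
string json;
using (var stream = new MemoryStream())
{
    serializer.WriteObject(stream, collection);         // write JSON bytes into the stream
    json = Encoding.UTF8.GetString(stream.ToArray());   // json is now "[1,2,3]"
}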
{ "language": "en", "url": "https://stackoverflow.com/questions/7612791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I calculate deciles with a range of 12,000 cells in excel? I have a column of 12,000+ numbers, both positive and negative, sorted from highest to lowest in an Excel spreadsheet. Is there an easy way to go about dividing this range into deciles? A: Assuming your data is in column A, in a neighboring column in row 1 put this formula and then fill down: =IF(A1<PERCENTILE(A:A,0.1),1 ,IF(A1<PERCENTILE(A:A,0.2),2 ,IF(A1<PERCENTILE(A:A,0.3),3 ,IF(A1<PERCENTILE(A:A,0.4),4 ,IF(A1<PERCENTILE(A:A,0.5),5 ,IF(A1<PERCENTILE(A:A,0.6),6 ,IF(A1<PERCENTILE(A:A,0.7),7 ,IF(A1<PERCENTILE(A:A,0.8),8 ,IF(A1<PERCENTILE(A:A,0.9),9,10 ))))))))) This will display a 1 for the first decile, 2 for the second, 3 for the third, etc. A: This may not be the most efficient solution, but you might try the following: * *Assuming your numbers are in cells A1 through A12000, enter the following formula in cell B1 =PERCENTRANK($A$1:$A$12000,A1,1). This calculates the percent rank, with the set of values in cells $A$1:$A$12000, of the value in cell A1, rounded down to 1 decimal place (which is all you need to identify the decile). *Copy the formula in cell B1 to cells B2 through B12000. *Use the values in column B to identify the decile for the corresponding value in column A. 0 identifies values greater than or equal to the 0th percentile and less than the 10th percentile, 0.1 identifies values greater than or equal to the 10th percentile and less than the 20th percentile, and so on. Depending on the size of your set and whether or not there are duplicates, there may or may not be a value that gets assigned a PERCENTRANK of exactly 1. If you are using Excel 2010, you might, depending on your needs, consider using the new functions PERCENTRANK.INC and PERCENTRANK.EXC that are supposed to supercede PERCENTRANK. Hope this helps. A: I had same query, found answer on this forum: https://www.mrexcel.com/forum/excel-questions/581682-create-decile-segments.html Try: =INT((ROWS($A$1:A1) - 1) * 10 / ROWS($A$1:$A$3890))+1
{ "language": "en", "url": "https://stackoverflow.com/questions/7612792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Print/Preview Ignoring my Print.css I have an issue thats causing me some headaches. I'm trying to print a report and format it correctly with a print.css but it completely ignores my css everytime. Has anyone had this issue before? I made sure the CSS file is in the correct directory, etc but still no luck. Here is my template: Note: I use javascript to control the print button and inside the javascript is where I have included the CSS link. I have also tried putting it just on the HTML page but that didn't help. ... <script type="text/javascript"> function printContent(id){ str=document.getElementById(id).innerHTML newwin=window.open('','printwin','left=100,top=100,'+ 'width=900,height=400, scrollbars=1') newwin.document.write('<HTML>\n<HEAD>\n') newwin.document.write('<TITLE>Print Page</TITLE>\n') newwin.document.write('<link rel="stylesheet" type="text/css" '+ 'href="/media/css/print.css" media="print"/>\n') newwin.document.write('<script>\n') ... Now for this project I am using Ubuntu 10.10, and Firefox 7. If that helps at all. Edit I installed the web developer toolbar for firefox. It allows you to view the page as different medias. Now when I click print, it shows all my style changes, but when I print, it doesn't follow them. A: <html> <head> <title>your website title</title> <link rel="stylesheet" media="screen" href="/media/css/mainStyle.css" type="text/css"> <link rel="stylesheet" media="print" href="/media/css/print.css" type="text/css"> </head> <body> <input type="button" value="Print" onClick="javascript:window.print();" /> </body> </html> Maybe you might wanna give above HTML template a go, and see if that works for you or suits your needs. In my opinion, your proposed function is actually better to be written on the server side rather than the client side with javascript, as you are trying to dynamically generate html page in there. You can output that page as print.html or something, and once it gets sent to the client, you then apply the print.css style and do the printing. Anyway, just a few ideas here, hopefully it helps a bit in your case. Cheers. A: Not sure if this helps, but the @media print{} is supposed to encapsulate all styles during a print job. <style type="text/css"> @media print{ /* Make the HR tag have 50 px spacing above and below */ hr{ padding: 50px 0px; } } </style> This is SUPPOSED to handle this type of styling. The script could still be responsible for including the css file, but the @media print{} would tell all styles embedded in it to apply only to print jobs.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Suggestions for Optimizing code public Object getValue() { ValueItem valueItem = null; Object returnValue = null; if(this.value instanceof StringValueImpl) { valueItem = (StringValueImpl) this.value; } else if(this.value instanceof ListValueImpl) { valueItem = (ListValueImpl) this.value; } else if(this.value instanceof MapValueImpl) { valueItem = (MapValueImpl) this.value; } if(valueItem!=null) returnValue = valueItem.getValue(); return returnValue; } ValueItem is an interface which is implemented by ListValueImpl, MapValueImpl etc .. I want return value which is an object. The code works fine but i was wondering if this can be improved in any way ? A: What is the type of this.value? If it is ValueItem then you don't need to do any of this and can replace the method with this: public Object getValue() { Object returnValue = null; if(this.value!=null) returnValue = this.value.getValue(); return returnValue; } Or even shorter: public Object getValue() { return this.value!=null ? this.value.getValue() : null; } If this.value is not of type ValueItem but it has to contain a ValueItem, then you have a design problem at your hand. A: My inclination is that your getValue() isn't doing anything for you at all. You're detecting what class it is, casting it to that class, then shoving it into an Object again. ...so you'll have to do the same kind of detection on the caller's side of getValue() anyways! Personally, I'd do it like this: public boolean isaStringValueImpl() { return (this.value instanceof StringValueImpl); } public boolean isaListValueImpl() { return (this.value instanceof ListValueImpl); } public boolean isaMapValueImpl() { return (this.value instanceof MapValueImpl); } public StringValueImpl getAsaStringValueImpl() { return (StringValueImpl)this.value; } public ListValueImpl getAsaListValueImpl() { return (ListValueImpl)this.value; } public MapValueImpl getAsaMapValueImpl() { return (MapValueImpl)this.value; } In addition to the regular getter: public ValueItem getValueItem() { return this.value; } But even with all this, I'd say that you might have a larger design issue that could be cleaned up. A: How about a generics based type safe version. public abstract class ValueItem<T> { public abstract T getValue(); public class StringValueImpl extends ValueItem<String> { private String value; public String getValue() { return value; } } public class ListValueImpl extends ValueItem<List<?>> { private List<?> value; public List<?> getValue() { return value; } } public class MapValueImpl extends ValueItem<Map<?, ?>> { private Map<?, ?> value; public Map<?, ?> getValue() { return value; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7612795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Response based xml -- PHP I submit a value to the Server.. something like this abc.php <?php $input = "some value" //Some functions The above generates some unique response say 111 Now i need to upload a xml based on this response something like this $xml='<?xml version= "1.0 .....> <id>111</id>(this id should be dynamic and based on the response)' ?> this xml is in above php file(abc.php) How can i do this? A: If you have an script abc.php <?php // Gets a value from a URL GET parameter. $value = $_GET[ 'value' ]; echo $value; Then you could have a second script upload.php like this <?php // Retrieves the response from /abc.php $urlResponse = file_get_contents( 'http://localhost/abc.php?value=foobar' ); header( 'Content-type: text/xml; charset=utf-8' ); echo '<?xml version="1.0" encoding="utf-8" ?>', PHP_EOL; echo '<doc>', PHP_EOL; echo '<id>', $urlResponse, '</id>', PHP_EOL; echo '</doc>', PHP_EOL; The script upload.php retrieves the response (using the file_get_contents function) of the abc script and generates an XML output. More info: http://www.php.net/file_get_contents A: Based in your comment, this might help you Alternative A: Write a XML file from the abc.php script <?php // Gets a value from a URL GET parameter. $value = $_GET[ 'value' ]; $xml = <<<XML <?xml version="1.0" encoding="utf-8" ?> <doc> <id>$value</id> </doc> XML; // The location of the XML that is about to be generated. $xmlPath = 'output.xml'; if ( file_put_contents( $xmlPath, $xml ) === false ) { trigger_error( 'There was an error trying to write the XML file (check permissions)', E_USER_NOTICE ); } Alternative B: Output an XML response from script upload.php using abc.php abc.php <?php function doSomethingHere() { return rand(); } upload.php <?php // Include your PHP library. require 'abc.php'; // Set the correct Content type to instruct browsers to treat the document as XML file. header( 'Content-type: text/xml; charset=utf-8' ); ?> <?xml version="1.0" encoding="utf-8" ?> <doc> <id><?php echo doSomethingHere(); ?></id> </doc>
{ "language": "en", "url": "https://stackoverflow.com/questions/7612798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Rails 3.0.9 : ActiveRecord Uniqueness Constraint failing on every updated, doesn't matter if the unique column isn't touched I have a Profile model class Profile < ActiveRecord::Base attr_accessible :user_id, :race_id, :nickname, :first_name, :last_name, :gender, :birth_date, :eighteen, :time_zone, :metric_scale, :referral_code, :referrer_id, :tag_line # Relationships belongs_to :user belongs_to :race belongs_to :referred_by, :class_name => "Profile", :foreign_key => "referral_code" has_many :referrals, :class_name => "Profile", :foreign_key => "referrer_id" # Validations validates :user_id, :race_id, :nickname, :first_name, :last_name, :time_zone, :gender, :presence => true validates :referral_code, :nickname, :uniqueness => { :case_sensitive => false } # Instance Methods def full_name first_name + " " + last_name end # Class Methods def self.search(search) search_condition = "%" + search + "%" find(:all, :conditions => ['nickname LIKE ?', search_condition]) end def self.find_by_referral_code(referrer_code) find(:one, :conditions => ['referral_code LIKE ?', referrer_code]) end end No matter which column I am updated the Uniqueness Constraint on 'referral_code' false and I cannot update the model and I can't figure out why. From what I read online as of Rails 3 ActiveRecord was supposed to be tracking dirty objects and only generating update queries containing the altered columns leaving all others alone. Because it should only be performing update queries on columns other than the Unique ones the validation should not be failing. Unfortunately it is. Here is Rails Console session displaying this: Loading development environment (Rails 3.0.9) ruby-1.9.2-p180 :001 > profile = Profile.find(3) => #<Profile id: 3, user_id: 3, race_id: 2, nickname: "Premium-User", first_name: "Premium", last_name: "User", gender: "M", birth_date: "1977-01-01", eighteen: true, complete: true, time_zone: "Kuala Lumpur", metric_scale: false, referral_code: "bo", referrer_id: nil, tag_line: "This is what its like.", created_at: "2011-09-21 04:08:00", updated_at: "2011-09-21 04:08:00"> ruby-1.9.2-p180 :002 > update = {"tag_line"=>"Changed to this"} => {"tag_line"=>"Changed to this"} ruby-1.9.2-p180 :003 > profile.update_attributes(update) => false ruby-1.9.2-p180 :004 > profile.errors => {:referral_code=>["has already been taken"]} Even performing an update directly on a single column which is not unique causes the uniqueness constraint to fail and the record will not be updated, here is a console session: Loading development environment (Rails 3.0.9) ruby-1.9.2-p180 :001 > profile = Profile.find(3) => #<Profile id: 3, user_id: 3, race_id: 2, nickname: "Premium-User", first_name: "Premium", last_name: "User", gender: "M", birth_date: "1977-01-01", eighteen: true, complete: true, time_zone: "Kuala Lumpur", metric_scale: false, referral_code: "bo", referrer_id: nil, tag_line: "This is what its like.", created_at: "2011-09-21 04:08:00", updated_at: "2011-09-21 04:08:00"> ruby-1.9.2-p180 :002 > profile.tag_line = "changed to this" => "changed to this" ruby-1.9.2-p180 :003 > profile.save => false ruby-1.9.2-p180 :004 > profile.errors => {:referral_code=>["has already been taken"]} I also ran a check to see if ActiveRecord was actually tracking the dirty object and it appears to be, here is the console session: Loading development environment (Rails 3.0.9) ruby-1.9.2-p180 :001 > profile = Profile.find(3) => #<Profile id: 3, user_id: 3, race_id: 2, nickname: "Premium-User", first_name: "Premium", last_name: "User", 
gender: "M", birth_date: "1977-01-01", eighteen: true, complete: true, time_zone: "Kuala Lumpur", metric_scale: false, referral_code: "bo", referrer_id: nil, tag_line: "This is what its like.", created_at: "2011-09-21 04:08:00", updated_at: "2011-09-21 04:08:00"> ruby-1.9.2-p180 :002 > profile.tag_line = "change to this" => "change to this" ruby-1.9.2-p180 :003 > profile.changes => {"tag_line"=>["This is what its like.", "change to this"]} ruby-1.9.2-p180 :004 > profile.save => false ruby-1.9.2-p180 :005 > profile.errors => {:referral_code=>["has already been taken"]} I honestly am at a loss, I have spent quite a bit of time digging into it as well as searching Google and I cannot find an answer as to why this is happening. A: You are right, Rails does only "track" the dirty columns and generates the minimum update statement necessary. If you look in your log/development.log file you will see the actual SQL that is being generated, and you'll see that the update statement is only touching the columns you have edited. At least you would if your code was getting that far. Before saving your model, Rails will run all the validations on it, and that includes seeing if the referral code is unique. To do this it will run a select SQL statement against the database to check; if you look in the development.log file you will definitely see this query. So Rails is working correctly here. If your referral codes are supposed to be unique, why aren't they? My guess would be that you are trying to save models with a nil or blank code. If that is the case, try adding :allow_nil => true or :allow_blank => true to the :uniqueness hash.
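For example, keeping the Rails 3.0 hash syntax used above and assuming only referral_code should be allowed to stay blank, the two validations can be split:

validates :referral_code, :uniqueness => { :case_sensitive => false, :allow_blank => true }
validates :nickname,      :uniqueness => { :case_sensitive => false }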
{ "language": "en", "url": "https://stackoverflow.com/questions/7612800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Overriding continuation state storage/restore algorithm? When i saw the first news about await, i was very excited and i thought many ways of using it. One of these is to use it in my web framework to hide the asynchronous aspect of client/server exchanges like it's done in several frameworks. So here is the deal: I would like to write things like that: { Page p = new Page(); FormResponse response = await p.Show(); var field1 = reponse.inputField["input1"]; ... } I would like the dev to be able to write this code on the server. As you guess p.Show() write in the HttpResponse the html code displaying the page with the form, and send the response to the client, so, the thread is killed and i never reach the next instruction (FormResponse response =). So here is my question: Is there any way of doing such a thing ? I know await cut the code, pack it in a continuation, make the closure for us, and store it somewhere to call it back when p.Show() is done. But here, the thread is going to be killed, and this is my code which recieve the submit response from Page which has to deal with it. So i have to restore the continuation that "await" created and execute it myself. Am i getting high or is it possible ? Edit : additional infos I can explain a bit more, but we need an example. Imagine you want to make an async call to a webservice, you just use await and then call the webs. A webs doesn't display any page, it returns pieces of information and you can continue the next instructions, so with a webs we have : Client -> Server A [-callwebs-> Server B ->] Server A -> Client. Now, imagine a webs wich has to display a user interface to grab some information from the user, we can call this kind of webs a UIwebs (a reusable interface called by several webapp), it displays the ui, grabs the info, and sends it back to the caller. So with a UI webs we have : Client -> Server A [-response_redirect-> Client -get-> Server B (here is the UIwebs, the client inputs whatever) -response_redirect-> Client -get-> ] Server A -> Client What i put between brackets has to be handle in the way by the developper : so for a classic webs, i can imagine the asynchronous page is "sleeping" waiting the webs to response, but with a UI webs we have to response à redirect to the client, so the page is done for asp.net, and SynchronizationContext says that there is no more async instruction to wait for. In fact, my need here is the same as turning on the web server, and sending a request to it wich coule restore everything needed to execute the code just after the await. Regards, Julien A: I'm not sure what the problem is. If you have, e.g., ASP.NET asynchronous pages, then any top-level (async void) function will properly notify ASP.NET that the page is incomplete and release the thread. Later, the continuation will run on a (possibly another) thread, restore the request context, and finish the request. The async design was carefully done to enable this exact behavior. In particular, async void increments the outstanding asynchronous operation count in SynchronizationContext, as I described in a recent MSDN article. If you're running your own host (i.e., not using ASP.NET), then you'll have to implement SynchronizationContext. It's non-trivial but not extremely hard, either. Once this is done, async and await will "just work". :) Updated answer in response to edit: Keep in mind that await/async are just syntactical sugar; they don't enable anything that wasn't possible before - they just make it easier. 
If I understand your situation correctly, you want a web service to return a UI and then respond to it. This is an inversion of how HTTP works, so you'd have to do some funky stuff with viewstate. I'll think about it...
{ "language": "en", "url": "https://stackoverflow.com/questions/7612819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Grails/Dojo Progress bar getting progress from controller/service Currently I'm using Grails to create a web-based data loading application that in short, takes an excel sheet of arbitrary rows and runs them through a backend system to prep the data for testers. Everything is working fine, but the last thing I need is some method to inform the user (especially on LARGE data files) of how many rows of data it has processed. If there's more than 200 rows, the app will (appear) to time out even though its still chugging along. This is a problem because it's very likely the user will reload the file and mess up processing... duplicate test data rows will cause a bunch of downstream issues. I'm playing with the code here. <g:actionSubmit action="${appContext}/FileUploader.processFile" value="Upload File" onclick="download()"></g:actionSubmit> <script type="text/javascript"> dojo.require("dijit.ProgressBar"); dojo.require("dojo.parser"); var i = 0; function download() { jsProgress.update({ maximum: 10, progress: ++i }); if (i < 10) { setTimeout(download, 100 + Math.floor(Math.random() * 100)); } } </script> Currently in my controller I have a small method that does this: def updateStatus = { render uploaderService.rowsLoaded / uploaderService.listToSend.size() } What I can't seem to figure out is the correct way to call the method to get the percentage to link to the progress bar. (replacing the boilerplate progress code.) I know Java well enough but getting this to work just seems a little mystifying. I'm willing to entertain ANY idea of getting progress out there whether or not its technically best-practice... I'm to the point where I just need SOMETHING to display this information. It doesn't have to be dojo, it's just the direction that I had the most initial success with. A: First, you may want to kick off the processes in a background job and return a processing message to the user Here's an example of how to do this using the jprogress and executer plugins. Unfortunately, this uses a polling solution. I haven't figured out how to use JMS to trigger the updates yet. Domain package jprogressdemo class Event { String name Integer duration = 100 String status = "New" Integer percentComplete = 0 static mapping = { cache false } static constraints = { name(size:1..45, unique:true ) duration() status(size:1..5) percentComplete() } } Controller package jprogressdemo class EventController { //static allowedMethods = [save: "POST", update: "POST", delete: "POST"] def progressService def jmsService static exposes = ['jms'] static destination = "queue.notification" def executeAction = { println "executeAction" def theEvent = Event.get(params.id) def duration = theEvent?.duration ?: 10 def name = theEvent?.name.trim() ?: "none" def startAt = theEvent?.percentComplete ?: 0 toEvent(name,duration,startAt,true) render "the progress is done" } /* * Start the backgorund task then * while %complete < 100, query db and update progressbar. 
*/ //the progress bar id needs to the same value that's passed into .setProgressBarValue def backgroundAction = { println "backgroundAction" println "isDisabled():${jmsService.isDisabled()}" def theEvent = Event.get(params.id) def duration = theEvent?.duration ?: 10 def name = theEvent?.name ?: "none" def barName = "${name}b" def percentComplete = theEvent?.percentComplete ?: 0 def lastPct = -1 runAsync { toEvent(name,duration,percentComplete,false) } if (percentComplete > 100) {progressService.setProgressBarValue(barName, 100)} //can't be factored out because it's this function that // gets called from the client ???? while(percentComplete <= 100) { println "percentComplete:${percentComplete}" if (percentComplete != lastPct ) { progressService.setProgressBarValue(barName, percentComplete) lastPct = percentComplete } def newEvent = Event.get(params.id) newEvent.refresh() percentComplete = theEvent.percentComplete } render "the progress is done" } //the progress bar id needs to the same value that's passed into .setProgressBarValue def backgroundProgress = { def theEvent = Event.get(params.id) def duration = theEvent?.duration ?: 10 def name = theEvent?.name ?: "none" def barName = "${name}p" def percentComplete = theEvent?.percentComplete ?: 0 def lastPct = -1 if (percentComplete > 100) {progressService.setProgressBarValue(barName, 100)} while(percentComplete <= 100) { println "percentComplete:${percentComplete}" if (percentComplete != lastPct ) { progressService.setProgressBarValue(barName, percentComplete) lastPct = percentComplete } def newEvent = Event.get(params.id) newEvent.refresh() percentComplete = theEvent.percentComplete } } /* % complete needs to get to 101 to avoid infinit loop in polling logic And you can't go from 0-99 because the progress bar doesn't register a 0 */ def toEvent(name,duration,startAt,updateBar) { println "duration:${duration}" println "name:${name}" println "startat:${startAt}" for (int i = startAt; i < 102; i++) { println "i:${i}" def theEvent = Event.findByName(name) theEvent.percentComplete = i theEvent.save(flush:true) println "theEvent.percentComplete:${theEvent.percentComplete}" if(updateBar){ progressService.setProgressBarValue(name, i) } else { sendJMSMessage("queue.notification", "${i}") } //let's waste some time for (int a = 0; a < duration; a++) { for (int b = 0; b < 1000; b++) { } } } } def index = { redirect(action: "list", params: params) } def list = { params.max = Math.min(params.max ? 
params.int('max') : 10, 100) [eventInstanceList: Event.list(params), eventInstanceTotal: Event.count()] } def create = { def eventInstance = new Event() eventInstance.properties = params return [eventInstance: eventInstance] } def save = { def eventInstance = new Event(params) if (eventInstance.save(flush: true)) { flash.message = "${message(code: 'default.created.message', args: [message(code: 'event.label', default: 'Event'), eventInstance.id])}" redirect(action: "show", id: eventInstance.id) } else { render(view: "create", model: [eventInstance: eventInstance]) } } def show = { def eventInstance = Event.get(params.id) if (!eventInstance) { flash.message = "${message(code: 'default.not.found.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "list") } else { [eventInstance: eventInstance] } } def edit = { def eventInstance = Event.get(params.id) if (!eventInstance) { flash.message = "${message(code: 'default.not.found.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "list") } else { return [eventInstance: eventInstance] } } def update = { def eventInstance = Event.get(params.id) if (eventInstance) { if (params.version) { def version = params.version.toLong() if (eventInstance.version > version) { eventInstance.errors.rejectValue("version", "default.optimistic.locking.failure", [message(code: 'event.label', default: 'Event')] as Object[], "Another user has updated this Event while you were editing") render(view: "edit", model: [eventInstance: eventInstance]) return } } eventInstance.properties = params if (!eventInstance.hasErrors() && eventInstance.save(flush: true)) { flash.message = "${message(code: 'default.updated.message', args: [message(code: 'event.label', default: 'Event'), eventInstance.id])}" redirect(action: "show", id: eventInstance.id) } else { render(view: "edit", model: [eventInstance: eventInstance]) } } else { flash.message = "${message(code: 'default.not.found.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "list") } } def delete = { def eventInstance = Event.get(params.id) if (eventInstance) { try { eventInstance.delete(flush: true) flash.message = "${message(code: 'default.deleted.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "list") } catch (org.springframework.dao.DataIntegrityViolationException e) { flash.message = "${message(code: 'default.not.deleted.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "show", id: params.id) } } else { flash.message = "${message(code: 'default.not.found.message', args: [message(code: 'event.label', default: 'Event'), params.id])}" redirect(action: "list") } } } View <%@ page import="jprogressdemo.Event" %> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta name="layout" content="main" /> <g:javascript library="jquery" plugin="jquery"/> <jqui:resources/> <g:set var="entityName" value="${message(code: 'event.label', default: 'Event')}" /> <title><g:message code="default.show.label" args="[entityName]" /></title> </head> <body> <div class="nav"> <span class="menuButton"><a class="home" href="${createLink(uri: '/')}"><g:message code="default.home.label"/></a></span> <span class="menuButton"><g:link class="list" action="list"><g:message code="default.list.label" args="[entityName]" /></g:link></span> <span class="menuButton"><g:link class="create" 
action="create"><g:message code="default.new.label" args="[entityName]" /></g:link></span> </div> <div class="body"> <h1><g:message code="default.show.label" args="[entityName]" /></h1> <g:if test="${flash.message}"> <div class="message">${flash.message}</div> </g:if> <div class="dialog"> <table> <tbody> <tr class="prop"> <td valign="top" class="name"><g:message code="event.id.label" default="Id" /></td> <td valign="top" class="value">${fieldValue(bean: eventInstance, field: "id")}</td> </tr> <tr class="prop"> <td valign="top" class="name"><g:message code="event.name.label" default="Name" /></td> <td valign="top" class="value">${fieldValue(bean: eventInstance, field: "name")}</td> </tr> <tr class="prop"> <td valign="top" class="name"><g:message code="event.duration.label" default="Duration" /></td> <td valign="top" class="value">${fieldValue(bean: eventInstance, field: "duration")}</td> </tr> <tr class="prop"> <td valign="top" class="name"><g:message code="event.status.label" default="Status" /></td> <td valign="top" class="value">${fieldValue(bean: eventInstance, field: "status")}</td> </tr> <tr class="prop"> <td valign="top" class="name"><g:message code="event.percentComplete.label" default="Percent Complete" /></td> <td valign="top" class="value">${fieldValue(bean: eventInstance, field: "percentComplete")}</td> </tr> </tbody> </table> </div> <div class="buttons"> <g:form> <g:hiddenField name="id" value="${eventInstance?.id}" /> <span class="button"><g:actionSubmit class="edit" action="edit" value="${message(code: 'default.button.edit.label', default: 'Edit')}" /></span> <span class="button"><g:actionSubmit class="delete" action="delete" value="${message(code: 'default.button.delete.label', default: 'Delete')}" onclick="return confirm('${message(code: 'default.button.delete.confirm.message', default: 'Are you sure?')}');" /></span> </g:form> </div> <p> <HR WIDTH="75%" COLOR="#FF0000" SIZE="4"/> <g:form> <g:hiddenField name="id" value="${eventInstance?.id}"/> <g:submitToRemote action="executeAction" name="startButton" value="start...."/> <g:submitToRemote action="backgroundAction" name="backgroundButton" value="background...."/> <g:submitToRemote action="backgroundProgress" name="progressButton" value="progress...."/> </g:form> <g:jprogress progressId="${eventInstance?.name}" trigger="startButton"/> <g:jprogress progressId="${eventInstance?.name}b" trigger="backgroundButton"/> <g:jprogress progressId="${eventInstance?.name}p" trigger="progressButton"/> </div> </body> </html> A: You could also use the CometD plugin to publish messages to a topic, and have your progressbar subscribe to that topic... http://metasieve.wordpress.com/2010/08/25/using-cometd-2-x-with-grails/
{ "language": "en", "url": "https://stackoverflow.com/questions/7612820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Suggestions for Querying Database for Names I have an Oracle database that, like many, has a table containing biographical information. On which, I would like to search by name in a "natural" way. The table has forename and surname fields and, currently, I am using something like this: select id, forename, surname from mytable where upper(forename) like '%JOHN%' and upper(surname) like '%SMITH%'; This works, but it can be very slow because the indices on this table obviously can't account for the preceding wildcard. Also, users will usually be searching for people based on what they tell them over the phone -- including a huge number of non-English names -- so it would be nice to also do some phonetic analysis. As such, I have been experimenting with Oracle Text: create index forenameFTX on mytable(forename) indextype is ctxsys.context; create index surnameFTX on mytable(surname) indextype is ctxsys.context; select score(1)+score(2) relevance, id, forename, surname from mytable where contains(forename,'!%john%',1) > 0 and contains(surname,'!%smith%',2) > 0 order by relevance desc; This has the advantage of using the Soundex algorithm as well as full text indices, so it should be a little more efficient. (Although, my anecdotal results show it to be pretty slow!) The only apprehensions I have about this are: * *Firstly, the text indices need to be refreshed in some meaningful way. Using on commit would be too slow and might interfere with how the frontend software -- which is out of my control -- interacts with the database; so requires some thinking about... *The results that are returned by Oracle aren't exactly very naturally sorted; I'm not really sure about this score function. For example, my development data is showing "Jonathan Peter Jason Smith" at the top -- fine -- but also "Jane Margaret Simpson" at the same level as "John Terrance Smith" I'm thinking that removing the preceding wildcard might improve performance without degrading the results as, in real life, you would never search for a chunk in the middle of a name. However, otherwise, I'm open to ideas... This scenario must have been implemented ad nauseam! Can anyone suggest a better approach to what I'm doing/considering now? Thanks :) A: I have come up with a solution which works pretty well, following the suggestions in the comments. Particularly, @X-Zero's suggestion of creating a table of Soundexes: In my case, I can create new tables, but altering the existing schema is not allowed! So, my process is as follows: * *Create a new table with columns: ID, token, sound and position; with the primary key over (ID, sound,position) and an additional index over (ID,sound). *Go through each person in the biographical table: * *Concatenate their forename and surname. *Change the codepage to us7ascii, so accented characters are normalised. This is because the Soundex algorithm doesn't work with accented characters. *Convert all non-alphabetic characters into whitespace and consider this the boundary between tokens. *Tokenise this string and insert into the table the token (in lowercase), the Soundex of the token and the position the token comes in the original string; associate this with ID. 
Like so: declare nameString varchar2(82); token varchar2(40); posn integer; cursor myNames is select id, forename||' '||surname person_name from mypeople; begin for person in myNames loop nameString := trim( utl_i18n.escape_reference( regexp_replace( regexp_replace(person.person_name,'[^[:alpha:]]',' '), '\s+',' '), 'us7ascii') )||' '; posn := 1; while nameString is not null loop token := substr(nameString,1,instr(nameString,' ') - 1); insert into personsearch values (person.id,lower(token),soundex(token),posn); nameString := substr(nameString,instr(nameString,' ') + 1); posn := posn + 1; end loop; end loop; end; / So, for example, "Siân O'Conner" gets tokenised into "sian" (position 1), "o" (position 2) and "conner" (position 3) and those three entries, with their Soundex, get inserted into personsearch along with their ID. * *To search, we do the same process: tokenise the search criteria and then return results where the Soundexes and relative positions match. We order by the position and then the Levenshtein distance (ld) from the original search for each token, in turn. This query, for example, will search against two tokens (i.e., pre-tokenised search string): with searchcriteria as ( select 'john' token1, 'smith' token2 from dual) select alpha.id, mypeople.forename||' '||mypeople.surname from peoplesearch alpha join mypeople on mypeople.student_id = alpha.student_id join peoplesearch beta on beta.student_id = alpha.student_id and beta.position > alpha.position join searchcriteria on 1 = 1 where alpha.sound = soundex(searchcriteria.token1) and beta.sound = soundex(searchcriteria.token2) order by alpha.position, ld(alpha.token,searchcriteria.token1), beta.position, ld(beta.token,searchcriteria.token2), alpha.student_id; To search against an arbitrary number of tokens, we would need to use dynamic SQL: joining the search table as many times as there are tokens, where the position field in the joined table must be greater than the position of the previously joined table... I plan to write a function to do this -- as well as the search string tokenisation -- which will return a table of IDs. However, I just post this here so you get the idea :) As I say, this works pretty well: It returns good results pretty quickly. Even searching for "John Smith", once cached by the server, runs in less than 0.2s; returning over 200 rows... I'm pretty pleased with it and will be looking to put it into production. The only issues are: * *The precalculation of tokens takes a while, but it's a one-off process, so not too much of a problem. A related problem however is that a trigger needs to be put on the mypeople table to insert/update/delete tokens into the search table whenever the corresponding operation is performed on mypeople. This may slow up the system; but as this should only happen during a few periods in a year, perhaps a better solution would be to rebuild the search table on a scheduled basis. *No stemming is being done, so the Soundex algorithm only matches on full tokens. For example, a search for "chris" will not return any "christopher"s. A possible solution to this is to only store the Soundex of the stem of the token, but calculating the stem is not a simple problem! This will be a future upgrade, possibly using the hyphenation engine used by TeX... Anyway, hope that helps :) Comments welcome! EDIT My full solution (write up and implementation) is now here, using Metaphone and the Damerau-Levenshtein Distance.
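The ld() function used in the order by above isn't shown in the write-up; as an illustration only, a minimal PL/SQL sketch of a Levenshtein edit-distance function (the author's actual ld implementation may differ) could look like this:
create or replace function ld(s1 in varchar2, s2 in varchar2) return number is
  type t_row is table of pls_integer index by pls_integer;
  prev t_row;
  curr t_row;
  cost pls_integer;
begin
  if s1 is null then
    return nvl(length(s2), 0);
  elsif s2 is null then
    return length(s1);
  end if;
  -- first row: distance from the empty string
  for j in 0 .. length(s2) loop
    prev(j) := j;
  end loop;
  for i in 1 .. length(s1) loop
    curr(0) := i;
    for j in 1 .. length(s2) loop
      if substr(s1, i, 1) = substr(s2, j, 1) then
        cost := 0;
      else
        cost := 1;
      end if;
      curr(j) := least(curr(j - 1) + 1,      -- insertion
                       prev(j) + 1,          -- deletion
                       prev(j - 1) + cost);  -- substitution
    end loop;
    prev := curr;
  end loop;
  return prev(length(s2));
end ld;
/
With a function like that in place, the order by ld(alpha.token, searchcriteria.token1) clauses in the query above work as written.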
{ "language": "en", "url": "https://stackoverflow.com/questions/7612822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Adding event tracking to site for PDF/Microsoft Word downloads
I am having some difficulty setting up event tracking. I have a website where people can download PDFs and Word docs of various content. I inserted the event tracking like so:
<a href=files/8399039122.pdf onClick='_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);' >File #1</a>
<a href=files/8329384939.doc onClick='_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);' >File #2</a>
However, after four days data is still not showing up on my analytics profile. Did I install this wrong? Also, do I need to add the _gaq.push(['_trackEvent', 'downloads', 'all', nofilter]);' to the analytics script in the header of my page?
A: Looks like it's your use of single quotes (not nested properly). Try this:
<a href="files/8399039122.pdf" onClick="_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);" >File #1</a>
<a href="files/8329384939.doc" onClick="_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);" >File #2</a>
Wrap the entire onClick in double quotes. And the path to your links (href) should be quoted as well.
To delay the onclick without using target="_blank":
<a href="pdfs/my-file.pdf" onclick="var that=this;_gaq.push(['_trackEvent','Download','PDF',this.href]);setTimeout(function(){location.href=that.href;},200);return false;">Download my file</a>
A: Another way to resolve this issue is to add target="_blank" to the <a> tag:
<a href="files/2117802037.pdf" onclick="_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);" target="_blank">File #1</a>
<a href="files/8329384939.doc" onclick="_gaq.push(['_trackEvent', 'downloads', 'all', 'nofilter']);" target="_blank">File #2</a>
{ "language": "en", "url": "https://stackoverflow.com/questions/7612824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Scroll bar css style in Internet Explorer I've got this problem while doing the scroll bar for both iPad and browsers. The style is working fine with Opera, Safari, Firefox and Chrome. However the style doesn't show in Internet Explorer and it appears the wired horizontal scroll bar on the page too. You can find a link to the page here. Does anybody know how to fix this issue? A: Looking at the iScroll website and demos it looks like IE isn't supported. Go ahead and try it in IE http://cubiq.org/dropbox/iscroll4/examples/simple/. You may want to try another library such as this one http://www.hesido.com/web.php?page=customscrollbar which seems to work just fine in IE.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: prefuse: useful layout for directed mostly-acyclic graph? I have a moderately complex graph (500-1000 nodes) representing function calls in a program, so it is almost completely acyclic, and mostly tree-like (e.g. occasionally there are multiple paths from one subroutine to another, but usually not). I would like to visualize it. My first thought was to use Prefuse, and within an hour I got something running based on the examples. But it uses ForceDirectedLayout, which visually looks like a mess for this size graph, and it slows down my PC. Are there other useful Prefuse layouts for this kind of graph?
{ "language": "en", "url": "https://stackoverflow.com/questions/7612830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Pointers of two dimensional array
There is this code:
int (*ptr_)[1] = new int[1][1];
ptr_[0][0] = 100;
std::cout << "to: " << &ptr_ << ' ' << ptr_ << ' ' << *ptr_ << ' ' << &(*ptr_) << ' ' << **ptr_ << std::endl;
The result is:
to: 0xbfda6db4 0x9ee9028 0x9ee9028 0x9ee9028 100
Why are the values of ptr_ and *ptr_ the same? The value of ptr_ equals 0x9ee9028, so the value at memory cell 0x9ee9028 is *ptr_, which is also 0x9ee9028; however, **ptr_ gives the result 100. Is that logical?
A: ptr_ is a pointer to an array of length one. Variables of array type in C and C++ simply decay to pointers when printed (among other things). So when you print ptr_ you get the address of the array. When you print *ptr_ you get the array itself, which then decays right back into that same pointer again. But in C++ please use smart pointers and standard containers.
A: int main()
{
    int test[2][3] = { {1, 2, 3}, {4, 5, 6} };
    int (*pnt)[3] = test;        // *pnt has type int[3]

    // printArray writes an array to stdout
    printArray(3, *pnt);         // prints 1 2 3
    printArray(3, *(pnt + 1));   // prints 4 5 6
    return 0;
}
Multi-dimensional arrays are really arrays of arrays; for example, test[2][3] is an array with two elements of type int[3], which in turn hold 3 integer elements each. In your case you have a pointer to an array holding a single int. In other words your array looks like this: array = {{100}}
* &ptr_ is the address of the pointer variable itself
* ptr_ is the address of the array (its first element)
* *ptr_ is the array itself, which decays to the address of its first element, so it prints the same value
* &(*ptr_) is again the address of the array
* **ptr_ is the first int in the array, the actual value, 100
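The "use smart pointers and standard containers" advice in the first answer can be made concrete; here is a minimal sketch (my own illustration, not from that answer) of the same 1x1 example without raw new/delete, using C++14's make_unique:
#include <array>
#include <iostream>
#include <memory>

int main() {
    // One heap-allocated row of one int, freed automatically when rows goes out of scope
    auto rows = std::make_unique<std::array<int, 1>[]>(1);
    rows[0][0] = 100;
    std::cout << rows[0][0] << '\n';   // prints 100
}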
{ "language": "en", "url": "https://stackoverflow.com/questions/7612834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: A copy of ApplicationHelper has been removed from the module tree but is still active? I'm using rails 2.3.4 and ruby 1.8.7. In my application when i try to purchase an item via mobile Api(it's a metal request), purchase is successful , now when i go to web interface and make any changes in the entities like purchase limit or price of the item and then try to purchase through mobile api again, it throws an error A copy of ApplicationHelper has been removed from the module tree but is still active! [RAILS_ROOT]/vendor/rails/activesupport/lib/active_support/dependencies.rb:414:in `load_missing_constant' [RAILS_ROOT]/vendor/rails/activesupport/lib/active_support/dependencies.rb:80:in `const_missing_not_from_s3_library' /home/user/.rvm/gems/ruby-1.8.7-p334/gems/aws-s3-0.6.2/lib/aws/s3/extensions.rb:206:in `const_missing' [RAILS_ROOT]/app/helpers/web_application_helper.rb:414:in `purchases_left' [RAILS_ROOT]/app/helpers/web_application_helper.rb:83:in `accept_purchase_direct' [RAILS_ROOT]/lib/api/publisher/v1_purchases_helper.rb:49:in `purchases_handler' [RAILS_ROOT]/app/metal/v1_purchases_controller.rb:54:in `call' [RAILS_ROOT]/vendor/rails/railties/lib/rails/rack/metal.rb:44:in `call' [RAILS_ROOT]/vendor/rails/railties/lib/rails/rack/metal.rb:43:in `each' [RAILS_ROOT]/vendor/rails/railties/lib/rails/rack/metal.rb:43:in `call' [RAILS_ROOT]/vendor/rails/actionpack/lib/action_controller/session/abstract_store.rb:122:in `call'. I looked around in the web as no of people have faced the same issue but unable to find the fix for the same.Now when the server is restarted and i tried to purchase , it is successful.I can't figure out the reason but still like to put my ideas, can it be fixed if after every action we connect to the database as error is thrown from the line where database query is made : def purchases_left accepted = AcceptedOffer.find_all_by_offer_id_and_user_id(offer.id, current_user.id) end i have printed the value of offer and is coming correct.I cannot think of what more piece of code to put up.Any queries are welcomed. A: Error occurs in development mode as cache is set to false, so after setting the cache to true atleast in dev mode, error will not occur but query caching will take place so you need to work upon it.Also in mine it was a metal request which was not loading the classes after the next action is called . In case of production, query caching takes place while error doesn't appear in this mode, but results are not as expected as changes in database were not reflected. So in both the cases the solution which worked or me is bypass query caching.
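For reference, the development-mode setting the answer is talking about is class caching. A sketch of the relevant line for a Rails 2.3.x app (trading away automatic code reloading for stable class references):
# config/environments/development.rb
# Caching classes stops Rails from unloading and reloading ApplicationHelper
# between requests, which is what triggers the error; code changes then need a
# server restart to show up.
config.cache_classes = true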
{ "language": "en", "url": "https://stackoverflow.com/questions/7612836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Div content Page titles in Veritcal Scroll Site Whats the easiest way to enable my page title to update per content div areas it's on? I have a vertical scrolling website and would like the page title to change when the user navs to each content area (The content areas are within div & article) Essentially, I'm trying to keep 'Orginal Site Title | + Home/About etc' I'm thinking it's something I'd have to call with php to remember a set title attribute per div link? Any suggestions on setting this up (If possible) A: I think you should take a look at the Viewport plugin. This way you have 4 selectors you can use: $(":in-viewport") $(":below-the-fold") $(":above-the-top") $(":left-of-screen") $(":right-of-screen") Now you could do something like this: //Get the id of element(div) that is currently in view var inview = $('div:in-viewport:first').attr('id'); //Define titles if (inview == 'home'){ var newtitle = 'Home' } else if (inview == 'about') { var newtitle = 'About us' } //Lets rename the page title document.title = 'Orginal Site Title |' + newtitle; The above code should now always be called when you scroll ($(window).scroll(function () { ... });) to update page title according to the div that is currently in view. This is just a generic example and it can be completely changed to your needs. I hope it helps in some way. A: You might be able to do this by writing a plugin based on the scrollspy plugin that comes as part of twitter bootstrap: http://twitter.github.com/bootstrap/javascript.html#scrollspy You could tweak it so rather than setting a class on the target it updates the title with whatever the content is in the current viewable panel
{ "language": "en", "url": "https://stackoverflow.com/questions/7612837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: simple XML error URL file-access is disabled in the server I'm having problems trying to load a XML file using simpleXML in PHP, it works fine on my host but when I uploaded it to the live server it showed me the error in the title, I looked around and it looks like allow_url_fopen is disabled in the live server but I don't have access to the php.ini file. I tried adding php_value allow_url_fopen 1 to my .htaccess file but I still got the same error, could it be that the host has disabled changing this from outsite the .ini file? Thanks in advance!
{ "language": "en", "url": "https://stackoverflow.com/questions/7612839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Associative array passing as empty string I am trying to pass an array to PHP via Ajax, but the array is being passed as an empty string. This is my code when creating the array: var params = new Array(); var inputs = new Array(); inputs = $(":input"); $.each(inputs, function(key, value) { params[value.id] = value.value; }); alert(params); Before that there are around 20 inputs that look like this: <input name="first_name" id="first_name" type="text" class="medium"/> <input name="last_name" id="last_name" type="text" class="medium"/> The alert(params) is just giving me an empty string. However, alert(params['first_name']) actually gives me the first_name input value. Why isn't the array going through? A: Can you try this - $(document).ready(function() { var params = new Array(); var inputs = $(":input"); $.each(inputs, function(key, value) { //I guess the problem was that params is array and your id's were strings //array's elements are arr[0], arr[1], //not like arr[firstname], arr[lastname], ... params[value.id] = value.value; }); alert(params); }); //moved everything inside $(document).ready with this - <input name="first_name" id="0" value="1" type="text" class="medium"/> <input name="last_name" id="1" value="2" type="text" class="medium"/> <!-- Changed id's from string to numbers. --> Update: Also, try this it might help you understand whats going on - $(document).ready(function() { var params = {}; //use object instead of array var inputs = $(":input"); $.each(inputs, function(key, value) { params[value.id] = value.value; }); for(var prop in params) { alert(prop + " = " + params[prop]); } }); Notice: params is an object now not an array. With this - <input name="first_name" id="firstname" value="Your first name." type="text" class="medium"/> <input name="last_name" id="lastname" value="Your last name." type="text" class="medium"/> A: You can't simply pass a variable to the server, you need to serialize it into name, value pairs, or use JSON. A: I'm slightly unsure what you're trying to accomplish by creating an array in Javascript like this. Your best bet is probably to use the JQuery command serialize then grab the data using $_GET
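To make the last two suggestions concrete, a minimal sketch of sending the inputs with serialize() (the form id and URL here are placeholders, not names from the question):
// Serialize every field inside the form and POST the name/value pairs
$.post('save.php', $('#myForm').serialize(), function (response) {
    alert(response);
});
On the PHP side the values then arrive keyed by the inputs' name attributes, e.g. $_POST['first_name'] and $_POST['last_name'] (or $_GET if you send with $.get instead).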
{ "language": "en", "url": "https://stackoverflow.com/questions/7612850", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OnTextChange Partial Postback not occurring I have the following code for an OnTextChanged event: protected void CustomTextBox_OnTextChanged(object sender, EventArgs e) { if (tick.Attributes["class"] == "tick displayBlock") { tick.Attributes["class"] = "displayNone"; tick.Attributes.Add("class", "displayNone"); } checkAvailability.Attributes.Add("class", "displayBlock"); checkAvailability.Attributes["class"] = "displayBlock"; } And: <asp:UpdatePanel ID="upMyUpdatePanel" runat="server"> <ContentTemplate> <uc:CustomTextBox ID="txtUserName" OnTextChanged="CustomTextBox_OnTextChanged" AutoPostBack="True" class="someClass"> </uc:CustomTextBox> </ContentTemplate> </asp:UpdatePanel> So I have the above code works perfectly fine in Chrome, IE 8, 9. However Firefox 6 doesn't seem to do a partial postback. Before anyone asks I have bubbled up events ontextchanges and autopostback to be used by my customtextbox instances. You can see how on related question: Exposing and then using OnTextChange Event handler A: This issue was being cause by a double AutoPostBack. Parent control: <uc:CustomTextBox ID="ctbMyTextBox" OnTextChanged="CustomTextBox_OnTextChanged" AutoPostBack="True" class="someClass"> </uc:CustomTextBox> Child Control: <asp:UpdatePanel ID="upMyUpdatePanel" runat="server"> <ContentTemplate> <uc:CustomTextBoxChild ID="ctbcMyTextBox" OnTextChanged="CustomTextBox_OnTextChanged" AutoPostBack="True" class="someClass"> </uc:CustomTextBoxChild> </ContentTemplate> </asp:UpdatePanel> In the parent control I removed AutoPostBack="True" and this fixed the issue for me. If someone can give further explanation as to why a Double AutoPostback can cause this I would be happy to check your answer as correct. A: Remove the autopostback from the parent and add it to child(your custom one). That will solve the issue. Further, since its a custom control so you are inheriting the properties from your parent. Even if you remove the Autopostback from the custo control, i think it might work as the property is true in its parent, by default. A: set UpdatePanel Mode="Conditional" and AutoPostBack="True" and enableviewstate="true" now it will work
{ "language": "en", "url": "https://stackoverflow.com/questions/7612858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: .NET component for color PDF to grayscale conversion Currently i use Ghostscript to convert color PDF's to grayscale PDF's. Now i'm looking for reliable .NET commercial or not commercial component/library for ghostscript replacement. I googled and I did not find any component/library that is able to do that easily or to do that at all. EDIT #1: Why Ghostscript does not work for me: I implemented Ghostscript and I'm using it's native API's. The problem is that Ghostscript does not support multiple instances of the interpreter within a single process. -dJOBSERVER mode also does not work for me because i don't collect all job and them process them all at once. It happens that Ghostscript is processing large job which takes around 20 minutes and meanwhile i get some smaller job which has to be processed ASAP and cannot wait 20 minutes. Other problem is that Ghostscript page processed events are not easily to catch. I wrote a parser for ghostscript stdout messages and i can read out processed page number but not for each page when it's processed as ghostscript pushes message for group of processed pages. There are couple of more problems with Ghostscript like producing bad pdf's, duplicating font problems..... You can find one more problem i had with ghostscript here: Ghostscript - PS to PDF - Inverted images problem - a year after UPDATE: Before a year a go i asked this question. Later i made my own solution by using iTextSharp. You can take a look at the converting PDF to grayscale solution here: http://habjan.blogspot.com/2013/09/proof-of-concept-converting-pdf-files.html or https://itextsharpextended.codeplex.com/ Works for me in most cases :) A: Not quite an answer, but I think you dismiss Ghostscript too quickly. Are you aware of the GhostScript API (for in-process Ghostscript)? Or of the -dJOBSERVER mode that can take a series of PS commands piped to its standard in? That still won't get you your callbacks however, and it's still not multi-threaded. As previously stated, iText could do it, but it would be a matter of walking through all the content and images looking for non-grayscale color spaces and converting them in a space-specific manner. You'd also have to replace the pixel data in any images you might find. The good news is that iText[Sharp] is capable of operating in multiple threads, provided each document is used from one thread at a time. I suspect this is also the case for the suggested commercial library, which isn't such a good deal. And then a light went on above my head... drawn in gray scale. Blending modes and transparency groups! Take all the current page content and stick it in a transparency group that is blended with a solid black rectangle that covers the page. I think there's even a luminosity to alpha blend mode... lets see here. Yep, PDF reference section 11.6.5.2 "Soft Mask Dictionaries". You'll want a "luminosity" group. Now, the bad news. If your goal in switching to gray scale is to save space, this will fail utterly. It'll actually make each file a little larger... say a 100 bytes per page, give or take. The software rendering the PDF better be pretty hot stuff too. Your cousin's undergrad rendering project need not apply. This is advanced graphics stuff here, infrequently used by Common PDF Files, so the last sort of thing to be implemented. So... For each original page * *Create a new page. *Cover it with a black background. 
*Cover it with a white rectangle (had it backwards earlier) in a transparency group that uses a soft mask dictionary set to be the luminosity of the original page's content (now stashed in an XObject Form). Because this is all your own code, you'll have ample opportunity to do whatever it is you want to do at the beginning or end of each page. By golly, that's just crazy enough to work! It does require some PDF-Fu, but not nearly as much as the "convert each color space and image in various ways as I step through the document". Deeper knowledge, less code to write. A: This isn't a .net library, but rather a potential work-around. You could install a virtual printer that is capable of writing PDF files. I would suggest CutePDF, as it's free, easy to use and does a great job 'printing' a large number of file formats to PDF. You can do nearly everything with CutePDF that you can do with a normal printer, including printing to grayscale. After the virtual printer is installed, you can use c# to 'print' a greyscale version. Edit: I just remembered that the free version is not silent. Once you print to the CutePDF printer, it will ask you to 'Save As'. They do have an SDK available for purchase, but I couldn't say whether it would be able to help you convert to grayscale. A: If a commercial product is a valid option for you, allow me to recommend Amyuni PDF Creator .Net. By using it you will be able to enumerate all items inside the page and change their colors accordingly, images can also be set as grayscale. Usual disclaimers apply Sample code using Amyuni PDF Creator ActiveX, the .Net version would be similar: pdfdoc.ReportState = ReportStateConstants.acReportStateDesign; object[] page_items = (object[])pdfdoc.get_ObjectAttribute("Pages[1]", "Objects"); string[] color_attributes = new string[] { "TextColor", "BackColor", "BorderColor", "StrokeColor" }; foreach (acObject page_item in page_items) { object _type = page_item["ObjectType"]; if ((ACPDFCREACTIVEX.ObjectTypeConstants)_type == ACPDFCREACTIVEX.ObjectTypeConstants.acObjectTypePicture) { page_item["GrayScale"] = true; } else foreach (string attr_name in color_attributes) { try { Color color = System.Drawing.ColorTranslator.FromWin32((int)page_item[attr_name]); int grayColor = (int)(0.3 * color.R + 0.59 * color.G + 0.11 * color.B); int newColorRef = System.Drawing.ColorTranslator.ToWin32(Color.FromArgb(grayColor, grayColor, grayColor)); page_item[attr_name] = newColorRef; } catch { } //not all items have all kinds of color attributes } } A: Before a year a go i asked this question. Later i made my own solution by using iTextSharp. You can take a look at the converting PDF to grayscale solution here: https://itextsharpextended.codeplex.com/ A: iTextPdf a good product for creating/managing pdf it has got both commercial and free versions. Have a look at aspose.pdf for .net it provides below features and a lot more. * *Add and remove watermarks from PDF document *Set page margin, size, orientation, transition type, zoom factor and appearance of PDF document *.. And here is a list of open source pdf libraries. A: After a lot of investigation i found out about ABCpdf from Websupergoo. Their component can easily convert any PDF page to grayscale by simple call to Recolor method. The component is commercial.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Liberating email addresses from Outlook
I have Outlook 2007, and I recently sent an email with a BCC of about 75 addresses. I would like to copy and paste this BCC list into a spreadsheet to keep track of the names AND email addresses of the people I contacted. The problem is, when I copy the BCC field from Outlook, it only copies the names in the list, NOT the email addresses. How do I liberate these email addresses from Outlook's cold, dead hands? Is there any way to force Outlook to copy the email addresses, rather than the names? Does this useless feature have any purpose, other than frustrating users and preventing them from using competing email services? Thank you.
A: I ended up using NK2Edit to solve this problem. I had to manually select the emails I wanted, which was a little tedious, but it allowed me to copy the names and email addresses as a tab-delimited file. I'm still curious why Outlook has this strange behavior, and if there's a good way to stop it.
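An alternative to a third-party tool is a small VBA macro against the standard Outlook object model. This is only a rough sketch, not tested against every configuration (Exchange entries in particular may come back as X.500 rather than SMTP addresses): open the sent message and run the macro to dump tab-separated name/address pairs that can be pasted into a spreadsheet.
Sub DumpBccRecipients()
    Dim mail As Outlook.MailItem
    Dim rcpt As Outlook.Recipient
    Dim result As String

    ' The message currently open in its own window
    Set mail = Application.ActiveInspector.CurrentItem
    For Each rcpt In mail.Recipients
        If rcpt.Type = olBCC Then
            result = result & rcpt.Name & vbTab & rcpt.Address & vbCrLf
        End If
    Next rcpt

    Debug.Print result   ' copy from the Immediate window, or write to a file instead
End Sub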
{ "language": "en", "url": "https://stackoverflow.com/questions/7612862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Unable to open URLs using python webdriver for selenium 2.0
I am new to both Python and Selenium and still in the learning phase. I have been trying to launch both IE8 and Firefox using the new Python WebDriver for Selenium with the following code.
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys

driver = webdriver.Ie()
#driver = webdriver.Firefox()
driver.get("http://www.google.com")
In the case of Firefox, it launches the browser with my home page, while it does not even launch the IE8 browser. In either case I can see this exception in my command prompt window.
File "C:\Documents and Settings\user.name\My Documents\seleniumScripts\test1.py", line 8, in <module>
    driver = webdriver.Ie()
File "C:\Python27\lib\site-packages\selenium-2.7.0-py2.7.egg\selenium\webdriver\ie\webdriver.py", line 58, in __init__
    desired_capabilities=DesiredCapabilities.INTERNETEXPLORER)
File "C:\Python27\lib\site-packages\selenium-2.7.0-py2.7.egg\selenium\webdriver\remote\webdriver.py", line 61, in __init__
    self.start_session(desired_capabilities, browser_profile)
File "C:\Python27\lib\site-packages\selenium-2.7.0-py2.7.egg\selenium\webdriver\remote\webdriver.py", line 98, in start_session
    'desiredCapabilities': desired_capabilities,
File "C:\Python27\lib\site-packages\selenium-2.7.0-py2.7.egg\selenium\webdriver\remote\webdriver.py", line 144, in execute
    self.error_handler.check_response(response)
File "C:\Python27\lib\site-packages\selenium-2.7.0-py2.7.egg\selenium\webdriver\remote\errorhandler.py", line 100, in check_response
    raise exception_class(value)
selenium.common.exceptions.WebDriverException: Message: '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> ...'
The exception message is the full HTML of a proxy error page; stripped of its markup, it says:
Network Access Message: The page cannot be displayed
Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
Technical Information (for support personnel)
* Error Code: 407 Proxy Authentication Required. The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. (12209)
* IP Address: 11.1.11.111
* Date: 9/30/2011 3:23:59 PM [GMT]
* Server: servername.com
* Source: proxy
Any help would be really appreciated as I am completely stuck and kind of desperate now. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/7612863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to test (automatedly) that an operation occurs after browser repaint? According to the comments of this blog post, the following technique executes an operation asynchronously but waits for a repaint: function nextTick(callback) { var img = new Image; img.onerror = callback; img.src = 'data:image/png,' + Math.random(); } whereas this one does not wait for a repaint: var mc = new MessageChannel; function nextTick(callback) { mc.port1.onmessage = callback; mc.port2.postMessage(0); } How could I verify this, programmatically, in a way that automated tests running on multiple platforms/browsers could check? A: You may want to use requestAnimationFrame instead of the workaround in the blog post. Read more about it at Paul Irish's blog http://paulirish.com/2011/requestanimationframe-for-smart-animating/
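If you do go the requestAnimationFrame route, note that its callback fires just before a repaint, so a common idiom for "run this after the browser has actually painted" is to chain two calls. A rough sketch (vendor prefixes and the setTimeout fallback are assumptions for older browsers of that era):
function afterRepaint(callback) {
    var raf = window.requestAnimationFrame ||
              window.webkitRequestAnimationFrame ||
              window.mozRequestAnimationFrame ||
              function (cb) { window.setTimeout(cb, 16); };
    // First frame: queued just before the next paint.
    // Second frame: runs after that paint has happened.
    raf(function () { raf(callback); });
}

afterRepaint(function () {
    console.log('runs after at least one repaint');
});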
{ "language": "en", "url": "https://stackoverflow.com/questions/7612864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Setting multiple depth layers in AS3
I get how to set depth in AS3 - but with AS2 I could begin multiple 'depth points' using numbers - where in AS3 all I can seem to do is set this object to a higher/lower depth than that object. The problem is (when dealing with a stack of isometric boxes, which can be placed by the user on a grid in any order) I don't want to deal with the added complexity of having every element know where every other element is, then adjust appropriately. What I'm trying to do is set up 6 total depth numbers/positions, one for each column in a 6 x 6 grid. So anything in column 1 will begin its depth placement at, say, 500, anything in column 2 will begin its depth at 1000, column 3 would be 1500 and so on. That way, the second I place an object on a particular column, it would tuck itself under, or place itself above, all surrounding items in other columns. This, to me, is much easier than somehow figuring out where 15 different-sized boxes are, how they relate to one another, and then figuring out what depth order they need to go in. Any ideas? AS3 seems to have removed the ability to set a depth to a specific number :p
A: The approach can be simplified. You basically want to create 3 'container' clips and add them in order. The last one added is the top-most. Bonus: if you want to rearrange, you can call addChild() on any clip (even already added ones) and that one will go to the top.
//// IMPORTANT STUFF ////
import flash.display.Sprite;

var top:Sprite = new Sprite;
var mid:Sprite = new Sprite;
var bot:Sprite = new Sprite;

addChild(bot);
addChild(mid);
addChild(top);
//// END IMPORTANT STUFF ////

// Move stuff so we can visualize how this works.
// Then add some boxes so we can see what's going on.
mid.x = 20; mid.y = 20;
bot.x = 40; bot.y = 40;

// Add top box
var t:Sprite = new Sprite;
t.graphics.beginFill(0xFF0000);
t.graphics.drawRect(0, 0, 100, 100);
top.addChild(t);

// Add middle box
var m:Sprite = new Sprite;
m.graphics.beginFill(0x00FF00);
m.graphics.drawRect(0, 0, 100, 100);
mid.addChild(m);

// Add bottom box
var b:Sprite = new Sprite;
b.graphics.beginFill(0x0000FF);
b.graphics.drawRect(0, 0, 100, 100);
bot.addChild(b);
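Scaling that same idea up to the 6-column grid from the question might look like the sketch below (columns, c and boxSprite are illustrative names, not part of the answer above):
// One container Sprite per column; the order you addChild() the containers is
// back-to-front, so add them in whatever order matches your isometric view.
var columns:Array = [];
for (var i:int = 0; i < 6; i++) {
    var layer:Sprite = new Sprite();
    columns.push(layer);
    addChild(layer);   // each later container renders above the earlier ones
}

// Placing a box on column c is then just:
columns[c].addChild(boxSprite);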
{ "language": "en", "url": "https://stackoverflow.com/questions/7612866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to show image in jqgrid in edit mode
jqGrid contains an image column defined using the colmodel below. The image id is passed in the cell value from the server in JSON. The grid shows the images properly when not in edit mode. Inline and form edit modes show the wrong image, since the editoptions src property contains the fixed id 1. How do I show the image for the row being edited in edit mode? How can I pass the cell value to the editoptions src property, like in the formatter function?
name: "Image",
edittype: "image",
editoptions: { src: "GetImage?id=1" },
formatter: function(cell, options, row) {
    return "<img src='GetImage?id=" + cell + "'/>";
}
A: I can suggest that you change the value of the src property of the editoptions immediately before starting editing. Look at the answer for details. In case of form editing you can use beforeInitData to modify src:
beforeInitData: function () {
    var cm = grid.jqGrid('getColProp', 'flag'),
        selRowId = grid.jqGrid('getGridParam', 'selrow');
    cm.editoptions.src = 'http://www.ok-soft-gmbh.com/img/flag_' + selRowId + '.gif';
}
So you will receive the edit form like the one for the grid. See the corresponding demo here.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I check if headTitle is already used in Zend Framework?
How can I check whether headTitle has already been used, to avoid appending to or overwriting an existing title that was set earlier in a parent view/layout? Thanks ;)
Update
Example:
$this->headTitle('First title');  // index.phtml
$this->headTitle('Second title'); // some-nested-tpl.phtml
Check whether First title is set and assign Second if not.
A: You can simply check the content of headTitle and if it's the default then write something else, like:
if($this->headTitle() == '<title></title>') {
    $this->headTitle('foo');
}
Or write yourself a view helper to save yourself some writing time and have a function like $this->headTitleIfEmpty('foo'); which does the above, so you have a short tag in your templates.
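A minimal sketch of such a helper for Zend Framework 1 (the class name and "My_" prefix are just examples, not an existing API):
class My_View_Helper_HeadTitleIfEmpty extends Zend_View_Helper_Abstract
{
    public function headTitleIfEmpty($title)
    {
        // Only set the title if nothing has been placed in headTitle yet
        if ($this->view->headTitle()->toString() == '<title></title>') {
            $this->view->headTitle($title);
        }
        return $this->view->headTitle();
    }
}
Register the helper path in your bootstrap, then call $this->headTitleIfEmpty('foo') from any view script.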
{ "language": "en", "url": "https://stackoverflow.com/questions/7612868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Format special text within an HTML textarea
I am developing a web editor for SQL. I would like to be able to style special SQL keywords within the text (select, create, from) differently from other text, as the user types. Is this possible in HTML? Is there a third-party library/plugin I can use for this?
A: Try EditArea, it has built-in SQL highlighting support.
A: Formatting inside a textarea is minimal at best. You might try the approach that Stack Overflow uses, which is to let your user type in a textarea and then display another box with the input formatted. This is done at SO using Google Prettify. This includes SQL formatting as well as a list of other languages, detected and formatted automatically.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: SortOn an m-array of objects
Alright, so I have an m-array (or an array of arrays, since ActionScript doesn't really have m-arrays) and each array in the superarray has a number of objects created at different times in it. I want to sort the superarray in descending order of the value of the "time" parameter of the object at index 0 in each subarray. I've tried superarray.sortOn([0].time, Array.DESCENDING); and superarray.sortOn("0.time", Array.DESCENDING); but this doesn't seem to work. Any suggestions? Will I just have to write my own sort function to do this? If so, what's the best way to go about it?
A: Try using the Array.sort function, passing a compare function. Something like this:
var superarray:Array = [
    [{time:900}, {time:715}, {time:655}],
    [{time:450}, {time:333}, {time:100}],
    [{time:999}, {time:75}, {time:30}]
];

var sorted:Array = superarray.sort(function(A:Array, B:Array):int {
    // compare B to A so the result is in descending order of time
    return ObjectUtil.numericCompare(B[0].time, A[0].time);
});
{ "language": "en", "url": "https://stackoverflow.com/questions/7612886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: hyperlink whose path is only a forward slash (/)
I have been asked to make some changes to a friend's company website. It uses a PHP insert file for the header on each page, which is useful as the navigation etc. is the same on every page. The following code designates the company logo on every page:
<div id="logo">
    <a href="/"></a>
</div>
As you can see, the href of the a tag contains only a forward slash / as its path. The link is working fine, and connects to the index.php page. I'm wondering how it is doing this? Seeing as the default page for the domain is controlled by the server config file, is this a shortcut to link to whatever the default page is designated as? I've never seen this done before, and I can't seem to find any documentation concerning it. I appreciate any information you can provide.
A: That link brings you to the public root, and then the default file kicks in. It's the relative equivalent of an absolute path, such as http://stackoverflow.com/ In Linux and other Unix-like operating systems, a forward slash is used to represent the root directory, which is the directory that is at the top of the directory hierarchy and that contains all other directories and files on the system. Thus every absolute path, which is the address of a filesystem object (e.g., file or directory) relative to the root directory, begins with a forward slash. Forward slashes are also used in URLs (universal resource locators) to separate directories and files, because URLs are based on the UNIX directory structure. A major difference from the UNIX usage is that they begin with a scheme (e.g., http or ftp) rather than a root directory represented by a forward slash and that the scheme is followed directly by the sequence of a colon and two consecutive forward slashes to indicate the start of the directories and file portion of the URL. via: http://www.linfo.org/forward_slash.html
A: It is a relative URI. Since it consists of just the path part, it maintains the current scheme, host, port etc. and so takes you to http://www.example.com/ (assuming you were on http://www.example.com/foo/bar?baz=x#123 ). The browser then requests / from www.example.com using http on the default port (80). The server then decides what to send back. How it does this depends on what the server is and how it is configured. Since you mention index.php there will be something that tells the server to use that. If we use Apache as an example, that will be a combination of the DirectoryIndex directive and something to tell Apache how to handle PHP programs.
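For illustration, a minimal (hypothetical) Apache configuration showing the two pieces the last answer mentions: DirectoryIndex choosing the default file for "/", and a handler telling Apache to run it through PHP.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example

    # Which file to serve when the request path is just "/"
    DirectoryIndex index.php index.html

    # Classic mod_php handler mapping for .php files
    AddHandler application/x-httpd-php .php
</VirtualHost>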
{ "language": "en", "url": "https://stackoverflow.com/questions/7612888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Solr: exact phrase query with a EdgeNGramFilterFactory In Solr (3.3), is it possible to make a field letter-by-letter searchable through a EdgeNGramFilterFactory and also sensitive to phrase queries? By example, I'm looking for a field that, if containing "contrat informatique", will be found if the user types: * *contrat *informatique *contr *informa *"contrat informatique" *"contrat info" Currently, I made something like this: <fieldtype name="terms" class="solr.TextField"> <analyzer type="index"> <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> <tokenizer class="solr.LowerCaseTokenizerFactory"/> <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15" side="front"/> </analyzer> <analyzer type="query"> <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> <tokenizer class="solr.LowerCaseTokenizerFactory"/> </analyzer> </fieldtype> ...but it failed on phrase queries. When I look in the schema analyzer in solr admin, I find that "contrat informatique" generated the followings tokens: [...] contr contra contrat in inf info infor inform [...] So the query works with "contrat in" (consecutive tokens), but not "contrat inf" (because this two tokens are separated). I'm pretty sure any kind of stemming can work with phrase queries, but I cannot find the right tokenizer of filter to use before the EdgeNGramFilterFactory. A: Exact phrase search does not work because of query slop parameter = 0 by default. Searching for a phrase '"Hello World"' it searches for terms with sequential positions. I wish EdgeNGramFilter had a parameter to control output positioning, this looks like an old question. By setting qs parameter to some very high value (more than maximum distance between ngrams) you can get phrases back. This partially solves problem allowing phrases, but not exact, permutations will be found as well. So that search for "contrat informatique" would match text like "...contract abandoned. Informatique..." To support exact phrase query i end up to use separate fields for ngrams. 
Steps required: Define separate field types to index regular values and grams: <fieldType name="text" class="solr.TextField" omitNorms="false"> <analyzer> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> </analyzer> </fieldType> <fieldType name="ngrams" class="solr.TextField" omitNorms="false"> <analyzer type="index"> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15" side="front"/> </analyzer> <analyzer type="query"> <tokenizer class="solr.StandardTokenizerFactory"/> <filter class="solr.LowerCaseFilterFactory"/> </analyzer> </fieldType> Tell solr to copy fields when indexing: You can define separate ngrams reflection for each field: <field name="contact_ngrams" type="ngrams" indexed="true" stored="false"/> <field name="product_ngrams" type="ngrams" indexed="true" stored="false"/> <copyField source="contact_text" dest="contact_ngrams"/> <copyField source="product_text" dest="product_ngrams"/> Or you can put all ngrams into one field: <field name="heap_ngrams" type="ngrams" indexed="true" stored="false"/> <copyField source="*_text" dest="heap_ngrams"/> Note that you'll not be able to separate boosters in this case. And the last thing is to specify ngrams fields and boosters in the query. One way is to configure your application. Another way is to specify "appends" params in the solrconfig.xml <lst name="appends"> <str name="qf">heap_ngrams</str> </lst> A: As alas I could not manage to use a PositionFilter right like Jayendra Patil suggested (PositionFilter makes any query a OR boolean query), I used a different approach. Still with the EdgeNGramFilter, I added the fact that each keyword the user typed in is mandatory, and disabled all phrases. So if the user ask for "cont info", it transforms to +cont +info. It's a bit more permissive that a true phrase would be, but it managed to do what I want (and doesn't return results with only one term from the two). The only con against this workaround is that terms can be permutated in the results (so a document with "informatique contrat" will also be found), but it's not that a big deal. A: Here is what I was thinking - For the ngrams to be phrase matched the position of the tokens generated for each word should be the same. I checked for the edge grams filter and it increments the tokens, and didn't find any parameter to prevent it. There is a position filter available and this maintains the tokens position to the same token as to the begining. So if the following configuration is used all tokens are at the same position and it matches the phrase query (same token positions are matched as phrases) I checked it through the anaylsis tool and the queries matched. 
So you might want to try the hint :- <analyzer type="index"> <tokenizer class="solr.WhitespaceTokenizerFactory" /> <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt" /> <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> <filter class="solr.LowerCaseFilterFactory" /> <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15" side="front"/> <filter class="solr.PositionFilterFactory" /> </analyzer> A: I've made a fix to EdgeNGramFilter so positions within a token are not incremented anymore: public class CustomEdgeNGramTokenFilterFactory extends TokenFilterFactory { private int maxGramSize = 0; private int minGramSize = 0; @Override public void init(Map<String, String> args) { super.init(args); String maxArg = args.get("maxGramSize"); maxGramSize = (maxArg != null ? Integer.parseInt(maxArg) : EdgeNGramTokenFilter.DEFAULT_MAX_GRAM_SIZE); String minArg = args.get("minGramSize"); minGramSize = (minArg != null ? Integer.parseInt(minArg) : EdgeNGramTokenFilter.DEFAULT_MIN_GRAM_SIZE); } @Override public CustomEdgeNGramTokenFilter create(TokenStream input) { return new CustomEdgeNGramTokenFilter(input, minGramSize, maxGramSize); } } public class CustomEdgeNGramTokenFilter extends TokenFilter { private final int minGram; private final int maxGram; private char[] curTermBuffer; private int curTermLength; private int curGramSize; private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class); private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class); private final PositionIncrementAttribute positionIncrementAttribute = addAttribute(PositionIncrementAttribute.class); /** * Creates EdgeNGramTokenFilter that can generate n-grams in the sizes of the given range * * @param input {@link org.apache.lucene.analysis.TokenStream} holding the input to be tokenized * @param minGram the smallest n-gram to generate * @param maxGram the largest n-gram to generate */ public CustomEdgeNGramTokenFilter(TokenStream input, int minGram, int maxGram) { super(input); if (minGram < 1) { throw new IllegalArgumentException("minGram must be greater than zero"); } if (minGram > maxGram) { throw new IllegalArgumentException("minGram must not be greater than maxGram"); } this.minGram = minGram; this.maxGram = maxGram; } @Override public final boolean incrementToken() throws IOException { while (true) { int positionIncrement = 0; if (curTermBuffer == null) { if (!input.incrementToken()) { return false; } else { positionIncrement = positionIncrementAttribute.getPositionIncrement(); curTermBuffer = termAtt.buffer().clone(); curTermLength = termAtt.length(); curGramSize = minGram; } } if (curGramSize <= maxGram) { if (!(curGramSize > curTermLength // if the remaining input is too short, we can't generate any n-grams || curGramSize > maxGram)) { // if we have hit the end of our n-gram size range, quit // grab gramSize chars from front int start = 0; int end = start + curGramSize; offsetAtt.setOffset(start, end); positionIncrementAttribute.setPositionIncrement(positionIncrement); termAtt.copyBuffer(curTermBuffer, start, curGramSize); curGramSize++; return true; } } curTermBuffer = null; } } @Override public void reset() throws IOException { super.reset(); curTermBuffer = null; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7612889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Does instantiating and initializing work only when tied to button click? Beginner question. How come I can do this: Public Class Form1 Private StudentsInMyRoom As New ArrayList Public Class student Public name As String Public courses As ArrayList End Class Private Sub btnCreateStudent_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnCreateStudent.Click Dim objStudent As New student objStudent.name = "Ivan" objStudent.courses = New ArrayList StudentsInMyRoom.Add(objStudent) End Sub End Class But I CANNOT do this: Public Class Form1 Private StudentsInMyRoom As New ArrayList Public Class student Public name As String Public courses As ArrayList End Class Dim objStudent As New student objStudent.name = "Ivan" objStudent.courses = New ArrayList StudentsInMyRoom.Add(objStudent) End Class In the second example, all of the objStudent.etc get squiggly underlined and it says "declaration expected" when I hover over it. It's the same code except now it is not tied to clicking a button. Can't figure out what the difference is. A: It's because the implementation needs to be in a method; the way you have it, the code could never be executed - how would you reference this code from elsewhere? It doesn't have to be tied to a click, however: Private Sub AnyNameYouLike() Dim objStudent As New student objStudent.name = "Ivan" objStudent.courses = New ArrayList StudentsInMyRoom.Add(objStudent) End Sub Will work. A: Rather than tell you how to fix this code directly, I'm going to explain what I think is going wrong with your thought process, so you can also do a better job writing code in the future. What I see here is a simple misunderstanding, common for someone new to programming, of how classes work. When you build and define a class, you are not (yet) allocating any memory in the computer, and you are not yet telling the computer to do anything. All you are doing is telling the computer how an object might look at some point in the future. It's not until you actually create an instance of that class that anything happens: Public Class SampleClass Public MyField As String End Class 'Nothing has happened yet Public myInstance As New SampleClass() 'Now finally we have something we can work with, ' but we still haven't done anything myInstance.MyField = "Hello World" 'It's only after this last line that we put a string into memory Classes can only hold a few specific kinds of things: Fields, Properties, Delegates (events), and Methods (Subs and Functions). All of these things in the class are declarations of something, rather than the thing itself. Looking at your samples, the code from your second example belongs inside of a method. If you want this code to run every time you work with a new instance of your class, then there is a special method, called a constructor, that you can use. That is declared like this: Public Class SampleClass Public MyField As String 'This is a constructor Public Sub New() MyField = "Hello World" End Sub End Class However, even after this last example you still haven't told the computer to do any work. Again, you must create an instance of the class before the code in that constructor will run. This is true of all code in all .Net programs anywhere. The way your program starts out is that the .Net framework creates an instance of a special object or form and then calls (runs) a specific method in that form to sort of get the ball rolling for your program. Everything else comes from there.
Eventually you will also learn about Shared items and Modules, that can (sort of) break this rule, in that you don't have to create an instance of the object before using it. But until you are comfortable working with instances, you should not worry about that too much. Finally, I want to point out two practices in your code that professional programmers would consider poor practice. The first is ArrayLists. I can forgive this, because I suspect you are following a course of study that just hasn't covered generics yet. I only bring it up so you can know not to get too attached to them: there is something better coming. The second is your "obj" prefix. This was considered good practice once upon a time, but is no longer fashionable and now thought to be harmful to the readability of your code. You should not use these prefixes.
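To tie those last two points together, here is a rough sketch - not the original poster's code, and the member names are only illustrative - of what the student example could look like with a constructor, a generic List(Of T) instead of ArrayList, and without the "obj" prefixes:

Public Class Student
    Public Name As String
    Public Courses As New List(Of String)

    ' Constructor: runs every time a new Student instance is created
    Public Sub New(ByVal studentName As String)
        Name = studentName
    End Sub
End Class

Public Class Form1
    Private studentsInMyRoom As New List(Of Student)

    Private Sub btnCreateStudent_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnCreateStudent.Click
        ' Only here, inside a method, does any work actually happen
        Dim newStudent As New Student("Ivan")
        studentsInMyRoom.Add(newStudent)
    End Sub
End Class

The executable statements still live inside methods (the constructor and the click handler); the class bodies themselves only declare what a Student and the form contain.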
{ "language": "en", "url": "https://stackoverflow.com/questions/7612890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is text-underline a property? I got a stylesheet which has a text-underline:none property, is it really a CSS property? A: No, it isn't a valid CSS property. Instead, you can use text-decoration: none. The text-underline property is often found in CSS generated by Microsoft Word.
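A minimal before/after sketch (the class name is made up for illustration):

/* not a real CSS property - typically left behind by Word-generated markup */
.plain-link { text-underline: none; }

/* standard CSS equivalent */
.plain-link { text-decoration: none; }

Browsers simply ignore the unknown text-underline declaration, so removing it or replacing it with text-decoration is safe either way.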
{ "language": "en", "url": "https://stackoverflow.com/questions/7612892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to get the modified time of a file being uploaded in JavaScript? Is there ever a way possible to get the actual creation / modification time of the file being uploaded, using JavaScript? As for PHP, using filectime() and filemtime(), it only shows the date / time the file is uploaded, and not the time the file is actually created / modified on the source. In short, what I want is to check the m-time of a file before/during/after upload (where-ever possible) and decide whether or not to store the file on the server, and report the same back to the client. A: If you're talking about the file date/time on the user's machine, you can get that via the File API (support), which provides lastModified, which is the date/time as a number of milliseconds since The Epoch (if you want a Date, you can pass that into new Date). (There's also the deprecated lastModifiedDate, but that is deprecated and not supported on Safari [at least].) The File API is universally supported in modern browsers (the particular feature you'd be using is the File object). You'd get the value from the File object and include that information in a separate (for instance, hidden) field. Here's a rough-but-complete example of reading the last modified date (live copy): <!DOCTYPE HTML> <html> <head> <meta http-equiv="Content-type" content="text/html;charset=UTF-8"> <title>Show File Modified</title> <style type='text/css'> body { font-family: sans-serif; } </style> <script type='text/javascript'> function showFileModified() { var input, file; // Testing for 'function' is more specific and correct, but doesn't work with Safari 6.x if (typeof window.FileReader !== 'function' && typeof window.FileReader !== 'object') { write("The file API isn't supported on this browser yet."); return; } input = document.getElementById('filename'); if (!input) { write("Um, couldn't find the filename element."); } else if (!input.files) { write("This browser doesn't seem to support the `files` property of file inputs."); } else if (!input.files[0]) { write("Please select a file before clicking 'Show Modified'"); } else { file = input.files[0]; write("The last modified date of file '" + file.name + "' is " + new Date(file.lastModified)); } function write(msg) { var p = document.createElement('p'); p.innerHTML = msg; document.body.appendChild(p); } } </script> </head> <body> <form action='#' onsubmit="return false;"> <input type='file' id='filename'> <input type='button' id='btnShowModified' value='Show Modified' onclick='showFileModified();'> </form> </body> </html> The reason you couldn't get the time from the uploaded file on the server is that only the content of the file is transmitted in the request, not the client's filesystem metadata. A: Perhaps you could use javascript to get the last modified time, then use that in some other javacript to sort on that. This time will be in GMT. var xmlhttp = createXMLHTTPObject(); xmlhttp.open("HEAD", "http://myurl/interesting_image.jpg" ,true); xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4) { alert("Last modified: "+ var lastModTimeForInterestingImage = xmlhttp.getResponseHeader("Last-Modified")) } } xmlhttp.send(null); A: JavaScript does not have access to the local filesystem, so you can't get to this information without using Flash, Java or Active-x.
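If the end goal is to let the server-side PHP decide whether to keep the file based on its original modification time, one approach is to read the value with the File API on the client and send it along with the upload, since only the file's bytes (not its filesystem metadata) travel in the request. This is just a sketch - the field names and the upload URL are made up, and it assumes a browser with FormData support:

var input = document.getElementById('filename');
var file = input.files && input.files[0];
if (file) {
    var formData = new FormData();
    formData.append('upload', file);
    // milliseconds since the epoch, as reported by the user's machine
    formData.append('lastModified', file.lastModified);

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload.php', true);
    xhr.onload = function () {
        // the server can answer with whether it decided to store the file
        alert(xhr.responseText);
    };
    xhr.send(formData);
}

Keep in mind the value comes from the client's clock and filesystem, so treat it as a hint rather than something trustworthy.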
{ "language": "en", "url": "https://stackoverflow.com/questions/7612894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Google Maps Grey Squares Appear I have searched the web for this kind of problem and fixes, but with no luck. My problem is not the all-grey Google map that indicates an API problem; the problem is that I see part of the map and the other parts are grey, and it's random: sometimes the map is at the top and the grey spot is at the bottom, sometimes on the left, etc. A: I get this a lot, generally on slower internet for some reason. If you check the console log in your browser (Firebug, and I believe IE/Chrome/FF have one built in), there should be some sort of error coming from the Google Maps API. I haven't found a reliable programmatic fix for this - I generally just move the map around a bit, which forces Google to reload the section, and it generally works then.
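A programmatic version of the "move the map around a bit" trick, assuming the Maps JavaScript API v3 and that your map instance lives in a variable called map (both assumptions - adjust to your own code): after the container div reaches its final size, or when the partial grey tiles appear, trigger the API's resize event so it re-measures the container and redraws the missing sections:

var center = map.getCenter();               // remember where the map was
google.maps.event.trigger(map, 'resize');   // force the API to re-measure and redraw its tiles
map.setCenter(center);                      // a resize can shift the viewport, so put it back

This is the usual remedy when the map is initialised while its container is hidden or still changing size; if the tiles are failing because of network errors, it will only help once the connection recovers.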
{ "language": "en", "url": "https://stackoverflow.com/questions/7612895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: In Excel how to fill remaining cell with "~" after name for a specific character size of 15 In Excel 2003 I have a positional file and I want to space fill with a delimiter of "~" for 15 total characters. How can I ensure that each first name has exactly 15 characters, padded with "~" at the end as the space fill? I tried to do this in custom format mode, but it doesn't work correctly. Examples: SALLY~~~~~~~~~~ TOM~~~~~~~~~~~~ FRED~~~~~~~~~~~ etc... A: Alternative: =A1 & REPT("~", 15 - LEN(A1)) A: Let's say the names are in column A. Put this in cell B1: =LEFT(A1&"~~~~~~~~~~~~~~~",15) and drag it down. I'm just adding fifteen squiggles to the right and then cutting it down to the leftmost 15 characters to give you equal widths.
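Working either formula through by hand for the name TOM (3 characters) shows where the 15-character width comes from:

=LEFT("TOM" & "~~~~~~~~~~~~~~~", 15)    gives  TOM~~~~~~~~~~~~   (3 letters kept, then 12 of the 15 appended tildes)
="TOM" & REPT("~", 15 - LEN("TOM"))     gives  TOM~~~~~~~~~~~~   (REPT appends 15 - 3 = 12 tildes)

Both produce exactly 15 characters. One difference worth knowing: if a name is already longer than 15 characters, the LEFT version silently truncates it, while the REPT version errors with #VALUE! because the repeat count goes negative.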
{ "language": "en", "url": "https://stackoverflow.com/questions/7612899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: IEquatables implementation only called if the base Equals is overridden I have the following class class Product : IEquatable<Product> { public Guid Id { get; set; } public bool Equals(Product other) { return Id.Equals(other.Id); } } If i try and create a unique list of the items of a list as follows Guid a = Guid.NewGuid(); List<Product> listA = new List<Product>(); listA.Add(new Product(){Id = a}); List<Product> listB = new List<Product>(); listB.Add(new Product() { Id = a }); Debug.Assert(listA.Union(listB).Count()==1); two items are returned, this occurs until I override the object.Equals method, once i do this and my code is as follows class Product : IEquatable<Product> { public Guid Id { get; set; } public bool Equals(Product other) { if (ReferenceEquals(null, other)) return false; if (ReferenceEquals(this, other)) return true; return other.Id.Equals(Id); } public override bool Equals(object obj) { if (ReferenceEquals(null, obj)) return false; if (ReferenceEquals(this, obj)) return true; if (obj.GetType() != typeof (Product)) return false; return Equals((Product) obj); } public override int GetHashCode() { return Id.GetHashCode(); } } my IEquatable Equals method is now called, but only if i override the base method, furthermore if i put a breakpoint on the object equals method it is never called. Why is this? ----UPDATE So with the product class class Product : IEquatable<Product> { public Guid Id { get; set; } public bool Equals(Product other) { return Id.Equals(other.Id); } public override int GetHashCode() { return Id.GetHashCode(); } } If GetHashCode is removed, The IEquatable implementation of Equals is never hit I understand you should generally implement Equals and GetHashCode together, is this why? A: Documentation for Enumerable.Union says: The default equality comparer, Default, is used to compare values of the types that implement the IEqualityComparer(Of T) generic interface. To compare a custom data type, you need to implement this interface and provide your own GetHashCode and Equals methods for the type. You're implementing IEquatable. You need to implement IEqualityComparer. A: The problem is in the original implementation you're not overriding the GetHashcode method. Under the hood Union uses a Set<T> style structure to remove duplicates. This structure puts objects into buckets based on the value returned from GetHashCode. If the hash code doesnt't match up between equal objects (which it must do) then they can potentially be put in different buckets and never compared with Equals In general if you implement IEquatable<T> you should always * *Override Object.Equals *Override Object.GetHashCode Not doing both will land you in situations like this. Note, your implementation could be simplified a bit class Product : IEquatable<Product> { public Guid Id { get; set; } public bool Equals(Product other) { if (ReferenceEquals(null, other)) { return false; } return other.Id == this.Id; } public override bool Equals(object obj) { return Equals(obj as Product); } public override int GetHashCode() { return Id.GetHashCode(); } }
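If you would rather not touch Product at all, the same effect can be had by putting the comparison logic into a separate comparer and using the Union overload that accepts an IEqualityComparer<T>. This is a sketch along those lines, reusing the Product/Id names from the question (it needs System.Linq and System.Collections.Generic):

class ProductIdComparer : IEqualityComparer<Product>
{
    public bool Equals(Product x, Product y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return x.Id == y.Id;
    }

    public int GetHashCode(Product obj)
    {
        // same rule as Equals: hash on Id only
        return obj == null ? 0 : obj.Id.GetHashCode();
    }
}

// usage
Debug.Assert(listA.Union(listB, new ProductIdComparer()).Count() == 1);

Note the comparer has to supply both Equals and GetHashCode for the same reason described above: Union buckets elements by hash code before it ever calls Equals.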
{ "language": "en", "url": "https://stackoverflow.com/questions/7612910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Set UTF-8 as default string encoding in Heroku I need to change the default ruby string encoding to UTF-8 in Heroku. For some reason it is US-ASCII. $ heroku console Ruby console for myapp.heroku.com >> "a".encoding => #<Encoding:ASCII-8BIT> However, if I run irb locally I get a different result: $ irb ruby-1.9.2-p136 :001 > "a".encoding => #<Encoding:UTF-8> Both run on ruby 1.9.2. I've tried setting this as well, but didn't work: Encoding.default_internal = Encoding.default_external = "UTF-8" Ideas? Thanks, Felipe A: As per the Heroku support staff, this is the magic thing: heroku config:add LANG=en_US.UTF-8 Although heroku console will keep reporting strings encoding as ASCII-8BIT, your actuall app will be running with the correct encoding, based on the LANG config var. You can double check that by doing this: $ heroku run bash Running bash attached to terminal... up, run.2 u20415@022e95bf-3ab6-4291-97b1-741f95e7fbda:/app$ irb irb(main):001:0> "a".encoding => #<Encoding:UTF-8>
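If you also want the defaults pinned at the application level rather than relying only on the LANG config var, a small initializer works as a belt-and-braces measure on Ruby 1.9; the file path below is just the usual Rails convention, so adjust it to wherever your app's boot code lives:

# config/initializers/encoding.rb
Encoding.default_external = Encoding::UTF_8
Encoding.default_internal = Encoding::UTF_8

This complements (rather than replaces) the LANG config var, which is the approach Heroku's support staff recommend above.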
{ "language": "en", "url": "https://stackoverflow.com/questions/7612912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Silverlight + WCF: configure collection type per call? When configuring our service references in Silverlight, there is an option to choose the collection type that is generated for calls that return arrays, like so: It is an option that you can see when you use the 'configure service reference' context menu item. I'd upload an image, but I can't do that from work... I was wondering if there was a way to configure them on a per-call basis, so that I could have an observable collection in some cases, or an array in others? Is this type of thing possible? A: It seems that out of the box, no, this type of configuration is not possible. I did stumble across a similar question while doing research on third party tools however. How can I customize WCF client code generation? Its answer lists a number of different option for generating client code that is custom tailored to your needs. I haven't tried any of them out though.
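Short of regenerating the proxy with one of those third-party tools, a low-tech compromise is to leave the service reference configured for arrays (or List<T>) and wrap the result in an ObservableCollection only for the calls that need change notification. A sketch - the service client, callback, and type names here are invented purely for illustration:

client.GetProductsCompleted += (s, e) =>
{
    // convert this one call's result; other calls can keep using the raw array
    var products = new ObservableCollection<Product>();
    foreach (var p in e.Result)
    {
        products.Add(p);
    }
    ProductsList.ItemsSource = products;
};
client.GetProductsAsync();

It is more typing than having the proxy generate ObservableCollection everywhere, but it keeps the per-call choice in your own code instead of in the generated client.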
{ "language": "en", "url": "https://stackoverflow.com/questions/7612926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there any value in creating an Integer object from an int? I was walking through some code and came across these lines: public final static int CM_VALUE = 0x200; public final static Integer CM = new Integer(CM_VALUE); Does anyone know why the author held the value in hex before passing it to Integer? Does doing it this way add any benefits to the code? A: My vote is for convenience. The author may have needed to use it in a Collection and wanted to init these statics at class init time for speed. ...tons of speculation on this one! A: This depends on the usage of the value of CM. It could add clarity if it was being used as a bitmask or some other bit-related operation. It makes no difference to the compiler what base a hard-coded value is entered in. However it is strange that the author would then convert it to an Integer object instead of using it as a plain int. A: Hexadecimal literals represent exactly the same bits as their decimal counterparts. So no, it's not really any better for the computer. Depending on the use it might be more readable for the developer, however. For example this: private final int FLAG_A = 0x01; private final int FLAG_B = 0x02; private final int FLAG_C = 0x04; private final int FLAG_D = 0x08; private final int FLAG_E = 0x10; private final int FLAG_F = 0x20; private final int FLAG_G = 0x40; private final int FLAG_H = 0x80; is probably easier to grasp than this (which is equivalent, however!): private final int FLAG_A = 1; private final int FLAG_B = 2; private final int FLAG_C = 4; private final int FLAG_D = 8; private final int FLAG_E = 16; private final int FLAG_F = 32; private final int FLAG_G = 64; private final int FLAG_H = 128; A: The value might be a bit mask or might be an externally defined constant. In the case of a bit mask, it's best to store them as hex values to make it more obvious what you are doing. If it was an external constant then it makes sense to use the same base as the definition to make finding any typos easier. A: Using hexadecimal is useful when you want to deal with binary representation, so 0x200 is simpler than the binary representation 1000000000. A: Maybe the place where the programmer who wrote this works has a rule that there must be no hard-coded constants in source code, and that constants should always be used by defining a static final variable for them. In my opinion, such a rule can be good, but in this example it has probably been taken too far. Note that in general it's better to never use new Integer(...); use Integer.valueOf(...) instead. Class Integer can reuse objects if you use valueOf rather than explicitly creating a new Integer object. public final static int CM_VALUE = 0x200; public final static Integer CM = Integer.valueOf(CM_VALUE); Even better, just use autoboxing, which makes it even less necessary to have CM_VALUE: public final static Integer CM = 0x200;
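To make the bitmask point concrete, hex constants line up naturally with the bitwise operations they are usually combined with; the flag names below are invented purely for illustration:

static final int FLAG_READ  = 0x01;
static final int FLAG_WRITE = 0x02;
static final int FLAG_EXEC  = 0x04;

static boolean canWrite(int flags) {
    return (flags & FLAG_WRITE) != 0;   // test exactly one bit
}

static int readWrite() {
    return FLAG_READ | FLAG_WRITE;      // 0x03: combine bits
}

Written in decimal (1, 2, 4, ...) the same code works identically, but the hex form makes the "one bit per flag" pattern obvious at a glance, which is presumably why 0x200 was chosen in the original snippet.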
{ "language": "en", "url": "https://stackoverflow.com/questions/7612928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Assemblies Visual Studio 2010 using in monodevelop with monotouch I'm new in monodevelop and I have a question. I have some assemblies developed in Visual Studio 2010 in C# and I would like to use them with monotouch in Mac, my question is: do I have to use the source and generate the assemblies with monodevelop in Mac or just I need the assemblies and add them to my solution as a reference? A: The framework profile used by MonoTouch was originally based on the Silverlight profile (aka 2.1) and was updated to include some, but not all, of the new API provided by the .NET framework 4.0. As such you might be able to reuse assemblies, without recompiling them. That will depends if all the API are available, if you refer to assemblies not available in MonoTouch, under what profile (3.5 or 4.0) you're building the code... However things would be a lot easier if you have the source code and are able to re-compile it inside MonoDevelop. That would provide you with debugging symbols (the .mdb files) also also catch, at compile time (not at run time), and fix code using any missing API (from MonoTouch). A: You should be able to use the same assemblies as they are (no need for a recompile). If the assemblies depend on other nonstandard assemblies it might get tricky and you may have to deploy other assemblies along side the ones you want and then that may cause it's own problems if they are not open source or licenses are required to redistribute, etc.. Give it a shot, see what happens.
{ "language": "en", "url": "https://stackoverflow.com/questions/7612933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: LINQ to SQL in ASP.NET MVC and Repository pattern I'm trying to follow a tutorial from the ASP.NET MVC website which uses LINQ to Entities, but I decided to use LINQ to SQL instead. I'm at the point where a new table is created called Groups which has a relationship to the Contacts table. Essentially it's a one-to-many relationship where a Group can have many contacts and a Contact can only have 1 Group. Please see below the example code with CRUD operations. I'm not sure how to implement this in LINQ to SQL. For example, how do you do this in LINQ to SQL: return _entities.GroupSet.Include("Contacts").FirstOrDefault(); Are you supposed to do a JOIN for the two tables or is there another way? Example CODE: using System.Collections.Generic; using System.Linq; using System; namespace ContactManager.Models { public class EntityContactManagerRepository : ContactManager.Models.IContactManagerRepository { private ContactManagerDBEntities _entities = new ContactManagerDBEntities(); // Contact methods public Contact GetContact(int id) { return (from c in _entities.ContactSet.Include("Group") where c.Id == id select c).FirstOrDefault(); } public Contact CreateContact(int groupId, Contact contactToCreate) { // Associate group with contact contactToCreate.Group = GetGroup(groupId); // Save new contact _entities.AddToContactSet(contactToCreate); _entities.SaveChanges(); return contactToCreate; } public Contact EditContact(int groupId, Contact contactToEdit) { // Get original contact var originalContact = GetContact(contactToEdit.Id); // Update with new group originalContact.Group = GetGroup(groupId); // Save changes _entities.ApplyPropertyChanges(originalContact.EntityKey.EntitySetName, contactToEdit); _entities.SaveChanges(); return contactToEdit; } public void DeleteContact(Contact contactToDelete) { var originalContact = GetContact(contactToDelete.Id); _entities.DeleteObject(originalContact); _entities.SaveChanges(); } public Group CreateGroup(Group groupToCreate) { _entities.AddToGroupSet(groupToCreate); _entities.SaveChanges(); return groupToCreate; } // Group Methods public IEnumerable<Group> ListGroups() { return _entities.GroupSet.ToList(); } public Group GetFirstGroup() { return _entities.GroupSet.Include("Contacts").FirstOrDefault(); } public Group GetGroup(int id) { return (from g in _entities.GroupSet.Include("Contacts") where g.Id == id select g).FirstOrDefault(); } public void DeleteGroup(Group groupToDelete) { var originalGroup = GetGroup(groupToDelete.Id); _entities.DeleteObject(originalGroup); _entities.SaveChanges(); } } } A: You need to specify some DataLoadOptions to create the join for you. To do this, you have to create a DataContext for each type of query with the correct DataLoadOptions: var db = new WhateverDbDataContext(); DataLoadOptions options = new DataLoadOptions(); options.LoadWith<Group>(g => g.Contacts); db.LoadOptions = options; return db.SomeTable.FirstOrDefault(); A: LINQ to SQL does not support the Include method. If you don't care whether the relationship is lazy loaded, then you don't have to do anything. If you want it to be eager loaded, then you have to use the more convoluted DataLoadOptions. See this article: http://blog.stevensanderson.com/2007/12/02/linq-to-sql-lazy-and-eager-loading-hiccups/
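Putting that together, a LINQ to SQL version of GetGroup might look like the sketch below. The data-context class name (ContactManagerDataContext) and the Groups/Contacts property names are assumptions about what the O/R designer generated for your tables, and note that LoadOptions has to be assigned before the context executes its first query:

public Group GetGroup(int id)
{
    var db = new ContactManagerDataContext();

    var options = new DataLoadOptions();
    options.LoadWith<Group>(g => g.Contacts);   // eager-load the child collection, like Include("Contacts")
    db.LoadOptions = options;

    return (from g in db.Groups
            where g.Id == id
            select g).FirstOrDefault();
}

No explicit JOIN is needed: the association between Group and Contacts defined in the designer, plus LoadWith, is what makes LINQ to SQL fetch the contacts alongside the group.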
{ "language": "en", "url": "https://stackoverflow.com/questions/7612934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Jquery Mobile slide list I'm trying to do a list with jQueryMobile like in the twitter application. Video of what I'm looking for: http://www.youtube.com/watch?v=l7gTNpPTChM But I have 2 problems: 1) Each row has a class .mailRow and the .live("tap") event works but .live("swipe") doesn't work on the mobile and does work on the computer when I do it with the right button. 2) I managed to "hide" the row with $('.mailRow').live('swipe', function(e){ $(this).animate({ marginLeft: "100%"} , 800); }); But I don't know how to put another div underneath so it'll be visible when the animation ends. This is how the list elements looks like in HTML: <li data-theme="c" class="ui-btn ui-btn-icon-right ui-li ui-btn-up-c"> <div id="12345" class="mailRow" style="margin-left: 100%; "> <div class="ui-btn-inner ui-li"><div class="ui-btn-text"> <a href="" class="ui-link-inherit"> <p class="ui-li-aside ui-li-desc"><strong>30/09/2011 11:09:34</strong></p> <h3 class="ui-li-heading">USER1</h3> <p class="ui-li-desc"><strong>Re: this is a test</strong></p> <p class="ui-li-desc">TESTING THE MOBILE VERSION...</p> </a> </div><span class="ui-icon ui-icon-arrow-r ui-icon-shadow"></span></div> </div> </li> UPDATE : I found that the swipe event is not working becase there's an "a" tag inside the div. I don't know how to fix that. A: Well I found the solution myself, and I would like to share it, just in case somebody will face the same problem: New Style added: <style type="text/css"> .hidden { visibility: hidden; height: 0px !important; padding: 0px !important; margin: 0px !important; } </style> List elements HTML: <li data-theme="c" mail-id="12345" class="mailRow"> <div class="buttonsRow hidden"> <a href="#" data-role="button" data-iconpos="top" data-icon="back" data-inline="true">Reply</a> <a href="#" data-role="button" data-iconpos="top" data-icon="delete" data-inline="true">Delete</a> </div> <a href="#" class="messageRow"> <p data-role="desc" class="ui-li-aside"><strong>30/09/2011 11:09:34</strong></p> <h3 data-role="heading">USER1</h3> <p data-role="desc" ><strong>Re: this is a test/strong></p> <p data-role="desc" >TESTING THE MOBILE VERSION...</p> </a> </li> Javascript code: function mailLinks() { $('.mailRow').live('swiperight', function(e){ $(this).find('.messageRow').animate({ marginLeft: "100%"} , 800, function(){ $(this).parentsUntil('li').find(".ui-icon-arrow-r").addClass("ui-icon-arrow-l").removeClass("ui-icon-arrow-r"); $(this).parent().find('.buttonsRow').removeClass("hidden"); $(this).addClass("hidden"); }); }); $('.mailRow').live('swipeleft', function(e){ $(this).find('.buttonsRow').addClass("hidden"); $(this).find('.messageRow').removeClass("hidden"); $(this).find('.messageRow').animate({ marginLeft: "0%"} , 800, function(){ $(this).parentsUntil('li').find(".ui-icon-arrow-l").addClass("ui-icon-arrow-r").removeClass("ui-icon-arrow-l"); }); }); $('.mailRow').live('tap', function(e){ e.preventDefault(); idMail = $(this).attr('mail-id'); loadPage('read'); }); } It's not pretty, but it does work. A: don't know if it's important currently, but after jquery mobile 1.0 final was released, there is a tutorial which describes your "swipe menu". http://andymatthews.net/read/2011/02/23/Add-a-Twitter-for-iPhone-Style-Swipe-Menu-to-jQuery-Mobile
{ "language": "en", "url": "https://stackoverflow.com/questions/7612945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Will pthread_join halt the parent program if the child thread is still working?.. Will pthread_detach make it faster?? instead of using join? My program is something like this.. I wanted to know will I make my program slow, if I call pthread join??? void* a(void *a) { do---something();//which is a very long procedure, I mean takes a lot of time... pthread_exit(); } main() { while(1) { pthread_create(a); pthread_join(a); } } So, if I call pthread_join, will I halt at that point until the child thread finishes it execution, or do I go on and create one more thread ????? A: From the POSIX spec: The pthread_join() function shall suspend execution of the calling thread until the target thread terminates, unless the target thread has already terminated. If this is not what you want, either defer the call to pthread_join (putting all of the pthread_t's in a container so you can join them later) or use pthread_detach so you do not need to join them at all. A: the join call blocks until the thread exits A: You probably want to use non-blocking sockets to handle many clients without having to create a thread for each. See often quoted The C10K problem for more details. A: pthread_join() blocks until the child thread has exited, but you still want/need to call it in order for the child thread to be cleaned up properly once it has gone away. But as you say you don't want to have your main thread blocked, so how to deal with this? The way I would do it is to have the child thread send a message (via socket-pair or some other mechanism) back to its parent thread just before it exits.... when the parent thread receives this message, then it knows that now is a good time to call pthread_join(), since the child thread is already gone (or almost gone) and therefore pthread_join will never block (or at least not for more than a few milliseconds).
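As a concrete illustration of "create now, join later", the sketch below starts all the workers before waiting on any of them, so main is only blocked at the point where it genuinely needs the results; the worker body is just a stub standing in for the long-running do---something():

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4

static void *worker(void *arg)
{
    /* long-running work goes here */
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    int i;

    /* pthread_create does not block: all workers start running concurrently */
    for (i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    /* each join blocks only until that particular worker has exited */
    for (i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);

    printf("all workers finished\n");
    return 0;
}

(Compile with -pthread.) If you never need to wait for a worker at all, pthread_detach(workers[i]) - or creating the thread with the detached attribute - lets the system reclaim it automatically, but then there is nothing to join and no direct way to know when it finished.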
{ "language": "en", "url": "https://stackoverflow.com/questions/7612951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }