Q: Reading from a .pts file that has no header into a point cloud I have a file with a .pts extension which contains six columns of numbers. It is effectively a plain text file, since it has no defined header or anything else. A snippet, for example: 497074.93 5419772.04 266.04 12 1 1 497074.93 5419772.08 266.02 15 1 1 497074.93 5419772.09 266.05 13 1 1 497074.93 5419772.11 266.05 13 1 1 497074.94 5419772.14 266.02 11 1 1 497074.94 5419772.15 266.04 13 1 1 497074.94 5419772.17 266.04 14 1 1 497074.94 5419772.18 266.05 14 1 1 I have two questions here: * *Are these xyz values with RGB attached? *How can I load them into MATLAB and save them as a point cloud? The data was obtained from this link and, as best as I can tell, is supposed to contain a point cloud of some format. A: As far as reading the file in, you can use textscan: filename = '*** Full path to your text file ***'; fid = fopen(filename, 'r'); if fid == -1, error('Cannot open file! Check filename or location.'); end readdata = cell2mat(textscan(fid,'%f%f%f%f%f%f')); fclose(fid); This code saves the data in a matrix with six columns, provided the file contains nothing other than the numbers. The format %f can be changed according to your needs (see https://de.mathworks.com/help/matlab/ref/textscan.html). About the meaning of the data: when I click on the link I don't see any data, so please be more specific about that. Besides, why would you use data whose meaning you don't know?
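The answer above targets MATLAB; purely as an aside, the same six-column parse can be sketched in Python with NumPy. The filename below is a placeholder, and the split into coordinates plus attributes is an assumption — the question itself notes the last three columns are not confirmed to be RGB.

```python
# Sketch only: read a whitespace-delimited .pts file with no header.
import numpy as np

data = np.loadtxt("scan.pts")      # placeholder filename; result has shape (n_points, 6)
xyz = data[:, :3]                  # first three columns: x, y, z coordinates
attrs = data[:, 3:]                # remaining columns: intensity/flags, meaning not confirmed
print(xyz.shape, attrs.shape)
```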
{ "language": "en", "url": "https://stackoverflow.com/questions/45166024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to fix text overflow from a flex box I am working with a table whose row height and column width are configurable. Whenever the height or width changes, the text within a cell should remain vertically and horizontally centered. I have working CSS for a cell: { display: flex; align-items: center; /* vertical alignment */ justify-content: center; /* horizontal alignment */ } The problem is that for a cell value such as 90.45678423, if I shrink the column width, the left part of the text gets truncated (see pic). I don't want the left part of the text to be cut off on column resize; even at the minimum width the left part of the value should still be visible. I can achieve this behavior with display: block, but with that CSS I don't have access to the line height in the JS code, so I fail to keep the text vertically centered when the user changes the row height: { text-align: center; /* horizontal alignment */ vertical-align: middle; /* vertical alignment */ line-height: ? } A: If you want to center the text when it is smaller than the container, and align it to the left and let it overflow when it is bigger than the container, you can use justify-content: safe center; { display: flex; align-items: center; justify-content: safe center; }
{ "language": "en", "url": "https://stackoverflow.com/questions/63002183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to dynamically set and modify CSS in JavaScript? I have some JavaScript code that creates some div elements and sets their CSS properties. Because I would like to decouple the CSS logic from my JavaScript code, and because CSS is easier to read in its own .css file, I would like to set the CSS className of my element and then dynamically inject some values into the defined CSS property. Here is what I would like to do: style.css: .myClass { width: $insertedFromJS } script.js: var myElement = document.createElement("div"); myElement.className = "myClass"; I want to do something like this, but at that point myElement.style.width is empty: myElement.style.width.replaceAll("$insertedFromJS", "400px"); I think my problem here is that after the call to myElement.className = "myClass", the CSS is not yet applied. A: Setting the style might be accomplished by defining an in-page style declaration. Here is what I mean: var style = document.createElement('style'); style.type = 'text/css'; style.cssText = '.cssClass { color: #F00; }'; document.getElementsByTagName('head')[0].appendChild(style); document.getElementById('someElementId').className = 'cssClass'; However, the part about modifying it can be a lot trickier than you think. Some regex solutions might do a good job, but here is another way I found: if (!document.styleSheets) return; var csses = new Array(); if (document.styleSheets[0].cssRules) // Standards Compliant { csses = document.styleSheets[0].cssRules; } else { csses = document.styleSheets[0].rules; // IE } for (i=0;i<csses.length;i++) { if ((csses[i].selectorText.toLowerCase()=='.cssClass') || (csses[i].selectorText.toLowerCase()=='.borders')) { csses[i].style.cssText="color:#000"; } } A: If I understand your question properly, it sounds like you're trying to set placeholder text in your CSS file, and then use JavaScript to replace the placeholder with the CSS value you want to set for that class. You can't do that in the way you're trying to do it. In order to do that, you'd have to grab the content of the CSS file out of the DOM, manipulate the text, and then save it back to the DOM. But that's a really overly-complicated way to go about doing something that... myElement.style.width = "400px"; ...can do for you in a couple of seconds. I know it doesn't really address the issue of decoupling CSS from JS, but there's not really a whole lot you can do about that. You're trying to set CSS dynamically, after all. Depending on what you're trying to accomplish, you might want to try defining multiple classes and just changing the className property in your JS. A: Could you use jQuery for this? You could use $(".class").css("property", val); /* or use the .width property */ A: There is a jQuery plugin called jQuery Rule, http://flesler.blogspot.com/2007/11/jqueryrule.html I tried it to dynamically set some div sizes for a board game. It works in Firefox, not in Chrome. I didn't try IE9.
{ "language": "en", "url": "https://stackoverflow.com/questions/10420606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Parse Login self.logInViewController ambiguous reference to member ' logInViewController' I am using parse's provided login code for their latest ParseUI. I am confused why I am getting this error every single time I use self.logInViewController and self.signUpViewController, the error being: Ambiguous reference to member 'logInViewController' and 'signUpViewController' Here is my code: import Parse import ParseUI import UIKit class ViewController: UIViewController, PFLogInViewControllerDelegate, PFSignUpViewControllerDelegate { @IBAction func simpleAction(sender: AnyObject) { self.presentViewController(self.logInViewController, animated: true, completion: nil) } @IBAction func customAction(sender: AnyObject) { self.performSegueWithIdentifier("custom", sender: self) } @IBAction func logoutAction(sender: AnyObject) { PFUser.logOut() } override func viewDidLoad() { super.viewDidLoad() // Do any additional setup after loading the view, typically from a nib. } override func viewDidAppear(animated: Bool) { super.viewDidAppear(animated) if (PFUser.currentUser() == nil) { self.logInViewController.fields = PFLogInFields.UsernameAndPassword | PFLogInFields.LogInButton | PFLogInFields.SignUpButton | PFLogInFields.PasswordForgotten | PFLogInFields.DismissButton var logInLogoTitle = UILabel() logInLogoTitle.text = "Parse" self.logInViewController.logInView.logo = logInLogoTitle self.logInViewController.delegate = self var SignUpLogoTitle = UILabel() SignUpLogoTitle.text = "Parse" self.signUpViewController.signUpView.logo = SignUpLogoTitle self.signUpViewController.delegate = self self.logInViewController.signUpController = self.signUpViewController } } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } func logInViewController(logInController: PFLogInViewController!, shouldBeginLogInWithUsername username: String!, password: String!) -> Bool { if (!username.isEmpty || !password.isEmpty) { return true }else { return false } } func logInViewController(logInController: PFLogInViewController!, didLogInUser user: PFUser!) { self.dismissViewControllerAnimated(true, completion: nil) } func logInViewController(logInController: PFLogInViewController!, didFailToLogInWithError error: NSError!) { print("Failed to login...") } func logInViewControllerDidCancelLogIn(logInController: PFLogInViewController!) { } func signUpViewController(signUpController: PFSignUpViewController!, didSignUpUser user: PFUser!) { self.dismissViewControllerAnimated(true, completion: nil) } func signUpViewController(signUpController: PFSignUpViewController!, didFailToSignUpWithError error: NSError!) { print("FAiled to sign up...") } func signUpViewControllerDidCancelSignUp(signUpController: PFSignUpViewController!) { print("User dismissed sign up.") } } A: logInViewController and signUpViewController are both undefined. You need to define them with: let logInViewController = PFLogInViewController() let signUpViewController = PFSignUpViewController()
{ "language": "en", "url": "https://stackoverflow.com/questions/34375935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OpenCV Ubuntu installation undefined symbol After installing OpenCV on Ubuntu I run the python code import cv2 and I get the following error: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: /usr/local/lib/python2.7/dist-packages/cv2.so: undefined symbol: _ZTIN2cv12_OutputArrayE This is how I installed OpenCV. mkdir opencv cd opencv git clone git://github.com/Itseez/opencv.git mkdir build cd build ccmake .. make sudo make install What can I do? A: You can try getting the latest release tag, rather than the HEAD code, like this: git clone https://github.com/Itseez/opencv.git cd opencv && git checkout 3.2.0
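If the rebuild from a release tag succeeds, a quick sanity check (an illustrative snippet, not part of the original answer) is to confirm the module now imports cleanly and reports the expected version:

```python
# Minimal check that the reinstalled OpenCV bindings load without the undefined-symbol error.
import cv2

print(cv2.__version__)                   # should print the checked-out release, e.g. 3.2.0
print(cv2.getBuildInformation()[:300])   # first lines of the build configuration, for verification
```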
{ "language": "en", "url": "https://stackoverflow.com/questions/33197995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Convert internal web application to public web application I have been tasked with setting up an internal web application to become public facing. The web site was written in ASP.Net and I am just looking for some advice about how I should go about this procedure. Apart from hosting the site on a public facing server I don't know what else I would need to take into consideration. Any information would be appreciated. A: Have a look into ASP.NET Web Application Project Deployment Overview and this one also helps HOW TO: Deploy an ASP.NET Web Application Using the Copy Project Feature in Visual Studio .NET
{ "language": "en", "url": "https://stackoverflow.com/questions/7075732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Extracting Specific text string from PDF First post here, and I'm a near-total newbie, so please be kind. I am trying to implement a piece of code that will allow a user to import a PDF, search for specific text strings, and export them all into a PDF. I have looked at similar answers, but they often seem to be based on knowing the PDF structure. For context, I am trying to extract all occurrences where an ID number is used in the following format - AAA01234567 (3 letters, 8 numbers). The PDFs are all downloaded files with OCR. I have tried to utilise pdfplumber and PyPDF2 to do so, but I'm getting stuck on the syntax. pip install PyPDF2 import PyPDF2 pdfFileObj = open('Transcript 29 July 2021_1.pdf', 'rb') from re import search match = search('[a-zA-Z]{3}\d{8}', string) match <re.Match object; span=(0, 7), match='RBK0000'> I'm getting some output, but only a single ID rather than the entire list within the document. I would be grateful for any help you can provide! Thanks.
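The question is unanswered in this snapshot; below is a minimal sketch of one way the extraction could be completed, assuming pdfplumber is installed and the scanned PDFs already carry an OCR text layer. The filename reuses the asker's own example, and re.findall replaces re.search so that every match is returned rather than only the first.

```python
# Sketch: collect every "3 letters + 8 digits" ID across all pages of a PDF.
import re
import pdfplumber

pattern = re.compile(r"[A-Za-z]{3}\d{8}")
ids = []

with pdfplumber.open("Transcript 29 July 2021_1.pdf") as pdf:
    for page in pdf.pages:
        text = page.extract_text() or ""     # extract_text() returns None for empty pages
        ids.extend(pattern.findall(text))    # findall returns all matches on the page

print(ids)
```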
{ "language": "en", "url": "https://stackoverflow.com/questions/69014606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: VueJS: watching two properties Suppose I have two data properties: data() { return { a: false, b: false, } } When a and b become true at the same time, I want to carry out a related task. How can I use a watch method in Vue to achieve this? A: Watch a computed value. computed:{ combined(){ return this.a && this.b } } watch:{ combined(value){ if (value) //do something } } There is a sort of short hand for the above using $watch. vm.$watch( function () { return this.a + this.b }, function (newVal, oldVal) { // do something } ) A: It's pretty much the same solution as @Bert suggested. But you can just do the following: data() { return { combined: { a: false, b: false, } } }, Then: watch: { combined:{ deep:true, handler } }
{ "language": "en", "url": "https://stackoverflow.com/questions/44073206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: No Messagebox showing I have a script that connects itself with a mysql server and makes an sql query if I press a button. My code: try { MySqlConnection c = new MySqlConnection("Server=****;Database=*****;Uid=****;Pwd=y u no want to know ma pw;"); MySqlCommand cmd = new MySqlCommand("DELETE FROM users WHERE username = '" + textBox1.Text + "'", c); MySqlDataReader myReader; c.Open(); myReader = cmd.ExecuteReader(); int cnt = 0; while(myReader.Read()) { cnt = cnt + 1; } if (cnt == 1) { MessageBox.Show("Benutzer erfolgreich entfernt, Sir!"); } } catch(Exception ex) { MessageBox.Show("Benutzer konnte nicht entfernt werden.\n\n" + ex.ToString()); } Why isn't the message box showing? A: You need to use the ExecuteNonQuery() method of your MySqlCommand object, which will return the row(s) affected - which i suspect you are looking for. A DELETE statement will not return a resultset, only the records affected. using(MySqlConnection c = new MySqlConnection("Server=**;Database=***;Uid=**;Pwd=**;")) { using (MySqlCommand cmd = new MySqlCommand("DELETE FROM users WHERE username = @name")) { var userParam = new MySqlParameter(); userParam.Name = "@name"; userParam.Value = textbox1.Text; cmd.Parameters.Add(userParam); c.Open(); var recordsAffected = cmd.ExecuteNonQuery(); c.Close(); if (recordsAffected == 1) { MessageBox.Show("Benutzer erfolgreich entfernt, Sir!"); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/26536907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Unable to record video with RPi-Cam-Web-Interface with USB mounted I am trying to record a video using RPi-Cam-Web-Interface on my raspberry pi model 3b+ and saving it on my thumb-drive. However, when trying to record a video with my thumb-drive mounted to /var/www/html/media it gives me an error mmal: mmal_port_disable: port vc.ril.video_encode:out:0(H264)(0x1234300) is not enabled I tried changing gpu_mem=192 in /boot/config.txt and rebooting the raspberry pi but it still is giving me the error mmal: mmal_port_disable A: I was unable to get it to work with FAT32 so I reformatted my thumb-drive to ext4. Then I created a directory to mount the usb to using: mkdir /media/usb and then editing the etc/fstab and adding to the bottom of the file (where xxxx-xxxx-xxxx is the UUID of the partition of the drive you are using): UUID=xxxx-xxxx-xxxx /path/to/usb ext4 defaults 0 0 /path/to/usb /var/www/html/media/ none bind 0 0 I also found that I had to give permissions to write to my usb using: sudo chown www-data:www-data /path/to/usb
{ "language": "en", "url": "https://stackoverflow.com/questions/73009232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to convert webpage table data to json object or dict in python I am trying to fetch data from a webpage which contains a table, and then compare the values in that table with values from another table. Can I convert the webpage into JSON data or a dictionary in Python? E.g. I have the URL www.yahoo.com; how can I convert the HTML data into JSON? I tried response = urllib2.urlopen(url) data = str(response.read()) and I get HTML output. If I try json.loads(data) I get the error raise ValueError("No JSON object could be decoded") Is there a way to pull the data from the table which is displayed on the webpage? A: Use BeautifulSoup (http://www.crummy.com/software/BeautifulSoup/bs4/doc/) to extract the table, then convert it. Try Convert a HTML Table to JSON; it shows how to use BeautifulSoup to grab the table and convert it into JSON.
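A minimal sketch of the approach the answer points to, assuming the requests and BeautifulSoup 4 packages are available and the page contains an ordinary <table> whose first row holds the headers (the URL below is a placeholder, not the asker's actual page):

```python
# Sketch: turn the first HTML table on a page into a list of dicts, ready for json.dumps.
import json
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/page-with-table").text
soup = BeautifulSoup(html, "html.parser")

table = soup.find("table")
rows = table.find_all("tr")
headers = [cell.get_text(strip=True) for cell in rows[0].find_all(["th", "td"])]

records = [
    dict(zip(headers, [cell.get_text(strip=True) for cell in row.find_all("td")]))
    for row in rows[1:]
]

print(json.dumps(records, indent=2))
```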
{ "language": "en", "url": "https://stackoverflow.com/questions/34936208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Accessing database with sqlite in Java I am trying to access a database to then insert new data, but the code below gives me this output: Opened database successfully java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database () The database is created in a different class, I still get this error whether the database has already been created or not. What would be causing this error? Statement stmt = null; Connection c = null; try { Class.forName("org.sqlite.JDBC"); c = DriverManager.getConnection("jdbc:sqlite:src/test.db"); System.out.println("Opened database successfully"); stmt = c.createStatement(); String sql = "INSERT INTO table_one (id, name) VALUES (Null, 'Hayley');"; stmt.executeUpdate(sql); System.out.println("Inserted records"); stmt.close(); c.close(); } catch ( Exception e ) { System.err.println( e.getClass().getName() + ": " + e.getMessage() ); System.exit(0); } System.out.println("Table created sucessfully"); A: How about not inserting the null value in the id column. It is of no use to insert null value. It might have generated the sql exception. Try INSERT INTO table_one (name) VALUES ('Hayley');. I would suggest to use PreparedStatement instead of Statement because of the threat of SQL injection. Sometimes, the particular sql exception can occur if the database name is not given. Have you tried writing the database name like INSERT INTO database_name.table_one (name) VALUES ('Hayley');.
{ "language": "en", "url": "https://stackoverflow.com/questions/25214353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Prism Shared Viewmodel * *Platform: Xamarin Forms *Prism version: 6.3.0 *Xamarin version (if applicable): 2.3.4 Hello, I'm using Prism. I have a TabbedPage; how can I share a single viewmodel for all of its child views? My XAML: <TabbedPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:prism="clr-namespace:Prism.Mvvm;assembly=Prism.Forms" xmlns:views="clr-namespace:MySednaApp.Views;assembly=MySednaApp" xmlns:view="clr-namespace:MySednaApp.View;assembly=MySednaApp" prism:ViewModelLocator.AutowireViewModel="True" x:Class="MySednaApp.Views.PubblicaArticoliTabbedPage" Title="{view:Translate PubblicaArticoli}"> <TabbedPage.ToolbarItems> <ToolbarItem x:Name="Delete2" Icon="ico_delete.png" Text="Elimina" Command="{Binding delete}"> <ToolbarItem.Order> <OnPlatform x:TypeArguments="ToolbarItemOrder"> <On Platform="iOS">Primary</On> <On Platform="Android">Secondary</On> </OnPlatform> </ToolbarItem.Order> </ToolbarItem> <ToolbarItem x:Name="Save" Icon="ico_save.png" Command="{Binding save}" Order="Primary" Priority="0" /> </TabbedPage.ToolbarItems> <views:PubblicaArticoliDettaglioPage x:Name="pubblicaArticoliDettaglioPage"/> <views:PubblicaArticoliGaugePage x:Name="pubblicaArticoliGaugePage"/> <views:PubblicaArticoliFotoPage x:Name="pubblicaArticoliFotoPage"/> </TabbedPage> A: Simple, just set the BindingContext of each of your tabs to the TabbedPage's BindingContext in your code-behind. A: You have to add these properties to the XAML page to bind the viewmodel: xmlns:mvvm="clr-namespace:Prism.Mvvm;assembly=Prism.Forms" mvvm:ViewModelLocator.AutowireViewModel="True"
{ "language": "en", "url": "https://stackoverflow.com/questions/47423633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to do variadic template for even position in c++? I have to write a variadic template function that returns true if the value of every even position is lower than the value of the parameter on the next position. Example: f(4, 5, 7, 9, 2, 4) -> true (4 < 5, 7 < 9, 2 < 4) i tried this : template<typename T> T check() { auto even_number = [](T x) { return (x % 2) == 0 ? x : 0; }; } template <typename T, typename... A> bool check(T first, A... args) { return first < check(args...) ? true : false; } int main() { std::cout << check(4, 5, 7, 9, 2, 4); } and this program give me those errors : 1.'check' : no matching overloaded function found 2.'T check(void)': could not deduce template argument for T -> this error apear if ar second template I add "T check(T first, A... args)" A: Assuming that the number of arguments is always even, you can do it simply like this: bool check() { return true; } template <typename T, typename... Ts> bool check(T t1, T t2, Ts... ts) { return t1 < t2 && (check(ts...)); } A: #include <iostream> template <typename ... Ints> constexpr bool check( Ints... args) requires(std::is_same_v<std::common_type_t<Ints...>, int> && sizeof... (Ints) %2 == 0) { int arr[] {args...}; for(size_t i{}; i < sizeof... (args)/2; ++i){ if (!(arr[2*i] < arr[2*i + 1])) return false; } return true; } int main() { std::cout << check(4, 3, 7, 9, 2, 4); } requires(std::is_same_v<std::common_type_t<Ints...>> && sizeof... (Ints) %2 == 0) makes sure that the number of inputs is even and they are integers Demo A recursive version template <typename Int, typename ... Ints> constexpr bool check(Int int1,Int int2, Ints... ints)requires(std::is_same_v<std::common_type_t<Int, Ints...>, int> && sizeof... (Ints) %2 == 0) { if constexpr (sizeof... (Ints) == 0) return int1 < int2; else return int1 < int2 && (check(ints...)); } Demo
{ "language": "en", "url": "https://stackoverflow.com/questions/63875748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: eclipse c++ unit test shortcut I'm using Eclipse Kepler CDT with Google Test. Unit testing is working fine, but I can't find a way to get a shortcut to run all unit testing. Everytime I need to run test, I have to click the little arrow near the run button and select the unit test icon. I can't find c++ unit test in the "Keys" menu (although Run JUnit test is available). A: Still you can use Ctrl+F11 to launch the last Run Configuration, so launch once your tests by clicking, and then hit Ctrl+F11. A: CTRL+F11 to work the way you want, you must set (from "Windows/Preferences") the "Run/debug > Launching : Launch Operation" setting to: Answer in this post. https://stackoverflow.com/a/1152039
{ "language": "en", "url": "https://stackoverflow.com/questions/25997602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Angular 2 npm test - Can not load "webpack" I'm seeing an error when I try to run npm test 08 09 2017 16:50:50.240:ERROR [preprocess]: Can not load "webpack"! TypeError: Cannot read property 'plugin' of undefined at PathsPlugin.apply (/Users/m/Sites/budget-angular2/node_modules/@ngtools/webpack/src/paths-plugin.js:75:18) at Resolver.apply (/Users/m/Sites/budget-angular2/node_modules/tapable/lib/Tapable.js:375:16) at /Users/m/Sites/budget-angular2/node_modules/enhanced-resolve/lib/ResolverFactory.js:249:12 at Array.forEach (native) at Object.exports.createResolver (/Users/m/Sites/budget-angular2/node_modules/enhanced-resolve/lib/ResolverFactory.js:248:10) at WebpackOptionsApply.process (/Users/m/Sites/budget-angular2/node_modules/webpack/lib/WebpackOptionsApply.js:282:46) at webpack (/Users/m/Sites/budget-angular2/node_modules/webpack/lib/webpack.js:36:48) at new Plugin (/Users/m/Sites/budget-angular2/node_modules/karma-webpack/lib/karma-webpack.js:63:18) at invoke (/Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:75:15) at Array.instantiate (/Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:59:20) at get (/Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:48:43) at /Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:71:14 at Array.map (native) at Array.invoke (/Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:70:31) at Injector.get (/Users/m/Sites/budget-angular2/node_modules/di/lib/injector.js:48:43) at instantiatePreprocessor (/Users/m/Sites/budget-angular2/node_modules/karma/lib/preprocessor.js:55:20) at /Users/m/Sites/budget-angular2/node_modules/karma/lib/preprocessor.js:106:17 at Array.forEach (native) at /Users/m/Sites/budget-angular2/node_modules/karma/lib/preprocessor.js:103:27 at module.exports (/Users/m/Sites/budget-angular2/node_modules/karma/node_modules/isbinaryfile/index.js:28:12) at /Users/m/Sites/budget-angular2/node_modules/karma/lib/preprocessor.js:84:7 at /Users/m/Sites/budget-angular2/node_modules/graceful-fs/graceful-fs.js:78:16 at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:446:3) 08 09 2017 16:50:50.257:WARN [karma]: No captured browser, open http://localhost:9876/ 08 09 2017 16:50:50.264:INFO [karma]: Karma v1.2.0 server started at http://localhost:9876/ 08 09 2017 16:50:50.264:INFO [launcher]: Launching browser Chrome with unlimited concurrency 08 09 2017 16:50:50.265:ERROR [karma]: Found 1 load error I have the following in my package.json file: "jasmine-core": "2.5.2", "jasmine-spec-reporter": "2.5.0", "karma": "1.2.0", "karma-chrome-launcher": "^2.0.0", "karma-cli": "^1.0.1", "karma-jasmine": "^1.0.2", "karma-remap-istanbul": "^0.2.1", "karma-webpack": "^2.0.4", I have seen a few posts re karma-webpack versions but the ones recommended seem older that I have here. I did have to manually npm install karma-webpack --save-dev as it was missing from the quickstart, but it should be available now, right? A: karma-webpack has a peer dependency of webpack. I don't see webpack in your package.json that you listed (unless that's not the full list). You need to also install webpack: npm install --save-dev webpack A: You need to upgrade to the newer version of angular CLI. 
You can delete the project folder which you already created using ng new my-app Uninstall the older angular-cli npm uninstall angular-cli -g Install the new @angular-cli npm install @angular/cli -g Start a new project again ng new my-app Then, run the test ng test A: I found this solution add these in your package.json in dependencies { "karma-sourcemap-loader": "^0.3.7", "karma-webpack": "^2.0.3", } and then npm i and then-: npm run test
{ "language": "en", "url": "https://stackoverflow.com/questions/46120486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Silverstripe prevent load data when submitting a form I need a clean solution to stop the form data, after submitting the page, from being re-populated by: $form->loadDataFrom( $Page ); Here is my code: public function FormUpdate() { $error="Required"; $fields = new FieldList( TextField::create('Title', 'Title')->setCustomValidationMessage($error), TextField::create('Description', 'Description')->setCustomValidationMessage($error), TextField::create('Subject', 'Description')->setCustomValidationMessage($error), ); $actions = new FieldList( FormAction::create("FormUpdateSubmit")->setTitle('Update') ); $Page=Versioned::get_by_stage('Page', 'Live')->filter( array('SecureCode' => $_REQUEST['id'] ))->First(); $fields->push( HiddenField::create('id','SecureCode', $Page->SecureCode )); $fields->push( CheckboxField::create('Approbation', "Approbation")->setCustomValidationMessage($error) ); $required = new RequiredFields(array( 'Title','Subject','Description' )); $form = new Form($this, 'FormModifier', $fields, $actions, $required); $form->loadDataFrom( $Page ); $form->setAttribute('novalidate', 'novalidate'); return $form; } The problem: if I change Title and Description and empty the Subject field, I'm redirected back to the form page with the validation message below Subject, but all the fields are reloaded from $form->loadDataFrom($Page). That is not what I want. I must prevent that data from being reloaded; in this case the posted data must replace $Page. What am I missing? A: I generally use loadDataFrom in the action that called the form (rather than inside the form function). So for example: ... public function index() { $form = $this->Form(); $form->loadDataFrom($this); $this->customise(array("Form" => $form)); return $this->renderWith("Page"); } ... That way the function only returns the base form and you alter it as and when required. A: Your form will be called once when adding it in the template, and once via request. Since all actions on a controller get the request as a parameter, you can modify your form function like so: public function FormUpdate($request = null) { Then inside your function, only populate the form if it's not called via a request, e.g. if (!$request) { $form->loadDataFrom($Page); }
{ "language": "en", "url": "https://stackoverflow.com/questions/43851117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Modeling Database Relationship Where Certain Rows Cannot be Deleted Ok, sorry for the weird wording. I have a situation where I have a set number of database rows that cannot be removed. Let's call them ingredients. I have many ingredients that belong to recipes. However, these recipes use the same ingredients. And by use the same that means that they have to use exactly the same row because these ingredients are seeded by an external script. In other words, these ingredients are never created by the application itself but are preloaded. This is the problem. I want to be able to delete the recipe without deleting the ingredient but I would like the ingredient to still belong to them. Advice? A: I can't comment (don't have enough rep), so I'm posting this as an answer. I think you can create a table that stores ingredient_id and recipe_id, call it RecipeToIngredients or something like that. You can then relate the ingredient_id to the records in the ingredients table, and the recipe_id to the recipe table. You can have multiple recipes linking to exactly the same ingredient row. You say you want to delete the recipe without deleting the ingredient, so you can do that with this setup. When you delete the recipe, you can delete the records from the RecipeToIngredients table, and not delete them from the Ingredients table. Then you say "I would like the ingredient to still belong to them". By "them", do you mean still belong to the Recipe? If so, then I wouldn't delete the Recipe, maybe create an "Active" field on the recipe table and set it to inactive. Or you could still delete the Recipe and keep the records on the RecipeToIngredients table, but then you will have a recipe_id value in that table that doesn't mean anything. If by "them" you don't mean the Recipe, then if you would clarify what you do mean, that would be helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/41536673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I change different UINavigationBar Style in a UINavigationController? I want to change the color and translucent of the UINavigationBar in one viewController without influence on other viewControllers. It seems impossible because they share a common UINavigationBar. Some similar questions tell me to change the style of UINavigationBar in viewWillAppear and restore it in viewWillDisappear.But I can see the effect is not perfect. I want to know how can the follow app can do this. It seems it does not share a common UINavigationBar A: Because those two view controllers are in separate navigation controllers, allowing different colours for each. A: Since all you want to do is change the colour, here is what you can do: Simply animate the colour change: -(void)viewWillDisappear:(BOOL)animated { [UIView animateWithDuration: 0.8 animations:^{ [self.navigationController.navigationBar setBarTintColor: [[UIColor orangeColor] colorWithAlphaComponent: 0.1]]; }]; } -(void)viewWillAppear:(BOOL)animated { [UIView animateWithDuration: 0.8 animations:^{ [self.navigationController.navigationBar setBarTintColor: [[UIColor whiteColor] colorWithAlphaComponent: 0.1]]; }]; } Click the GIF below for illustration:
{ "language": "en", "url": "https://stackoverflow.com/questions/37981276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Shutdown ExecutorService gracefully in library code This questions is related to: Shutdown ExecutorService gracefully in webapp? I have a library that is used by multiple clients. One client is a plain "java -jar ..." invocation, one is a Tomcat server, and one a Spring container. This library publishes messages to a queue and some clients publish so many messages per minute that the act of publishing synchronously degrades performance. I implemented an ExecutorService to offload the actual publishing to a worker thread. Performance is excellent with this strategy, however, that client's Spring container no longer shuts down cleanly. Yes, my threads are daemon threads already, however, they in turn use a library which doesn't use daemon threads. The linked article shows that you need a different strategy when running inside a servlet. I have only encountered a few client containers so far and I'm already having to implement specialized shutdown procedures. Rather than getting to specific use cases, my question is more general. How do you write library code which uses an ExecutorService while remaining agnostic to the threading model of the container in which the code runs?
{ "language": "en", "url": "https://stackoverflow.com/questions/28078917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Parameter values with '+' sign and no spaces between characters is getting 404 Not Found in the Response URL I have a 'getCustomerDetails' function in my API that accepts one parameter 'customerName'. This 'customerName' parameter accepts the value from the selected value in a dropdown (ui). Some values from that dropdown has special characters like '%', '#', '+' and '*'. At first, I thought all parameters with special characters on its values would have a response of '404 Not Found' but upon checking, only values with a '+' sign in between characters are getting that response. Please see below examples: Parameter value with no plus sign: Test this one*and this one Result from the Network tab: Parameter value with plus sign in between characters: Test+this one*and this one Result from the Network tab: I've read about encoding but I don't know where to implement it in my codes or what's the right thing to do about this. Here's my get function in the UI. I'm using ReactJS here. getCustomerDetails = (customerName) => {       let url =         Utilities.getApiHttps() +         "CustomerModule/CustomerDetails/" +         customerName;       fetch(url, {         headers: {           "Cache-Control":             "private, max-age=0,  no-cache, no-store, must-revalidate, max-age=31536000",           Pragma: "no-cache",           Expires: 0,           "Content-Type": "application/json",           Authorization: "Bearer " + Utilities.getToken(),         },         method: "GET",       })         .then((response) => response.json())         .then((data) => {           this.setState({             CustomerDetails:               data.CustomerDetails=== 0                 ? ""                 : data.CustomerDetails.toFixed(4),             StartYear: {               value:                 String(data.StartYear) === "0"                   ? ""                   : data.StartYear,               label: data.StartYear,             },             EndYear: {               value:                 String(data.EndYear) === "0"                   ? 
""                   : data.EndYear,               label: data.EndYear,             },           });        })         .catch((error) => {           console.error(error);           this.setState({             error: true,           });         });     }; Here's the controller: [HttpGet("CustomerDetails/{customerName}")] public CustomerDetails GetCustomerDetails(string customerName) { CustomerDetails customerName = new CustomerDetails(); ICustomerModuleServices customerModuleServices = new CustomerModuleServices(); UserAccess usernameHasAccess = new UserAccess(); usernameHasAccess = _utilities.CheckUserAccess(Request.Headers["Authorization"]); UserServices uService = new UserServices(); string tokenForValidate = Request.Headers["Authorization"].ToString().Replace("Bearer", "").Trim(); string USER_NAME = _tokenHandler.GetUserNameFromToken(tokenForValidate); if ((tokenForValidate != "") && (uService.ValidateToken(tokenForValidate))) { if (usernameHasAccess.hasAccess) { try { details = CustomerModuleServices.GetCustomerDetails(customerName); logsService.InsertUserAPIActivity("/CM/CustomerDetails", USER_NAME, "TEST", ControllerContext.ActionDescriptor.ControllerName, _utilities.GetDefaultToolName()); } catch (Exception exception) { string MethodName = ControllerContext.ActionDescriptor.ActionName; string Application = _utilities.GetDefaultToolName(); logsService.InsertAPIErrorLogs(GetType().Namespace, GetType().Name, MethodName, USER_NAME, exception, Application, "TEST"); } } } return customerName ; } Here's my service for GetCustomerDetails: public CustomerDetails GetCustomerDetails(string customerName) { CustomerDetails details= new CustomerDetails(); try { using (var conn = new SqlConnection(_utilities.ADOConnectionString())) { using (SqlCommand sqlCommand = new SqlCommand("[dbo].[GetCustomerDetails_sp]")) { sqlCommand.Connection = conn; sqlCommand.CommandType = CommandType.StoredProcedure; conn.Open(); sqlCommand.Parameters.Add("@CUSTOMERNAME", SqlDbType.VarChar).Value = customerName; using (SqlDataReader reader = sqlCommand.ExecuteReader()) { while (reader.Read()) { details.StartYear = reader.IsDBNull(reader.GetOrdinal("StartYear")) ? 0 : reader.GetInt32 (reader.GetOrdinal("StartYear")); details.EndYear = reader.IsDBNull(reader.GetOrdinal("EndYear")) ? 0 : reader.GetInt32(reader.GetOrdinal("EndYear")); details.StartYearLabel = reader.IsDBNull(reader.GetOrdinal("StartYearLabel ")) ? string.Empty : reader.GetString(reader.GetOrdinal("StartYearLabel ")); details.EndYearLabel = reader.IsDBNull(reader.GetOrdinal("EndYearLabel"))? string.Empty : reader.GetString(reader.GetOrdinal("EndYearLabel")); details.Comments = reader.IsDBNull(reader.GetOrdinal("Comments")) ? string.Empty : reader.GetString(reader.GetOrdinal("Comments")); } } } } } catch (...) { ... } return details; }
{ "language": "en", "url": "https://stackoverflow.com/questions/67224503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Linux, Find > zcat > wc I have many zip files on my system. I need to count the total number of characters in all of them, but my command doesn't work: [[email protected] ~]$ find /RAID/s.korea/onlyzip/ -name *.zip -type f -exec zcat {} \; |wc -c gzip: /RAID/s.korea/onlyzip/00/node/2015-03.compare15.zip has more than one entry--rest ignored gzip: /RAID/s.korea/onlyzip/00/node/2015-03.compare16.zip has more than one entry--rest ignored gzip: /RAID/s.korea/onlyzip/00/node/2015-03.compare17.zip has more than one entry--rest ignored gzip: /RAID/s.korea/onlyzip/00/node/2015-03.compare18.zip has more than one entry--rest ignored gzip: /RAID/s.korea/onlyzip/00/node/2015-03.compare19.zip has more than one But if I just run zcat /RAID/s.korea/onlyzip/00/node/2015-03.compare19.zip it works fine. Could you help me please? A: You can use the --quiet option: find /RAID/s.korea/onlyzip/ -name "*.zip" -type f -exec zcat -q {} \; |wc -c Note the quotes around the pattern.
{ "language": "en", "url": "https://stackoverflow.com/questions/29368067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Parsing JSON in Python: how do I get has_key() working again after a change of format? I have the following block of Python code: data = json.loads(line) if data.has_key('derivedFrom'): dFin = data['derivedFrom'] if dFin.has_key('derivedIds'): This used to work fine on a block of JSON like this: "derivedFrom": {"source": "FOO", "model": "BAR", "derivedIds": ["123456"]} Now the format changed to: "derivedFrom": "{\"source\": \"FOO.\", \"model\": \"BAR\", \"derivedIds\": [\"123456\"] And so the last line in the Python block throws the following exception: 'unicode' object has no attribute 'has_key' Is there a way to preprocess JSON to make has_key work again? A: "{\"source\": \"FOO.\", \"model\": ... Is a JSON object inside a JSON string literal. To get at the inner JSON's properties, you'll have to decode it again. data = json.loads(line) if 'derivedFrom' in data: dFin = json.loads(data['derivedFrom']) if 'derivedIds' in dFin: .... JSON-in-JSON is typically a mistake as there's rarely a need for it - what is producing this output, does it need fixing? A: Use: 'derivedIds' in dFin This works both on dictionaries and on unicode, even though with unicode it could give false positives. A more robust approach could use Duck Typing: try: dFin = json.loads(data['derivedFrom']) #assume new format except TypeError: dFin = data['derivedFrom'] #it's already a dict if 'derivedIds' in dFin: # or dFin.has_key('derivedIds') #etc A: You are changing the derivedFrom property from a JSON object to a string. Strings don't have an attribute named has_key. A: If you want the exact same block of code to work, consider slightly adjusting your new format to following: "{\"derivedFrom\": {\"source\": \"FOO.\", \"model\": \"BAR\", \"derivedIds\": [\"123456\"]}}"
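To make the accepted approach concrete, here is a small self-contained sketch of the double decode; the sample payload is invented to mirror the question's new format:

```python
# Sketch: the outer document stores 'derivedFrom' as a JSON *string*,
# so it must be decoded a second time before dict lookups work.
import json

# Invented sample matching the question's new format (JSON-in-JSON).
line = '{"derivedFrom": "{\\"source\\": \\"FOO\\", \\"model\\": \\"BAR\\", \\"derivedIds\\": [\\"123456\\"]}"}'

data = json.loads(line)                          # outer decode: data['derivedFrom'] is a str
if 'derivedFrom' in data:
    d_from = json.loads(data['derivedFrom'])     # inner decode: now a dict again
    if 'derivedIds' in d_from:
        print(d_from['derivedIds'])              # ['123456']
```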
{ "language": "en", "url": "https://stackoverflow.com/questions/13976060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to get the URL without the Query String or the Id part? Let's say my urls are: https://www.mywebsite.com/app/company/employees/5 https://www.mywebsite.com/app/company/employees?id=5&name=jack https://www.mywebsite.com/app/company/employees/5?clockId=1 I'm looking for a way to get the "base" path, or whatever it's called. Like the "base" path would be "/app/company/employees" for all, without the "/5" part or "?id=5&name=jack" or "/5?clockId=1" parts. I was using string.Join("/", context.Request.ApplicationPath, context.Request.RequestContext.RouteData.Values["controller"], context.Request.RequestContext.RouteData.Values["action"]) to get it (context is HttpContext), but it doesn't work the way I want since it includes the Index action too. Like if the Url is "https://www.mywebsite.com/app/company" I want "/app/company/" not "/app/company/Index". I can always check if the action is Index or not but it feels like kind of a "code smell" to me. Also using context.Request.Url.AbsolutePath doesn't work either since it returns "/app/company/employees/5" (It returns the path with Id) A: Looks like the only two things you need from URL are action and controller and if the action is "index" you don't need it. In this case I believe that you are doing it right. This is a slightly modified piece of code I was using in ASP.NET MVC project: string actionName = this.ControllerContext.RouteData.Values["action"].ToString(); string controllerName = this.ControllerContext.RouteData.Values["controller"].ToString(); if (actionName.Equals("index", StringComparison.InvariantCultureIgnoreCase)) { actionName = string.Empty; } string result = $"/{controllerName}/{actionName}"; There is one more thing to mention: "areas". Once I was working on website which had them, so they may appear in your URL too if your website uses "area" approach. I hope it helps
{ "language": "en", "url": "https://stackoverflow.com/questions/55857721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to expand environment variables in python as bash does? With os.path.expandvars I can expand environment variables in a string, but with the caveat: "Malformed variable names and references to non-existing variables are left unchanged" (emphasis mine). And besides, os.path.expandvars expands escaped \$ too. I would like to expand the variables in a bash-like fashion, at least in these two points. Compare: import os.environ import os.path os.environ['MyVar'] = 'my_var' if 'unknown' in os.environ: del os.environ['unknown'] print(os.path.expandvars("$MyVar$unknown\$MyVar")) which gives my_var$unknown\my_var with: unset unknown MyVar=my_var echo $MyVar$unknown\$MyVar which gives my_var$MyVar, and this is what I want. A: The following implementation maintain full compatibility with os.path.expandvars, yet allows a greater flexibility through optional parameters: import os import re def expandvars(path, default=None, skip_escaped=False): """Expand environment variables of form $var and ${var}. If parameter 'skip_escaped' is True, all escaped variable references (i.e. preceded by backslashes) are skipped. Unknown variables are set to 'default'. If 'default' is None, they are left unchanged. """ def replace_var(m): return os.environ.get(m.group(2) or m.group(1), m.group(0) if default is None else default) reVar = (r'(?<!\\)' if skip_escaped else '') + r'\$(\w+|\{([^}]*)\})' return re.sub(reVar, replace_var, path) Below are some invocation examples: >>> expandvars("$SHELL$unknown\$SHELL") '/bin/bash$unknown\\/bin/bash' >>> expandvars("$SHELL$unknown\$SHELL", '') '/bin/bash\\/bin/bash' >>> expandvars("$SHELL$unknown\$SHELL", '', True) '/bin/bash\\$SHELL' A: Try this: re.sub('\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path)) The regular expression should match any valid variable name, as per this answer, and every match will be substituted with the empty string. Edit: if you don't want to replace escaped vars (i.e. \$VAR), use a negative lookbehind assertion in the regex: re.sub(r'(?<!\\)\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path)) (which says the match should not be preceded by \). Edit 2: let's make this a function: def expandvars2(path): return re.sub(r'(?<!\\)\$[A-Za-z_][A-Za-z0-9_]*', '', os.path.expandvars(path)) check the result: >>> print(expandvars2('$TERM$FOO\$BAR')) xterm-256color\$BAR the variable $TERM gets expanded to its value, the nonexisting variable $FOO is expanded to the empty string, and \$BAR is not touched. A: The alternative solution - as pointed out by @HuStmpHrrr - is that you let bash evaluate your string, so that you don't have to replicate all the wanted bash functionality in python. Not as efficient as the other solution I gave, but it is very simple, which is also a nice feature :) >>> from subprocess import check_output >>> s = '$TERM$FOO\$TERM' >>> check_output(["bash","-c","echo \"{}\"".format(s)]) b'xterm-256color$TERM\n' P.S. beware of escaping of " and \: you may want to replace \ with \\ and " with \" in s before calling check_output A: Here's a solution that uses the original expandvars logic: Temporarily replace os.environ with a proxy object that makes unknown variables empty strings. Note that a defaultdict wouldn't work because os.environ For your escape issue, you can replace r'\$' with some value that is guaranteed not to be in the string and will not be expanded, then replace it back. 
class EnvironProxy(object): __slots__ = ('_original_environ',) def __init__(self): self._original_environ = os.environ def __enter__(self): self._original_environ = os.environ os.environ = self return self def __exit__(self, exc_type, exc_val, exc_tb): os.environ = self._original_environ def __getitem__(self, item): try: return self._original_environ[item] except KeyError: return '' def expandvars(path): replacer = '\0' # NUL shouldn't be in a file path anyways. while replacer in path: replacer *= 2 path = path.replace('\\$', replacer) with EnvironProxy(): return os.path.expandvars(path).replace(replacer, '$') A: I have run across the same issue, but I would propose a different and very simple approach. If we look at the basic meaning of "escape character" (as they started in printer devices), the purpose is to tell the device "do something different with whatever comes next". It is a sort of clutch. In our particular case, the only problem we have is when we have the two characters '\' and '$' in a sequence. Unfortunately, we do not have control of the standard os.path.expandvars, so that the string is passed lock, stock and barrel. What we can do, however, is to fool the function so that it fails to recognize the '$' in that case! The best way is to replace the $ with some arbitrary "entity" and then to transform it back. def expandvars(value): """ Expand the env variables in a string, respecting the escape sequence \$ """ DOLLAR = r"\&#36;" escaped = value.replace(r"\$", r"\%s" % DOLLAR) return os.path.expandvars(escaped).replace(DOLLAR, "$") I used the HTML entity, but any reasonably improbable sequence would do (a random sequence might be even better). We might imagine cases where this method would have an unwanted side effect, but they should be so unlikely as to be negligible. A: I was unhappy with the various answers, needing a little more sophistication to handle more edge cases such as arbitrary numbers of backslashes and ${} style variables, but not wanting to pay the cost of a bash eval. Here is my regex based solution: #!/bin/python import re import os def expandvars(data,environ=os.environ): out = "" regex = r''' ( (?:.*?(?<!\\)) # Match non-variable ending in non-slash (?:\\\\)* ) # Match 0 or even number of backslash (?:$|\$ (?: (\w+)|\{(\w+)\} ) ) # Match variable or END ''' for m in re.finditer(regex, data, re.VERBOSE|re.DOTALL): this = re.sub(r'\\(.)',lambda x: x.group(1),m.group(1)) v = m.group(2) if m.group(2) else m.group(3) if v and v in environ: this += environ[v] out += this return out # Replace with os.environ as desired envars = { "foo":"bar", "baz":"$Baz" } tests = { r"foo": r"foo", r"$foo": r"bar", r"$$": r"$$", # This could be considered a bug r"$$foo": r"$bar", # This could be considered a bug r"\n$foo\r": r"nbarr", # This could be considered a bug r"$bar": r"", r"$baz": r"$Baz", r"bar$foo": r"barbar", r"$foo$foo": r"barbar", r"$foobar": r"", r"$foo bar": r"bar bar", r"$foo-Bar": r"bar-Bar", r"$foo_Bar": r"", r"${foo}bar": r"barbar", r"baz${foo}bar": r"bazbarbar", r"foo\$baz": r"foo$baz", r"foo\\$baz": r"foo\$Baz", r"\$baz": r"$baz", r"\\$foo": r"\bar", r"\\\$foo": r"\$foo", r"\\\\$foo": r"\\bar", r"\\\\\$foo": r"\\$foo" } for t,v in tests.iteritems(): g = expandvars(t,envars) if v != g: print "%s -> '%s' != '%s'"%(t,g,v) print "\n\n" A: There is a pip package called expandvars which does exactly that. 
pip3 install expandvars from expandvars import expandvars print(expandvars("$PATH:${HOME:?}/bin:${SOME_UNDEFINED_PATH:-/default/path}")) # /bin:/sbin:/usr/bin:/usr/sbin:/home/you/bin:/default/path It has the benefit of implementing default value syntax (i.e., ${VARNAME:-default}).
{ "language": "en", "url": "https://stackoverflow.com/questions/30734967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Java Runtime String Parsing I am using Java to execute a Python command, i.e. ctool run <cluster_name> <nodes> <command> So basically this runs the <command> in given nodes of the cluster. For example: ctool run my_cluster all 'rm -rf /home/tester/folder' This runs 'rm -rf /home/tester/folder' on 'all' the nodes of 'my_cluster' which runs perfectly fine from the terminal but when I am running this from java runtime as a string, it is taking any option like -p, -r, etc. in the <command> as ctool option and throwing usage error. I am assuming it has something to do with how is string parsing the command. Is there a way I can fix this issue? A: This should work: Process process = new ProcessBuilder("rm","-rf","/home/tester/folder").start(); InputStream is = process.getInputStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader br = new BufferedReader(isr); String out; while ((out = br.readLine()) != null) { System.out.println(out); }
{ "language": "en", "url": "https://stackoverflow.com/questions/31546790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can ASP.NET session live longer than Application This could be a silly/lame question, especially after working so long with ASP.NET :), but I need to be sure. Is it possible to have session (that is ASP.NET session) outlive the Application (app instance/app domain/Application variable)? In other words, if Application_End is called in the Global.asax, does it indicate that there will be no more active session? and any new request will result in a Application_Start followed by an new Session_Start? Note, the Session may not always be InProc, the session could be in a State server or SQL server. A: With the default InProc session state, the application will terminate when the last session has expired, at which point Application_End occurs. In this scenario the entire appDomain is torn down and all memory freed. As sessions are persisted in memory they are permanently destroyed at this point, and therefore can never live beyond the life of the application. If using Sql Server or State Server where the session is stored on a separate machine, then when the application is torn down the sessions can continue to live. Then because the client retains the original session cookie in the browser, the next time they visit the site the session is restarted, and sessionid used to identify their existing session. A: Yes, when you put the state in SQL Server the application could restart but you will still maintain the session state
{ "language": "en", "url": "https://stackoverflow.com/questions/7341397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Getting Null pointer Exception when trying to populate listview from fragment C to fragment B I am newbie to android development. I need to populate ListView in fragment B from the value selected by user in the spinner from fragment C. So far what I tried is bundle method but its throwing null pointer exception. Really I am confused why this happen. This is my code : How to pass spinner value from one fragment to another?. I would be very glad if someone helps me what is the procedure to communicate between fragments of same activity . This is my from_fragment(fragment c) package com.example.first.servicefirst; import android.app.Fragment; import android.content.Context; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.Spinner; public class NewRequirements extends Fragment { //public static NewRequirements newInstance(Bundle bundle) { // Add myFragment = new Add(); // myFragment.setArguments(bundle); //} FragmentMigration framgnetmigration; // } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { // TODO Auto-generated method stub View view = inflater.inflate(R.layout.fragment_dialog_claim, container, false); Button btnupdate; btnupdate=(Button)view.findViewById(R.id.update); final Spinner sbu=(Spinner)view.findViewById(R.id.sbuu); ArrayAdapter<CharSequence>adaptersbu=ArrayAdapter.createFromResource( getActivity().getBaseContext(),R.array.newrequirements, R.layout.spinnerlayout); adaptersbu.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); sbu.setAdapter(adaptersbu); final Spinner bu=(Spinner)view.findViewById(R.id.bu); adapterbu.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); bu.setAdapter(adapterbu); final Spinner sbuu=(Spinner)view.findViewById(R.id.sbu); adaptersbuu.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); sbuu.setAdapter(adaptersbuu); final Spinner sc=(Spinner)view.findViewById(R.id.sc); ; adaptersc.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); sc.setAdapter(adaptersc); final Spinner ssc=(Spinner)view.findViewById(R.id.ssc); ArrayAdapter<CharSequence>adapterssc=ArrayAdapter.createFromResource ( getActivity().getBaseContext(), R.array.newrequirements, R.layout.spinnerlayout); adapterssc.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); ssc.setAdapter(adapterssc); final String str=sbu.getSelectedItem().toString(); btnupdate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Bundle bundle = new Bundle(); FragmentManager fm=getFragmentManager(); Add add = new Add(); bundle.putString("yes", str); // Log.i("Bundle", bundle.toString()); Log.v("Add", str); add.setArguments(bundle); FragmentTransaction ft=fm.beginTransaction(); ft.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_FADE); ft.replace(R.id.content_frame, add,"hi"); ft.addToBackStack(null); ft.commit(); } }); return view; } } } This is my to_fragment(Fragment B): package com.example.first.servicefirst; import android.app.Fragment; import android.app.FragmentTransaction; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.ListView; import android.widget.Spinner; import android.widget.TextView; import 
java.util.ArrayList; public class Add extends Fragment implements View.OnClickListener { public static Add() public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.activity_btn_add, container, false); Spinner ldsource=(Spinner)rootView.findViewById(R.id.lead_source); ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource( getActivity().getBaseContext(), R.array.dropbox1, R.layout.spinnerlayout); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); ldsource.setAdapter(adapter); // EditText editText=(EditText)rootView.findViewById(R.id.title); Spinner ldtype=(Spinner)rootView.findViewById(R.id.ldtype); ArrayAdapter<CharSequence> adapter1 = ArrayAdapter.createFromResource( getActivity().getBaseContext(), R.array.dropbox2,R.layout.spinnerlayout); adapter1.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); ldtype.setAdapter(adapter1); Spinner ldstatus=(Spinner)rootView.findViewById(R.id.ldstatus); TextView txt=(TextView)rootView.findViewById(R.id.spinnerTarget); ArrayAdapter<CharSequence> adapter2 = ArrayAdapter.createFromResource( getActivity().getBaseContext(), R.array.dropbox3, R.layout.spinnerlayout); adapter2.setDropDownViewResource(android.R.layout.simple_selectable_list_item); ldstatus.setAdapter(adapter2); // Bundle bundle=getArguments(); // String good=bundle.getString("sbu"); ArrayList<LdNewsItem> listContact = GetlistContact(); final ListView lv = (ListView)rootView.findViewById(R.id.ldrequirements); lv.setAdapter(new customListAdapterldrequirements(getActivity(), listContact)); // lv.setOnTouchListener(new View.OnTouchListener() { // @Override // public boolean onTouch(View v, MotionEvent event) { // return false; // } // Setting on Touch Listener for handling the touch inside ScrollView //}); return rootView; } private ArrayList<LdNewsItem> GetlistContact() { ArrayList<LdNewsItem> contactlist = new ArrayList<>(); LdNewsItem contact = new LdNewsItem(); String yog=getArguments().getString("yes"); for(int i=1;i<=10;i++) { // contact = new LdNewsItem( ); contact.setSbu(""+yog); // contact.setBu(""+str); // contact.setSbuu("Yogeswaran" + str); contact.setSc("Sales" + i); contact.setSsc("term" + i); contact.setReq("business"+i); contactlist.add(contact); } return contactlist; } @Override public void onActivityCreated(Bundle savedInstanceState) { super.onActivityCreated(savedInstanceState); Fragment fragment=null; Button btnrequirements=(Button)getActivity().findViewById(R.id.btnrequirements); btnrequirements.setOnClickListener(this); } @Override public void onClick(View v) { switch (v.getId()){ case R.id.btnrequirements: Fragment newFragment = new NewRequirements(); // consider using Java coding conventions (upper first char class names!!!) FragmentTransaction transaction = getFragmentManager().beginTransaction(); FragmentTransaction ft = getActivity() .getFragmentManager().beginTransaction(); // Replace whatever is in the fragment_container view with this fragment, // and add the transaction to the back stack transaction.replace(R.id.content_frame, newFragment); transaction.addToBackStack(null); // Commit the transaction transaction.commit(); } } } A: The reference String that you are passing while putting the argument args.putString("yog",god);should be the same while you are retriving it yog=getArguments().getString("yog");
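For illustration, a minimal sketch of the key-matching pattern the answer describes, reusing the key name "yes" that already appears in the question's code (the null check on getArguments() is an added precaution):

// Sending fragment (NewRequirements) - put the value under a key:
Bundle bundle = new Bundle();
bundle.putString("yes", str);          // key used when storing
Add add = new Add();
add.setArguments(bundle);

// Receiving fragment (Add) - read it back with the SAME key:
Bundle args = getArguments();
String value = (args != null) ? args.getString("yes") : null;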
{ "language": "en", "url": "https://stackoverflow.com/questions/33751248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reading from a URLConnection I have a php page in my server that accepts a couple of POST requests and process them. Lets say it's a simple page and the output is simply an echoed statement. With the URLConnection I established from a Java program to send the POST request, I tried to get the input using the input stream got through connection.getInputStream(). But All I get is the source of the page(the whole php script) and not the output it produces. We shall avoid socket connections here. Can this be done with Url connection or HttpRequest? How? class htttp{ public static void main(String a[]) throws IOException{ URL url=new URL("http://localhost/test.php"); URLConnection conn = url.openConnection(); //((HttpURLConnection) conn).setRequestMethod("POST"); conn.setDoOutput(true); conn.setDoInput(true); OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream()); wr.write("Hello"); wr.flush(); wr.close(); InputStream ins = conn.getInputStream(); InputStreamReader isr = new InputStreamReader(ins); BufferedReader in = new BufferedReader(isr); String inputLine; String result = ""; while( (inputLine = in.readLine()) != null ) result += inputLine; System.out.print(result); } } I get the whole source of the webpage test.php in result. But I want only the output of the php script. A: The reason you get the PHP source itself, rather than the output it should be rendering, is that your local HTTP server - receiving your request targeted at http://localhost/test.php - decided to serve back the PHP source, rather than forward the HTTP request to a PHP processor to render the output. Why this happens? that has to do with your HTTP server's configuration; there might be a few reasons for that. For starters, you should validate your HTTP server's configuration. * *Which HTTP server are you using on your machine? *What happens when you browse http://localhost/test.php through your browser? A: The problem here is not the Java code - the problem lies with the web server. You need to investigate why your webserver is not executing your PHP script but sending it back raw. You can begin by testing using a simple PHP scipt which returns a fixed result and is accessed using a GET request (from a web browser). Once that is working you can test using the one that responds to POST requests.
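To make that first test concrete, here is a minimal script (hypothetical file name test_echo.php) that only proves the server hands .php files to the PHP processor:

<?php
// test_echo.php - request it from a browser; if you see this literal
// source instead of the line below, PHP is not being executed.
echo "PHP is executed";
?>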
{ "language": "en", "url": "https://stackoverflow.com/questions/12651343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PHP: HTTP Basic - Log off I would to set it up where if someone sends in a request "logout" it will automatically take them to a page saying "successful log out". If the customer tries to press the back button or go to the restricted area, it will ask for HTTP auth again. What I have so far is this: example.com/restricted/index.php: <?php session_start(); if(isset($_GET['logout'])) { unset($_SESSION["login"]); header("location: ../logout.php"); exit; } if (!isset($_SERVER['PHP_AUTH_USER']) || !isset($_SERVER['PHP_AUTH_PW']) || !isset($_SESSION["login"])) { header("HTTP/1.0 401 Unauthorized"); header("WWW-authenticate: Basic realm=\"Tets\""); header("Content-type: text/html"); $_SESSION["login"] = true; // Print HTML that a password is required exit; } ?> // The rest of the page is then displayed like normal The user successful visits example.com/logout.php if example.com/restricted/index.php?logout is accessed. When the user tries to go back however random things happen, sometimes it will ask for HTTP authentication twice (???) , sometimes it will keep asking for authentication in a loop (?) and sometimes it will let me go right back as if I never logged out. I am new to how sessions work but my understanding is this: If/when the person is validated, it stores a variable in it's session called login with a value of true... if it every gets a GET request with logout, it will then delete that session variable and go back to logout.php... Why is it then when I click back to the index will it let me back in without asking for authentication, when session[login] is supposedly not set. Any improvement to this PHP code is appreciated. I know I shouldn't use HTTP Basic and should incorporate SQL, but meh. This is a temporary solution. Edit: I will accept a solution with MySQL if an example with instructions are included. I have no MySQL or PHP database knowledge (yet) A: A rough idea to start you: <?php session_start(); if( isset( $_GET['logout'] ) ) { session_destroy(); header('Location: ../logout.php'); exit; } if( !isset( $_SESSION['login'] ) ) { if( !isset( $_SERVER['PHP_AUTH_USER'] ) || !isset( $_SERVER['PHP_AUTH_PW'] ) ) { header("HTTP/1.0 401 Unauthorized"); header("WWW-authenticate: Basic realm=\"Tets\""); header("Content-type: text/html"); // Print HTML that a password is required exit; } else { // Validate the $_SERVER['PHP_AUTH_USER'] & $_SERVER['PHP_AUTH_PW'] if( $_SERVER['PHP_AUTH_USER']!='TheUsername' || $_SERVER['PHP_AUTH_PW']!='ThePassword' ) { // Invalid: 401 Error & Exit header("HTTP/1.0 401 Unauthorized"); header("WWW-authenticate: Basic realm=\"Tets\""); header("Content-type: text/html"); // Print HTML that a username or password is not valid exit; } else { // Valid $_SESSION['login']=true; } } } ?> // The rest of the page is then displayed like normal A: I've found a way around it. I have 2 files: index.php and logout.php Here is my 'index.php' code: # CHECK LOGIN. 
if (!isset($_SESSION["loged"])) { $_SESSION["loged"] = false; } else { if (isset( $_SERVER['PHP_AUTH_USER'] ) && isset($_SERVER['PHP_AUTH_PW'])) { if (($_SERVER['PHP_AUTH_USER'] == L_USER) && (md5($_SERVER['PHP_AUTH_PW']) == L_PASS)) { $_SESSION["loged"] = true; } } } if ($_SESSION["loged"] === false) { header('WWW-Authenticate: Basic realm="Need authorization"'); header('HTTP/1.0 401 Unauthorized'); die('<br /><br /> <div style="text-align:center;"> <h1 style="color:gray; margin-top:-30px;">Need authorization</h1> </div>'); } And here is my 'logout.php' code: session_start(); $_SESSION["loged"] = false; // We can't use unset($_SESSION) when using HTTP_AUTH. session_destroy(); A: You can use the meta tag http-equiv="refresh" with a very short response time (e.g. content="1"). This refresh will clear any $_POST. if ( !isset($_SERVER['PHP_AUTH_USER']) || $_SERVER['PHP_AUTH_USER']!='myusername' || $_SERVER['PHP_AUTH_PW']!='mypassword' || isset($_POST['logout']) ) { header('WWW-Authenticate: Basic realm="My protected area"'); header('HTTP/1.0 401 Unauthorized'); echo '<html><head><title>401 Unauthorized</title><meta http-equiv="refresh" content="1"></head><body><h1>401 Unauthorized</h1><p>You are not allowed to see this page. Reload the page to try again.</p></body></html>'; exit(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/3490637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: I am trying to define some methods for my User schema but all of them are throwing that the method is not a function This is my first ever question on SO , so please forgive me incase of question format :) in Schema UserSchema.pre('save' , async function (next){ this.password = await bcrypt.hash( this.password , 5) next() }) UserSchema.method = { createJwt : function(){ return jwt.sign({ name:this.name , email: this.email , id:this._id } , process.env.jwt_secret) }, checkPassword : async function(pass){ const isOk = await bcrypt.compare(pass,this.password) return isOk } } in Controller const User = require('../database/schemas') const createUser = async (req,res) => { const user = await User.create(req.body) const token = user.craeteJwt() res.status(StatusCodes.CREATED).json({ username : user.name , email : user.email , token }) } in Route router.route('/register').post(createUser) //the error says 'data and salt argumets required' for registration and if i continue without bcrypt then the error says 'user.createJwt isn't a function'
{ "language": "en", "url": "https://stackoverflow.com/questions/72019935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: array schema store empty values I want store values in mongodb using node controller but it will store an empty array inside mongodb 1).This is node controller using to accept the req parameter this.childQuestionId = function(req, res, next){ try{ var userObj = { 'child.quiz.questionId' : req.params.questionId, 'child.quiz.score' : req.params.score, //'child.quiz.time' : req.params.time 'child.quiz.time' : new Date().toISOString() }; var childupdate = new childQuiz(userObj); childupdate.save(function(err,data){ if(err) return next(err); console.log("Data saved successfully"); res.send(data); }); }catch(err){ console.log('Error While Saving the result ' +err); return next(err); } } 2).This is mongodb schema using to store the value. Here i am using array quiz schema to store values is array child:{ quiz:[ { /*questionId:{ type: mongoose.Schema.Types.ObjectId, ref: 'commonquestions' },*/ questionId:{type:String}, score:{type:Number}, time:{type:String} } ] } 3).This is my json result sending values using postman { "__v": 0, "_id": "57a43ec68d90b13a7b84c58f", "child": { "quiz": [] } } A: Try with this code in your controller this.childQuestionId = function(req, res, next){ try{ var userObj = { 'questionId' : req.params.questionId, 'score' : req.params.score, //'time' : req.params.time 'time' : new Date().toISOString() }; var childupdate = new childQuiz(); childupdate.quiz.push(userObj); childupdate.save(function(err){ if(err) return next(err); console.log("Data saved successfully"); res.send(childupdate); }); }catch(err){ console.log('Error While Saving the result ' +err); return next(err); } } A: MongoDB save() function accepts 2 parameters, document and data, but in your code, you use a callback function. Should you check it out? https://docs.mongodb.com/manual/reference/method/db.collection.save/#db.collection.save
{ "language": "en", "url": "https://stackoverflow.com/questions/38783590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Android SDK: how to solve "setPasswordVisibilityToggleEnabled(boolean) is deprecated"? I am facing issues after an Android library update: setPasswordVisibilityToggleEnabled(boolean) is deprecated. I want my login view to switch between hiding and showing the password via the toggle icon in TextInputLayout. What should I use instead of setPasswordVisibilityToggleEnabled(boolean)? I also searched on Google and found that setEndIconDrawable(int) is recommended instead, but I don't know how to use it. I also found this: How to switch between hide and view password, but I don't want to use that code. Is there another way to achieve this? Thank you all! A: You should instead use: textInputLayout.setEndIconMode(TextInputLayout.END_ICON_PASSWORD_TOGGLE) Docs for setEndIconMode Docs for END_ICON_PASSWORD_TOGGLE A: https://developer.android.com/reference/com/google/android/material/textfield/TextInputLayout#getEndIconMode() Here you can see the information about the new method for hiding / showing the password text
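As a sketch of how the end-icon approach above is usually wired up (the layout id and variable names here are hypothetical), either declare it in the layout XML or set it in code:

<!-- layout XML: Material Components TextInputLayout -->
<com.google.android.material.textfield.TextInputLayout
    android:id="@+id/password_layout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:endIconMode="password_toggle">

    <com.google.android.material.textfield.TextInputEditText
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:inputType="textPassword" />
</com.google.android.material.textfield.TextInputLayout>

// or programmatically, e.g. in onCreate:
TextInputLayout passwordLayout = findViewById(R.id.password_layout);
passwordLayout.setEndIconMode(TextInputLayout.END_ICON_PASSWORD_TOGGLE);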
{ "language": "en", "url": "https://stackoverflow.com/questions/60632382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: PL/pgSQL - Pass Dynamic Column Name to PREPARE Postgres version: 14 I have a script that does a number of small repetitive dynamic queries. This works, but it is slow, so I am converting them to prepared statements. This also works until I try to pass a column name as an argument at which point everything seems to be a syntax or "operator does not exist" error. What is the correct syntax to get this to work as shown here https://dev.to/aws-heroes/postgresql-prepared-statements-in-pl-pgsql-jl3 ? DO $$ BEGIN DECLARE rtmp1 record; DECLARE rtmp2 record; DECLARE rtmp3 record; DECLARE col_name1 text := 'my_field1'; DECLARE col_name2 text := 'my_field2'; DECLARE col_name3 text := 'my_field3'; -- PREPARE QUERIES DEALLOCATE ALL; EXECUTE FORMAT('PREPARE q_test(text) AS SELECT first_name FROM my_table WHERE $1 = 0'); EXECUTE FORMAT('EXECUTE q_test(%s)', col_name1) INTO rtmp1; EXECUTE FORMAT('EXECUTE q_test(%s)', col_name2) INTO rtmp2; EXECUTE FORMAT('EXECUTE q_test(%s)', col_name3) INTO rtmp3; END $$; A: I can get the function to run: \d my_table Table "public.my_table" Column | Type | Collation | Nullable | Default --------------+-----------------------------+-----------+----------+--------- other_column | character varying(100) | | | updated_at | timestamp without time zone | | | new_colum | character varying(100) | | | first_name | character varying | | | id | integer | DO $$ BEGIN -- PREPARE QUERIES DEALLOCATE ALL; EXECUTE FORMAT('PREPARE q_test(text) AS SELECT first_name FROM my_table WHERE %I = 0', 'id'); END $$; UPDATE. A version of function that iterates over field names and executes the query for each field name. Does away with the PREPARE/EXECUTE. DO $$ DECLARE fld_name text; BEGIN FOREACH fld_name IN ARRAY array['id', 'first_name'] LOOP RAISE NOTICE '%', fld_name; EXECUTE FORMAT('SELECT first_name FROM my_table WHERE %I IS NOT NULL', fld_name); END LOOP; END $$; NOTICE: id NOTICE: first_name DO
{ "language": "en", "url": "https://stackoverflow.com/questions/74234198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is the meaning of (/ ... /) in Fortran I don't know Fortran but have to inspect a part of a code written in this language. This code is full of lines similar to this: matmul(recVecs2p, real((/ i1, i2, i3 /), dp)) I can't find semantics of (/ ... /) on Google, so I hope I will get an answer here. A: This appears to be a language feature that was introduced in Fortran 90. A first hint is the mention of this syntax on the Wikipedia article on Fortran 95, where it is referred to as "array-valued constants (constructors)". Chapter 4 of the Programmer's Guide to Fortran 90, 3nd [sic!] Edition has a little more information about (/ … /) in Chapter 4.1.4. (Due to the copyright statement I may not reproduce the relevant text passage here, but it's freely accessible through the above hyperlink.)
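For a reader who doesn't know Fortran, a short self-contained illustration of the construct: (/ ... /) builds an array value from the listed elements, so the line in the question passes the rank-1 array made of i1, i2, i3 (converted to real of kind dp) into matmul. A minimal example:

program array_constructor_demo
  implicit none
  integer :: v(3)
  real    :: m(2,3), r(2)
  v = (/ 1, 2, 3 /)                               ! array constructor: elements 1, 2, 3
  m = reshape((/ 1., 2., 3., 4., 5., 6. /), (/ 2, 3 /))
  r = matmul(m, real(v))                          ! same shape of call as in the question
  print *, r
end program array_constructor_demo

In Fortran 2003 and later the same constructor can also be written with square brackets, e.g. [ 1, 2, 3 ].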
{ "language": "en", "url": "https://stackoverflow.com/questions/40576365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Data overwriting at same-time request using Mongoose and Node.js I've got a following problem with my MEAN.js application. I'm trying to make some web-browser game. Players are planning attacks on barbarians castles, the attacks must be evaluated, so I've made a method for AttackSchema, which is responsible for that. This method is called everytime the user logs in or view an army moves window (in future I want to add a cron for that too). The problem is that when I plan for example 2 attacks on same barbarian castle, then logout and login after both attacks are evaluated, it overwrites barbarians "level up". See the code below & comments Method calling exports.listByUser = function(req, res) { Attack.find({user: req.user._id}, function(err, attacks) { if(err) return next(err); for(var i in attacks) { attacks[i].evaluateFight(); //here it evaluates every user's attack, which wasn't evaluated yet } res.json(attacks); }); }; Method AttackSchema.methods.evaluateFight = function() { if(!this.result.evaluated && (this.arriveTime < Date.now())) { var player = {}, barbarian = {}; var attack = this; mongoose.model('User').findOne({_id: this.user}, function(err, user) { mongoose.model('Game').findOne({_id: user.game}, function(err, game) { var barbarianQuery = _.find(game.barbarians, function(field) { if(field._id.equals(attack.barbarian)) return true; }); if(barbarianQuery) { // perform an attack evaluation and update barbarian's castle in game mongoose.model('Game').findOneAndUpdate( { '_id': game._id, 'barbarians._id': attack.barbarian }, { '$set': { 'barbarians.$': barbarianQuery } }, function(err, data) { console.log(err); console.log(data); } ); attack.save(); } } }); }); } } The second attack evaluation counts with same barbarian and then overwrites the result of first attack evaluation. For example at first player win - barbarian grows to level 2 and update, secondly player lose - barbarian is overwrited to level 1 again... Is there any option to LOCK updated collections in mongoDB or another solution how to fix this bug? Thanks for answers!
{ "language": "en", "url": "https://stackoverflow.com/questions/33814617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to prevent another app from moving (MS Excel 2010) in C# I need my app to open Excel 2010 and position it in a specific area of my application. I also need Excel to be unresizable, unmovable and eliminate all menu interactions (Close, maximize, minimize) I've found a solution for almost everything, APART from making it unmovable. I tried to use SetWindowPos and set the SWP_NOMOVE flag and a few other flags but I got no success at all. --Specifications: I'm using WPF C#.NET 4.0 and MS Excel 2010. --Below is the method I've come with. Everything work as expected apart from SetWindowPos (it doesn't have any effect at all) public static void StartExcel(string p_processArguments, int p_x, int p_y, int p_height, int p_width, bool p_startMin = true, bool p_setForeground = true, bool p_useShell = true, bool p_waitInput = true, bool p_setNewStyles = true, bool p_removeMenu = true) { //Make sure there is no excel opened (Kill them all - if any) CloseAllExcelProcesses(true); //Make the most basic validations and required info. if (!ValidateProcessArgument(p_processArguments)) throw new Exception("Process' argument is invalid or incorrectly setted. " + p_processArguments); ProcessStartInfo psiApp = new ProcessStartInfo("excel", p_processArguments); if (p_useShell) psiApp.UseShellExecute = true; else psiApp.UseShellExecute = false; if (p_startMin) psiApp.WindowStyle = ProcessWindowStyle.Minimized; Process pApp = Process.Start(psiApp); if (p_waitInput) pApp.WaitForInputIdle(); //Wait for the app to receive the window handle(ID) max limit of 3sec. for (int i = 0; i < 25; i++) { System.Threading.Thread.Sleep(100); } if (pApp.MainWindowHandle != (IntPtr)0) { if (p_startMin) //Now restore its state Win32Import.ShowWindow(pApp.MainWindowHandle, WindowShowStyle.ShowNormal); //Set Foreground if (p_setForeground) Win32Import.SetForegroundWindow(pApp.MainWindowHandle); if (p_setNewStyles) { //Make it an Overlapped Window (Which has no size border, title bar and etc). int style = Win32Import.GetWindowLong(pApp.MainWindowHandle, Win32Import.GWL_STYLE); Win32Import.SetWindowLong(pApp.MainWindowHandle, Win32Import.GWL_STYLE, (uint)(style & ~Win32.WindowStyles.WS_OVERLAPPEDWINDOW)); //NOT WORKING - Apply some flags, to make it unmovable. Win32Import.SetWindowPos(pApp.MainWindowHandle, (IntPtr)0, 0, 0, 0, 0, Win32.SWP.NOMOVE); Win32Import.UpdateWindow(pApp.MainWindowHandle); } if (p_removeMenu) { //Get the app original menu. IntPtr hMenu = Win32Import.GetSystemMenu(pApp.MainWindowHandle, false); //Get the amount of menu the app has. int count = Win32Import.GetMenuItemCount(hMenu); //Remove all existing main menus. for (uint i = 0; i < count; i++) Win32Import.RemoveMenu(hMenu, i, (Win32Import.MF_BYPOSITION | Win32Import.MF_REMOVE)); //Force a redraw. Win32Import.DrawMenuBar(pApp.MainWindowHandle); } //Move the window to the specified location and size (set new position). Win32Import.MoveWindow(pApp.MainWindowHandle, p_x, p_y, p_width, p_height, true); } else { throw new Exception("StartEmbeddedApp - Couldn't get the embedded app handle."); } } I've also come up with a few different ideas such as interceping Excel WM_Message (perhaps by using HWNDSource). Any idea not too complicated to achieve this is welcome! Thanks in advance, Luís. A: I think you are mistaken about the SWP_NOMOVE parameter. That parameter does not mean that the window will not be movable, it just means that the x,y parameters will be ignored during that specific call to the method SetWindowPos. 
The goal of that flag is to use the function SetWindowPos for changing the size of the target window without changing its position. From SetWindowPos in MSDN: SWP_NOMOVE 0x0002 Retains the current position (ignores X and Y parameters). Something that you could try is to create a custom window on your application that handles all messages related to size and position (examples:WM_SIZE, WM_SIZING, WM_WINDOWPOSCHANGED, WM_WINDOWPOSCHANGING, etc) to prevent modifications, then set your excel window as a child window of your custom window. You could also try to just remove the title bar
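A rough sketch of the message-handling idea from the answer, shown for brevity as a WinForms host form (in the asker's WPF app the same message could be intercepted through an HwndSource hook, as the question already hints). The host window discards any pending move/resize, and the Excel window parented into it then stays put with it; constants and the WINDOWPOS layout follow the Win32 documentation, and System.Runtime.InteropServices is required:

private const int WM_WINDOWPOSCHANGING = 0x0046;
private const int SWP_NOSIZE = 0x0001;
private const int SWP_NOMOVE = 0x0002;

[StructLayout(LayoutKind.Sequential)]
private struct WINDOWPOS
{
    public IntPtr hwnd;
    public IntPtr hwndInsertAfter;
    public int x, y, cx, cy;
    public int flags;
}

protected override void WndProc(ref Message m)
{
    if (m.Msg == WM_WINDOWPOSCHANGING)
    {
        // Read the proposed position, force the "don't move / don't size" flags,
        // and write it back before the default handling sees it.
        WINDOWPOS wp = (WINDOWPOS)Marshal.PtrToStructure(m.LParam, typeof(WINDOWPOS));
        wp.flags |= SWP_NOMOVE | SWP_NOSIZE;
        Marshal.StructureToPtr(wp, m.LParam, false);
    }
    base.WndProc(ref m);
}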
{ "language": "en", "url": "https://stackoverflow.com/questions/16216660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Sharing an Html widget to other Android apps in Flutter We have the Html() widget that renders HTML content. We also have Share.share(), which shares given content with other apps. Currently Share.share() requires a string as its parameter, not a widget. So how can I share the rendered HTML content with other applications? A: Assuming we're talking about the Html widget provided by flutter_html: if you have access to the widget, you can call .data on it to get the String? value: final Html html = Html(data: '<p>Hello world!</p>'); final String? stringToShare = html.data; By defining html (the first line) in your build function, you can access it in the share button as well.
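A small sketch of the "share button" part, assuming the share plugin referenced in the question exposes Share.share(String); the button itself is hypothetical:

// Hypothetical share button; stringToShare comes from the Html widget defined earlier in build().
IconButton(
  icon: const Icon(Icons.share),
  onPressed: () => Share.share(stringToShare ?? ''),
)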
{ "language": "en", "url": "https://stackoverflow.com/questions/69965173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: VB.NET - Problem with transparent background PNG and ImageSharp quantizer I am trying to make a WinForms app in VB.NET (Framework 4.7.2) that optimizes an electronic signature using ImageSharp, and I am running into a problem. When I try to use the OctreeQuantizer to decrease the number of colors in the PNG, I get an inverted / negative image if the source has a transparent background. This is a problem since the signature will need to be transparent when it is applied to a document. Application Window - Top is original signature Original Save Dialog from Paint.NET As you can see, I saved the original image as an 8bpp PNG with transparency using Paint.NET Here's my quantize function: Public Shared Function QuantizeImageGrayscalePNG(ByVal vstrSource As String, ByVal vintMaxColors As Integer) As Bitmap Dim poqQuant As New OctreeQuantizer() Dim pencPNG As New PngEncoder() With {.BitDepth = PngBitDepth.Bit8, .ColorType = PngColorType.GrayscaleWithAlpha} vintMaxColors = Math.Min(256, vintMaxColors) If vstrSource IsNot Nothing AndAlso IO.File.Exists(vstrSource) AndAlso vintMaxColors > 0 Then Using piNew As Image = Image.Load(vstrSource), pstOutput As New IO.MemoryStream() poqQuant.Options.MaxColors = vintMaxColors piNew.Mutate(Sub(x) x.Quantize(poqQuant)) piNew.Save(pstOutput, pencPNG) Return Bitmap.FromStream(pstOutput) End Using Else Return Nothing End If End Function If I use an image with no transparency as the source, the issue does not occur. I also tried commenting out the Mutate function, and it "works" but of course does not change the number of colors or image size. A: Answer from JimBobSquarePants on GitHub: Octree only supports a single transparent color . For Png you need to use the Wu quantizer which is the default quantizer. However, if you are trying to reduce the colors in a png you should be saving it in indexed format by setting the PngColorType in the encoder along with a custom Wu quantizer with your max color count. Not sure that is a full answer, but my issue was closed so that's all I'm getting.
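A rough sketch of what that advice could look like in the question's function, assuming the installed ImageSharp release exposes a Quantizer option on PngEncoder together with WuQuantizer/QuantizerOptions (names may differ between versions):

' Sketch only - replace the OctreeQuantizer + Mutate(Quantize) approach with an
' indexed (palette) PNG whose encoder quantizes with Wu on save.
Dim pencPNG As New PngEncoder() With {
    .BitDepth = PngBitDepth.Bit8,
    .ColorType = PngColorType.Palette,
    .Quantizer = New WuQuantizer(New QuantizerOptions() With {.MaxColors = vintMaxColors})
}
' ... then simply piNew.Save(pstOutput, pencPNG) without calling piNew.Mutate(...)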
{ "language": "en", "url": "https://stackoverflow.com/questions/74735320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Kotlin / Native C interop with reduced visibility Is it possible to generate C stubs with internal visibility? When using C interop with Kotlin/Native, the generated stubs for C functions and structures have public visibility. As far as I understand, that means that if someone uses this Kotlin code as a library, the generated stubs will be visible in the consuming project as well. That fact makes it difficult to create a Kotlin wrapper over the C library. A: I am sorry, but this behavior is not supported for now. First of all, since the cinterop tool produces the bindings as a .klib file, they belong to a separate module, so it won't help even if you somehow mark them as internal. The .klib with the bindings is just another source set of the project, so it has to be possible to connect it with different kinds of dependencies. Currently, because of some language limitations, one cannot use the implementation dependency kind to connect Kotlin/Native libraries, only the api one, but that will probably become available someday. For now, the best option I can recommend is to name the package something like internal, to let the consumer know about its intended use.
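To illustrate the dependency-kind limitation mentioned above, a hedged sketch of a consuming project's build script (the project and source-set names here are hypothetical):

// build.gradle.kts of the project that consumes the wrapper library
kotlin {
    sourceSets {
        val nativeMain by getting {
            dependencies {
                // Only 'api' works for Kotlin/Native library dependencies today,
                // so the cinterop bindings leak into this project's compile scope.
                api(project(":my-c-wrapper"))
            }
        }
    }
}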
{ "language": "en", "url": "https://stackoverflow.com/questions/57939049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: TypeError: Tag.find_all() missing 1 required positional argument: 'self' from bs4 import BeautifulSoup as soup import numpy as np import pandas as pd import requests import random headers = { 'authority': 'www.zillow.com', 'accept': '*/*', 'accept-language': 'en-US,en;q=0.9', 'sec-ch-ua': '"Not_A Brand";v="99", "Google Chrome";v="109", "Chromium";v="109"', 'sec-ch-ua-mobile': '?0', 'sec-ch-ua-platform': '"Windows"', 'sec-fetch-dest': 'empty', 'sec-fetch-mode': 'cors', 'sec-fetch-site': 'same-origin', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36' + str(random.randint(1, 1000)), #IF YOU RATELIMIT AND GET CAPTCHA, CHANGE USERAGENT } with requests.session() as i: #user input city = str(input("City To Search In: ")) + "/" #initializers page = 1 end_page = 10 url = "" url_list = [] request = "" request_list = [] while page <= end_page: url = "https://www.zillow.com/homes/for_sale/" + city + str(page) + "_p" url_list.append(url) page += 1 for j in url_list: #can change j to "url" if you want but i like "j" for simplicity request = i.get(j, headers=headers) request_list.append(request) #another two initializers (seperating this felt more organized to me, sorry if it looks messy) rawInfo = "" rawInfoList = [] for r in request_list: #can change r to "request" if you want but i like "r" for simplicity rawInfo = soup(r.content, "html.parser") rawInfoList.append(rawInfo) cleanedList = [] for rawInfo in rawInfoList: for i in rawInfo: address = soup.find_all (class_= "list-card-addr") This is some code I am using to scrape info from zillow search pages. Wheen I use soup.find_all to find all of the addresses listed from the raw info I pulled, it gives this error: "TypeError: Tag.find_all() missing 1 required positional argument: 'self'". Anyone know what I did wrong? I was expecting it to spit out all of the addresses it found in the first 10 pages of results. You can do print(rawInfoList) to see the raw info.
{ "language": "en", "url": "https://stackoverflow.com/questions/75516199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I suppress viewing / listing all file types EXCEPT html files using .htaccess? Basically I want to go beyond simple "Options +Indexes" display on a server, set using the .htaccess file. I ONLY want to see .html files in a given directory, and suppress display of ALL OTHER file types. Why? I am happy to show directory listing pages on a testing server, so a client can select different page design versions, for instance... BUT I would prefer they just see the '.html' files on the directory listing, NOT any other image files, favicon files and so on. Any clues much appreciated, not that familiar with .htaccess possibilities. thanks! A: The only way is to use IndexIgnore carefully. For example, if you put the following code: Options +Indexes IndexIgnore * the directory listing will be on but nothing will be shown, so the purpose of IndexIgnore is to hide what you want hidden when a directory listing is produced. Another example is to do the following: Options +Indexes IndexIgnore *.php *.jpg *.png *.txt *image The code above will hide all files with those extensions as well as the image folder, so you should list every folder and file that you don't want to be shown; .html files are not listed above, so they will be shown.
{ "language": "en", "url": "https://stackoverflow.com/questions/46637067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: search and count specific phrases with special characters in text files I have a list of search phrases where some are single words, some are multiple words, some have a hyphen in between them, and others may have both parentheses and hyphens. I'm trying to process a directory of text files and search for 100+ of these phrases, and then count occurrences. It seems like the code below works in 2.7x python until it hits the hyphenated search phrases. I observed some unexpected counts on some text files for at least one of the hyphenated search phrases. kwlist = ['phraseone', 'phrase two', 'phrase-three', 'phrase four (a-b-c) abc', 'phrase five abc', 'phrase-six abc abc'] for kws in kwlist: s_str = kws kw = re.findall(r"\b" + s_str +r"\b", ltxt) count = 0 for c in kw: if c == s_str: count += 1 output.write(str(count)) Is there a better way to handle the range of phrases in the search, or any improvements I can make to my algorithm? A: You could achieve this with what I would call a pythonic one-liner. We don't need to bother with using a regex, as we can use the built-in .count() method, which will from the documentation: string.count(s, sub[, start[, end]]) Return the number of (non-overlapping) occurrences of substring sub in string s[start:end]. Defaults for start and end and interpretation of negative values are the same as for slices. So all we need to do is sum up the occurrences of each keyword in kwlist in the string ltxt. This can be done with a list-comprehension: output.write(str(sum([ltxt.count(kws) for kws in kwlist]))) Update As pointed out in @voiDnyx's comment, the above solution writes the sum of all the counts, not for each individual keyword. If you want the individual keywords outputted, you can just write each one individually from the list: counts = [ltxt.count(kws) for kws in kwlist] for cnt in counts: output.write(str(cnt)) This will work, but if you wanted to get silly and put it all in one-line, you could potentially do: [output.write(str(ltxt.count(kws))) for kws in kwlist] Its up to you, hope this helps! :) If you need to match word boundaries, then yes the only way to do so would be to use the \b in a regex. This doesn't mean that you cant still do it in one line: [output.write(str(len(re.findall(r'\b'+re.escape(kws)+r'\b'))) for kws in kwlist] Note how the re.escape is necessary, as the keyword may contain special characters.
{ "language": "en", "url": "https://stackoverflow.com/questions/46932542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Building an app made with Intel-xdk with Cordova CLI I'm evaluating Intel XDK, with the idea of using this tool in my IT classes (students from 15 to 18 years old). I'm on Ubuntu 16.04 and installed XDK 3759 Linux 32. A couple of days ago, when I installed an update, I also installed nodejs, npm and cordova, because as of June the XDK will no longer provide the cloud build service. Anyway, after installing the Android SDK tools, build tools and platform tools, when I try to compile a simple project I get this error: Error: /home/prof/prova-xdk/platforms/android/gradlew: Command failed with exit code 1 Error output: /home/prof/android-sdk/build-tools/25.0.2/aapt: 3: /home/prof/android-sdk/build-tools/25.0.2/aapt: Syntax error: Unterminated quoted string FAILURE: Build failed with an exception. * *What went wrong: Execution failed for task ':CordovaLib:processDebugResources'. com.android.ide.common.process.ProcessException: Failed to execute aapt Can someone please explain the error? Should I check my code? If so, which file? Thanks everybody! Alessandro A: I recommend you update to 3900 (a hotfix is coming very soon to address login and several other issues) and use the "Cordova export" feature to build apps. The ZIP that is generated by that export can be provided directly to PhoneGap Build, using the one free private slot. This will be a much simpler build process than having your students install node, Cordova CLI and the Android build tools.
{ "language": "en", "url": "https://stackoverflow.com/questions/43067104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Setting NGINX TLS with certificate/key loaded by initContainer I've found several pages describing how to set up NGINX to use HTTPS/TLS. However, they all suggest creating a TLS secret with the key & cert. We want to use TLS but have NGINX load the key/cert via an init container, which in this case is implemented by acs-keyvault-agent. Any ideas? A: If your only goal is to obtain the TLS key/cert from Azure Key Vault, then you're probably better off going with the Key Vault FlexVolume project from Azure. This would have the advantage of not using init containers at all and just dealing with volumes and volume mounts. Since you explicitly want to use Hexadite/acs-keyvault-agent in its default mode (which uses volume mounts, by the way), there is a full example of how to do this in the project's examples folder here: examples/acs-keyvault-deployment.yaml#L40-L47. Obviously you need to build, push, and configure the container correctly for your environment. Then you will need to configure Nginx to use the CertFileName.pem and KeyFilename.pem from the /secrets/certs_keys/ folder. Hope this helps.
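To make that last step concrete, the server block would then point at the mounted files (a sketch only; the host name is hypothetical, while the paths and file names are the ones from the example above):

server {
    listen 443 ssl;
    server_name example.internal;
    ssl_certificate     /secrets/certs_keys/CertFileName.pem;
    ssl_certificate_key /secrets/certs_keys/KeyFilename.pem;
}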
{ "language": "en", "url": "https://stackoverflow.com/questions/57803964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SQL Syntax error in CREATE TABLE statement in MS Access VBA subroutine I am working on a subroutine in Microsoft Access VBA. The intended purpose of my program is to create a new table called newTableName and then perform a join on the two tables tableName1 and tableName2. The result of this join should be stored in newTableName. From my research online, I am led to believe that I cannot create an empty table to store the joined data in. So, I have attempted to duplicate tablenName2 and store its data in newTableName until the join is performed. However, I get the following error when I run my program: Run-time error '-2147217900 (80040e14)': Syntax error in CREATE TABLE statement. From this line: CurrentProject.Connection.Execute strNewTableQuery Which was tested with user input resulting in this query: CREATE TABLE [UNIX_lob] AS SELECT * FROM UNIX Here is my code: Option Compare Database Public Sub JoinWithIIMMModule() Dim sqlNewTableQuery As String Dim sqlJoinQuery As String Dim tableName1 As String Dim tableName2 As String Dim newTableName As String Dim commonField As String ' Set the common field used to join the two tables ' commonField = "[code]" ' Set the tableName1 to be used in the join. ' tableName1 = "IIMM" ' Acquire name of table to be joined with table1 from the user' tableName2 = InputBox("Enter name of the table which you would like to join with " + tableName1 + ":", _ "Table to split", "UNIX") ' Set the name of the new table where the joins will be stored.' newTableName = "[" + tableName2 + "_lob]" ' Create newTableName, which is a duplicate of the tableName2 table.' ' This is where the joined data will reside.' sqlNewTableQuery = "CREATE TABLE " + newTableName + " AS SELECT * FROM " + tableName2 Debug.Print sqlNewTableQuery CurrentProject.Connection.Execute sqlNewTableQuery ' Join tableName1 and tableName2 on commonField, and store the joined data in newTableName' sqlJoinQuery = "SELECT " + tableName1 + ".*, " + tableName2 + ".*" & _ "INTO " + newTableName & _ "FROM " + tableName1 & _ "INNER JOIN " + tableName2 & _ "ON " + tableName1 + "." + commonField + " = " + tableName2 + "." + commonField + ";" Debug.Print sqlJoinQuery CurrentDb.Execute sqlJoinQuery End Sub Does anybody know what might be causing this error?
{ "language": "en", "url": "https://stackoverflow.com/questions/38508544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I 'expect' a chain of methods using Rspec where the first method takes a parameter? I have a method call in a ruby model that looks like the following: Contentful::PartnerCampaign.find_by(vanityUrl: referral_source).load.first Within the models spec.rb file, I'm trying to mock that call and get a value by passing in a param. But I'm having trouble figuring out the correct way of calling it. At the top of my spec.rb file I have: let(:first_double) { double("Contentful::Model", fields {:promotion_type => "Promotion 1"}) } Within the describe block I've tried the following: expect(Contentful::PartnerCampaign).to receive_message_chain(:find_by, :load, :first). and_return(first_double) expect(Contentful::PartnerCampaign).to receive_message_chain(:find_by, :load, :first).with(vanityUrl: 'test_promo_path'). and_return(first_double) expect(Contentful::PartnerCampaign).to receive_message_chain(:find_by => vanityUrl: 'test_promo_path', :load, :first). and_return(first_double) As you can probably guess, none of these are working. Does anyone know the correct way to do this sort of thing? Is it even possible? A: Generally speaking, I prefer not to use stub chains, as they are often a sign that you are violating the Law of Demeter. But, if I had to, this is how I would mock that sequence: let(:vanity_url) { 'https://vanity.url' } let(:partner_campaigns) { double('partner_campaigns') } let(:loaded_partner_campaigns) { double('loaded_partner_campaigns') } let(:partner_campaign) do double("Contentful::Model", fields {:promotion_type => "Promotion 1"} end before do allow(Contentful::PartnerCampaign) .to receive(:find_by) .with(vanity_url: vanity_url) .and_return(partner_campaigns) allow(partner_campaigns) .to receive(:load) .and_return(loaded_partner_campaigns) allow(loaded_partner_campaigns) .to receive(:first) .and_return(partner_campaign) end A: This is what I would do. Notice that I split the "mocking" part and the "expecting" part, because usually I'll have some other it examples down below (of which then I'll need those it examples to also have the same "mocked" logic), and because I prefer them to have separate concerns: that is anything inside the it example should just normally focus on "expecting", and so any mocks or other logic, I normally put them outside the it. let(:expected_referral_source) { 'test_promo_path' } let(:contentful_model_double) { instance_double(Contentful::Model, promotion_type: 'Promotion 1') } before(:each) do # mock return values chain # note that you are not "expecting" anything yet here # you're just basically saying that: if Contentful::PartnerCampaign.find_by(vanityUrl: expected_referral_source).load.first is called, that it should return contentful_model_double allow(Contentful::PartnerCampaign).to receive(:find_by).with(vanityUrl: expected_referral_source) do double.tap do |find_by_returned_object| allow(find_by_returned_object).to receive(:load) do double.tap do |load_returned_object| allow(load_returned_object).to receive(:first).and_return(contentful_model_double) end end end end end it 'calls Contentful::PartnerCampaign.find_by(vanityUrl: referral_source).load.first' do expect(Contentful::PartnerCampaign).to receive(:find_by).once do |argument| expect(argument).to eq({ vanityUrl: expected_referral_source}) double.tap do |find_by_returned_object| expect(find_by_returned_object).to receive(:load).once do double.tap do |load_returned_object| expect(load_returned_object).to receive(:first).once end end end end end it 'does something...' do # ... 
end it 'does some other thing...' do # ... end If you do not know about ruby's tap method, feel free to check this out A: I think you need to refactor the chain in two lines like this: model = double("Contentful::Model", fields: { promotion_type: "Promotion 1" }) campaign = double allow(Contentful::PartnerCampaign).to receive(:find_by).with(vanityUrl: 'test_promo_path').and_return(campaign) allow(campaign).to receive_message_chain(:load, :first).and_return(model) Then you can write your spec that will pass that attribute to find_by and check the chain.
{ "language": "en", "url": "https://stackoverflow.com/questions/51921964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Continuously checking connection to MySQL Database Is there an effective way to continuously check if a connection to the database is established? For example, if the network drops or the server computer is turned off, then there is no obvious database available. My idea was to create a Timer that would poll every 10 seconds or so to determine an available connection but that just seems like another "hack". Another idea is to check connectivity before any user input. Here is the sample code that would check for a connection: public bool isDbAvail() { using (MySqlConnection conn = new MySqlConnection(TimeClock.Properties.Settings.Default.timeclockConnectionString)) { try { conn.Open(); return true; } catch (MySqlException ex) { Console.WriteLine(ex.Message); return false; } } } Any input would be great on what would be the best approach.
{ "language": "en", "url": "https://stackoverflow.com/questions/24620091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: QProcess, Cannot Create Pipe I am running a QProcess in a timer slot at 1 Hz. The process is designed to evoke a Linux command and parse it's output. The problem is this: after the program runs for about 20 minutes, I get this error: QProcessPrivate::createPipe: Cannot create pipe 0x104c0a8: Too many open files QSocketNotifier: Invalid socket specified Ideally, this program would run for the entire uptime of the system, which may be days or weeks. I think I've been careful with process control by reading the examples, but maybe I missed something. I've used examples from the Qt website, and they use the same code that I've written, but those were designed for a single use, not thousands. Here is a minimum example: class UsageStatistics : public QObject { Q_OBJECT public: UsageStatistics() : process(new QProcess) { timer = new QTimer(this); connect(timer, SIGNAL(timeout()), this, SLOT(getMemoryUsage())); timer->start(1000); // one second } virtual ~UsageStatistics() {} public slots: void getMemoryUsage() { process->start("/usr/bin/free"); if (!process->waitForFinished()) { // error processing } QByteArray result = process->realAll(); // parse result // edit, I added these process->closeReadChannel(QProcess::StandardOutput); process->closeReadChannel(QProcess::StandardError); process->closeWriteChannel(); process->close(); } } I've also tried manually deleting the process pointer at the end of the function, and then new at the beginning. It was worth a try, I suppose. Free beer for whoever answers this :) A: QProcess is derived from QIODevice, so I would say calling close() should close the file handle and solve you problem. A: I cannot see the issue, however one thing that concerns me is a possible invocation overlap in getMemoryUsage() where it's invoked before the previous run has finished. How about restructuring this so that a new QProcess object is used within getMemoryUsage() (on the stack, not new'd), rather than being an instance variable of the top-level class? This would ensure clean-up (with the QProcess object going out-of-scope) and would avoid any possible invocation overlap. Alternatively, rather than invoking /usr/bin/free as a process and parsing its output, why not read /proc/meminfo directly yourself? This will be much more efficient. A: First I had the same situation with you. I got the same results. I think that QProcess can not handle opened pipes correctly. Then, instead of QProcess, I have decided to use popen() + QFile(). class UsageStatistics : public QObject { Q_OBJECT public: UsageStatistics(){ timer = new QTimer(this); connect(timer, SIGNAL(timeout()), this, SLOT(getMemoryUsage())); timer->start(1000); // one second } virtual ~UsageStatistics() {} private: QFile freePipe; FILE *in; public slots: void getMemoryUsage() { if(!(in = popen("/usr/bin/free", "r"))){ qDebug() << "UsageStatistics::getMemoryUsage() <<" << "Can not execute free command."; return; } freePipe.open(in, QIODevice::ReadOnly); connect(&freePipe, SIGNAL(readyRead()), this, SLOT(parseResult()) ); // OR waitForReadyRead() and parse here. } void parseResult(){ // Parse your stuff freePipe.close(); pclose(in); // You can also use exit code by diving by 256. } } A: tl;dr: This occurs because your application wants to use more resources than allowed by the system-wide resource limitations. You might be able to solve it by using the command specified in [2] if you have a huge application, but it is probably caused by a programming error. Long: I just solved a similar problem myself. 
I use a QThread for the logging of exit codes of QProcesses. The QThread uses curl to connect to a FTP server uploads the log. Since I am testing the software I didn't connect the FTP server and curl_easy_perform apparently waits for a connection. As such, my resource limit was reached and I got this error. After a while my program starts complaining, which was the main indicator to figure out what was going wrong. [..] QProcessPrivate::createPipe: Cannot create pipe 0x7fbda8002f28: Too many open files QProcessPrivate::createPipe: Cannot create pipe 0x7fbdb0003128: Too many open files QProcessPrivate::createPipe: Cannot create pipe 0x7fbdb4003128: Too many open files QProcessPrivate::createPipe: Cannot create pipe 0x7fbdb4003128: Too many open files [...] curl_easy_perform() failed for curl_easy_perform() failed for disk.log [...] I've tested this by connecting the machine to the FTP server after this error transpired. That solved my problem. Read: [1] https://linux.die.net/man/3/ulimit [2] https://ss64.com/bash/ulimit.html [3] https://bbs.archlinux.org/viewtopic.php?id=234915
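For reference, the per-process open-file limit discussed in [1] and [2] can be checked and, for the current shell session, raised like this:

ulimit -n        # show the current soft limit on open file descriptors
ulimit -n 4096   # raise it for this shell and its children (up to the hard limit)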
{ "language": "en", "url": "https://stackoverflow.com/questions/16237117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: No default constructor exists for the class I know this question has already been ask, but I couldn't figure it out. I have two classes Point and Line, and 2 Points are members of Line. However in the Line constructor I get "no default constructor exists for the class" error. How can I fix this problem? #include <cstdlib> #include <cmath> #include "PointClass.h" using namespace std; class Line { public: Line(const Point& p1, const Point& p2) { this->point1 = p1; this->point2 = p2; } Point point1; Point point2; static double Distance(Point p1, Point p2, Point p3) { double distance = (abs((p1.y - p2.y) * p3.x - (p2.x - p1.x) * p3.y + p2.x * p1.y - p2.x * p1.x) / (sqrt(pow((p2.y - p1.y), 2.0) + pow((p2.x - p1.x), 2.0)))); return distance; } }; class Point { public: Point(double a, double b) { this->setCoord(a, b); } double x; double y; void setCoord(double a, double b) { this->x = a; this->y = b; } }; A: The reason for you error, is that this code calls the Point default constructor (which doesn't exist) Line(const Point& p1, const Point& p2) { this->point1 = p1; this->point2 = p2; } instead you should write it like this Line(const Point& p1, const Point& p2) : point1(p1), point2(p2) { } Your version calls the Point default constructor and then assigns the point values. My version initialises the points by calling the Point copy constructor A: The error message says "no default constructor", so you should add ones. class Line { public: Line() {} // add this Line(const Point& p1, const Point& p2) { class Point { public: Point() {} // add this Point(double a, double b) { or ones with initialization (safer): class Line { public: Line() : point1(), point2() {} // add this Line(const Point& p1, const Point& p2) { class Point { public: Point() : x(0), y(0) {} // add this Point(double a, double b) { or constructors with default arguments: A: The core reason you are getting that error. Is because Line constructor doesn't know how to initialize point1 and point2 prior to your assignment statements in the constructor. That's what constructor initialization lists are for. It's usually better to do initial member initialization in the constructor initialization list instead of in the constructor body. Without a constructor initialization list, point1 and point2 get constructed with the default constructor (error because it's missing) and then immediate updated with additional code in constructor body. You can avoid the need for a default constructor on Point by having Line constructor defined as follows: Line(const Point& p1, const Point& p2) : point1(p1), point2(p2) {} That will resolve your compiler error. Further, it's still not a bad idea to have a default constructor for Point. It's the type of class where it's often useful to have such a constructor. And some point later, you might need to have a collection of Point instances and the compiler will complain again without it. MakeCAT's answer is correct in that regards. Aside: Your Distance function is passing Point parameters by value. This means the compiler needs to construct 3 new instances of Point each time Distance is invoked. Change your function signature for Distance as follows. If this doesn't make the compiler happy, it will at the very least, generate more efficient code. 
static double Distance(const Point& p1, const& Point p2, const& Point p3) { double distance = (abs((p1.y - p2.y) * p3.x - (p2.x - p1.x) * p3.y + p2.x * p1.y - p2.x * p1.x) / (sqrt(pow((p2.y - p1.y), 2.0) + pow((p2.x - p1.x), 2.0)))); return distance; } A: The problem is that you don't have a default constructor for that class and therefore when you create the points you don't know what values their variables will take. The easiest and fastest way of solutions is to give default values to the constructor when it does not receive parameters class Point { public: //only modification here Point(double a = 0, double b = 0) { this->setCoord(a, b); } double x; double y; void setCoord(double a, double b) { this->x = a; this->y = b; } }; A: The default constructor is the constructor without parameters. If you have a user provided constructor taking parameters (like your Line constructor taking two Points) then you don't automatically get the default constructor, you need to add it yourself. I suggest changing your classes * *to use in-class initializers *to use member initialization instead of setting in the constructor body *defaulting the default constructor (which requires step 1) You can compare the IsoCppCoreguidelines for a more detailed explanation on these changes. class Point { public: Point() = default; Point(double a, double b) : x{a}, y{b} {} double x{}; double y{}; }; class Line { public: Line() = default; Line(const Point& p1, const Point& p2) : point1{p1}, point2{p2} {} Point point1{}; Point point2{}; };
{ "language": "en", "url": "https://stackoverflow.com/questions/61579664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Many of metrics are empty when setting up istio 1.2.1 - what did i do wrong? I’m getting started with Istio (1.2.1) on k8s v1.15.0 - and everything looks good except many of the attributes are empty or “unknown”. Using the bookinfo sample app fields like destinationName, destinationApp, destinationWorkload, sourceName, sourceWorkload, etc are empty (or set to unknown), while other metrics are set. The log lines below were taken from the telemetry container on the telemetry pod while I curl’d the URL 100.96.76.176:9080/productpage. All the pods have app and version labels set, and I’m not sure where else to look. [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.625780Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.140”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“details.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“details:9080”,“latency”:“2.511893ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:677,“referer”:"",“reporter”:“destination”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:178,“responseTimestamp”:“2019-07-24T16:52:24.628198Z”,“sentBytes”:313,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/details/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.625458Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.6.140”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“details.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“details:9080”,“latency”:“3.060349ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:342,“referer”:"",“reporter”:“source”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:178,“responseTimestamp”:“2019-07-24T16:52:24.628457Z”,“sentBytes”:307,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/details/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:24.631186Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.6.152”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“reviews.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“reviews:9080”,“latency”:“14.927949ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:342,“referer”:"",“reporter”:“source”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:375,“responseTimestamp”:“2019-07-24T16:52:24.646069Z”,“sentBytes”:549,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/reviews/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.640811Z”,“instance”:“tcpaccesslog.logentry.istio-system”,“connectionDuration”:“0s”,“connectionEvent”:“open”,“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.5.90”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“mongodb.g-demo.svc.cluster.local”,“destinationWorkload”:"",“protocol”:“tcp”,“receivedBytes”:0,“reporter”:“source”,“requestedServerName”:"",“responseFlags”:"",“sentBytes”:0,“sourceApp”:"",“sourceIp”:“100.97.8.150”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“totalReceivedBytes”:0,“totalSentBytes”:0} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.644777Z”,“instance”:“tcpaccesslog.logentry.istio-system”,“connectionDuration”:“4.38701ms”,“connectionEvent”:“close”,“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.5.90”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“mongodb.g-demo.svc.cluster.local”,“destinationWorkload”:"",“protocol”:“tcp”,“receivedBytes”:351,“reporter”:“source”,“requestedServerName”:"",“responseFlags”:"",“sentBytes”:429,“sourceApp”:"",“sourceIp”:“100.97.8.150”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“totalReceivedBytes”:351,“totalSentBytes”:429} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 2019-07-24T16:52:25.771171Z warn input set condition evaluation error: id=‘4’, error=‘lookup failed: ‘destination.service.host’’ [istio-telemetry-56d5966d6b-qj9z2 telemetry] 2019-07-24T16:52:25.771376Z warn input set condition evaluation error: id=‘4’, error=‘lookup failed: ‘destination.service.host’’ [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:24.641665Z”,“instance”:“tcpaccesslog.logentry.istio-system”,“connectionDuration”:“0s”,“connectionEvent”:“open”,“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.5.90”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:"",“destinationWorkload”:"",“protocol”:“tcp”,“receivedBytes”:277,“reporter”:“destination”,“requestedServerName”:"",“responseFlags”:"",“sentBytes”:0,“sourceApp”:"",“sourceIp”:“100.97.8.150”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“totalReceivedBytes”:277,“totalSentBytes”:0} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.644179Z”,“instance”:“tcpaccesslog.logentry.istio-system”,“connectionDuration”:“3.633777ms”,“connectionEvent”:“close”,“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.5.90”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:"",“destinationWorkload”:"",“protocol”:“tcp”,“receivedBytes”:351,“reporter”:“destination”,“requestedServerName”:"",“responseFlags”:"",“sentBytes”:429,“sourceApp”:"",“sourceIp”:“100.97.8.150”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“totalReceivedBytes”:351,“totalSentBytes”:429} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.636470Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.8.150”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“ratings.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“ratings:9080”,“latency”:“7.978662ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:693,“referer”:"",“reporter”:“destination”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:48,“responseTimestamp”:“2019-07-24T16:52:24.644258Z”,“sentBytes”:166,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/ratings/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:24.631485Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.152”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“reviews.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“reviews:9080”,“latency”:“14.298546ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:677,“referer”:"",“reporter”:“destination”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:375,“responseTimestamp”:“2019-07-24T16:52:24.645688Z”,“sentBytes”:555,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/reviews/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.619969Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.4.119”,“destinationName”:"",“destinationNamespace”:“g-demo”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“productpage.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“100.96.76.176:9080”,“latency”:“28.389479ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:647,“referer”:"",“reporter”:“destination”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5179,“responseTimestamp”:“2019-07-24T16:52:24.648264Z”,“sentBytes”:5324,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/productpage",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.619518Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.4.119”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“productpage.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“100.96.76.176:9080”,“latency”:“29.187307ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:224,“referer”:"",“reporter”:“source”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5179,“responseTimestamp”:“2019-07-24T16:52:24.648640Z”,“sentBytes”:5318,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/productpage",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] gc 74945 @116322.022s 10%: 0.015+35+105 ms clock, 0.12+3.6/8.2/36+845 ms cpu, 15->16->7 MB, 16 MB goal, 8 P [istio-telemetry-56d5966d6b-qj9z2 
telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:24.635940Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“unknown”,“destinationApp”:"",“destinationIp”:“100.97.8.150”,“destinationName”:"",“destinationNamespace”:"",“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“ratings.g-demo.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:"",“httpAuthority”:“ratings:9080”,“latency”:“8.73426ms”,“method”:“GET”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:362,“referer”:"",“reporter”:“source”,“requestId”:“5662e4c0-8cd8-4212-b4e3-6fe8be75d3f5”,“requestSize”:0,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:48,“responseTimestamp”:“2019-07-24T16:52:24.644626Z”,“sentBytes”:160,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:“g-demo”,“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/ratings/0",“userAgent”:“curl/7.58.0”,“xForwardedFor”:“0.0.0.0”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.628982Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“4.039738ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1152,“referer”:"",“reporter”:“destination”,“requestId”:“242c989e-1d28-447b-b382-b194574cbdd7”,“requestSize”:825,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.632940Z”,“sentBytes”:141,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.6.140”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.630094Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“3.013082ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1806,“referer”:"",“reporter”:“destination”,“requestId”:“219ef409-2f09-4c89-acb2-d5da0ef45169”,“requestSize”:1475,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.633022Z”,“sentBytes”:141,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.4.119”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:25.640448Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“1.354099ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:964,“referer”:"",“reporter”:“destination”,“requestId”:“968c7073-d926-43f7-b8bc-3c45e273c95c”,“requestSize”:637,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.641721Z”,“sentBytes”:141,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.8.150”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.642777Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“129.565696ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:827,“referer”:"",“reporter”:“destination”,“requestId”:“7758c475-47ea-4dab-8240-0a9bf3f08839”,“requestSize”:501,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.772309Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.5.90”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.647082Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“125.353066ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1201,“referer”:"",“reporter”:“destination”,“requestId”:“344b3520-0a89-427a-95f2-dcd83eb8c302”,“requestSize”:874,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.772387Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.6.152”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:25.648757Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“123.705739ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1180,“referer”:"",“reporter”:“destination”,“requestId”:“3830bb1e-ee4a-4747-9b85-aa8632a05f10”,“requestSize”:849,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.772425Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.4.119”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.644416Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“128.10414ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1158,“referer”:"",“reporter”:“destination”,“requestId”:“7ff3a60a-b806-4f09-9c12-28dc5652a413”,“requestSize”:831,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.772461Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.8.150”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] {“level”:“info”,“time”:“2019-07-24T16:52:25.648347Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“124.187413ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1163,“referer”:"",“reporter”:“destination”,“requestId”:“678d4923-e1d8-450b-a0c4-620c89e6e088”,“requestSize”:840,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.772496Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.6.157”} [istio-telemetry-56d5966d6b-qj9z2 telemetry] 
{“level”:“info”,“time”:“2019-07-24T16:52:25.644908Z”,“instance”:“accesslog.logentry.istio-system”,“apiClaims”:"",“apiKey”:"",“clientTraceId”:"",“connection_security_policy”:“none”,“destinationApp”:"",“destinationIp”:“100.97.6.136”,“destinationName”:"",“destinationNamespace”:“istio-system”,“destinationOwner”:"",“destinationPrincipal”:"",“destinationServiceHost”:“istio-telemetry.istio-system.svc.cluster.local”,“destinationWorkload”:"",“grpcMessage”:"",“grpcStatus”:“0”,“httpAuthority”:“mixer”,“latency”:“221.122459ms”,“method”:“POST”,“permissiveResponseCode”:“none”,“permissiveResponsePolicyID”:“none”,“protocol”:“http”,“receivedBytes”:1188,“referer”:"",“reporter”:“destination”,“requestId”:“780ce5c5-df66-47a8-abcf-349ff7a9cf3e”,“requestSize”:861,“requestedServerName”:"",“responseCode”:200,“responseFlags”:"-",“responseSize”:5,“responseTimestamp”:“2019-07-24T16:52:25.866003Z”,“sentBytes”:143,“sourceApp”:"",“sourceIp”:“0.0.0.0”,“sourceName”:"",“sourceNamespace”:"",“sourceOwner”:"",“sourcePrincipal”:"",“sourceWorkload”:"",“url”:"/istio.mixer.v1.Mixer/Report",“userAgent”:"",“xForwardedFor”:“100.97.6.152”} Thank you, Mike A: I am posting @ShreeGrossOptum's answer from the comments for better visibility and due to lack of response from the OP: Sir please check once about attributes I hope you ll get your answers like why mostly attributes are unknown and empty.
{ "language": "en", "url": "https://stackoverflow.com/questions/57188605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: htaccess mod_rewrite good in IE, but downloads web page in other browsers (Chrome, Firefox) I am trying to modify an .htaccess file so that file extensions are omitted (mainly .html). It seems to be working fine in IE written like this: RewriteEngine on RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}.html -f RewriteRule ^(.*)$ $1.html AddType text/html .html But in Chrome or Firefox (haven't tried any others) the web page immediately downloads instead of displaying when requested without the extension. I have never modified this file before; I got the above Rewrite lines from another site, but I cannot seem to find any solution to prevent the files from downloading in those browsers. This site is hosted on Amazon Web Services. Thanks
{ "language": "en", "url": "https://stackoverflow.com/questions/40313848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Play Console Unable To Click Create New Release Button My application is published in the Play Store. I want to update my app, but I am unable to create a new release for it in the Google Play Console.
{ "language": "en", "url": "https://stackoverflow.com/questions/65001446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Change Image Button in Adapter Listview Can anyone help me find where my mistake is? I have this adapter code for a ListView: db = Database.getReadableDatabase(); String queryString = "SELECT * from TBL_CHECKIN WHERE FLAG='1' and NOCUST='" + ListViewCallplan.get(position).gettxid() + "'"; Cursor c = db.rawQuery(queryString, null); if (c.getCount() == 0) { System.out.println("ok1"); notifyDataSetInvalidated(); holder.image .setImageResource(R.mipmap.cekin); holder.image.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent logtent = new Intent(context, Cls_Checkin.class); logtent.putExtra("custcode", ListViewCallplan.get(position).gettxid()); context.startActivity(logtent); } }); // notifyDataSetChanged(); } if (c.getCount() > 0) { System.out.println("okcekot"); notifyDataSetInvalidated(); holder.image .setImageResource(R.mipmap.cekout); holder.image.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent logtent = new Intent(context, Cls_Checkout.class); context.startActivity(logtent); } }); // notifyDataSetChanged(); } I want to change the button image based on that condition, but when I open the list again nothing happens. The display stays the same: the button image always shows check-in, for example.
{ "language": "en", "url": "https://stackoverflow.com/questions/44153134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: SharePoint Approval I need to add SharePoint approval to a list... and I was hoping to use the default approval process... but... I need to run some code once an item is approved. Do I have to then use VS to create a custom workflow? I need to run some code that currently runs in an event receiver. I need to move the code to another function (as we are moving the processing out of event receiver code and basing it on item approval instead). This code would execute after the item is approved and can be hosted as a service or .NET code. A: Full commented source to a state-machine based approval workflow comes with the MOSS SDK 1.5 in the samples directory. http://www.microsoft.com/downloads/details.aspx?familyid=6D94E307-67D9-41AC-B2D6-0074D6286FA9 -Oisin
{ "language": "en", "url": "https://stackoverflow.com/questions/1295104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unable to cast object of type 'OpenQA.Selenium.Chrome.ChromeDriver' to type 'OpenQA.Selenium.Remote.RemoteWebDriver' I am trying to use WebDriverManager Package for .Net for automatically managing the chromedriver.exe version while executing a test case. But when ever I try to type cast ChromeDriver object to RemoteDriverObject it is showing me below error. Unable to cast object of type 'OpenQA.Selenium.Chrome.ChromeDriver' to type 'OpenQA.Selenium.Remote.RemoteWebDriver'. I will write the methods and the code that I am using below: public IWebDriver WebInit() { ChromeOptions options = new ChromeOptions(); options.AddArguments("--test-type"); options.AddArguments("--start-maximized"); options.AddArguments("--disable-infobars"); options.AddArguments("--disable-extensions"); options.AddAdditionalCapability("useAutomationExtension", false); options.AddArguments("--ignore-certificate-errors"); options.AddExcludedArgument("enable-automation"); options.AddUserProfilePreference("credentials_enable_service", false); options.AddUserProfilePreference("profile.password_manager_enabled", false); options.AddAdditionalCapability("useAutomationExtension", false); options.AddUserProfilePreference("download.prompt_for_download", false); options.AddUserProfilePreference("download.default_directory", downloadFilepath); options.AddUserProfilePreference("safebrowsing.enabled", true); options.AddArguments("window-size=1600,900"); new DriverManager().SetUpDriver(new ChromeConfig(), VersionResolveStrategy.MatchingBrowser); //this is the code used from the mentioned Github Repository _driver = new ChromeDriver(options); } public void FunctionWhereIAmCallingTheWebInitFucntion() { _driver = WebInit() ICapabilities capabilities = ((RemoteWebDriver)_driver).Capabilities; //whenever this line gets executed it throws the exception that is mentioned } Below are the Package versions that I am using <PackageReference Include="Selenium.Support" Version="4.1.1" /> <PackageReference Include="Selenium.WebDriver" Version="4.1.1" /> <PackageReference Include="WebDriverManager" Version="2.12.4" /> Please can anyone from the community can guide me where am I making a mistake? Thank you very much!! A: Ok, based on your response in the comments, I think this is what you are going for. The ChromeDriver object has a Capabilities property you can use to request the name and version. As long as you are working with a ChromeDriver directly and not an IWebDriver that property is accessible like follows: string? versions = driver.Capabilities.GetCapability("browserVersion").ToString(); string? name = driver.Capabilities.GetCapability("browserName").ToString(); I modified your program a little bit based on what it seems like you are trying to do and printed the result to the console: private static ChromeDriver WebInit() { // set up options ChromeOptions options = new ChromeOptions(); options.AddArguments("--start-maximized"); // download latest chromedriver new DriverManager().SetUpDriver(new ChromeConfig(), VersionResolveStrategy.MatchingBrowser); //this is the code used from the mentioned Github Repository // initialize the driver ChromeDriver driver = new ChromeDriver(options); return driver; } public static void Main() { ChromeDriver driver = WebInit(); // get the capabilities string? versions = driver.Capabilities.GetCapability("browserVersion").ToString(); string? 
name = driver.Capabilities.GetCapability("browserName").ToString(); Console.WriteLine(versions); Console.WriteLine(name); Console.ReadKey(); } The above example works for me in .Net 6.0 with the following NuGet packages installed: * *Selenium.Support (4.1.1) *Selenium.WebDriver (4.1.1) *WebDriverManager (2.13.0)
{ "language": "en", "url": "https://stackoverflow.com/questions/72380341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: move up and down a treeview I'm populating a tkinter treeview in Python using os.walk(), but after populating it I want to be able to reorder the tree leaves using buttons. The command for moving up the tree works just fine (I want to be able to move multiple leaves at once) def moveUp(): leaves = Tree.selection() for i in leaves: Tree.move(i, Tree.parent(i), Tree.index(i)-1) But when I reversed it to go down the tree I get a weird bug def moveDown(): leaves = Tree.selection() for i in leaves: Tree.move(i, Tree.parent(i), Tree.index(i)+1) I can only move a single leaf down; if I select an odd number of leaves then the lowest one moves down, and if I pick an even number of leaves none of them move. A: As suggested in the comments, going through the leaves in reverse order using reversed() fixes the problem. (Would be a comment but I don't have the reputation) def moveDown(): leaves = Tree.selection() for i in reversed(leaves): Tree.move(i, Tree.parent(i), Tree.index(i)+1) A: Note, for moveUp, reversing the leaves would result in nothing actually changing in the tree. Simply iterating through the leaves in order is the ticket. def moveUp(): leaves = Tree.selection() for i in leaves: Tree.move(i, Tree.parent(i), Tree.index(i)-1)
{ "language": "en", "url": "https://stackoverflow.com/questions/38884133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Slow SQL: Select count of Table1 values missing from Table2 I am trying to return a count of the number of values from Table1 WHERE NOT EXISTS in Table2. I've got a query that seems correct but it's extremely slow to run: SELECT count(container_no) FROM pier_data WHERE NOT EXISTS( SELECT 1 FROM Iron_mountain_data WHERE pier_data.container_no = iron_mountain_data.[Customer Box Nbr] ); Is there any way I can speed this up? Edit: For whatever reason I believe this LEFT JOIN business gave me the most expedient results in MS Access: SELECT Count(container_no) AS boxes_missing_from_IM FROM pier_data AS pd LEFT JOIN iron_mountain_data AS imd ON pd.container_no = imd.[Customer Box Nbr] WHERE imd.[Customer Box Nbr] Is Null; A: This might be better, try it: SELECT COUNT(container_no) FROM pier_data WHERE container_no NOT IN ( SELECT [Customer Box Nbr] FROM iron_mountain_data ); Also, as it has been suggested, you could use a left join with a where clause like this: SELECT COUNT(container_no) FROM pier_data pd LEFT JOIN iron_mountain_data imd ON pd.container_no = imd.[Customer Box Nbr] WHERE imd.[Customer Box Nbr] IS NULL The use of keyword 'DISTINCT' might be useful too, like COUNT(DISTINCT container_no) or SELECT DISTINCT [Customer Box Nbr] A: Assuming [container_no] and [Customer Box Nbr] capture the same information: SELECT COUNT(container_no) FROM pier_data pd WHERE container_no NOT IN ( SELECT DISTINCT [Customer Box Nbr] FROM iron_mountain_data ) A: Your query is fine, although I would use COUNT(*) and table aliases: SELECT COUNT(*) FROM pier_data as pd WHERE NOT EXISTS (SELECT 1 FROM Iron_mountain_data as imd WHERE pd.container_no = imd.[Customer Box Nbr] ); (These changes have no impact on performance, except for a very, very, very minor check that container_no is not NULL for the COUNT().) For performance, you want an index on Iron_mountain_data([Customer Box Nbr]): create index idx_iron_mount_data_customer_box_nbr on Iron_mountain_data([Customer Box Nbr]); This should work for almost any way that you write the query.
{ "language": "en", "url": "https://stackoverflow.com/questions/58977296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to test the appearance of a child component within an ngIf Statement in Angular 8 I have a parent component where one of two child components are displayed, depending on the value of a variable (workHistoryState.ViewType): <div class="work-history-wrapper"> <div class='jobList' *ngIf="workHistoryState.ViewType === 'summary'"> <app-job-list></app-job-list> </div> <div class='assignmentList' *ngIf="workHistoryState.ViewType === 'detailed'"> <app-assignment-list ></app-assignment-list> </div> </div> The value in workHistoryState.ViewType is updated by another child component. It calls this method on the workHistoryState service: public ViewTypeChanged(viewType: string) { this.ViewType = viewType; } In my test code for the parent component I have to use mocks for the child components. The mocks are as follows: import { Component } from "@angular/core"; @Component({ selector: 'app-assignment-list', template: '<p>Mock Assignment List Component</p>' }) export class AssignmentListComponentMock {} import { Component } from "@angular/core"; @Component({ selector: 'app-job-list', template: '<p>Mock Job List Component</p>' }) export class JobListComponentMock {} So my test code is as follows (and both of the tests fail): import { async, ComponentFixture, TestBed } from '@angular/core/testing'; import { WorkHistoryParentComponent } from './work-history-parent.component'; import { JobListComponentMock } from '../component-mocks/job-list.component.mock'; import { AssignmentListComponentMock } from '../component-mocks/assignment-list.component.mock'; import { IndustryDropDownComponentMock } from '../component-mocks/industry-drop-down.component.mock'; import { RoleTypeDropDownComponentMock } from '../component-mocks/role-type-drop-down.component.mock'; import { ViewTypeSelectComponentMock } from '../component-mocks/view-type-select.component.mock'; import { By } from '@angular/platform-browser'; import { APP_BASE_HREF } from '@angular/common'; describe('WorkHistoryParentComponent', () => { let component: WorkHistoryParentComponent; let fixture: ComponentFixture<WorkHistoryParentComponent>; beforeEach(async(() => { TestBed.configureTestingModule({ declarations: [WorkHistoryParentComponent, JobListComponentMock, AssignmentListComponentMock, IndustryDropDownComponentMock, RoleTypeDropDownComponentMock, ViewTypeSelectComponentMock], providers:[ { provide: APP_BASE_HREF, useValue : '/' }, ], }).compileComponents(); })); beforeEach(() => { fixture = TestBed.createComponent(WorkHistoryParentComponent); component = fixture.componentInstance; fixture.detectChanges(); }); it('should not find job list by css when detailed view is chosen', () => { component.workHistoryState.ViewTypeChanged("detailed"); let jobList = fixture.debugElement.query(By.css('.jobList')); expect(jobList).toBeNull(); }); it('should not find job list when detailed view is chosen', () => { component.workHistoryState.ViewTypeChanged("detailed"); let jobList = fixture.debugElement.query(By.directive(JobListComponentMock)); expect(jobList).toBeNull(); }); }); Does anybody know why they are failing? A: I needed to call fixture.detectChanges() after changing the value in the service. 
So the tests should have been: it('should not find job list by css when detailed view is chosen', () => { component.workHistoryState.ViewTypeChanged("detailed"); fixture.detectChanges(); let jobList = fixture.debugElement.query(By.css('.jobList')); expect(jobList).toBeNull(); }); it('should not find job list when detailed view is chosen', () => { component.workHistoryState.ViewTypeChanged("detailed"); fixture.detectChanges(); let jobList = fixture.debugElement.query(By.directive(JobListComponentMock)); expect(jobList).toBeNull(); });
{ "language": "en", "url": "https://stackoverflow.com/questions/61140530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I'm trying to add a List-Unsubscribe header to my e-mail header using php-mailer I want to show unsubscribe links beside the email headers in email providers like Gmail, Yahoo, and Outlook. This is how it is called in PHPMailer (I have tried the following ways): 1st way: $mail->AddCustomHeader("List-Unsubscribe:<mailto:[email protected]>,<https://myexample.com/unsubscribe.php?identifier=1599&type=s&campaignid=99>"); 2nd way: $mail->AddCustomHeader("List-Unsubscribe:<mailto:[email protected]?subject=unsubscribe>,<https://myexample.com/unsubscribe.php?identifier=1599&type=s&campaignid=99>"); With this code the List-Unsubscribe header is sent with the email, but the unsubscribe link is not showing there. I don't know whether this is the right process or not. I have a DKIM: PASS record. In the raw mail header in Gmail, Outlook, and Yahoo it shows the header as List-Unsubscribe: mailto:[email protected]?subject=unsubscribe,https://myexample.com/unsubscribe.php?identifier=1599&type=s&campaignid=99. Do we need to do some email server configuration for this link to show in the email header? I cannot find the solution for this.
{ "language": "en", "url": "https://stackoverflow.com/questions/71308364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Run Celery tasks on Railway I deployed a Django project in Railway, and it uses Celery and Redis to perform an scheduled task. The project is successfully online, but the Celery tasks are not performed. If I execute the Celery worker from my computer's terminal using the Railway CLI, the tasks are performed as expected, and the results are saved in the Railway's PostgreSQL, and thus those results are displayed in the on-line site. Also, the redis server used is also the one from Railway. However, Celery is operating in 'local'. This is the log on my local terminal showing the Celery is running local, and the Redis server is the one up in Railway: -------------- [email protected] v5.2.7 (dawn-chorus) --- ***** ----- -- ******* ---- macOS-13.1-arm64-arm-64bit 2023-01-11 23:08:34 - *** --- * --- - ** ---------- [config] - ** ---------- .> app: suii:0x1027e86a0 - ** ---------- .> transport: redis://default:**@containers-us-west-28.railway.app:7078// - ** ---------- .> results: - *** --- * --- .> concurrency: 10 (prefork) -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker) --- ***** ----- -------------- [queues] .> celery exchange=celery(direct) key=celery [tasks] . kansoku.tasks.suii_kakunin I included this line of code in the Procfile regarding to the worker (as I saw in another related answer): worker: python manage.py qcluster --settings=my_app_name.settings And also have as an environment variable CELERY_BROKER_REDIS_URL pointing to Railway's REDIS_URL. I also tried creating a 'Periodic task' from the admin of the live aplication, but it just doesn't get executed. What should I do in order to have the scheduled tasks be done automatically without my PC?
{ "language": "en", "url": "https://stackoverflow.com/questions/75085016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: While changing to SDK 33, XML autocompletion is not working I have the Android Studio Bumblebee version. After changing to SDK 33, XML autocomplete is not working; please help. When I change to version 33 the Android application still executes, but XML autocomplete doesn't work. I have tried all possible ways but I am unable to find a solution
{ "language": "en", "url": "https://stackoverflow.com/questions/75631868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: AJAX POST JSON Array Javascript NOT JQUERY In this screenshot, a JSON array is being passed to PHP via AJAX without a POST body. I am using this code to implement it: with(x=new XMLHttpRequest()) open("POST", "http://myweb/api/mobile/v1/jobeventadd?key=cfff"), setRequestHeader("Content-Type", "application/x-www-form-urlencoded"), send("%7B%0A%22SessionID%22%3A%22hn0oqa0u687avsrnev6f5t2nh7%22%2C%0A%22ObjectID%22%3A%226460%22%2C%0A%22ItemName%22%3A%22UologiciPhone%20test%20event%22%2C%0A%22ActivityFrom%22%3A%2201-01-2013%2012%3A00%3A00%22%0A%7D"); But it's not working. How can I do it via XMLHttpRequest? No JQUERY PLEASE A: Use the onreadystatechange property to get the response after the success state, and then store the data in a custom header to troubleshoot issues with the POST body: with(new XMLHttpRequest) { open("POST",{},true); setRequestHeader("Foo", "Bar"); send(""); onreadystatechange = handler; } function handler(event) { !!event.target && !!event.target.readyState && event.target.readyState === 4 && ( console.log(event) ); } References * *XMLHTTPRequest Living Standard *Fetch Living Standard A: It was same as GET with(new XMLHttpRequest) { open("POST","http://google.com",true); send("hello=world&no=yes"); onreadystatechange = function(){}; }
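For comparison, a more conventional sketch of the same request without jQuery, using the URL and payload taken from the question; whether the PHP endpoint expects a raw JSON body with Content-Type application/json, rather than the URL-encoded form shown above, is an assumption you would need to confirm against the API:

var xhr = new XMLHttpRequest();
xhr.open("POST", "http://myweb/api/mobile/v1/jobeventadd?key=cfff", true);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {                 // request finished
        console.log(xhr.status, xhr.responseText);
    }
};
// Same data as the encoded string in the question, sent as a JSON body
xhr.send(JSON.stringify({
    SessionID: "hn0oqa0u687avsrnev6f5t2nh7",
    ObjectID: "6460",
    ItemName: "UologiciPhone test event",
    ActivityFrom: "01-01-2013 12:00:00"
}));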
{ "language": "en", "url": "https://stackoverflow.com/questions/10724573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: django: generic views + custom template tags or custom views + generic/normal template tags This is more of a best-practices question, and given that I'm quite tired it mightn't make much sense. I've been putting together a blog app as a learning experience and as an actual part of a website I am developing. I've designed it like most apps so that you can list blog posts by multiple criteria i.e. /blog/categories/ /blog/authors/ /blog/tags/ /blog/popular/ etc. On each page above I also want to list how many entries are a part of that criteria i.e. for "categories", I want /blog/categories/ to list all the different categories, but also mention how many blog posts are in that category, and possibly list those entries. Django seems to give you lots of ways of doing this, but not much indication on what's best in terms of flexibility, reusability and security. I've noticed that you can either A: Use generic/very light views, pass a queryset to the template, and gather any remaining necessary information using custom template tags. i.e. pass the queryset containing the categories, and for each category use a template tag to fetch the entries for that category or B: Use custom/heavy views, pass one or more querysets + extra necessary information through the view, and use less template tags to fetch information. i.e. pass a list of dictionaries that contains the categories + their entries. The way I see it is that the view is there to take in HTTP requests, gather the required information (specific to what's been requested) and pass the HTTP request and Context to be rendered. Template tags should be used to fetch superflous information that isn't particularly related to the current template, (i.e. get the latest entries in a blog, or the most popular entries, but they can really do whatever you like.) This lack of definition (or ignorance on my part) is starting to get to me, and I'd like to be consistent in my design and implementation, so any input is welcome! A: I'd say that your understanding is quite right. The main method of gathering information and rendering it via a template is always the view. Template tags are there for any extra information and processing you might need to do, perhaps across multiple views, that is not directly related to the view you're rendering. You shouldn't worry about making your views generic. That's what the built-in generic views are for, after all. Once you need to start stepping outside what they provide, then you should definitely make them specific to your use cases. You might of course find some common functionality that is used in multiple views, in which case you can factor that out into a separate function or even a context processor, but on the whole a view is a standalone bit of code for a particular specific use.
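For illustration, a minimal sketch of the heavier "custom view" approach (option B above); Category and Entry are hypothetical models, not taken from the question, with Entry holding a ForeignKey to Category:

# views.py -- the view gathers everything the template needs in one place
from django.db.models import Count
from django.shortcuts import render

from blog.models import Category  # hypothetical app and model


def category_list(request):
    # Each category comes back with an entry_count attribute, computed in one query
    categories = Category.objects.annotate(entry_count=Count('entry'))
    return render(request, 'blog/category_list.html', {'categories': categories})

The template can then use {{ category.entry_count }} directly with no custom tag; the template-tag route would instead compute the same number per category inside the tag.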
{ "language": "en", "url": "https://stackoverflow.com/questions/3884586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do I set up a Jenkins CI server to run automated BDD selenium tests using a remote webdriver? I'm new to Jenkins and am trying to set up a server to run selenium tests from a GitHub repo. I'm sure that I'm doing something wrong, likely several things, but haven't been able to figure it out. I have configured the selenium plugin to use the default Selenium hub port 4444. Project GitHub Configuration Can't figure out why I'm getting this error. The credentials match the created username and ssh key. I can even access the repo by clicking on GitHub in the project dashboard. Project Shell Execution Steps The before-build execution steps. These are the commands I use in the terminal to run the tests locally. When I build the job it gives the following log: Started by <user> Building in workspace /Users/Shared/Jenkins/Home/workspace/Tutorial > git rev-parse --is-inside-work-tree # timeout=10 Fetching changes from the remote Git repository > git config remote.origin.url https://github.com/<repo address>.git # timeout=10 Fetching upstream changes from https://github.com/<repo address>.git > git --version # timeout=10 using GIT_SSH to set credentials > git fetch --tags --progress https://github.com/<repo address>.git +refs/heads/*:refs/remotes/origin/* ERROR: Error fetching remote repo 'origin' hudson.plugins.git.GitException: Failed to fetch from https://github.com/<repo address>.git at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:809) at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1076) at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1107) at hudson.scm.SCM.checkout(SCM.java:496) at hudson.model.AbstractProject.checkout(AbstractProject.java:1281) at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604) at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86) at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529) at hudson.model.Run.execute(Run.java:1728) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43) at hudson.model.ResourceController.execute(ResourceController.java:98) at hudson.model.Executor.run(Executor.java:405) Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.com/<repo address>.git +refs/heads/*:refs/remotes/origin/*" returned status code 128: stdout: stderr: remote: Invalid username or password. fatal: Authentication failed for 'https://github.com/<repo address>.git/' at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1877) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1596) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:71) at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:348) at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:807) ... 11 more ERROR: null Finished: FAILURE A: If you are using SSH key for Jenkins to authenticate try using SSH version e.g. [email protected]:FOO/BAR.git instead of HTTPS one.
{ "language": "en", "url": "https://stackoverflow.com/questions/44012819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Spark Session Error: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem I am unable to build a SparkSession in Scala without an error. I am using Maven as my build tool. I have Spark 3.3.0 installed locally. I also have an Azure Databricks 3.3.0 cluster. Whenever I pass the databricks master URL it bombs out with this error: Error: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>sample</groupId> <artifactId>sample</artifactId> <version>1.0-SNAPSHOT</version> <properties> <spark.version>3.3.0</spark.version> <scala.version.major>2.12</scala.version.major> <scala.version.minor>12</scala.version.minor> <scala.version>${scala.version.major}.${scala.version.minor}</scala.version> </properties> <dependencies> <dependency> <groupId>org.scala-lang</groupId> <artifactId>scala-library</artifactId> <version>${scala.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-core_${scala.version.major}</artifactId> <version>${spark.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-sql_${scala.version.major}</artifactId> <version>${spark.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-streaming_${scala.version.major}</artifactId> <version>${spark.version}</version> </dependency> <dependency> <groupId>org.apache.spark</groupId> <artifactId>spark-mllib_${scala.version.major}</artifactId> <version>${spark.version}</version> </dependency> <dependency> <groupId>org.jmockit</groupId> <artifactId>jmockit</artifactId> <version>1.34</version> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>net.alchim31.maven</groupId> <artifactId>scala-maven-plugin</artifactId> <version>3.2.2</version> <executions> <execution> <goals> <goal>compile</goal> <goal>testCompile</goal> </goals> </execution> </executions> <configuration> <scalaVersion>${scala.version}</scalaVersion> </configuration> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>3.3</version> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> </plugins> </build> </project> package sample import org.apache.spark.SparkConf import org.apache.spark.sql.SparkSession object SparkMain extends App { val sparkConf = new SparkConf() .setMaster("local[1]") .setAppName("ScalaSparkDF-Test") .set("spark.driver.allowMultipleContexts", "false") .set("spark.ui.enabled", "false") val spark = SparkSession.builder() .config(sparkConf) .getOrCreate() val df = spark.read.option("header","true") .csv("/Users/") }
{ "language": "en", "url": "https://stackoverflow.com/questions/73535699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Angular: get request i have an issue with displaying data in console. I am trying to retrieve log data in the console but i am getting null without knowing what the problem is. This is my code: Service.ts getItemById(id:number): Observable<any> { return this.http.get<any>(`${this.API}/${id}`).pipe(catchError(this.handleError )); } private handleError(httpErrorResponse:HttpErrorResponse){ if(httpErrorResponse.error instanceof ErrorEvent){ console.error(httpErrorResponse.error.message) } return of(null) } Component.ts showItem(id: any) { this.ItemService.getItemById(id).subscribe((data: any) => { console.log(data) this.log = data; }) } html button <td><button class="btn btn-block btn-primary" (click)="showItem(log.id)">{{buttonName}}</button></td> when i click the button the response is giving 200 in the network section but the console is returning a null, it does not return the payload. Any solutions ? A: Since you have an error, try first with a simpler solution: Service.ts getItemById(id:number): Observable<any> { return this.http.get(`${this.API}/${id}`); } Component.ts showItem(id: any) { this.ItemService.getItemById(id) .subscribe( (data: any) => { console.log(data); //this.log = data; }); } After this, see if your API works (if it gets data). Once you check this, * *If it doesn't return anything, your solution was right, and the problem is in the back. *If it is returning data, there was a problem. Then, in order to find out what is going on, you can try to left this simplified version and try the error directly in the subscription, with catchError or simply managing the "path" of error (something like this:) this.ItemService.getItemById(id) .subscribe( (data: any) => { console.log(data); //this.log = data; }, err => {console.log(err);} // You'll get the whole object, // If you get [object object] you shoud use the specifics fields, something like console.log(err.error.message) or something like that... (look them up on the internet) ); A: I just figured out that i had to add the responseType to the url in the service.ts. This link was useful : Angular HttpClient "Http failure during parsing"
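For illustration, a minimal sketch of what that responseType change might look like in the existing service; 'text' is only an example value, and the right choice depends on what the API actually returns:

// service.ts (sketch): tell HttpClient not to parse the response body as JSON
getItemById(id: number): Observable<string> {
  return this.http.get(`${this.API}/${id}`, { responseType: 'text' });
}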
{ "language": "en", "url": "https://stackoverflow.com/questions/72162818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Elastic Search missing some documents while creating index from another index I have created one index in elastic search name as documents_local and load data from Oracle Database via logstash. This index containing following values. "FilePath" : "Path of the file", "FileName" : "filename.pdf", "Language" : "Language_name" Then, i want to index those file contents also, so i have created one more index in elastic search named as document_attachment and using the following Java code, i fetched FilePath & FileName from the index documents_local with the help of the filepath i retrived file will is available in my local drive and i have index those file contents using ingest-attachment plugin processor. Please find my java code below, where am indexing the files. private final static String INDEX = "documents_local"; //Documents Table with file Path - Source Index private final static String ATTACHMENT = "document_attachment"; // Documents with Attachment... -- Destination Index private final static String TYPE = "doc"; public static void main(String args[]) throws IOException { RestHighLevelClient restHighLevelClient = null; Document doc=new Document(); try { restHighLevelClient = new RestHighLevelClient(RestClient.builder(new HttpHost("localhost", 9200, "http"), new HttpHost("localhost", 9201, "http"))); } catch (Exception e) { System.out.println(e.getMessage()); } //Fetching Id, FilePath & FileName from Document Index. SearchRequest searchRequest = new SearchRequest(INDEX); searchRequest.types(TYPE); SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); QueryBuilder qb = QueryBuilders.matchAllQuery(); searchSourceBuilder.query(qb); searchSourceBuilder.size(3000); searchRequest.source(searchSourceBuilder); SearchResponse searchResponse = null; try { searchResponse = restHighLevelClient.search(searchRequest); } catch (IOException e) { e.getLocalizedMessage(); } SearchHit[] searchHits = searchResponse.getHits().getHits(); long totalHits=searchResponse.getHits().totalHits; int line=1; String docpath = null; Map<String, Object> jsonMap ; for (SearchHit hit : searchHits) { String encodedfile = null; File file=null; Map<String, Object> sourceAsMap = hit.getSourceAsMap(); doc.setId((int) sourceAsMap.get("id")); doc.setLanguage(sourceAsMap.get("language")); doc.setFilename(sourceAsMap.get("filename").toString()); doc.setPath(sourceAsMap.get("path").toString()); String filepath=doc.getPath().concat(doc.getFilename()); System.out.println("Line Number--> "+line+++"ID---> "+doc.getId()+"File Path --->"+filepath); file = new File(filepath); if(file.exists() && !file.isDirectory()) { try { FileInputStream fileInputStreamReader = new FileInputStream(file); byte[] bytes = new byte[(int) file.length()]; fileInputStreamReader.read(bytes); encodedfile = new String(Base64.getEncoder().encodeToString(bytes)); } catch (FileNotFoundException e) { e.printStackTrace(); } } jsonMap = new HashMap<>(); jsonMap.put("id", doc.getId()); jsonMap.put("language", doc.getLanguage()); jsonMap.put("filename", doc.getFilename()); jsonMap.put("path", doc.getPath()); jsonMap.put("fileContent", encodedfile); String id=Long.toString(doc.getId()); IndexRequest request = new IndexRequest(ATTACHMENT, "doc", id ) .source(jsonMap) .setPipeline(ATTACHMENT); try { IndexResponse response = restHighLevelClient.index(request); } catch(ElasticsearchException e) { if (e.status() == RestStatus.CONFLICT) { } e.printStackTrace(); } } System.out.println("Indexing done..."); } Please find my mappings details for ingest attachment 
plugin (I did the mapping first and then executed this Java code). PUT _ingest/pipeline/document_attachment { "description" : "Extract attachment information", "processors" : [ { "attachment" : { "field" : "fileContent" } } ] } But while doing this process I am missing some of the documents. My source index is documents_local, which has 2910 documents. I am fetching all the 118 documents and attaching my PDF (after converting to Base64) and writing to another index, document_attachment, but the document_attachment index has only 118 when it should be 2910; some of the documents are missing. Also, the indexing is taking a very long time. I am not sure how the documents go missing in the second index (document_attachment), and is there any other way to improve this process? Can we include a thread mechanism here? Because in future I have to index more than 100k PDFs in this same way.
{ "language": "en", "url": "https://stackoverflow.com/questions/51090440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Purge hidden storage on Github Desktop OSX? I have several folders of jpg files on my OSX laptop, cloned from a GitHub repository. The files I can see total less than 20GB, but my system tells me that they occupy 65+GB. Where are these hidden files? How can I purge them? I should mention that there is only one branch in the repository. (Also, I know that GitHub is not meant to be a place to store lots of image files. This repo is a component of a larger project.) A: As mentioned in "Show system files / Show git ignore in osx", use ⌘⇧. (Cmd-Shift-Period) in your Finder.
That should show you the hidden folder .git/, which includes all the history of the repo.
And for a repo of binaries (like jpeg images), a new version of an image can take quite a lot of space (poor diff, poor compression between versions).
I wonder if there's any way I can purge this history from local storage? (I'd like to keep it in the cloud)
You would need to download only the archive of the repo (meaning the content, without the history). If you don't need to change and push back those modifications, the archive would be enough.
If you need the history, a shallow clone (git clone --depth 1), which I detailed here (or here), is best to keep the downloaded data at a minimum.
A: I would try re-indexing your HDD and see if it accurately displays file sizes once it's done. Run the following in the OSX terminal: sudo mdutil -E / Original article: https://www.maketecheasier.com/fix-wrong-hard-drive-data-usage-calculation-osx/
{ "language": "en", "url": "https://stackoverflow.com/questions/46612723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Auto Generate Patient ID Using Two Columns In the table Screening I have a StudyID. The StudyID is a combination of two parts: a study type (for cases = KNH-01, for controls = KNH-02) and a serialized 5-digit number (00001, 00002). So the Screening table looks like this:
ID StudyID Age
 1 KNH-01/00001 22
 2 KNH-01/00002 15
 3 KNH-02/00001 4
 4 KNH-02/00002 28
Currently the StudyID numbers are manually recorded in a log book, and I wish to automate the process of generating the StudyID numbers. I have created a table to manage the StudyID:
CREATE TABLE [dbo].[PatientID](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [PatientType] [varchar](5) NULL,
    [PatientNumber] [int] NULL,
 CONSTRAINT [PK__PatientI__3214EC2739E294A9] PRIMARY KEY CLUSTERED
(
    [ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
I am unable to get the last PatientNumber based on the PatientType. My intention is to have a button on the VB.NET front end; when a user clicks it to generate a StudyID, it should check the last number given within a study type, add 1 to it and load the result into the form, e.g. KNH-01/00003 or KNH-02/00003. If you have a better idea I will really appreciate it. A: Well, my personal favorite way of solving such requirements involves using a computed column and a UDF to compute the values in that column. So first, you create the UDF:
CREATE FUNCTION dbo.GenerateStudyID
(
    @PatientType varchar(5),
    @id int
)
RETURNS char(11)
AS
BEGIN
    RETURN
    (
        SELECT @PatientType + '/' + RIGHT('00000' + CAST(COUNT(*) as varchar(5)), 5)
        FROM [dbo].[PatientID]
        WHERE PatientType = @PatientType AND Id <= @id
    )
END
GO
Then, you add the computed column to your table:
ALTER TABLE [dbo].[PatientID]
ADD StudyID as dbo.GenerateStudyID (PatientType, id)
GO
Please note that you can't use this method if you need this column to be persisted, since once you delete a record the values will change.
{ "language": "en", "url": "https://stackoverflow.com/questions/44756525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: uploaded files - database vs filesystem, when using Grails and MySQL I know this is something of a "classic question", but does the mysql/grails (deployed on Tomcat) put a new spin on considering how to approach storage of user's uploaded files. I like using the database for everything (simpler architecture, scaling is just scaling the database). But using the filesystem means we don't lard up mysql with binary files. Some might also argue that apache (httpd) is faster than Tomcat for serving up binary files, although I've seen numbers that actually show just putting Tomcat on the front of your site can be faster than using an apache (httpd) proxy. How should I choose where to place user's uploaded files? Thanks for your consideration, time and thought. A: I don't know if one can make general observations about this kind of decision, since it's really down to what you are trying to do and how high up the priority list NFRs like performance and response time are to your application. If you have lots of users, uploading lots of binary files, with a system serving large numbers of those uploaded binary files then you have a situation where the costs of storing files in the database include: * *Large size binary files *Costly queries Benefits are * *Atomic commits *Scaling comes with database (though w MySQL there are some issues w multinode etc) *Less fiddly and complicated code to manage file systems etc Given the same user situation where you store to the filesystem you will need to address * *Scaling *File name management (user uploads same name file twice etc) *Creating corresponding records in DB to map to the files on disk (and the code surrounding all that) *Looking after your apache configs so they serve from the filesystem We had a similar problem to solve as this for our Grails site where the content editors are uploading hundreds of pictures a day. We knew that driving all that demand through the application when it could be better used doing other processing was wasteful (given that the expected demand for pages was going to be in the millions per week we definitely didn't want images to cripple us). We ended up creating upload -> file system solution. For each uploaded file a DB meta-data record was created and managed in tandem with the upload process (and conversely read that record when generating the GSP content link to the image). We served requests off disk through Apache directly based on the link requested by the browser. But, and there is always a but, remember that with things like filesystems you only have content per machine. We had the headache of making sure images got re-synchronised onto every server, since unlike a DB which sits behind the cluster and enables the cluster behave uniformly, files are bound to physical locations on a server. Another problem you might run up against with filesystems is folder content size. When you start having folders where there are literally tens of thousands of files in them, the folder scan at the OS level starts to really drag. To avert this problem we had to write code which managed image uploads into yyyy/MM/dd/image.name.jpg folder structures, so that no one folder accumulated hundreds of thousands of images. What I'm implying is that while we got the performance we wanted by not using the DB for BLOB storage, that comes at the cost of development overhead and systems management. A: Just as an additional suggestion: JCR (eg. Jackrabbit) - a Java Content Repository. 
It has several benefits when you deal with a lot of binary content. The Grails plugin isn't stable yet, but you can use Jackrabbit with the plain API. A: Another thing to keep in mind is that if your site ever grows beyond one application server, you need to access the same files from all app servers. Now all app servers have access to the database, either because that's a single server or because you have a cluster. Now if you store things in the file system, you have to share that, too - maybe NFS. A: Even if you upload file in filesystem, all the files get same permission, so any logged in user can access any other's file just entering the url (Since all of them get same permission). If you however plan to give each user a directory then a user permission of apache (that is what server has permission) is given to them. You should su to root, create a user and upload files to those directories. Again accessing those files could end up adding user's group to server group. If I choose to use filesystem to store binary files, is there an easier solution than this, how do you manage access to those files, corresponding to each user, and maintaining the permission? Does Spring's ACL help? Or do we have to create permission group for each user? I am totally cool with the filesystem url. My only concern is with starting a seperate process (chmod and stuff), using something like ProcessBuilder to run Operating Systems commands (or is there better solution ?). And what about permissions?
{ "language": "en", "url": "https://stackoverflow.com/questions/491944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Serverless Web App Architecture I am currently designing a simple Serverless Web App using Serverless. My current expected stack is; * *API Gateway *Lambda *DynamoDB *Static Single Page App I have followed a few tutorials for building the Serverless API, Lambda and DynamoDB using the Serverless Framework and I have built my single page app however right now they are 2 separate entities. What I am looking to do now is bring the static site (nodejs) into the same project as my API, Lambda and DynamoDB and use the Serverless Framework to control the deployment however I'm struggling to find guidance on; * *How do you represent the static web site part in the Serverless Framework *How best to host that static web site (e.g. s3 static site hosting or other option?) *How best to reference a API Gateway URL that is being generated at the same time the deployment is happening (e.g. via Serverless Framework) Could anyone provide any insight into how this is supposed to work, or maybe point me int he direction of some good blogs/resources? Kind Regards, John A: I had a SPA (single page application) written in React communicating to a REST JSON API written in nodejs and hosted on Heroku as a monolith. I migrated to AWS Lambda and split the monolith into 3+ AWS Lambdas micro services inside of a monorepo The following project structure is good if your SPA requires users to login to be able to do anything. I used a single git repository where I have a folder for each service: * *api *app *www Inside of each of the services' folders I have a serverless.yml defining the deployment to a separate AWS Lambda. Each service maps to only 1 function index which accepts all HTTP endpoints. I do use 2 environments staging and production. My AWS Lambdas are named like this: * *example-api-staging-index *example-api-production-index *example-app-staging-index *example-app-production-index *example-www-staging-index *example-www-production-index I use AWS Api Gateway's Custom Domain Names to map each lambda to a public domain: * *api.example.com (REST JSON API) *app.example.com (single page application that requires login) *www.example.com (server side rendered landing pages) *api-staging.example.com *app-staging.example.com *www-staging.example.com You can define the domain mapping using serverless.yml Resources or a plugin but you have to do this only once so I did manually from the AWS website console. My .com domain was hosted on GoDaddy but I migrated it to AWS Route 53 since HTTPS certificates are free. app service * */bundles-production */bundles-staging *src (React/Angular single page application here) *handler.js *package.json *serverless.yml The app service contains a folder /src with the single page application code. The SPA is built locally on my computer in either ./bundles-production or ./bundles-staging based on the environment. The built generates the .js and .css bundles and also the index.html. The content of the folder is deployed to a S3 bucket using the serverless-s3-deploy plugin when I run serverless deploy -v -s production. I defined only 1 function getting called for all endpoints in the serverless.yml (I use JSON instead of YAML): ... "functions": { "index": { "handler": "handler.index", "events": [ { "http": "GET /" }, { "http": "GET /{proxy+}"} ] }, }, The handler.js file returns the index.html defined in /bundles-staging or /bundles-production I use webpack to build the SPA since it integrates very well with serverless with the serverless-webpack plugin. 
api service I used aws-serverless-express to define all the REST JSON API endpoints. aws-serverless-express is like normal express but you can't do some things like express.static() and fs.sendFile(). I tried initially to use a separate AWS Lambda function instead of aws-serverless-express for each endpoint but I quickly hit the CloudFormation mappings limit. www service If most of the functionality of your single page application require a login it's better to place the SPA on a separate domain and use the www domain for server-side rendered landing pages optimized for SEO. BONUS: graphql service Using a micro service architecture makes it easy to experiment. I'm currently rewriting the REST JSON API in graphql using apollo-server-lambda A: I have done pretty much the same architecture and hosted the single page app on s3. What you can do is set up cloudfront for api gateway and than point api.yourDomain.com to that cloudfront. Than you will also need to enable cors on your api. This plugin handles setting up domain and cloudfront for you: https://github.com/amplify-education/serverless-domain-manager I am not sure about your project requirements but if you want to serve static files faster setting up domain->cloudfront->s3 might be a wise choice. A: Here is something meaningful and worthy to read about the Serverless Architecture and Azure Serverless services https://sps-cloud-architect.blogspot.com/2019/12/what-is-serverless-architecture.html
{ "language": "en", "url": "https://stackoverflow.com/questions/48304178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: cannot import name 'PackageFinder' from 'pip._internal.index' pip3 On my mac, pip3 exists in /usr/bin. When I try to install any package using pip3, it is failing. Even simple 'pip3' also failing with below error. Traceback (most recent call last): File "/Library/Developer/CommandLineTools/usr/bin/pip3", line 6, in <module> from pip._internal import main File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/__init__.py", line 40, in <module> from pip._internal.cli.autocompletion import autocomplete File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/cli/autocompletion.py", line 8, in <module> from pip._internal.cli.main_parser import create_main_parser File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/cli/main_parser.py", line 11, in <module> from pip._internal.commands import ( File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/commands/__init__.py", line 6, in <module> from pip._internal.commands.completion import CompletionCommand File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/commands/completion.py", line 6, in <module> from pip._internal.cli.base_command import Command File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 26, in <module> from pip._internal.index import PackageFinder ImportError: cannot import name 'PackageFinder' from 'pip._internal.index' (/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages/pip/_internal/index/__init__.py) I tried to remove /usr/bin/pip3. But it is saying operation is not permitted. I am unable to remove pip3. Can any one please let me know how to fix this issue. Note: I went through some stack overflow questions related to this. Most of the answers suggested to uninstall and install pip3. But I am unable to remove pip3.
{ "language": "en", "url": "https://stackoverflow.com/questions/69623926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: jquery: Callback not received on clicking of button I need to send a request on click of button but callback is not received on firing of click event of the button. Following is code snippet: $(document).ready(function () { var counter = 0; $("#trail").click(function () { $("#dialog").dialog(); if (counter < 1) { $("#searchboxdiv").after('<input type="text" id="searchbox">'); $("#searchbox").after('<input type="button" id="searchbutton" value="search">'); counter++; } }); $("#searchbutton").click(function () { var dataToSend = null; $.ajax({ data: dataToSend, url: "FormHandler", success: function (result) {}, beforeSend: function () { dataToSend = $("#searchbox").val(); } }); }); $("#searchboxdiv").on('click', "#searchbutton", function(){ var data = null; }); }); I added the textbox in the dialog box dynamically and on click of button in dialog box, callback is not received A: Use event delegation (for dynamically added #searchbutton) $('#searchboxdiv').on('click',"#searchbutton",function(){ * *http://learn.jquery.com/events/event-delegation/ A: Your options: * *Use event delegation. Bind the click to immediate static parent like this : $("#searchboxdiv").on('click', "#searchbutton", function(){ }); *Or, bind it to the document. $(document).on('click', "#searchbutton", function(){ }); *Or, move the existing click after counter++;, ie., inside $("#trail")'s click handler. For more info, see on()
{ "language": "en", "url": "https://stackoverflow.com/questions/17132898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a method to setup flink memory dynamically? When using Apache Flink we can configure values in flink-conf.yaml, but using CLI commands we can also assign some values dynamically when starting or submitting a job or a task in Flink, e.g.: bin/taskmanager.sh start-foreground -Dtaskmanager.numberOfSlots=12 However, some values like jobmanager.memory.process.size and taskmanager.memory.process.size cannot be set dynamically using "-D". Is there a way to set those values dynamically when starting the jobmanager and taskmanager from the CLI? A: Maybe this might help you:
ParameterTool parameters = ParameterTool.fromPropertiesFile("src/main/resources/application.properties");
// one can specify the properties defined in conf/flink-conf.yaml in this properties file
Configuration config = Configuration.fromMap(parameters.toMap());
TaskExecutorResourceUtils.adjustForLocalExecution(config);

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment(config);
env.setParallelism(3);

System.out.println("Config Params : " + config.toMap());
Make a note to set the parallelism equal to the number of task slots for the task manager, as per this link. By default, the number of task slots for a task manager is one.
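If you specifically want to set the memory sizes in code rather than through a properties file, something along these lines might also work. This is an untested sketch - I am assuming the TOTAL_PROCESS_MEMORY options map to the same keys in your Flink version, and for a real cluster the process memory is normally fixed when the taskmanager/jobmanager process starts, so this mainly matters for local execution:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.JobManagerOptions;
import org.apache.flink.configuration.MemorySize;
import org.apache.flink.configuration.TaskManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MemoryConfigSketch {
    public static void main(String[] args) {
        Configuration config = new Configuration();

        // Same keys as taskmanager.memory.process.size / jobmanager.memory.process.size
        config.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1728m"));
        config.set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1600m"));

        // Picked up by the local/mini-cluster environment created below
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);

        env.setParallelism(3);
    }
}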
{ "language": "en", "url": "https://stackoverflow.com/questions/73101551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Fetching npm module from GitHub brings ".git" folder I recently forked an npm package and updated it for my needs. Then, I changed the dependency on packages.json to point to my GitHub repo, and it worked fine. But, when npm installed the module, it brought also the git folder (.git). Because of that, when I try to install anything else, npm gives me this error: npm ERR! path /node_modules/react-native-static-server npm ERR! code EISGIT npm ERR! git /node_modules/react-native-static-server: Appears to be a git repo or submodule. npm ERR! git /node_modules/react-native-static-server npm ERR! git Refusing to remove it. Update manually, npm ERR! git or move it out of the way first. What am I doing wrong? How do I avoid the .git folder from being downloaded? You can check the repo here: https://github.com/dccarmo/react-native-static-server EDIT The dependency in my packages.json: "react-native-static-server": "dccarmo/react-native-static-server" A: This seems to be an old question, but I was running into the same thing today. I am rather new to git and npm, but I did find something that may be of help to someone. If the git repo does not have a .gitignore, the .git folder is not downloaded / created. If the git repo does have a .gitignore, the .git folder is downloaded / created. I had two repos, one without a .gitignore (because when I made it I was not aware of what .gitignore was or did), and one with a .gitignore. I included both as npm packages in a project, and noticed that the one without the .gitignore was not giving me the EISGIT error (because of the .git folder). So, after I found this question, I removed the .gitignore from that repo and now it too does not create a .git folder. Later on, I discovered that adding both a .gitignore and a .npmignore to the project now stops the .git folder from appearing. I did not add .git in my .npmignore.
{ "language": "en", "url": "https://stackoverflow.com/questions/47634465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Linux: access i2c device within board_init function (iMX6 SOC running Linux 3.0) I need to run a few I2C transactions in my board_init function. I tried calling i2c_get_adapter, then i2c_transfer, those being available in kernel mode, but i2c_get_adapter returns NULL. It's already called imx6q_add_imx_i2c, which is a wrapper around platform_device_register_full, but that isn't enough. I can manipulate GPIO in board_init by calling gpio_request to obtain access, and gpio_free at the end. Is there something analogous to those functions for i2c? --- added details --- I've got a LAN9500A USB 100Base-T Ethernet MAC connected to a LAN9303 3-port switch with a virtual PHY. The MAC has a GPIO reset line that has to be turned off before it will spring to life and enumerate on the USB. That's done in board_init because it's completely nonstandard, so we don't want to patch the stock driver to manipulate some GPIO that's not part of the device. The problem I'm having is that even though the MAC is permanently attached to the VPHY, it's not noticing it's connected, and an "ip link show eth1" command shows NO-CARRIER. I found I can kickstart it by unmasking the VPHY's Device Ready interrupt via I2C, but I also have to mask it immediately, or I get infinite interrupts. That's not the right solution, but Microchip has been no help in showing me a better way. But we don't want to patch the MAC driver with code to fiddle with the PHY. There is no PHY driver for this part, and the MII interface to the VPHY doesn't include any interrupt-related registers, so it must be done through I2C. Writing a PHY driver just to flip a bit on and off once seems a lot of trouble, especially for a newbie like me who's never written a driver before. I can do it in Python in a startup script, but that, too, seems like a heavyweight solution to a lightweight problem. A: That's a wrong approach. Board file supposed to register device drivers and pass important information to them, rather than act as a device driver itself. I'm not sure if what you're trying to do is even possible. If you really need to extract something from your I2C device on a very early stage - do that in the bootloader and pass the data to kernel via cmdline (U-boot, by the way, has I2C support for a quite some time). Then later, kernel might do appropriate actions depending on what you have passed to it.
{ "language": "en", "url": "https://stackoverflow.com/questions/40542078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Calling methods from objects which implement an interface I am trying to wrap my head around interfaces, and I was hoping they were the answer to my question. I have made plugins and mods for different games, and sometimes classes have onUpdate or onTick or other methods that are overridable. If I make an interface with a method, and I make other classes which implement the method, and I make instances of the classes, then how can I call that method from all the objects at once? A: You'll be looking at the Observer pattern or something similar. The gist of it is this: somewhere you have to keep a list (ArrayList suffices) of type "your interface". Each time a new object is created, add it to this list. Afterwards you can perform a loop on the list and call the method on every object in it. I'll edit in a moment with a code example. public interface IMyInterface { void DoSomething(); } public class MyClass : IMyInterface { public void DoSomething() { Console.WriteLine("I'm inside MyClass"); } } public class AnotherClass : IMyInterface { public void DoSomething() { Console.WriteLine("I'm inside AnotherClass"); } } public class StartUp { private ICollection<IMyInterface> _interfaces = new Collection<IMyInterface>(); private static void Main(string[] args) { new StartUp(); } public StartUp() { AddToWatchlist(new AnotherClass()); AddToWatchlist(new MyClass()); AddToWatchlist(new MyClass()); AddToWatchlist(new AnotherClass()); Notify(); Console.ReadKey(); } private void AddToWatchlist(IMyInterface obj) { _interfaces.Add(obj); } private void Notify() { foreach (var myInterface in _interfaces) { myInterface.DoSomething(); } } } Output: I'm inside AnotherClass I'm inside MyClass I'm inside MyClass I'm inside AnotherClass Edit: I just realized you tagged it as Java. This is written in C#, but there is no real difference other than the use of ArrayList instead of Collection. A: An interface defines a service contract. In simple terms, it defines what can you do with a class. For example, let's use a simple interface called ICount. It defines a count method, so every class implementing it will have to provide an implementation. public interface ICount { public int count(); } Any class implementing ICount, should override the method and give it a behaviour: public class Counter1 implements ICount { //Fields, Getters, Setters @Overide public int count() { //I don't wanna count, so I return 4. return 4; } } On the other hand, Counter2 has a different oppinion of what should count do: public class Counter2 implements ICount { int counter; //Default initialization to 0 //Fields, Getters, Setters @Overide public int count() { return ++count; } } Now, you have two classes implementing the same interface, so, how do you treat them equally? Simple, by using the first common class/interface they share: ICount. ICount count1 = new Counter1(); ICount count2 = new Counter2(); List<ICount> counterList = new ArrayList<ICount>(); counterList.add(count1); counterList.add(count2); Or, if you want to save some lines of code: List<ICount> counterList = new ArrayList<ICount>(); counterList.add(new Counter1()); counterList.add(new Counter2()); Now, counterList contains two objects of different type but with the same interface in common(ICounter) in a list containing objects that implement that interface. You can iterave over them and invoke the method count. 
Counter1 will always return 4, while Counter2 will return a result based on how many times you have invoked count:
for(ICount current : counterList)
    System.out.println(current.count());
A: You can't call a method on all the objects that happen to implement a certain interface at once. You wouldn't want that anyway. You can, however, use polymorphism to refer to all these objects by the interface name. For example, with
interface A { }
class B implements A { }
class C implements A { }
You can write
A b = new B();
A c = new C();
A: Interfaces don't work that way. They act like a kind of mask that several classes can wear. For instance:
public interface Data {
    public void doSomething();
}

public class SomeDataStructure implements Data {
    public void doSomething() {
        // do something
    }
}

public static void main(String[] args) {
    Data mydataobject = new SomeDataStructure();
}
This uses the Data 'mask' that several classes can use to offer certain functionality, but you can use different classes to actually implement that functionality.
A: The crux would be to have a list that stores an entry every time a class that implements the interface is instantiated. This list would have to be available at a level different from the interface and the class that implements it. In other words, the class that orchestrates or controls would hold the list. An interface is a contract that leaves the implementation to the classes that implement it. Classes implementing the interface abide by that contract: they implement the methods rather than override them. Taking the interface to be
public interface Model {
    public void onUpdate();
    public void onClick();
}

public class Plugin implements Model {

    @Override
    public void onUpdate() {
        System.out.println("Plugin updating");
    }

    @Override
    public void onClick() {
        System.out.println("Plugin doing click action");
    }
}
Your controller class would be the one to instantiate and control the action:
public class Controller {
    public static void orchestrate(){
        List<Model> modelList = new ArrayList<Model>();
        Model pluginOne = new Plugin();
        Model plugTwo = new Plugin();
        modelList.add(pluginOne);
        modelList.add(plugTwo);
        for(Model model:modelList){
            model.onUpdate();
            model.onClick();
        }
    }
}
You can have another implementation, say PluginTwo, instantiate it, add it to the list and call the methods specified by the interface on it.
{ "language": "en", "url": "https://stackoverflow.com/questions/18240947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why does multiprocessing use only a single core after I import numpy? I am not sure whether this counts more as an OS issue, but I thought I would ask here in case anyone has some insight from the Python end of things. I've been trying to parallelise a CPU-heavy for loop using joblib, but I find that instead of each worker process being assigned to a different core, I end up with all of them being assigned to the same core and no performance gain. Here's a very trivial example... from joblib import Parallel,delayed import numpy as np def testfunc(data): # some very boneheaded CPU work for nn in xrange(1000): for ii in data[0,:]: for jj in data[1,:]: ii*jj def run(niter=10): data = (np.random.randn(2,100) for ii in xrange(niter)) pool = Parallel(n_jobs=-1,verbose=1,pre_dispatch='all') results = pool(delayed(testfunc)(dd) for dd in data) if __name__ == '__main__': run() ...and here's what I see in htop while this script is running: I'm running Ubuntu 12.10 (3.5.0-26) on a laptop with 4 cores. Clearly joblib.Parallel is spawning separate processes for the different workers, but is there any way that I can make these processes execute on different cores? A: Python 3 now exposes the methods to directly set the affinity >>> import os >>> os.sched_getaffinity(0) {0, 1, 2, 3} >>> os.sched_setaffinity(0, {1, 3}) >>> os.sched_getaffinity(0) {1, 3} >>> x = {i for i in range(10)} >>> x {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} >>> os.sched_setaffinity(0, x) >>> os.sched_getaffinity(0) {0, 1, 2, 3} A: After some more googling I found the answer here. It turns out that certain Python modules (numpy, scipy, tables, pandas, skimage...) mess with core affinity on import. As far as I can tell, this problem seems to be specifically caused by them linking against multithreaded OpenBLAS libraries. A workaround is to reset the task affinity using os.system("taskset -p 0xff %d" % os.getpid()) With this line pasted in after the module imports, my example now runs on all cores: My experience so far has been that this doesn't seem to have any negative effect on numpy's performance, although this is probably machine- and task-specific . Update: There are also two ways to disable the CPU affinity-resetting behaviour of OpenBLAS itself. At run-time you can use the environment variable OPENBLAS_MAIN_FREE (or GOTOBLAS_MAIN_FREE), for example OPENBLAS_MAIN_FREE=1 python myscript.py Or alternatively, if you're compiling OpenBLAS from source you can permanently disable it at build-time by editing the Makefile.rule to contain the line NO_AFFINITY=1 A: This appears to be a common problem with Python on Ubuntu, and is not specific to joblib: * *Both multiprocessing.map and joblib use only 1 cpu after upgrade from Ubuntu 10.10 to 12.04 *Python multiprocessing utilizes only one core *multiprocessing.Pool processes locked to a single core I would suggest experimenting with CPU affinity (taskset).
{ "language": "en", "url": "https://stackoverflow.com/questions/15639779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "138" }
Q: Can't uninstall npm module on OSX I have installed cordova on my mac and want to uninstall it. I've tried the following: npm uninstall cordova -g but I get the following error: npm WARN uninstall not installed in /usr/local/Cellar/node/0.10.32/lib/node_modules: "cordova" Any ideas? A: There are know issues with the way homebrew and npm play together. From this article by Dan Herbert There's an NPM bug for this exact problem. The bug has been "fixed" by Homebrew installing npm in a way that allows it to manage itself once the install is complete. However, this is error-prone and still seems to cause problems for some people. The root of the the issue is really that npm is its own package manager and it is therefore better to have npm manage itself and its packages completely on its own instead of letting Homebrew do it. Aside from that your node version is out of date. If you can upgrade you should do so to Node v4.1.2. Follow the instructions on the node.js site.
{ "language": "en", "url": "https://stackoverflow.com/questions/33015001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sum of arrays between specific positions in the array I have two arrays of doubles. For example: {1,2,3} {4,5,6} We are using (x,y) coordinates as (1,4),(2,5) and (3,6) for example. So there are 3 data points in this example. We need to calculate the slope between all of the points and store the slope with the maximum absolute value. To do this I have a function with a 'for' loop inside a 'for' loop. My code is successfully calculating the maximum absolute value slope when I use it in the Main. We need to find the positions that this max slope occurs at and print out the corresponding positions in array2. I have done this. Finally there is a third array. We need to calculate the average value of the numbers up to the first point used in the max slope, the average value of the numbers between the first and second points used in the max slope and also the average value of the numbers from the second point used in the max slope to the end of the array. Does this needs to be done in the main, or does it need its own method? public static void Main() { List<double> Array1 = new List<double>(); List<double> Array2 = new List<double>(); List<double> Array3 = new List<double>(); Array1.Add(Convert.ToDouble(columns[0])); Array2.Add(Convert.ToDouble(columns[1])); Array3.Add(Convert.ToDouble(columns[1])); int[] positions = GreatestSlopeLocation(Array1, Array2); Console.WriteLine("The Location of the max slope is {0} {1}", Array2[positions[0]], Array2[positions[1]]); } static int[] GreatestSlopeLocation(List<double> x, List<double> y) { int size = x.Count; double maxSlope = Double.MinValue; // don't set this to 0 // consider case when all slopes are negative // int index1 = 0; int index2 = 0; for (int i = 0; i < size; i++) { for (int j = i + 1; j < size; j++) { double slope = Math.Abs((y[j] - y[i]) / (x[j] - x[i])); if (slope > maxSlope) // I think you want > instead of < here { maxSlope = slope; index1 = i; index2 = j; } } } return new int[]{index1, index2}; }
{ "language": "en", "url": "https://stackoverflow.com/questions/34009055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Firebase hosting using Old webpack bundle when deployed via GitHub Actions I'm trying to create a GitHub Action that first uses webpack to create a bundle.js file and then deploys my index.html and the newly created bundle.js to Firebase hosting. First I just pushed my bundle.js to my GitHub repository, but I have since removed it via "git rm --cache public/bundle.js" (not sure if this is the exact command) and added bundle.js to my .gitignore. The action does work (no error or crash), but it seems that Firebase does not use the newly packaged bundle.js and instead uses an old version that might be cached somewhere. firebase-hosting-pull-request.yml:
name: Deploy to Firebase Hosting on PR
'on': pull_request
jobs:
  build_and_preview:
    if: '${{ github.event.pull_request.head.repo.full_name == github.repository }}'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: '${{ secrets.GITHUB_TOKEN }}'
          firebaseServiceAccount: '${{ secrets.FIREBASE_SERVICE_ACCOUNT_PROJECTLIBRARIAN }}'
          projectId: projectlibrarian
npm run build in this case executes webpack --mode=production. My index.js (which is webpack'd into bundle.js) only contains a single log statement, which I changed to test the GitHub Action. Link to the Repo if it helps someone I'm wondering if I need to clear some kind of GitHub Actions cache? Or maybe the Firebase deploy action script I'm using isn't doing what I think it should be doing? What confuses me the most is where Firebase gets the old version of bundle.js! A: OK, I have found two changes that fixed the problem:
* *Add channelId: live in my workflow YAML.
*Reload the site with the "Empty Cache and Hard Reload" developer option.
{ "language": "en", "url": "https://stackoverflow.com/questions/71632892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Javascript: List Files in Directory on Server Is there a way to list all files in a particular directory on the server using JavaScript? If so, how? I am not allowed to use platforms which require server-side installation, I mean platforms like node.js and such. A: Well, how would you list the server-side files if you have no access to the server side? "I am not allowed to use platforms which require server side installation." There are some workarounds for this problem, but they usually need some extra flags enabled on the user's side. http://www.chrome-allow-file-access-from-file.com/ A: I am not allowed to use platforms which require server side installation. You need some server-side handler for this request, but maybe you already have something in place - for example mod_autoindex for Apache.
{ "language": "en", "url": "https://stackoverflow.com/questions/27876180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: How do I remove duplicate values ​in a v-for array? I coded like this to remove duplicate values. vue <div class="col-md-6" style="float: left"> <ul class="list-group"> <li class="list-group-item" :class="{ active: index1 == currentIndex1 }" v-for="(member, index1) in uniqueMemName" v-bind:value="member.mem_name" :key="index1" @click="setActiveMember(member, index1)" > <strong style="margin-bottom: 5px"> {{member.mem_name}} </strong> </li> </ul> </div> vue (script) computed: { uniqueMemName() { return _.uniqBy(this.members, function(m) { return m.mem_name; }); } }, I also installed lodash. But I get an error. Which part is wrong? Please let me know if there is another method other than the one I wrote. ++) error console console window ++) array information: I have tables A and B. Table A imports only the mem_name column. Table B imports all columns. Example -> a.mem_name b.col1 b.col2 mem1 10 20 mem1 30 40 mem2 50 60 I'm working on making duplicate mem_names into one at this time. Using lodash's unique features. A: If you want to use lodash just for this, which sounds like the case, I suggest that there may be a better way without it, only using newer vanilla JS: ... computed: { uniqueMemName() { return [...new Set(this.members.map(m => m.mem_name))] } } Sets are always unique, so mapping and converting to a set and then back to array gives us a unique array.
{ "language": "en", "url": "https://stackoverflow.com/questions/67817634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you reference a C# class library from a Win8 Javascript metro application? When I try the standard way it complains of an unsupported reference and I can't seem to use any of my classes. A: You need to create a Windows Runtime component by creating a class library from the "Visual C#" -> "Windows Metro Style" -> "Class Library" template. Then in the properties for that class library project you need to mark the output type as "WinMD File" Better instructions can be found here: http://msdn.microsoft.com/en-us/library/windows/apps/hh779077(v=vs.110).aspx This isn't stated in the documentation and is probably just a bug with the Windows 8 Consumer Preview and the Visual Studio 11 Beta but be sure not to include a period in the name of the project you're referencing. For instance, I was working on a Car application so I made an assembly named "Car.Business". The application would always crash with a blank startup screen whenever I tried to reference this. If on the other hand I just used "Business" as the name of the assembly then the application would work fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/9650654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I reference the Toggle Button Circle and override vertical alignment of the Aero theme Expander? Currently, I am referencing the Aero theme in my App.xml file. In the main.xml file, I am using expanders to display the content in a resizable width app. (For this example, I limited the width to 500) The expander header content generally will be short, but it allows for up to 500 chars. Depending on the window size (600 pixels by default), the content can wrap (thereby stretching the expander header downwards). This is fine, but the toggle button (a circle /w arrow) is set for VerticalAlignment=center from what I can tell. I need a way to override that VerticalAlignment in a style without re-creating the Aero template for the Expander. I just cant seem to reference the circle and arrow objects. I have tried overriding Toggle Button as well, with no luck. As you can see below, I can override some aspects of the Aero Expander. I just need that little nudge to get the Toggle Button Circle and Arrow objects to change the VerticalAlignment. Thanks Code example follows: <Window.Resources> <Style TargetType="{x:Type Expander}" BasedOn="{StaticResource {x:Type Expander}}"> <Setter Property="Foreground" Value="White" /> <Setter Property="Background" Value="#464646" /> <Setter Property="Width" Value="Auto" /> <Setter Property="Margin" Value="1,0,1,0" /> <Setter Property="IsExpanded" Value="False" /> </Style> </Window.Resources> <Expander ContextMenu="{StaticResource cMnu}" Width="auto"> <Expander.Header> <StackPanel Orientation="Horizontal" Width="auto" Margin="0"> <TextBlock Width="65">Normal</TextBlock> <TextBlock Width="80">Open</TextBlock> <TextBlock Width="80">10/31/2009</TextBlock> <TextBlock TextWrapping="Wrap" Width="500"> Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ultrices auctor magna, sit amet commodo ipsum accumsan eu. Sed a mollis felis. Nam ullamcorper augue vel mauris consequat imperdiet. Nunc in augue mauris. Quisque metus tortor, porttitor nec auctor id, mollis nec ipsum. Suspendisse eget ipsum vitae lectus fermentum porta. Aliquam erat volutpat. Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Phasellus congue dui ac arcu eleifend a amet. </TextBlock> </StackPanel> </Expander.Header> </Expander> A: If you look at the default template for the Expander, you can see why none of your property setters are working: <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="20" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <ToggleButton IsChecked="{Binding Path=IsExpanded,Mode=TwoWay, RelativeSource={RelativeSource TemplatedParent}}" OverridesDefaultStyle="True" Template="{StaticResource ExpanderToggleButton}" Background="{StaticResource NormalBrush}" /> <ContentPresenter Grid.Column="1" Margin="4" ContentSource="Header" RecognizesAccessKey="True" /> </Grid> The ToggleButton's VerticalAlignment is what you are after, and there are no setters for it. It seems to me that there is no way to change this alignment property through the Style. You must provide a new template.
{ "language": "en", "url": "https://stackoverflow.com/questions/1331142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to ensure the exception "The calling thread cannot access this object because a different thread owns it"? In C# WPF application, if in the following button click event habdler: private void start_Click(object sender, RoutedEventArgs e) { for (int i = 2; i < 20; i++) { var t = Task.Factory.StartNew (() => { var result=Thread.CurrentThread.ManagedThreadId.ToString(); //this.Dispatcher.BeginInvoke(new Action(() => textBlock1.Text += "root " + i.ToString() + " " + result + Environment.NewLine ;//to comment this line if to uncomment th others //), null); } ); } } to uncomment the commented lines, i.e. to output to textblock through Dispatcher.BeginInvoke() then it is filled with varying thread IDs. Though with commented lines, as shown above, the textblock stays blank and there is no exception thrown. In similar situation using Parallel.For private void start_Click(object sender, RoutedEventArgs e) { Parallel.For(2, 6, (i) => { var result = Thread.CurrentThread.ManagedThreadId.ToString(); textBlock1.Text += "root " + i.ToString() + " " + result + Environment.NewLine; } ); } the application breaks with exception: "The calling thread cannot access this object because a different thread owns it" Why isn't it thrown in first case, while using Task.Factory.StartNew() ? Any way to ensure this exception? A: Exceptions thrown in tasks are always handled by the Task object itself. The exception is later rethrown when you, e.g., access the Task.Result property. This way the handling of the exception is left to the thread creating the Task. If you run the first code snippet and look at the Output pane, you'll see that there are multiple first-chance InvalidOperationException logged there. The exceptions are thrown - the tasks just stash them away for later rethrowing. Parallel.For actually does the same - it stashes away all exceptions occurring within the delegate and then, when the loop is finished, it rethrows all exceptions that occurred in a single AggregateException. You'll notice that the debugger breaks in the thread calling Parallel.For, not within the delegate passed to it. To cause the exception in the task to propagate to the calling thread, like Parallel.For does, call Wait/WaitAll on the tasks.
{ "language": "en", "url": "https://stackoverflow.com/questions/15765147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Morphia aggregation query to get size of list I am trying to count the number of elements in a list in each document of a collection, similar to the example below (https://docs.mongodb.com/manual/reference/operator/aggregation/size/#exp._S_size):
db.inventory.aggregate(
   [
      {
         $project: {
            item: 1,
            numberOfColors: { $size: "$colors" }
         }
      }
   ]
)
This query would return the size of the list "colors" in each document. An equivalent Morphia query would be something like this:
pipeline = ds.createAggregation(Abc.class)
                .match(query)
                .project(Projection.projection("count", Projection.expression("$size","colors")));
Error on executing the above: java.lang.String cannot be cast to com.mongodb.DBObject I am unable to arrive at an equivalent Morphia query that achieves the same. Any help in this regard would be greatly appreciated. A: Change the following line of code
project(Projection.projection("count", Projection.expression("$size","colors")))
to
project(Projection.expression("count", new BasicDBObject("$size","$colors")))
A: Did you try Projection.expression("$size","$colors")? With a dollar before colors?
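Or, putting the first answer's fix back into the pipeline from the question, the whole call might look roughly like this. It is only a sketch - Abc, query and ds are the stand-ins from the question, and the exact package names depend on your Morphia version:
import java.util.Iterator;
import com.mongodb.BasicDBObject;
import org.mongodb.morphia.aggregation.AggregationPipeline;
import org.mongodb.morphia.aggregation.Projection;

// "Abc" and "query" are placeholders taken from the question.
AggregationPipeline pipeline = ds.createAggregation(Abc.class)
        .match(query)
        .project(
                Projection.projection("item"),                                        // keep the plain field
                Projection.expression("count", new BasicDBObject("$size", "$colors")) // $size of the list
        );

// Map the aggregation output back onto a class (re-using Abc here just for illustration).
Iterator<Abc> results = pipeline.aggregate(Abc.class);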
{ "language": "en", "url": "https://stackoverflow.com/questions/46704786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: distinct value of one column based on another mysql Table structure
Column1 | Column2 | Column 3 | Column 4 . . .. . Column N
Col1Val1 | Col2Val2 | . . . . .
Col1Val1 | Col2Val2 | . . . ..
Col1Val1 | Col2Val3 | . . . .
Col1Val2 | Col2Val4 | . . .
Col1Val3 | Col2Val5 | . .
Col1Val3 | Col2Val6 | . ..
I want to query for all the distinct values of column2 for every distinct value of column1, along with their count. Example output should be:
Col1Val1 | Col2Val2,Col2Val3 | 2
Col1Val2 | Col2Val4 | 1
Col1Val3 | Col2Val5,Col2Val6 | 2
. .
This is pretty much doable with a query plus handling in the application. Can this (or a similar) output be achieved with the SQL query alone? A: You can use GROUP_CONCAT() to aggregate and count the distinct Column2 values:
SELECT Column1,
       GROUP_CONCAT(DISTINCT Column2),
       COUNT(DISTINCT Column2)
FROM yourTable
GROUP BY Column1
Output: Demo here: Rextester A: Try this.
select Column1 , group_concat(distinct column2) ,count(distinct column2) from your_table group by column1
{ "language": "en", "url": "https://stackoverflow.com/questions/44516851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Prolog: How to find and remove minimum list element? I am new to Prolog. I need help writing a predicate which finds and deletes the minimum element in a list. Thank you very much! A: If all list items are integers, we can use clpfd! :- use_module(library(clpfd)). We define zmin_deleted/2 using maplist/2, (#=<)/2, same_length/2, and select/3: zmin_deleted(Zs1,Zs0) :- same_length(Zs1,[_|Zs0]), maplist(#=<(Min),Zs1), select(Min,Zs1,Zs0). Sample queries: ?- zmin_deleted([3,2,7,8],Zs). Zs = [3,7,8] ; false. ?- zmin_deleted([3,2,7,8,2],Zs). Zs = [3, 7,8,2] ; Zs = [3,2,7,8 ] ; false. Note that zmin_deleted/2 also works in the "other direction": ?- zmin_deleted(Zs,[3,2,7,8]). _A in inf..2, Zs = [_A, 3, 2, 7, 8] ; _A in inf..2, Zs = [ 3,_A, 2, 7, 8] ; _A in inf..2, Zs = [ 3, 2,_A, 7, 8] ; _A in inf..2, Zs = [ 3, 2, 7,_A, 8] ; _A in inf..2, Zs = [ 3, 2, 7, 8,_A] ; false. A: Let me google it for you. How could you find a minimum of a list.. Anyway, there is a nice min_list predicate. ?- min_list([1,2,2,3],X). X = 1. Here is a little example how could you remove some element from a list (notice, that all 2s is gone): ?- delete([1,2,2,3],2,X). X = [1, 3]. If you wanna remove only first occasion of a element, use select: ?- select(2, [2,1,2,2,3], X), !. X = [1, 2, 2, 3]. So your final answer could be something like that: delete_min(A, C) :- min_list(A, B), select(B, A, C), !. And ?- delete_min([1,1,2,3],X). X = [1, 2, 3]. A: Again, just use the structural recursion on lists. Lists are built of nodes, [H|T], i.e. compound data structures with two fields - head, H, and tail, T. Head is the data (list element) held in the node, and T is the rest of the list. We find minimum element by rolling along the list, while keeping an additional data - the minimum element from all seen so far: minimum_elt([H|T],X):- minimum_elt(T,H,X). There is no definition for the empty list case - empty lists do not have minimum elements. minimum_elt([],X,X). If there are no more elements in the list, the one we have so far is the answer. minimum_elt([A|B],M,X):- two cases here: A < M, or otherwise: A < M, minimum_elt(B,A,X). minimum_elt([A|B],M,X):- A >= M, minimum_elt(B,M,X). Nothing more to say about it, so this completes the program. edit: except, you wanted to also delete that element. This changes things. Hmm. One obvious approach is first find the minimum elt, then delete it. We'll have to compare all elements once again, this time against the minimal element found previously. Can we do this in one scan? In Lisp we could have. To remove any element from a list, surgically, we just reset the tail pointer of a previous node to point to the next node after the one being removed. Then using this approach, we'd scan the input list once, keeping the reference to the previous node to the minimum element found so far, swapping it as we find the smaller and smaller elements. Then when we've reached the end of the list, we'd just surgically remove the minimal node. But in Prolog, we can't reset things. Prolog is a set once language. So looks like we're stuck with the need to pass over the list twice ... or we can try to be very smart about it, and construct all possible lists as we go, sorting them out each time we find the new candidate for the smallest element. rem_min([A|B],L):- % two possibilities: A is or isn't the minimum elt rem_min(B,A,([A|Z],Z,Z),L). rem_min([],A,(With,Without,Tail),L):- Tail = [], % A is indeed the minimal element L = Without. 
rem_min([H|T],A,(With,Without,Tail),L):- H >= A, Tail=[H|Z], rem_min(T,A,(With,Without,Z),L). rem_min([H|T],A,(With,Without,Tail),L):- H < A, % use this H copy_the_list(With,Tail,W2,T2), % no good - quadratic behaviour Tail=[H|Z], T2=Z, rem_min(T,A,(With,W2,Z),L). copy_the_list([A|B],T,[A|C],C):- var(B), !, T=B. % fresh tail copy_the_list([A|B],T,[A|C],T2):- copy_the_list(B,T,C,T2). So looks like we can't avoid the second pass, but at least we can save all the superfluous comparisons: rem_min([A|B],L):- N=[A|_], rem_min(B,A,N,[N|Z],Z,L). rem_min([],_A,N,L2,Z,L):- Z=[], N=[_,1], % finalize the minimal entry scan(L2,L). rem_min([H|T],A,N,L2,Z,L):- H >= A, Z=[H|Z2], rem_min(T,A,N,L2,Z2,L). rem_min([H|T],A,N,L2,Z,L):- H < A, % use this H N2=[H|_], N=[_], % finalize the non-minimal entry Z=[N2|Z2], rem_min(T,H,N2,L2,Z2,L). scan( [], []). scan( [[_,1]|B],C):- !, scan(B,C). % step over the minimal element scan( [[A]|B],[A|C]):- !, scan(B,C). % previous candidate scan( [A|B], [A|C]):- !, scan(B,C).
{ "language": "en", "url": "https://stackoverflow.com/questions/14339993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Compare two NSDate by .Day in swift is giving wrong results I'm trying to compare two NSDates by .Day doing this (both in the GMT timezone):
var order = NSCalendar.currentCalendar().compareDate(event.startDate!, toDate: cell.date!, toUnitGranularity: .Day)
if (order == .OrderedSame) {
    print(String(event.startDate!) + " == " + String(cell.date!))
    cell.events.append(event)
    break
}
When using 2016-03-08 21:43:53 +0000 and 2016-03-09 00:00:00 +0000 it says they are orderedSame, but they aren't - the days are different. This function outputs: 2016-03-08 21:43:53 +0000 == 2016-03-09 00:00:00 +0000 Can someone help me, please? A: Those two logged dates are being logged in the UTC timezone. But if your current timezone is west of that, then both dates are on March 8, 2016 in your timezone and the comparison is done in your timezone. Or, if you live east of that by at least 2.5 hours, then both dates are March 9, 2016. Example. If you live in the eastern USA (GMT-4 currently but GMT-5 on March 8), then those two dates are actually 2016-03-08 16:43:53 -0500 and 2016-03-08 19:00:00 -0500. Example. If you live in eastern Europe or Asia (say GMT+5), then those two dates would be 2016-03-09 02:43:53 +0500 and 2016-03-09 05:00:00 +0500. As you can see, those are on the same day so the comparison is correct.
{ "language": "en", "url": "https://stackoverflow.com/questions/36204879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: iFrame disappears on Internet Explorer A friend of mine created a simple web site using iWeb and added an iFrame to one of the pages. iWeb seems to be loading iFrames through a series of Javascript libraries and CSS files (quite strange). But whenever I look at this page on IE, the iFrame disappears after a few seconds: http://bit.ly/fD7QGD Does anyone know what could be causing this buggy behavior on IE? Thanks, Ralph A: Well, I tried it on Firefox, Chrome and IE and waited more than a minute. It didn't disappear. Maybe there's a problem with your browser, or it was a temporary bug.
{ "language": "en", "url": "https://stackoverflow.com/questions/4929477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: LibGDX 3d mesh transformation My intention is to make a "liquid" 3D structure, which is dynamically changing. The idea is to build it as an array of meshes and then change the elevation of the points of the meshes. So, my question is: how can I transform a mesh by setting the vertices' Y values? And is it even possible in LibGDX?
{ "language": "en", "url": "https://stackoverflow.com/questions/34175756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Catch throw error from node module I am using a module which throws an error in case something is wrong. The error is not being returned as part of the callback, and the run is stopped when it is thrown. I am calling it like this:
co(function*(){
  try {
    for (let item of items){
      item.newValue = yield myFunction(param)
    }
  } catch(err){
  }
})
myFunction: function(param){
  return new Promise(function(resolve, reject){
    module.callMethod(param, function(result){
      if (result) {
        resolve(result)
      } else {
        reject('err')
      }
    })
  })
}
During callMethod I get a thrown error, but it is not caught by the co try, and in case of an error I'd like to catch it but allow the for loop to continue. How can I properly catch the thrown error?
A: in case of an error I'd like to catch it but allow the for loop to continue
The Promise constructor treats errors thrown inside the executor as a promise rejection. MDN: If an error is thrown in the executor function, the promise is rejected.
So moving the try-catch inside the for loop should work.
'use strict'
co(function*() {
  for (let item of [1, 2, 3, 4]) {
    try {
      const newValue = yield myFunction(item)
      console.log(newValue)
    } catch (e) {
      console.error(e.message)
    }
  }
})

function callMethod(param, cb) {
  if (param % 2) cb('Success')
  throw new Error(`Invalid param ${param}`)
}

function myFunction(param) {
  return new Promise(function(resolve, reject) {
    callMethod(param, function(result) {
      if (result) {
        resolve(result)
      } else {
        reject('err')
      }
    })
  })
}

<script> const module = {} </script>
<script src="https://unpkg.com/[email protected]/index.js"></script>
{ "language": "en", "url": "https://stackoverflow.com/questions/46081201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Can't run jar file of JavaFx application on Linux I created a JavaFx application on Windows. Then I created a jar file which works normally on Windows. But when I try to run this jar on Linux Mint I get this log:
нояб. 07, 2022 3:15:00 PM com.sun.javafx.application.PlatformImpl startup
WARNING: Unsupported JavaFX configuration: classes were loaded from 'unnamed module @6a7840b4'
java.lang.ClassNotFoundException: com.sun.glass.ui.gtk.GtkPlatformFactory
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
    at java.base/java.lang.Class.forName0(Native Method)
    at java.base/java.lang.Class.forName(Class.java:375)
    at com.sun.glass.ui.PlatformFactory.getPlatformFactory(PlatformFactory.java:42)
    at com.sun.glass.ui.Application.run(Application.java:146)
    at com.sun.javafx.tk.quantum.QuantumToolkit.startup(QuantumToolkit.java:291)
    at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:293)
    at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:163)
    at com.sun.javafx.application.LauncherImpl.startToolkit(LauncherImpl.java:659)
    at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:679)
    at com.sun.javafx.application.LauncherImpl.lambda$launchApplication$2(LauncherImpl.java:196)
    at java.base/java.lang.Thread.run(Thread.java:833)
Failed to load Glass factory class
Exception in thread "main" java.lang.NullPointerException: Cannot invoke "com.sun.glass.ui.PlatformFactory.createApplication()" because the return value of "com.sun.glass.ui.PlatformFactory.getPlatformFactory()" is null
    at com.sun.glass.ui.Application.run(Application.java:146)
    at com.sun.javafx.tk.quantum.QuantumToolkit.startup(QuantumToolkit.java:291)
    at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:293)
    at com.sun.javafx.application.PlatformImpl.startup(PlatformImpl.java:163)
    at com.sun.javafx.application.LauncherImpl.startToolkit(LauncherImpl.java:659)
    at com.sun.javafx.application.LauncherImpl.launchApplication1(LauncherImpl.java:679)
    at com.sun.javafx.application.LauncherImpl.lambda$launchApplication$2(LauncherImpl.java:196)
    at java.base/java.lang.Thread.run(Thread.java:833)
I installed openjfx on Linux and I have openjdk 17.0.4. Please, I'm waiting for any help.
{ "language": "en", "url": "https://stackoverflow.com/questions/74348523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Flask SQLAlchemy pagination error I have this code, and the all() method and every other method works on it. I have looked all over and could see that the method paginate() works on BaseQuery, which is also a Query:
@app.route('/')
@app.route('/index')
@app.route('/blog')
@app.route('/index/<int:page>')
def index(page = 1):
    posts = db.session.query(models.Post).paginate(page, RESULTS_PER_PAGE, False)
    return render_template('index.html', title="Home", posts=posts)
but this gives me the error
AttributeError: 'Query' object has no attribute 'paginate'
I've looked everywhere and I can't find any solution to this.
A: From your question...
that the method paginate() works on BaseQuery which is also Query
I think this is where you're being confused. "Query" refers to the SQLAlchemy Query object. "BaseQuery" refers to the Flask-SQLAlchemy BaseQuery object, which is a subclass of Query. This subclass includes helpers such as first_or_404() and paginate(). However, this means that a plain Query object does NOT have the paginate() function.
Whether you are dealing with a Query or a BaseQuery object depends on how you actually build the object you are calling your "Query" object. In this code, you are getting the SQLAlchemy Query object, which results in an error:
db.session.query(models.Post).paginate(...)
If you use the following code, you get the pagination you're looking for, because you are dealing with a BaseQuery object (from Flask-SQLAlchemy) rather than a Query object (from SQLAlchemy):
models.Post.query.paginate(...)
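Applied to the route from the question, the fix might look like this sketch (assuming Post subclasses db.Model, so that models.Post.query is Flask-SQLAlchemy's BaseQuery, and keeping the question's RESULTS_PER_PAGE):
@app.route('/')
@app.route('/index')
@app.route('/blog')
@app.route('/index/<int:page>')
def index(page=1):
    # models.Post.query is a BaseQuery, so paginate() is available here
    posts = models.Post.query.paginate(page, RESULTS_PER_PAGE, False)
    return render_template('index.html', title="Home", posts=posts)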
{ "language": "en", "url": "https://stackoverflow.com/questions/18468887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Field Dao in DaoImpl required a bean named 'entityManagerFactory' that could not be found I have an error message as follows:
***************************
APPLICATION FAILED TO START
***************************
Description:
Field visitorDao in com.ahmed.service.VisitorDaoImpl required a bean named 'entityManagerFactory' that could not be found.
Action:
Consider defining a bean named 'entityManagerFactory' in your configuration.
Please see the classes below. I am unable to understand what the compiler wants to say.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.springframework</groupId>
    <artifactId>landon-hotel-event-mgmt-app</artifactId>
    <version>0.1.0</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.0.0.RELEASE</version>
    </parent>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.springframework.data/spring-data-jpa -->
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-jpa</artifactId>
            <version>2.0.5.RELEASE</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.h2database/h2 -->
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <version>1.4.197</version>
            <!--<scope>test</scope>-->
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
            <version>${hibernate.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-entitymanager</artifactId>
            <version>${hibernate.version}</version>
        </dependency>
        <!-- jsr303 validation
        <dependency>
            <groupId>javax.validation</groupId>
            <artifactId>validation-api</artifactId>
            <version>1.1.0.Final</version>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-validator</artifactId>
            <version>5.1.3.Final</version>
        </dependency>-->
    </dependencies>
    <properties>
        <java.version>1.8</java.version>
        <hibernate.version>5.2.17.Final</hibernate.version>
    </properties>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
    <name>Ahmed Visitor application</name>
</project>
Here are the classes.
package com.ahmed.service;

import com.ahmed.dao.VisitorDao;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class VisitorDaoImpl {
    @Autowired
    VisitorDao visitorDao;
}
Here is another....
package com.ahmed.dao;

import com.ahmed.domain.Visitor;
import java.time.LocalDateTime;
import java.util.List;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

/**
 *
 * @author abhiramagopaladasa
 */
@Repository
public interface VisitorDao extends CrudRepository<Visitor, Integer>{
    List<Visitor> findByOutTime(LocalDateTime outTime);
    List<Visitor> findByInTime(LocalDateTime outTime);
}
I also have the HibernateConfiguration class as such:
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package com.ahmed.configuration;

import java.util.Properties;
import javax.sql.DataSource;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.orm.hibernate5.HibernateTransactionManager;
import org.springframework.orm.hibernate5.LocalSessionFactoryBean;
import org.springframework.transaction.annotation.EnableTransactionManagement;

/**
 *
 * @author abhiramagopaladasa
 */
@Configuration
@EnableTransactionManagement
@ComponentScan({ "com.ahmed.domain" })
@PropertySource(value = { "classpath:application.properties" })
public class HibernateConfiguration {

    @Autowired
    private Environment environment;

    @Bean
    public LocalSessionFactoryBean sessionFactory() {
        LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
        sessionFactory.setDataSource(dataSource());
        sessionFactory.setPackagesToScan(new String[] { "com.ahmed.domain" });
        sessionFactory.setHibernateProperties(hibernateProperties());
        return sessionFactory;
    }

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(environment.getRequiredProperty("jdbc.driverClassName"));
        dataSource.setUrl(environment.getRequiredProperty("jdbc.url"));
        dataSource.setUsername(environment.getRequiredProperty("jdbc.username"));
        dataSource.setPassword(environment.getRequiredProperty("jdbc.password"));
        return dataSource;
    }

    private Properties hibernateProperties() {
        Properties properties = new Properties();
        properties.put("hibernate.dialect", environment.getRequiredProperty("hibernate.dialect"));
        properties.put("hibernate.show_sql", environment.getRequiredProperty("hibernate.show_sql"));
        properties.put("hibernate.format_sql", environment.getRequiredProperty("hibernate.format_sql"));
        return properties;
    }

    @Bean
    @Autowired
    public HibernateTransactionManager transactionManager(SessionFactory s) {
        HibernateTransactionManager txManager = new HibernateTransactionManager();
        txManager.setSessionFactory(s);
        return txManager;
    }
}
Any help will be greatly appreciated. As far as I can tell, I have already done everything required. I don't know what I am missing. Do I need to have another class configuring the entityManager or something?
{ "language": "en", "url": "https://stackoverflow.com/questions/50925605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why did jupyter server fail in VS code? I have had this problem for a while now. I am trying to work on Jupyter Notebooks in VS Code, but the Jupyter server doesn't connect when I open VS Code directly; it works when I open VS Code from Anaconda. Here is the error message:
Activating Python 3.7.4 64-bit ('base': conda) to run Jupyter failed with Error: StdErr from ShellExec, tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
for . /Users/jingzhaogao/opt/anaconda3/bin/activate && conda activate base && echo 'e8b39361-0157-4923-80e1-22d70d46dee6' && python /Users/jingzhaogao/.vscode/extensions/ms-python.python-2020.6.88468/pythonFiles/pyvsc-run-isolated.py /Users/jingzhaogao/.vscode/extensions/ms-python.python-2020.6.88468/pythonFiles/printEnvVariables.py.
Please assist.
A: Unfortunately there's nothing specific to do here other than continue to open VS Code from the activated conda environment. Conda simply wants to own the environment you work from, and those settings need to propagate into the one VS Code works from to make conda work.
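For reference, the manual workaround might look like this from a terminal (a sketch, assuming the base environment from the error message and that the code command-line launcher is installed; the folder path is a placeholder):
conda activate base
code /path/to/your/notebook/folder
Launching VS Code this way inherits the activated conda environment, so the Jupyter server can find the right Python.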
{ "language": "en", "url": "https://stackoverflow.com/questions/62459632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Identifying Windows Active Directory Connection - Windows 8 Is there a way from within a Windows 8 store app (C#) to identify whether the OS is the Windows 8 Pro or Enterprise edition? Can you also check to see if the OS has the capability to connect to an AD domain? I have found other posts saying that it isn't possible to find the version of Windows, and the posts suggest checking for capabilities instead. I have not been able to find any information on how to check whether the user is connected to a domain or not.
A: You can call UserInformation.GetDomainNameAsync to determine if the user is part of a domain. The app must declare the Enterprise Authentication app capability. To determine if you are on Pro, you might be able to call GetNativeSystemInfo and figure it out from the processor architecture.
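A rough sketch of the domain check described above, using a hypothetical IsOnDomainAsync helper (assuming UserInformation is the WinRT class from Windows.System.UserProfile, the Enterprise Authentication capability is declared, and an empty result means the user is not joined to a domain):
using System.Threading.Tasks;
using Windows.System.UserProfile;

// Illustrative helper: returns true if the current user appears to be joined to an AD domain.
static async Task<bool> IsOnDomainAsync()
{
    string domain = await UserInformation.GetDomainNameAsync();
    return !string.IsNullOrEmpty(domain);
}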
{ "language": "en", "url": "https://stackoverflow.com/questions/14821296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Python assert -- Best way to report non-existing files in a list with one line
Background
I use an assert statement to check if certain files in a list exist on my computer, then I would like to do further work if these files are all there. I've referred to this thread, so I do something like this:
from pathlib import Path

# The list containing several filepaths
files = ['folder/file1', 'folder/file2', 'folder/file3']

# check if all of these files exist
assert all(Path(n).exists() for n in files)

# Do something else ...
# ...
This piece of code is runnable. If one file does not exist, the program will raise AssertionError.
Question
Now I would like to report all the file(s) that do not exist, instead of a simple AssertionError. Is there any one-liner solution for this?
What I have tried
I've tried the following:
assert all(Path(n).exists() for n in files), f"file {n} not exist!"
Running this code will report NameError: name 'n' is not defined.
A: You'll need to use a loop:
for file in files:
    assert Path(file).exists(), f"{file} does not exist!"
In Python 3.8+, you can do it in one line by using the walrus operator:
assert not (missing_files := [n for n in files if not Path(n).exists()]), f"files missing: {missing_files}"
A: n from the comprehension isn't in scope where you do f'{n}'.
You can show all the files that do not exist:
not_exists = [f for f in files if not Path(f).exists()]
assert not not_exists, f'Files {not_exists} not exist'
Or, only the first one:
for f in files:
    assert Path(f).exists(), f'{f} not exists'
A: As already pointed out in comments, assert is strictly a development tool. You should use an exception which cannot be turned off for any proper run-time checks; maybe create your own exception for this. (Assertions will be turned off in production code under some circumstances.) Secondly, the requirement to do this in a single line of code is dubious. Code legibility should trump the number of lines everywhere, except possibly in time-critical code, where execution time trumps both.
class MissingFilesException(Exception):
    pass

missing = {x for x in files if not Path(x).exists()}
if missing:
    raise MissingFilesException(
        'Missing files: {}'.format(", ".join(missing)))
{ "language": "en", "url": "https://stackoverflow.com/questions/56847249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }