source                   stringclasses   (1 value)
task_type                stringclasses   (1 value)
in_source_id             stringlengths   (1 to 8)
prompt                   stringlengths   (209 to 40.4k)
gold_standard_solution   stringlengths   (0 to 56.7k)
verification_info        stringclasses   (1 value)
metadata                 stringlengths   (138 to 225)
problem_id               stringlengths   (9 to 10)
stackexchange
llm_judgeable_groundtruth_similarity
58784475
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a SwiftUI app that includes Firebase to enable logging into an account, extremely simple, just a ui form with password and email fields, then a button to submit. Once the user signs in I store the firebase user object in an EnvironmentObject so the rest of the views will have access to it. The problem with the app currently is that once the user logs in and the user data is stored in the EnvironmentObject, the view is supposed to update to the changed state of this to show a different screen, but it seems the view still thinks the EnvironmentObject is equal to nil. Do views not automatically change to updates in an EnvironmentObject like they do for state variables perhaps? I've made sure the EnvironmentObject is setup properly and passed to both the preview and SceneDelegate Made sure that the app is indeed successfully logging in the user by printing account information to the console upon sign in, yet the view itself will only display nil for account information, it seems it wont access the updated EnvironmentObject with the user info. import SwiftUIimport Firebaseimport Combinestruct ContentView: View { @EnvironmentObject var session: SessionStore @State var emailTextField: String = "" @State var passwordTextField: String = "" @State var loading = false @State var error = false var body: some View { VStack { if (session.session != nil) { Home() } else { Form { TextField("Email", text: $emailTextField) SecureField("Password", text: $passwordTextField) Button(action: signIn) { Text("Sign in") } } Text("Session: \(session.session?.email ?? "no user")") } }.onAppear(perform: getUser) } func getUser () { session.listen() } func signIn () { loading = true error = false session.signIn(email: emailTextField, password: passwordTextField) { (result, error) in self.loading = false if error != nil { self.error = true } else { self.emailTextField = "" self.passwordTextField = "" } } }}struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView().environmentObject(SessionStore()) }}class SessionStore : ObservableObject { var didChange = PassthroughSubject<SessionStore, Never>() var session: User? { didSet { self.didChange.send(self) }} var handle: AuthStateDidChangeListenerHandle? func listen () { // monitor authentication changes using firebase handle = Auth.auth().addStateDidChangeListener { (auth, user) in if let account = user { // if we have a user, create a new user model print("Got user: \(account)") self.session = User( uid: account.uid, displayName: account.displayName, email: account.email ) print("Session: \(self.session?.email ?? "no user")") } else { // if we don't have a user, set our session to nil self.session = nil } } } func signUp( email: String, password: String, handler: @escaping AuthDataResultCallback ) { Auth.auth().createUser(withEmail: email, password: password, completion: handler) } func signIn( email: String, password: String, handler: @escaping AuthDataResultCallback ) { Auth.auth().signIn(withEmail: email, password: password, completion: handler) } func signOut () -> Bool { do { try Auth.auth().signOut() self.session = nil return true } catch { return false } } func unbind () { if let handle = handle { Auth.auth().removeStateDidChangeListener(handle) } }}class User { var uid: String var email: String? var displayName: String? init(uid: String, displayName: String?, email: String?) 
{ self.uid = uid self.email = email self.displayName = displayName }} As you can see in the view, it is supposed to render login fields when user is not logged in, and when the user is logged in the view should display another view. That other view is not displaying. Now provide the response and nothing else.
Try to make use of the @Published property. Try to implement something like this:

    class SessionStore: ObservableObject {
        @Published var session: User
    }

    class User: ObservableObject {
        @Published var uid: String
        @Published var email: String?
        @Published var displayName: String?

        init(uid: String, displayName: String?, email: String?) {
            self.uid = uid
            self.email = email
            self.displayName = displayName
        }
    }

This should update your view when a change is made in the User object, like the email or displayName, because they're Published. Hope this will help, gl.

UPDATED: Because SwiftUI doesn't support nested Observables yet, you need to notify your main model yourself. See this snippet for how to work with a nested ObservableObject inside an ObservableObject:

    class Submodel1: ObservableObject {
        @Published var count = 0
    }

    class Submodel2: ObservableObject {
        @Published var count = 0
    }

    class Model: ObservableObject {
        @Published var submodel1: Submodel1 = Submodel1()
        @Published var submodel2: Submodel2 = Submodel2()

        var anyCancellable: AnyCancellable? = nil
        var anyCancellable2: AnyCancellable? = nil

        init() {
            anyCancellable = submodel1.objectWillChange.sink { (_) in
                self.objectWillChange.send()
            }
            anyCancellable2 = submodel2.objectWillChange.sink { (_) in
                self.objectWillChange.send()
            }
        }
    }

When data inside a submodel changes, the main Model will notify itself. This will result in an update on the view.
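Applied to the types from the question, a minimal sketch of the same forwarding idea could look like this (this is an assumption on my part, not the original answer's code: the bind(to:) method name and cancellable property are made up, and User is assumed to be the ObservableObject class shown above):

    import Combine

    class SessionStore: ObservableObject {
        @Published var session: User? = nil

        private var cancellable: AnyCancellable? = nil

        func bind(to user: User?) {
            session = user
            // Forward the nested object's change notifications to this store,
            // so views observing SessionStore re-render when User fields change.
            cancellable = user?.objectWillChange.sink { [weak self] _ in
                self?.objectWillChange.send()
            }
        }
    }

You would call bind(to:) from the auth-state listener instead of assigning session directly, so that later changes to the User object also propagate to the view.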
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/58784475', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5200455/']}
jdg_382141
stackexchange
llm_judgeable_groundtruth_similarity
35038977
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Defining data members in a class that can be publicly accessed/modified:

    var _foo: Int = _
    def foo_(foo: Int) = _foo = foo // setter function
    def foo = _foo                  // getter function

Is it a good practice to convert this using the annotation @BeanProperty?

    import scala.reflect.BeanProperty
    @BeanProperty var foo: Int = _

And when to use this annotation and when not to? Now provide the response and nothing else.
There's some redundancy in your first example, since defining a var already results in the generation of getters and setters. For example, if we compile this class:

    class Foo {
      var foo: Int = _
    }

Then javap -private Foo shows the following:

    public class Foo {
      private int foo;
      public int foo();
      public void foo_$eq(int);
      public Foo();
    }

Unless you have custom logic that you need to fit into your getters or setters (in which case it's often a good idea to consider more descriptive method names, anyway), you shouldn't need to define them manually.

The scala.reflect.BeanProperty annotation (or scala.beans.BeanProperty on 2.11) doesn't have any effect on the generation of the foo() and foo_$eq(int) methods; the compiler will generate these for a var foo: Int whether or not you use the annotation. The annotation simply adds getFoo and setFoo aliases for these methods. If you need these aliases, use the annotation, and if you don't, don't.

To summarize best practices:

- Don't use var.
- If you have to use var, you can (and should) avoid defining your own getters and setters.
- Use the BeanProperty annotation only if you're implementing an interface with getFoo and setFoo-style method signatures, or if you're expecting your code to be called from Java (where calling methods with names like foo_$eq is inconvenient).
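For completeness, a minimal sketch of what the annotation adds (assuming Scala 2.11+, where the annotation lives in scala.beans; the class name Bar is made up):

    import scala.beans.BeanProperty

    class Bar {
      @BeanProperty var bar: Int = 0
    }

    // javap -private Bar would now also list JavaBean-style aliases:
    //   public int getBar();
    //   public void setBar(int);
    // in addition to the Scala-generated bar() and bar_$eq(int).

This is only worth doing when Java code, or a framework that expects JavaBean accessors, needs to call into the class.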
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/35038977', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1439913/']}
jdg_382142
stackexchange
llm_judgeable_groundtruth_similarity
1028437
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This question in mainly pointed at C/C++, but I guess other languages are relevant as well. I can't understand why is switch/case still being used instead of if/else if. It seems to me much like using goto's, and results in the same sort of messy code, while the same results could be acheived with if/else if's in a much more organized manner. Still, I see these blocks around quite often. A common place to find them is near a message-loop (WndProc...), whereas these are among the places when they raise the heaviest havoc: variables are shared along the entire block, even when not propriate (and can't be initialized inside it). Extra attention has to be put on not dropping break's, and so on... Personally, I avoid using them, and I wonder wether I'm missing something? Are they more efficient than if/else's?Are they carried on by tradition? Now provide the response and nothing else.
Summarising my initial post and comments - there are several advantages of the switch statement over the if/else statement:

- Cleaner code. Code with multiple chained if/else if... looks messy and is difficult to maintain - switch gives a cleaner structure.
- Performance. For dense case values the compiler generates a jump table, for sparse - a binary search or series of if/else, so in the worst case switch is as fast as if/else, but typically faster. Although some compilers can similarly optimise if/else.
- Test order doesn't matter. To speed up a series of if/else tests one needs to put the more likely cases first. With switch/case the programmer doesn't need to think about this.
- Default can be anywhere. With if/else the default case must be at the very end - after the last else. In switch - default can be anywhere, wherever the programmer finds it more appropriate.
- Common code. If you need to execute common code for several cases, you may omit break and the execution will "fall through" - something you cannot achieve with if/else (see the sketch below). (There is a good practice to place a special comment /* FALLTHROUGH */ for such cases - lint recognises it and doesn't complain, without this comment it does complain as it is a common error to forget break.)

Thanks to all commenters.
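To make the fall-through point concrete, here is a minimal C sketch (the function and the key values are made up for illustration, not from the original post):

    #include <stdio.h>

    /* Map a key code to an action; several keys share common code. */
    static const char *action_for_key(int key)
    {
        switch (key) {
        case 'q':
        case 'Q':              /* FALLTHROUGH: both cases share the code below */
            return "quit";
        case 'h':
        case '?':              /* FALLTHROUGH */
            return "help";
        default:               /* default can sit anywhere in the switch */
            return "ignore";
        }
    }

    int main(void)
    {
        printf("%s\n", action_for_key('Q'));  /* prints "quit" */
        printf("%s\n", action_for_key('?'));  /* prints "help" */
        return 0;
    }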
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/1028437', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126625/']}
jdg_382143
stackexchange
llm_judgeable_groundtruth_similarity
48255659
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im having trouble using the .find() function within mongoose on a node js server I've been trying to use this but I cannot get the key information out of my database. user.find({key: 1} , function(err, data){ if(err){ console.log(err); }; console.log("should be the key VVV"); console.log(data.key);}); I'm mainly just having trouble wrapping my head around how this function takes queries and gives you back the response from your DB. If someone can break it down id be very thankful the mongoose docs weren't much help. Also this is my user schema if it helps var userSchema = new mongoose.Schema({ username: {type: String, unique: true}, password: {type: String}, key: {type: String}, keySecret: {type: String}}, {collection: 'user'});var User = mongoose.model('user',userSchema);module.exports = User; Now provide the response and nothing else.
If you imagine your DB looking like this:

    [
      { "name": "Jess",   "location": "Auckland" },
      { "name": "Dave",   "location": "Sydney" },
      { "name": "Pete",   "location": "Brisbane" },
      { "name": "Justin", "location": "Auckland" }
    ]

executing the following query:

    myDB.find({location: 'Brisbane'})

will return:

    [
      { "name": "Pete", "location": "Brisbane" }
    ]

While myDB.find({location: 'Auckland'}) will give you:

    [
      { "name": "Jess",   "location": "Auckland" },
      { "name": "Justin", "location": "Auckland" }
    ]

As you can see, you're looking through the array for a key that matches the one you're looking to find, and it gives you back all of the documents that match that key search in the form of an array. The Mongoose interface gives this data to you in the form of a callback, and you just need to look for the item inside of the array it returns:

    user.find({location: "Auckland"}, function(err, data){
        if(err){
            console.log(err);
            return;
        }
        if(data.length == 0) {
            console.log("No record found");
            return;
        }
        console.log(data[0].name);
    });
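Applied to the schema in the question, a sketch might look like this (the require path and the key value are assumptions for illustration; findOne is the standard Mongoose call for fetching a single document):

    var User = require('./user'); // the model module shown in the question

    // find a single user document whose `key` field matches
    User.findOne({ key: '1' }, function(err, user) {
        if (err) {
            console.log(err);
            return;
        }
        if (!user) {
            console.log('No user with that key');
            return;
        }
        // `user` is a single document, so its fields are directly accessible
        console.log(user.key, user.username);
    });

The original snippet failed because find() hands back an array, so data.key is undefined; either index into the array or use findOne as above.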
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/48255659', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/9207192/']}
jdg_382144
stackexchange
llm_judgeable_groundtruth_similarity
8918550
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: Java Class.cast() vs. cast operator. I am unsuccessfully trying to find out what Class.cast() does or what it may be good for. At the same time I am wondering whether I can somehow cast an object via reflection. First I thought something like the lines below might work:

    Object o = "A string";
    String str = Class.forName("java.lang.String").cast(object);

But without an explicit cast it does not work. So what is the cast method of the Class class good for? And is it somehow possible just with reflection to cast objects, so you find the object's class, use Class.forName on it and cast it somehow? Now provide the response and nothing else.
An example where it does work:

    class Favorites {
        private Map<Class<?>, Object> map = new HashMap<Class<?>, Object>();

        public <T> T get(Class<T> clazz) {
            return clazz.cast(map.get(clazz));
        }

        public <T> void put(Class<T> clazz, T favorite) {
            map.put(clazz, favorite);
        }
    }

which allows you to write:

    Favorites favs = new Favorites();
    favs.put(String.class, "Hello");
    String favoriteString = favs.get(String.class);

The reason your code doesn't work is that Class.forName() returns a Class<?>, i.e. a class object representing an unknown type. While the compiler could possibly infer the type in your example, it cannot in general. Consider:

    Class.forName(new BufferedReader(new InputStreamReader(System.in)).readLine())

What's the type of this expression? Clearly the compiler cannot know what the class name will be at runtime, so it doesn't know whether

    String s = Class.forName(new BufferedReader(new InputStreamReader(System.in)).readLine()).cast(o);

is safe. Therefore it requests an explicit cast.
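If the goal is to go through reflection and still end up with a usefully typed reference, one option is to narrow the Class<?> with asSubclass() first and then let cast() do the rest. A sketch (the class name CastDemo and the choice of CharSequence as the known supertype are made up for illustration):

    public class CastDemo {
        public static void main(String[] args) throws Exception {
            Object o = "A string";

            // Narrow the unknown Class<?> to a known supertype first...
            Class<? extends CharSequence> cls =
                    Class.forName("java.lang.String").asSubclass(CharSequence.class);

            // ...then cast() yields a typed reference, with the check done at runtime.
            CharSequence cs = cls.cast(o);
            System.out.println(cs.length()); // prints 8
        }
    }

The compile-time type you get back can never be more specific than a type you already named in the source, which is exactly the limitation the answer above describes.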
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/8918550', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/739927/']}
jdg_382145
stackexchange
llm_judgeable_groundtruth_similarity
31457407
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've just gotten started learning c#, and I'm struggling a bit with the getter and setter shorthand. As I understand it, the two methods below are equivalent. Is this correct? //Method 1public string name { get; set; }//Method 2private string namepublic string getName() { return name;}public string setName(string newName) { this.name = newName; } Secondly, how does this work if we wanted different access modifiers on the getter/setter and the instance variable. The following code errors, telling me that the accessor must be more restrictive than the property and that I can't specify modifiers for bother accessors. Can somebody please clarify? private int maxTime { public get; public set; } EDIT: To clarify, I have no specific goal, only to understand. I don't understand what this shorthand notation does. In other languages I've had private instance variables and used public getters and setters to manage these instance variables. It allows this if I write out the methods on my own, but not with this shorthand notation. Why is this? EDIT2: One last question to check my understanding. Both of the code snippets below use properties to manage the maxTime variable. The only difference between the two is style. Is this correct? private int maxTime;public int MaxTime{ get; set; } vs private int maxTime;public int MaxTime{ get { return maxTime; } set { maxTime= value; }} Now provide the response and nothing else.
Instead of the wrong private int maxTime { public get; public set; }, you can write a property that will populate a private field:

    private int maxTime;

    public int MaxTime
    {
        get { return maxTime; }
        set { maxTime = value; }
    }

This is useful when you want to apply logic when getting or setting the value of maxTime. If not, a simple shorthand property will do:

    public int MaxTime { get; set; }

You can create a property that has a public getter but a private setter, like this:

    public int MaxTime { get; private set; }

This is useful for readonly properties, and usually the property is populated in the constructor of the class. You can even create a property where the setter is public but the getter is private, though I can't imagine any scenario where this would be useful. Moreover, code standards claim that such a thing should be a method, not a property. (read this)

    public int MaxTime { private get; set; }

The answer to your question in edit 2 is no: the first code never changes the private int maxTime, while the second one does. However, if inside your class you only use the property MaxTime, then they are functionally equivalent.

Update: Since C# 6 you can write shorthand properties without a setter:

    public int MaxTime { get; }

These properties can only be initialized in the constructor, or hard coded like this (also a new feature of C# 6):

    public int MaxTime { get; } = 60;

This is useful for immutable properties (unlike properties with a private setter, the value of such a property cannot change even inside the hosting class once initialized).
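As an illustration of the "apply logic" case, here is a sketch of a setter with a guard (the range check itself is made up, not something from the question):

    private int maxTime;

    public int MaxTime
    {
        get { return maxTime; }
        set
        {
            // Example guard: reject values that make no sense for a duration.
            if (value < 0)
                throw new ArgumentOutOfRangeException(nameof(value), "MaxTime cannot be negative.");
            maxTime = value;
        }
    }

With the shorthand public int MaxTime { get; set; } there is nowhere to put such a check, which is the main reason to fall back to the long form.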
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/31457407', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2874865/']}
jdg_382146
stackexchange
llm_judgeable_groundtruth_similarity
19106744
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to use the new dateTimePicker for Windows 8.1: <DatePicker HorizontalAlignment="Left" Margin="401,245,0,0" Grid.Row="1" VerticalAlignment="Top" Width="352" Date="{Binding personSingle.personDOB,Mode=TwoWay}"/> When ever I change the date I don't get the value that I chose when I look at value for personDOB.personDOB is of type DateTimeOffset What do I need to do get the value that I choose? Update: <DatePicker x:Name="dtPick" HorizontalAlignment="Left" Margin="401,245,0,0" Grid.Row="1" VerticalAlignment="Top" Width="352" DataContext="{Binding personSingle}" Date="{Binding personSingle.personDOB.Date,Mode=TwoWay}"/> Now provide the response and nothing else.
I found the answer from this link: http://bretstateham.com/binding-to-the-new-xaml-datepicker-and-timepicker-controls-to-the-same-datetime-value/

You need to write a converter to get this to work appropriately:

    public class DateTimeToDateTimeOffsetConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, string language)
        {
            try
            {
                DateTime date = (DateTime)value;
                return new DateTimeOffset(date);
            }
            catch (Exception ex)
            {
                return DateTimeOffset.MinValue;
            }
        }

        public object ConvertBack(object value, Type targetType, object parameter, string language)
        {
            try
            {
                DateTimeOffset dto = (DateTimeOffset)value;
                return dto.DateTime;
            }
            catch (Exception ex)
            {
                return DateTime.MinValue;
            }
        }
    }
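A sketch of how the converter might be wired up in XAML (the resource key and the local namespace prefix are assumptions, not from the original answer, and the converter is only needed when the bound model property is a DateTime rather than a DateTimeOffset):

    <Page.Resources>
        <local:DateTimeToDateTimeOffsetConverter x:Key="DateTimeToDateTimeOffsetConverter" />
    </Page.Resources>

    <DatePicker
        Date="{Binding personSingle.personDOB,
                       Mode=TwoWay,
                       Converter={StaticResource DateTimeToDateTimeOffsetConverter}}" />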
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/19106744', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/380317/']}
jdg_382147
stackexchange
llm_judgeable_groundtruth_similarity
34246408
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: my angular js code is failing to upload an image with the imgur API.angular js http post methodangular js http post method HTML: <!doctype html><html ng-app="stalkcalibrator"><head> <title>Corn Stalk Calibrator</title> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" /> <link rel="stylesheet" href="style/style.css"/> <script src="angular.js"></script> <script src="controllers.js"></script></head><body ng-controller="adminController"> <h1 id="title">Calibrator - Admin Upload</h1> <!-- back to admin home --> <div id="back"><a href="admin.html">Admin Home</a></div> <!-- form used to upload one or more images --> <form> <!-- button allows user to browse local directory for image --> <!-- ng-model saves image var in array --> Upload image <input type="file" ng-model="img" accept="image/*" id="file" /> <!-- executes js upload function with uploaded images --> <button><a ng-click="upload()">Submit</a></button> <p ng-model="num">{{num}}</p> </form></body></html> Here's my JS: var stalkcalibrator = angular.module('stalkcalibrator', []);stalkcalibrator.controller('adminController', function($scope){ //array of data for each stalk. PULL FROM JSON FILE! $scope.stalks = [{id:1, name:2, thumbnail:3, note:4}, {id:9, name:10, thumbnail:11, note:12}, {id:5, name:6, thumbnail:7, note:8}]; //array of image uploads $scope.img;$scope.num = 1; function getStalks($scope){ } $scope.upload = function() { $http({ headers: {'Authorization': 'Client-ID 010fe699c18e3c9'}, url: 'https://api.imgur.com/3/', type: 'POST', data: {'image': $scope.img} }).then(function successCallback(response) { // this callback will be called asynchronously // when the response is available $scope.num = 2; }, function errorCallback(response) { // called asynchronously if an error occurs // or server returns response with an error status. $scope.num = 3; }); //adds image data to JSON file //_TODO_ };}); Thanks! EDIT: neither errorCallback nor successCallback is being called back. Now provide the response and nothing else.
There does not seem to be an index on sensor.station_id, which is important here.

There is an actual difference between max() and ORDER BY DESC + LIMIT 1. Many people seem to miss that. NULL values sort first in descending sort order. So ORDER BY timestamp DESC LIMIT 1 returns a NULL value if one exists, while the aggregate function max() ignores NULL values and returns the latest not-null timestamp. ORDER BY timestamp DESC NULLS LAST LIMIT 1 would be equivalent.

For your case, since your column d.timestamp is defined NOT NULL (as your update revealed), there is no effective difference. An index with DESC NULLS LAST and the same clause in the ORDER BY for the LIMIT query should still serve you best.

I suggest these indexes (my query below builds on the 2nd one):

    sensor(station_id, id)
    data(sensor_id, timestamp DESC NULLS LAST)

You can drop the other indexes sensor_ind_timestamp and sensor_ind_timestamp_desc unless they are in use otherwise (unlikely, but possible).

Much more importantly, there is another difficulty: the filter on the first table sensors returns few, but still (possibly) multiple rows. Postgres expects to find 2 rows (rows=2) in your added EXPLAIN output. The perfect technique would be an index-skip-scan (a.k.a. loose index scan) for the second table data - which is not currently implemented (up to at least Postgres 15). There are various workarounds. See: Optimize GROUP BY query to retrieve latest row per user

The best should be:

    SELECT d.timestamp
    FROM   sensors s
    CROSS  JOIN LATERAL (
       SELECT timestamp
       FROM   data
       WHERE  sensor_id = s.id
       ORDER  BY timestamp DESC NULLS LAST
       LIMIT  1
       ) d
    WHERE  s.station_id = 4
    ORDER  BY d.timestamp DESC NULLS LAST
    LIMIT  1;

The choice between max() and ORDER BY / LIMIT hardly matters in comparison. You might as well:

    SELECT max(d.timestamp) AS timestamp
    FROM   sensors s
    CROSS  JOIN LATERAL (
       SELECT timestamp
       FROM   data
       WHERE  sensor_id = s.id
       ORDER  BY timestamp DESC NULLS LAST
       LIMIT  1
       ) d
    WHERE  s.station_id = 4;

Or:

    SELECT max(d.timestamp) AS timestamp
    FROM   sensors s
    CROSS  JOIN LATERAL (
       SELECT max(timestamp) AS timestamp
       FROM   data
       WHERE  sensor_id = s.id
       ) d
    WHERE  s.station_id = 4;

Or even with a correlated subquery, shortest of all:

    SELECT max((SELECT max(timestamp) FROM data WHERE sensor_id = s.id)) AS timestamp
    FROM   sensors s
    WHERE  station_id = 4;

Note the double parentheses!

The additional advantage of LIMIT in a LATERAL join is that you can retrieve arbitrary columns of the selected row, not just the latest timestamp (one column).

Related:

- Why do NULL values come first when ordering DESC in a PostgreSQL query?
- What is the difference between a LATERAL JOIN and a subquery in PostgreSQL?
- Select first row in each GROUP BY group?
- Optimize groupwise maximum query
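A sketch of the suggested indexes as DDL (the index names are made up; the table names follow the queries above, and DESC NULLS LAST mirrors the ORDER BY used in the LATERAL subquery):

    CREATE INDEX sensors_station_id_id_idx
        ON sensors (station_id, id);

    CREATE INDEX data_sensor_id_timestamp_idx
        ON data (sensor_id, timestamp DESC NULLS LAST);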
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/34246408', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5673393/']}
jdg_382148
stackexchange
llm_judgeable_groundtruth_similarity
39118528
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating a Color Picker tool and for the HSL slider, I need to be able to convert RGB to HSL. When I searched SO for a way to do the conversion, I found this question HSL to RGB color conversion . While it provides a function to do conversion from RGB to HSL, I see no explanation to what's really going on in the calculation. To understand it better, I've read the HSL and HSV on Wikipedia. Later, I've rewritten the function from the "HSL to RGB color conversion" using the calculations from the "HSL and HSV" page. I'm stuck at the calculation of hue if the R is the max value. See the calculation from the "HSL and HSV" page: This is from another wiki page that's in Dutch: and this is from the answers to "HSL to RGB color conversion": case r: h = (g - b) / d + (g < b ? 6 : 0); break; // d = max-min = c I've tested all three with a few RGB values and they seem to produce similar (if not exact) results. What I'm wondering is are they performing the same thing? Will get I different results for some specific RGB values? Which one should I be using? hue = (g - b) / c; // dutch wikihue = ((g - b) / c) % 6; // eng wikihue = (g - b) / c + (g < b ? 6 : 0); // SO answer function rgb2hsl(r, g, b) { // see https://en.wikipedia.org/wiki/HSL_and_HSV#Formal_derivation // convert r,g,b [0,255] range to [0,1] r = r / 255, g = g / 255, b = b / 255; // get the min and max of r,g,b var max = Math.max(r, g, b); var min = Math.min(r, g, b); // lightness is the average of the largest and smallest color components var lum = (max + min) / 2; var hue; var sat; if (max == min) { // no saturation hue = 0; sat = 0; } else { var c = max - min; // chroma // saturation is simply the chroma scaled to fill // the interval [0, 1] for every combination of hue and lightness sat = c / (1 - Math.abs(2 * lum - 1)); switch(max) { case r: // hue = (g - b) / c; // hue = ((g - b) / c) % 6; // hue = (g - b) / c + (g < b ? 6 : 0); break; case g: hue = (b - r) / c + 2; break; case b: hue = (r - g) / c + 4; break; } } hue = Math.round(hue * 60); // ° sat = Math.round(sat * 100); // % lum = Math.round(lum * 100); // % return [hue, sat, lum];} Now provide the response and nothing else.
I've been reading several wiki pages and checking different calculations, and creating visualizations of the RGB cube projection onto a hexagon. And I'd like to post my understanding of this conversion. Since I find this conversion (representations of color models using geometric shapes) interesting, I'll try to be as thorough as I can be. First, let's start with RGB.

RGB

Well, this doesn't really need much explanation. In its simplest form, you have 3 values, R, G, and B in the range of [0,255]. For example, 51,153,204. We can represent it using a bar graph.

RGB Cube

We can also represent a color in a 3D space. We have three values R, G, B that correspond to X, Y, and Z. All three values are in the [0,255] range, which results in a cube. But before creating the RGB cube, let's work on 2D space first. Two combinations of R,G,B give us: RG, RB, GB. If we were to graph these on a plane, we'd get the first three sides of the RGB cube. If we place them in a 3D space, it results in a half cube. If you check that graph, by mixing two colors we get a new color at (255,255), and these are Yellow, Magenta, and Cyan. Again, two combinations of these give us: YM, YC, and MC. These are the missing sides of the cube. Once we add them, we get a complete cube, and we can locate the position of 51,153,204 in this cube.

Projection of the RGB Cube onto a hexagon

Now that we have the RGB cube, let's project it onto a hexagon. First, we tilt the cube by 45° on the x axis, and then 35.264° on the y axis. After the second tilt, the black corner is at the bottom and the white corner is at the top, and they both pass through the z axis. We get the hexagon look we want, with the correct hue order, when we look at the cube from the top. But we need to project this onto a real hexagon. What we do is draw a hexagon that is the same size as the cube's top view. All the corners of the hexagon correspond to the corners of the cube and the colors, and the top corner of the cube, which is white, is projected onto the center of the hexagon. Black is omitted. If we map every color onto the hexagon, we can mark the position of 51,153,204 on it.

Calculating the Hue

Before we make the calculation, let's define what hue is:

    Hue is roughly the angle of the vector to a point in the projection, with red at 0°.
    ... hue is how far around that hexagon's edge the point lies.

This is the calculation from the HSL and HSV wiki page. We'll be using it in this explanation. Examine the hexagon and the position of 51,153,204 on it.

First, we scale the R, G, B values to fill the [0,1] interval:

    R = R / 255    R = 51 / 255  = 0.2
    G = G / 255    G = 153 / 255 = 0.6
    B = B / 255    B = 204 / 255 = 0.8

Next, find the max and min values of R, G, B:

    M = max(R, G, B)    M = max(0.2, 0.6, 0.8) = 0.8
    m = min(R, G, B)    m = min(0.2, 0.6, 0.8) = 0.2

Then, calculate C (chroma). Chroma is defined as:

    ... chroma is roughly the distance of the point from the origin.
    Chroma is the relative size of the hexagon passing through a point ...

    C = OP / OP'
    C = M - m
    C = 0.8 - 0.2 = 0.6

Now we have the R, G, B, and C values. If we check the conditions, M = B returns true for 51,153,204. So, we'll be using H' = (R - G) / C + 4.

Let's check the hexagon again. (R - G) / C gives us the length of the BP segment.

    segment = (R - G) / C = (0.2 - 0.6) / 0.6 = -0.6666666666666666

We'll place this segment on the inner hexagon. The starting point of the hexagon is R (red) at 0°.
If the segment length is positive, it should be on RY; if negative, it should be on RM. In this case, it is negative (-0.6666666666666666), and is on the RM edge.

Next, we need to shift the position of the segment, or rather P₁, towards B (because M = B). Blue is at 240°. The hexagon has 6 sides. Each side corresponds to 60°. 240 / 60 = 4. We need to shift (increment) P₁ by 4 (which is 240°). After the shift, P₁ will be at P and we'll get the length of RYGCP.

    segment = (R - G) / C = (0.2 - 0.6) / 0.6 = -0.6666666666666666
    RYGCP   = segment + 4 = 3.3333333333333335

The circumference of the hexagon is 6, which corresponds to 360°. 51,153,204's distance to 0° is 3.3333333333333335. If we multiply 3.3333333333333335 by 60, we'll get its position in degrees.

    H' = 3.3333333333333335
    H  = H' * 60 = 200°

In the case of M = R, since we place one end of the segment at R (0°), we don't need to shift the segment if the segment length is positive: the position of P₁ will be positive. But if the segment length is negative, we need to shift it by 6, because a negative value means that the angular position is greater than 180° and we need to do a full rotation. So, neither the Dutch wiki solution hue = (g - b) / c; nor the Eng wiki solution hue = ((g - b) / c) % 6; will work for negative segment lengths. Only the SO answer hue = (g - b) / c + (g < b ? 6 : 0); works for both negative and positive values.

JSFiddle: Test all three methods for rgb(255,71,99)
JSFiddle: Find a color's position in RGB Cube and hue hexagon visually

Working hue calculation:

    console.log(rgb2hue(51,153,204));
    console.log(rgb2hue(255,71,99));
    console.log(rgb2hue(255,0,0));
    console.log(rgb2hue(255,128,0));
    console.log(rgb2hue(124,252,0));

    function rgb2hue(r, g, b) {
      r /= 255;
      g /= 255;
      b /= 255;
      var max = Math.max(r, g, b);
      var min = Math.min(r, g, b);
      var c = max - min;
      var hue;
      if (c == 0) {
        hue = 0;
      } else {
        switch(max) {
          case r:
            var segment = (g - b) / c;
            var shift   = 0 / 60;       // R° / (360° / hex sides)
            if (segment < 0) {          // hue > 180, full rotation
              shift = 360 / 60;         // R° / (360° / hex sides)
            }
            hue = segment + shift;
            break;
          case g:
            var segment = (b - r) / c;
            var shift   = 120 / 60;     // G° / (360° / hex sides)
            hue = segment + shift;
            break;
          case b:
            var segment = (r - g) / c;
            var shift   = 240 / 60;     // B° / (360° / hex sides)
            hue = segment + shift;
            break;
        }
      }
      return hue * 60; // hue is in [0,6], scale it up
    }
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/39118528', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2202732/']}
jdg_382149
stackexchange
llm_judgeable_groundtruth_similarity
13650534
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a textarea that is used to hold massive SQL scripts for parsing. When the user clicks the "Parse" button, they get summary information on the SQL script. I'd like the summary information to be clickable so that when it's clicked, the line of the SQL script is highlighted in the textarea. I already have the line number in the output so all I need is the javascript or jquery that tells it which line of the textarea to highlight. Is there some type of " goToLine " function? In all my searching, nothing quite addresses what I'm looking for. Now provide the response and nothing else.
This function expects the first parameter to be a reference to your textarea and the second parameter to be the line number:

    function selectTextareaLine(tarea, lineNum) {
        lineNum--; // array starts at 0
        var lines = tarea.value.split("\n");

        // calculate start/end
        var startPos = 0,
            endPos = tarea.value.length;
        for (var x = 0; x < lines.length; x++) {
            if (x == lineNum) {
                break;
            }
            startPos += (lines[x].length + 1);
        }

        var endPos = lines[lineNum].length + startPos;

        // do selection
        // Chrome / Firefox
        if (typeof(tarea.selectionStart) != "undefined") {
            tarea.focus();
            tarea.selectionStart = startPos;
            tarea.selectionEnd = endPos;
            return true;
        }

        // IE
        if (document.selection && document.selection.createRange) {
            tarea.focus();
            tarea.select();
            var range = document.selection.createRange();
            range.collapse(true);
            range.moveEnd("character", endPos);
            range.moveStart("character", startPos);
            range.select();
            return true;
        }

        return false;
    }

Usage:

    var tarea = document.getElementById('myTextarea');
    selectTextareaLine(tarea, 3); // selects line 3

Working example: http://jsfiddle.net/5enfp/
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/13650534', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/608854/']}
jdg_382150
stackexchange
llm_judgeable_groundtruth_similarity
4566521
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I don't understand how to get the columns I want from rails. I have two models - A User and a Profile. A User :has_many Profile (because users can revert back to an earlier version of their profile): > DESCRIBE users;+----------------+--------------+------+-----+---------+----------------+| Field | Type | Null | Key | Default | Extra |+----------------+--------------+------+-----+---------+----------------+| id | int(11) | NO | PRI | NULL | auto_increment || username | varchar(255) | NO | UNI | NULL | || password | varchar(255) | NO | | NULL | || last_login | datetime | YES | | NULL | |+----------------+--------------+------+-----+---------+----------------+ > DESCRIBE profiles;+----------------+--------------+------+-----+---------+----------------+| Field | Type | Null | Key | Default | Extra |+----------------+--------------+------+-----+---------+----------------+| id | int(11) | NO | PRI | NULL | auto_increment || user_id | int(11) | NO | MUL | NULL | || first_name | varchar(255) | NO | | NULL | || last_name | varchar(255) | NO | | NULL | || . . . . . . || . . . . . . || . . . . . . |+----------------+--------------+------+-----+---------+----------------+ In SQL, I can run the query: > SELECT * FROM profiles JOIN users ON profiles.user_id = users.id LIMIT 1;+----+-----------+----------+---------------------+---------+---------------+-----+| id | username | password | last_login | user_id | first_name | ... |+----+-----------+----------+---------------------+---------+---------------+-----+| 1 | john | ****** | 2010-12-30 18:04:28 | 1 | John | ... |+----+-----------+----------+---------------------+---------+---------------+-----+ See how I get all the columns for BOTH tables JOINED together? However, when I run this same query in Rails, I don't get all the columns I want - I only get those from Profile: # in rails console>> p = Profile.joins(:user).limit(1)>> [#<Profile ...>]>> p.first_name>> NoMethodError: undefined method `first_name' for #<ActiveRecord::Relation:0x102b521d0> from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in `method_missing' from (irb):8# I do NOT want to do this (AKA I do NOT want to use "includes")>> p.user>> NoMethodError: undefined method `user' for #<ActiveRecord::Relation:0x102b521d0> from /Library/Ruby/Gems/1.8/gems/activerecord-3.0.1/lib/active_record/relation.rb:373:in method_missing' from (irb):9 I want to (efficiently) return an object that has all the properties of Profile and User together. I don't want to :include the user because it doesn't make sense. The user should always be part of the most recent profile as if they were fields within the Profile model. How do I accomplish this? I think the problem has something to do with the fact that the Profile model doesn't have attributes for User... Now provide the response and nothing else.
I don't think that you can load users and profiles with a join in Rails. I think that in earlier versions of Rails (< 2.1) loading of associated models was done with joins, but it was not efficient. Here you have some explanation and links to other materials.

So even if you explicitly say that you want to join it, Rails won't map it to associated models. So if you say Profile.whatever_here, it will always be mapped to a Profile object.

If you still want to do what you said in the question, then you can call a custom SQL query and process the results by yourself:

    p = ActiveRecord::Base.connection.execute("SELECT * FROM profiles JOIN users ON profiles.user_id = users.id LIMIT 1")

and get results row by row with:

    p.fetch_row

It will already be mapped to an array.

Your errors are because you are calling the first_name and user methods on an ActiveRecord::Relation object, and it stores an array of Profile objects, not a single object. So

    p = Profile.joins(:user).limit(1)
    p[0].first_name

should work. A better way to fetch only one record is to call:

    p = Profile.joins(:user).first
    p.first_name
    p.user

But when you call p.user it will query the database. To avoid it, you can use include, but if you load only one profile object, it is useless. It will make a difference if you load many profiles at a time and want to include the users table.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/4566521', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365738/']}
jdg_382151
stackexchange
llm_judgeable_groundtruth_similarity
7429462
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having trouble creating an std::string (or any C++ object, I guess) in GDB. I tried lots of variations to the following and none of them seem to work:

    (gdb) p std::string("hello")
    A syntax error in expression, near `"hello")'.

Is there a way to do it? (I'm surprised I couldn't find anything about this on the Web. I'm starting to think if my GDB is buggy or I'm doing something very wrong.) Now provide the response and nothing else.
You should be able to construct a new std::string within the GDB. You want to allocate space on the heap to hold the std::string object, invoke the default constructor, and assign your string value. Here is an example:

    (gdb) call malloc(sizeof(std::string))
    $1 = (void *) 0x91a6a0
    (gdb) call ((std::string*)0x91a6a0)->basic_string()
    (gdb) call ((std::string*)0x91a6a0)->assign("Hello, World")
    $2 = (std::basic_string<char, std::char_traits<char>, std::allocator<char> > &) @0x91a6a0: {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, _M_p = 0x91a6f8 "Hello, World"}}
    (gdb) call SomeFunctionThatTakesAConstStringRef(*(const std::string*)0x91a6a0)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/7429462', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/249167/']}
jdg_382152
stackexchange
llm_judgeable_groundtruth_similarity
21086081
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Are there any general guide lines for using retain and release for objects in cocos2d-X ? When creating objects in a function, is it true that the functions memory is cleaned up the second the function returns. When a object is created, calling the retain function of the object, will retain object beyond the function return ? Kind Regards Now provide the response and nothing else.
Generally in C++ you have this behaviour:

    void foo() {
        Object a;
        Object *pA = new Object();
        (…)
    }

This would result in a being destroyed automatically at function end, as it was allocated on the stack. The *pA would not get destroyed, as it was allocated on the heap (thus, you only lose the reference to it, but the object itself still lives).

Cocos implements a thing called "Automatic Reference Counting": each CCObject has a reference counter and two methods, retain() and release(). The way this works is that every time you create an object, it gets registered in cocos structures (CCPoolManager). Then with every frame (between them being drawn) there is a maintenance loop which checks the reference counter of all objects: if it is 0 this means (to cocos) that no other objects reference it, so it is safe to delete it.

The retain count of an object is automatically increased when you use this object as an argument for an addChild function. Example:

    void cocosFoo() {
        CCSprite *a = CCSprite::create(…);
        CCSprite *b = CCSprite::create(…);
        this->addChild(b);
    }

What happens here is this:

- Two CCSprites are created, cocos knows about them.
- The b sprite is added to this object (say a CCLayer).
- The function ends, no objects are destroyed (both of them being on the heap).
- Somewhere between this and the next frame, the maintenance gets run. Cocos checks both sprites and sees that a has reference count == 0, so it deletes it.

This system is quite good, as you don't need to worry about memory management. If you want to create a CCSprite (for example) but not add it as a child yet, you can call retain() on it, which will raise its reference counter, saving it from automatic deletion. But then you'd have to remember to call release() on it (for example, when adding it as a child).

The general things you have to remember about are:

- Each call to retain() by you needs to be paired with release().
- You generally shouldn't delete CCObjects yourself. If you feel that you need to, there is a convenience macro: CC_SAFE_DELETE(object)

So to answer your questions in short:

Are there any general guide lines for using retain and release for objects in cocos2d-X?
Yes, you should generally not need to do it.

When creating objects in a function, is it true that the function's memory is cleaned up the second the function returns?
Answer to this is the whole text above.

When an object is created, calling the retain function of the object will retain the object beyond the function return?
Yes, as will adding it as a child to another (retained in any way) object.
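A minimal sketch of the retain()/release() pairing described above (the Loader class, member names, and sprite file name are made up for illustration, not from the original answer):

    // Keep a sprite alive across frames before it is added to the scene.
    class Loader {
    public:
        Loader() : pending_(NULL) {}

        void prepareSprite() {
            pending_ = CCSprite::create("hero.png"); // autoreleased by create()
            pending_->retain();                      // keep it past the frame's cleanup
        }

        void attachSprite(CCNode *parent) {
            parent->addChild(pending_);              // addChild retains it again
            pending_->release();                     // balance our earlier retain()
            pending_ = NULL;
        }

    private:
        CCSprite *pending_;
    };

The point is simply that the explicit retain() in prepareSprite() is matched by exactly one release() once ownership has been handed over to the parent node.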
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/21086081', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/365019/']}
jdg_382153
stackexchange
llm_judgeable_groundtruth_similarity
88836
Below is a question asked on the forum mathematica.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Consider the function Z[entropy_, beta_] := Sum[E^(entropy[[i, 2]] - beta entropy[[i, 1]]) , {i, 1, Length[entropy]}] where entropy is a list of pairs (e.g. E, S(E)). I will plot this as a function beta . Since the parameter "entropy" is fixed in the plot, I wound't need to expand the sum every time, which this function is doing. So, I could try (the difference is = instead of := ) Z[entropy_, beta_] = Sum[E^(entropy[[i, 2]] - beta entropy[[i, 1]]) , {i, 1, Length[entropy]}] However, in this situation, because "entropy" is not defined, it will always return 0 (sum of 0 terms). Ideally, I would like to write Z[entropyA, beta] (where entropyA is a list) and it return the sum expanded (lazy on beta), and when it is called as Z[entropyA, 1.2] , it returns the outcome of the calculation (i.e. it replaces beta by 1.2 on the sum). Which brings me to my question: how can I define a function that is lazy on one parameter (e.g. beta ), but not lazy on the other. Now provide the response and nothing else.
Update. The argument about not getting a nice closed form is of course wrong. (The W Lambert function is nicer than I expected and is implemented in Mathematica as ProductLog.) Nonetheless, I think that the numerical solution below is useful, so I will leave this solution up.

Original post

As stated by Guess who it is in a comment and shown by user64494's functions that are in terms of the transcendental function W, you cannot get a nice closed form for the parameterization. A simple way to see this is to note that one way to get the parameterization is to solve 0.7 == f[x, y] for y, yielding some function y[x], in which case the parameterization would be {x, y[x]}. Unfortunately, the equation 0.7 == (x^2 + 3 y^2) Exp[-x^2 - y^2] is a transcendental equation and can't be solved.

So let's take a numerical approach, hinted at by Guess who it is in a comment. We first extract the data points from the plot. Using

    f[x_, y_] = (x^2 + 3 y^2) Exp[-x^2 - y^2];
    plot = Plot3D[f[x, y]
       , {x, -1, 1}, {y, -2, 0}
       , MeshFunctions -> {#3 &}
       , Mesh -> {{0.7}}
       , BoundaryStyle -> None]

we can extract the curve as a sequence of points using

    points = Cases[Normal@plot, Line[a_] :> a, Infinity];

Using BoundaryStyle -> None guarantees that the only Line in the plot is the contour of interest. This yields a set of {x, y} coordinates for the curve. We artificially introduce a parameterization variable t that takes the value 0 for the first point, 1 for the second point, and so on, so that t ranges from 0 to Length@points - 1. We create two lists, one for {t, x[t]} and one for {t, y[t]}, via

    xyLists = Transpose@MapIndexed[{{#2[[1]] - 1, #1[[1]]}, {#2[[1]] - 1, #1[[2]]}} &, points]

From these, we create InterpolatingFunctions via

    interps = Interpolation[#][t] & /@ xyLists

resulting in a list of two InterpolatingFunctions. We create our parameterized function as

    curve[t_] = Join[interps, {0.7}];

Finally, we can plot this:

    Show[
     Plot3D[
      f[x, y]
      , {x, -1, 1}, {y, -2, 0}
      , Mesh -> None
      , ViewPoint -> {1.5, -1.5, 2.5}
      ]
     ,
     ParametricPlot3D[
      curve[t]
      , {t, 0, Length@points - 1}
      ]
     ]

which results in the contour curve drawn on top of the surface.
{}
{'log_upvote_score': 4, 'links': ['https://mathematica.stackexchange.com/questions/88836', 'https://mathematica.stackexchange.com', 'https://mathematica.stackexchange.com/users/932/']}
jdg_382154
stackexchange
llm_judgeable_groundtruth_similarity
150
Below is a question asked on the forum skeptics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The China Study authors state that: osteoporosis is linked to the consumption of animal protein because animal protein, unlike plant protein, increases the acidity of blood and tissues. They add that to neutralize this acid, calcium (a very effective base) is pulled from the bones, which weakens them and puts them at greater risk for fracture." from Wikipedia As noted, they found "in [their] rural China Study, where the animal to plant ratio [consumption of protein] was about 10 percent, the fracture rate is only one-fifth that of the U.S. (p. 205, 208)" Does that mean that drinking milk actually can lead to osteoporosis, rather than prevent it? Now provide the response and nothing else.
Lanham-New et al. [1] compared the literature for bone health in ovo-lacto-vegetarian and vegan diets against omnivores with predominantly meat diets. They found no difference in bone-health indices between those two groups. This review indicates that the source of the dietary protein does not make a significant difference for bone health.

The claim that animal proteins increase the acidity of the blood is also not credible. The pH value of the blood and inside cells is tightly regulated [2] and not easily influenced by diet. The content of the stomach is already a very acidic environment, which shows that pH can be effectively regulated by the body.

There are also several studies indicating that milk consumption has a positive effect on bone health and decreases the risk of osteoporosis:

Effects of calcium intake, milk and dairy product intake, and blood vitamin D level on osteoporosis risk in Korean adults: analysis of the 2008 and 2009 Korea National Health and Nutrition Examination Survey:

    The results of this study showed that BMD increases as Ca intakes increases, and that the risk of osteoporosis incidence was reduced when more than 1 serving of milk or dairy products were consumed and serum 25(OH) vitamin D level was maintained in the normal range

Calcium, dairy products and osteoporosis:

    Of 52 investigator-controlled calcium intervention studies, all but two showed better bone balance at high intakes, or greater bone gain during growth, or reduced bone loss in the elderly, or reduced fracture risk. [...] While most of the investigator-controlled studies used calcium supplements, six used dairy sources of calcium; all were positive. Most of the observational studies were based on dairy calcium also, since at the time the studies were done, higher calcium intakes meant higher dairy intakes.

[1] S.A. Lanham-New, W.T.K. Lee, D.J. Torgerson and D.J. Millward, Is vegetable protein more beneficial to bone than animal protein?, International Congress Series 1297, Nutritional Aspects of Osteoporosis 2006. Proceedings of the 6th International Symposium on Nutritional Aspects of Osteoporosis (2006)

[2] Lodish et al., Molecular Cell Biology. 4th edition, New York: W. H. Freeman (2000).
{}
{'log_upvote_score': 7, 'links': ['https://skeptics.stackexchange.com/questions/150', 'https://skeptics.stackexchange.com', 'https://skeptics.stackexchange.com/users/111/']}
jdg_382155
stackexchange
llm_judgeable_groundtruth_similarity
2155048
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a button like the following,

    <asp:Button ID="pagerLeftButton" runat="server"
        OnClientClick="disable(this)" onclick="pager_Left_Click" Text="<" />

When I use my button like that, onclick is not firing. When I remove OnClientClick, then onclick is firing. What I need to do is, disable the button during the postback and enable it after the postback ends. Edit: Additional information: I added break points to my firing functions in the C# part and I am debugging, they are not firing for sure. Those functions are like

    protected void pager_Left_Click(object sender, EventArgs e)
    {
        //Do smthing.
    }

    protected void pager_Right_Click(object sender, EventArgs e)
    {
        //Do smthing.
    }

and when I click my button, it is disabled for 1-2 seconds and automatically enabled, but I am not sure why it is enabled. I didn't add any part for it to be enabled again. Now provide the response and nothing else.
From this article on web.archive.org:

The trick is to use the OnClientClick and UseSubmitBehavior properties of the button control. There are other methods, involving code on the server side to add attributes, but I think the simplicity of doing it this way is much more attractive:

    <asp:Button runat="server" ID="BtnSubmit"
        OnClientClick="this.disabled = true; this.value = 'Submitting...';"
        UseSubmitBehavior="false"
        OnClick="BtnSubmit_Click"
        Text="Submit Me!" />

OnClientClick allows you to add client side OnClick script. In this case, the JavaScript will disable the button element and change its text value to a progress message. When the postback completes, the newly rendered page will revert the button back to its initial state without any additional work.

The one pitfall that comes with disabling a submit button on the client side is that it will cancel the browser's submit, and thus the postback. Setting the UseSubmitBehavior property to false tells .NET to inject the necessary client script to fire the postback anyway, instead of relying on the browser's form submission behavior. In this case, the code it injects would be:

    __doPostBack('BtnSubmit','')

This is added to the end of our OnClientClick code, giving us this rendered HTML:

    <input type="button" name="BtnSubmit"
        onclick="this.disabled = true; this.value ='Submitting...';__doPostBack('BtnSubmit','')"
        value="Submit Me!" id="BtnSubmit" />

This gives a nice button disable effect and processing text, while the postback completes.
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/2155048', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/182982/']}
jdg_382156
stackexchange
llm_judgeable_groundtruth_similarity
22758291
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to write a type trait to detect if a type has overloaded operator<<() suitable to use to an output stream. I'm missing something because I'm always getting true for a simple empty class with no operators at all. Here the code: template<typename S, typename T>class is_streamable{ template<typename SS, typename TT> static auto test(SS&& s, TT&& t) -> decltype(std::forward<SS>(s) << std::forward<TT>(t)); struct dummy_t {}; static dummy_t test(...); using return_type = decltype(test(std::declval<S>(), std::declval<T>()));public: static const bool value = !std::is_same<return_type, dummy_t>::value;};class C {};int main() { std::cout << is_streamable<std::stringstream, C>::value << std::endl; return 0;} Output: 1 Here it is in ideone: https://ideone.com/ikSBoT What am I doing wrong? Now provide the response and nothing else.
It's apparently this overload of operator<< that's stepping in your way and making the expression in the trailing return type valid:

    template< class CharT, class Traits, class T >
    basic_ostream< CharT, Traits >& operator<<( basic_ostream<CharT,Traits>&& os, const T& value );

See (3) on this reference page. It's a simple forwarder (calling os << value) that was added in C++11 to allow insertion to rvalue streams, because they don't bind to overloads taking an lvalue reference.

So, the problem is that std::declval<SS>() returns an rvalue reference and this overload kicks in. The call itself is well-formed, but because the function itself does not get instantiated, you don't get an error even if value is not streamable.

This can be sidestepped if you explicitly ask for an lvalue reference: std::declval<SS&>(). I'd also suggest a slightly different implementation, without passing the stream and value to test. You can use declval directly inside decltype. Together with the comma operator, it looks like this:

    #include <type_traits>
    #include <utility>
    #include <iostream>
    #include <sstream>

    template<typename S, typename T>
    class is_streamable
    {
        template<typename SS, typename TT>
        static auto test(int)
            -> decltype( std::declval<SS&>() << std::declval<TT>(), std::true_type() );

        template<typename, typename>
        static auto test(...) -> std::false_type;

    public:
        static const bool value = decltype(test<S,T>(0))::value;
    };

    class C {};

    int main() {
        std::cout << is_streamable<std::stringstream, C>::value << std::endl;
        return 0;
    }
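A quick compile-time sanity check of the trait, as a sketch (the Point type and its operator<< are made up for the test, and the is_streamable template and class C from the snippet above are assumed to be in scope):

    #include <ostream>
    #include <sstream>
    #include <string>

    struct Point { int x, y; };

    std::ostream& operator<<(std::ostream& os, const Point& p) {
        return os << '(' << p.x << ',' << p.y << ')';
    }

    static_assert( is_streamable<std::ostringstream, Point>::value,
                   "Point has a matching operator<<");
    static_assert( is_streamable<std::ostringstream, std::string>::value,
                   "std::string is streamable");
    static_assert(!is_streamable<std::ostringstream, C>::value,
                   "the empty class C is not streamable");

With the declval<SS&> fix in place, the third assertion holds; with the original rvalue declval<SS>() it would fail, because the rvalue-stream forwarder makes every type look streamable.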
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/22758291', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3206471/']}
jdg_382157
stackexchange
llm_judgeable_groundtruth_similarity
385900
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: As shown in the graph, I'm trying to use mcu-controlled switches to turn a segment of rs-485 bus on or off (so that only one segment is on at a time). I was considering analog switches but I'm not confident how to take care of the negative voltage. My question is: Is an analog switch the best choice in this situation?(cost is an important consideration) If so, what kind of analog switches should I use to switch the negative voltage line? or is there a better general method to do it? (a few specific recommendations would be even better!) Thanks! Now provide the response and nothing else.
Let's get some things straight first, to avoid miscommunication. 1) There is no "negative voltage line" in the RS-485 specification. All the negative voltages in various guides and documentations refer to the differential output of the line driver. The lines themselves (usually named A, B or Y, Z) swing between ground and positive voltage (standard +5V, but low-voltage RS-485 is becoming more common). Having said that, the wide CM range of RS-485 technically makes it possible for both lines to swing even with negative levels, e.g. -2..-4V. It is your responsibility to provide your nodes with a common ground to avoid situations like this. 2) The maximum number of nodes is usually limited by bus capacitance, however in the case of RS-485 it is limited by the combined impedance of the receivers long before line capacitance takes its toll. The usually quoted maximum number is 30, which means your 120-node row might have to be split into 4 segments. I am assuming these are your "columns". However, you can find transceivers with high input resistance that allow a much higher number of nodes on the bus. The LTC2872 for example has 125k resistance, for up to 256 nodes. The MAX487 that you've mentioned only allows 128 nodes. Note that this covers the 120-node row quite nicely, so you don't actually need "columns". 3) Dealing with this many nodes involves both hardware and software. Since RS-485 is an electrical interface specification, it says nothing about the protocol. Therefore, if you resolve the impedance issue in hardware, the software task will be simplified to providing a correct addressing mechanism. Alternatively, your software can treat part of the node address as MUX control and select the correct node cluster before beginning actual communication. In both cases, you do not need those extra MCUs connected to switches. 4) Regardless of what you use to disconnect bus segments, simple termination will most likely not be enough. Biasing resistors should be used to ensure a correct idle state of the disconnected bus segment. 5) On a network that big, managing half-duplex communication can be quite challenging. Most solutions will likely reduce bus throughput. I would suggest using full-duplex wiring even if you don't need actual full-duplex communication. Uni-directional lines are much easier to multiplex and/or amplify. Keeping all the above in mind, the actual solution can be as simple as adding a MUX for the Enable signal connected to an array of RS-485 transceivers, one transceiver per bus segment. The Tx pins are all connected together; the Rx pins need OR logic (either wired-OR or a gate). Also, you might be able to find transceivers with 3-stated RX outputs or use a MUX to connect one Rx at a time. There are plenty of suitable chips on the market, like LTC2872 , LTC1335 etc. If you choose to split the lines for full-duplex, you can use quad MAX3030E transmitters. For the bus selector you can use any 1-channel MUX, like CD74HC4067 , ADG731 etc. So, the proposed configuration is 10 sub-nets of 120 MAX487/MAX489 nodes, plus 10 additional transceivers connected to the master with two CD74HC4067 MUX chips on the enable and Rx pins. No additional MCUs necessary. If you are looking for a ready-made solution, there are some RS-485 repeaters available, but they are quite expensive and you need at least 10 of them. IBS485 , 8TMUX , PRO-2200 . The only benefit of using these is that your software can focus on the protocol only (see #3 above). UPDATE There is an option to actually use the wiring exactly as in your schematics, but without expensive ready-made repeaters.
You can replace all those "switch + MCU" pairs in the drawing with 20 LTC2872 chips configured for half-duplex mode (compatible with your MAX487 chips) and connected with Y1-Z1 to the main bus, Y2-Z2 to the row segment. Then you cross-wire DY1-RA2, DY2-RA1 and you pretty much have low-cost repeaters. Here is how it is done with two transceivers , but you only need one dual transceiver. And if you had used full-duplex wiring, you wouldn't even need that switching circuit in the middle. So, all your nodes will be on the same bus. The rest is just a matter of protocol, which in your case (single master) does not even have to worry about collisions.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/385900', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/165260/']}
jdg_382158
stackexchange
llm_judgeable_groundtruth_similarity
83292
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I'm pretty certain that RF jammers work by overpowering the target signal with their own higher powered signal at the same frequency. So the question is, how does anti-jammer technology negate the effects of a jammer? Now provide the response and nothing else.
One method is by actively steering the antenna (mechanically or electronically) to place a "null" in the direction of the jammer, reducing its signal strength significantly, while affecting the desired signal minimally, if at all. Also, assuming the jammer signal strength isn't so strong that it saturates the receiver front end, advanced DSP techniques can be used to estimate and cancel the effects of the jamming signal. The communications protocol itself can be designed to optimize the ability to do this. The problem for the jammer is to mimic the desired signal closely enough to confuse the anti-jam algorithm.
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/83292', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/7923/']}
jdg_382159
stackexchange
llm_judgeable_groundtruth_similarity
49073252
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I display only the row where the text contains the word like * AV * e.g 'AV Snow' or 'AV (Some)' or 'Me AV' # Select Structural status = 'AVAILABLE' or like *AV*value_list = ['AVAILABLE', '[AV]'] '[AV]' doesnt seem correct # Grab DataFrame rows where column has certain valuesnew_df = df[df.STRUCTURALSTATUS.isin(value_list)]new_df.shape Now provide the response and nothing else.
Here is one way. Solution import pandas as pddf = pd.DataFrame({'A': ['AV', 'AV Snow', 'Test', 'AV (Some)', 'Nothing', 'Me AV', 'Available', 'NA']})df = df[df['A'].str.contains('AV', regex=False, case=False, na=False)] Result A0 AV1 AV Snow3 AV (Some)5 Me AV6 Available Explanation regex=False disables regex as it is not required for your particular task. case=False makes the search case insensitive. na=False means you won't see errors if there are unusual types in your series, e.g. non-strings.
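A follow-up note on the pattern itself: the plain substring search above also matches "Available", because it contains the letters "AV". If you only want rows where AV appears as a standalone token, you could switch to a regular expression with word boundaries. A minimal sketch, reusing the column name and example frame from above:

```python
import pandas as pd

df = pd.DataFrame({'A': ['AV', 'AV Snow', 'Test', 'AV (Some)', 'Nothing', 'Me AV', 'Available', 'NA']})

# \bAV\b matches "AV" only as a whole word, so "Available" is no longer included
mask = df['A'].str.contains(r'\bAV\b', regex=True, case=False, na=False)
print(df[mask])
```

Whether you want the stricter behaviour depends on the data; the substring version above is the simpler and faster default.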
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49073252', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7778286/']}
jdg_382160
stackexchange
llm_judgeable_groundtruth_similarity
9906774
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When I am down to squeezing the last bit of performance out of a kernel, I usually find that replacing the logical operators ( && and || ) with bitwise operators ( & and | ) makes the kernel a little bit faster. This was observed by looking at the kernel time summary in CUDA Visual Profiler. So, why are bitwise operators faster than logical operators in CUDA? I must admit that they are not always faster, but a lot of times they are. I wonder what magic can give this speedup. Disclaimer: I am aware that logical operators short-circuit and bitwise operators do not. I am well aware of how these operators can be misused resulting in wrong code. I use this replacement with care only when the resulting logic remains the same, there is a speedup and the speedup thus obtained matters to me :-) Now provide the response and nothing else.
Logical operators will often result in branches, particularly when the rules of short-circuit evaluation need to be observed. For normal CPUs this can mean branch misprediction, and for CUDA it can mean warp divergence. Bitwise operations do not require short-circuit evaluation, so the code flow is linear (i.e. branchless).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9906774', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1630/']}
jdg_382161
stackexchange
llm_judgeable_groundtruth_similarity
2558048
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I rolled a die 300 times and recorded each outcome, what is the chance of rolling at least one four? I know that the answer will be very close to $1$, but I don't know if there is a formula for finding that exact value. If I did this with two dice, then $P(4)=\frac{11}{36}$, which I only know how to work out if I draw a two-way table. Any help is appreciated, thanks! Now provide the response and nothing else.
Ask the complement question: What is the probability that a 4 will not occur? That, of course, is $\left(\frac56\right)^{300}$. So the probability of rolling at least one 4 is $$1-\left(\frac56\right)^{300}=1-1.76046\times10^{-24}$$ It really is so close to 1 that I had to resort to just writing the difference out – the raw probability cannot be distinguished from 1 in 64-bit floating point.
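For anyone who wants to verify the number, a minimal sketch in Python (not needed for the argument, just a numerical sanity check):

```python
# probability of never rolling a 4 in 300 independent rolls
p_no_four = (5 / 6) ** 300
print(p_no_four)      # ~1.76046e-24

# the complement rounds to exactly 1.0 in 64-bit floats,
# which is why the tiny difference has to be written out separately
print(1 - p_no_four)  # 1.0
```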
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/2558048', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/470510/']}
jdg_382162
stackexchange
llm_judgeable_groundtruth_similarity
16578609
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: According to this a string (or String) is a reference type. Yet given: Type t = typeof(string); then if (t.IsByRef) ... returns false why? Edit: After some quick testing, I'm obviously misunderstanding the purpose of IsByRef...as even using a class name in place of 'string' ,returns false as well. I'm writing a generic class and want to test if one the types passed in when the generic is instantiate is a value or reference type. How does one test for this? Now provide the response and nothing else.
You should use IsValueType instead: bool f = !typeof (string).IsValueType; //return true; As for IsByRef , the purpose of this property is to determine whether the parameter is passed into a method by ref or by value. For example, say you have a method in which a is passed by ref: public static void Foo(ref int a){} You can determine whether a is passed by reference or not: bool f = typeof (Program).GetMethod("Foo") .GetParameters() .First() .ParameterType .IsByRef; //return true
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/16578609', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/580745/']}
jdg_382163
stackexchange
llm_judgeable_groundtruth_similarity
38113
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am using two Bluetooth dongles for an Arduino project of mine. I am only sending a single int over the connection so it does not need to have a high baud rate. If I use a baud rate of 1200 will I be able to have more range than if I were to use something such as 115200? Now provide the response and nothing else.
In my experience, radio modules usually have 2 different baud rates: UART baud rate is the baud rate for communicating with the microcontroller on the board. Usually, it's adjustable. Over-the-air baud rate is the baud rate for communicating via Bluetooth. It's usually fixed. I've searched through the user manual and datasheet for the RN-41, which is the Bluetooth module inside the dongle. I didn't find any indication that the over-the-air baud rate is adjustable in this module. So, I doubt that you will get more range if you configure the baud rate down to 1200.
{}
{'log_upvote_score': 4, 'links': ['https://electronics.stackexchange.com/questions/38113', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/10423/']}
jdg_382164
stackexchange
llm_judgeable_groundtruth_similarity
10722858
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a single class "Base", and a few tens of classes derived from Base. I would like to have a method that creates me the right class by an index. Like this: class Base{};class A : public Base{}class B : public Base{}class C : public Base{}Type array = { A, B, C }; and then I could do new array[i]; How could this be achieved with C++(0x)? Usually I would use an the Abstract Factory Pattern. But since I have a LOT of derived classes, this would really slow down the program. Since the derived classes will be used only once I also taught to use this: Base *array = { new A, new B, new C }; But this would lead to huge memory consumption, not counting that not every class will always be used. Any suggestion? Now provide the response and nothing else.
You cannot use an array of classes, but you can use an array of pointers to functions. typedef std::unique_ptr<Base> (*Creator)();template <typename T>std::unique_ptr<Base> make() { return new T{}; }Creator const array[] = { make<A>, make<B>, make<C> };int main() { std::unique_ptr<Base> b = array[1](); b->foo();} For those worried by the cost of creating so many template functions, here is an example: #include <stdio.h>struct Base { virtual void foo() const = 0; };struct A: Base { void foo() const { printf("A"); } };struct B: Base { void foo() const { printf("B"); } };struct C: Base { void foo() const { printf("C"); } };typedef Base* (*Creator)();template <typename T>static Base* make() { return new T{}; }static Creator const array[] = { make<A>, make<B>, make<C> };Base* select_array(int i) { return array[i]();}Base* select_switch(int i) { switch(i) { case 0: return make<A>(); case 1: return make<B>(); case 2: return make<C>(); default: return 0; }} LLVM/Clang generates the following output: define %struct.Base* @select_array(int)(i32 %i) uwtable { %1 = sext i32 %i to i64 %2 = getelementptr inbounds [3 x %struct.Base* ()*]* @array, i64 0, i64 %1 %3 = load %struct.Base* ()** %2, align 8, !tbaa !0 %4 = tail call %struct.Base* %3() ret %struct.Base* %4}define noalias %struct.Base* @select_switch(int)(i32 %i) uwtable { switch i32 %i, label %13 [ i32 0, label %1 i32 1, label %5 i32 2, label %9 ]; <label>:1 ; preds = %0 %2 = tail call noalias i8* @operator new(unsigned long)(i64 8) %3 = bitcast i8* %2 to i32 (...)*** store i32 (...)** bitcast (i8** getelementptr inbounds ([3 x i8*]* @vtable for A, i64 0, i64 2) to i32 (...)**), i32 (...)*** %3, align 8 %4 = bitcast i8* %2 to %struct.Base* br label %13; <label>:5 ; preds = %0 %6 = tail call noalias i8* @operator new(unsigned long)(i64 8) %7 = bitcast i8* %6 to i32 (...)*** store i32 (...)** bitcast (i8** getelementptr inbounds ([3 x i8*]* @vtable for B, i64 0, i64 2) to i32 (...)**), i32 (...)*** %7, align 8 %8 = bitcast i8* %6 to %struct.Base* br label %13; <label>:9 ; preds = %0 %10 = tail call noalias i8* @operator new(unsigned long)(i64 8) %11 = bitcast i8* %10 to i32 (...)*** store i32 (...)** bitcast (i8** getelementptr inbounds ([3 x i8*]* @vtable for C, i64 0, i64 2) to i32 (...)**), i32 (...)*** %11, align 8 %12 = bitcast i8* %10 to %struct.Base* br label %13; <label>:13 ; preds = %9, %5, %1, %0 %.0 = phi %struct.Base* [ %12, %9 ], [ %8, %5 ], [ %4, %1 ], [ null, %0 ] ret %struct.Base* %.0} Unfortunately, it is not quite intelligent enough to automatically inline the functions with a regular array code (known issue with the LLVM optimizer, I don't know if gcc does better)... but using switch it is indeed possible.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/10722858', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/720254/']}
jdg_382165
stackexchange
llm_judgeable_groundtruth_similarity
3753060
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If $\int f(x)dx =g(x)$ then $\int f^{-1}(x)dx $ is equal to (1) $g^{-1}(x)$ (2) $xf^{-1}(x)-g(f^{-1}(x))$ (3) $xf^{-1}(x)-g^{-1}(x)$ (4) $f^{-1}(x)$ My approach is as follows:Let $f(x)=y$ , therefore $f^{-1}(y)=x$ , $\int f^{-1}(f(x))dx =g(f(x))$ On differentiating we get $x=g'(f(x))f'(x)$ After this step, I am not able to proceed. Now provide the response and nothing else.
Ignoring the constant of integration, the answer is (2): $$\int f^{-1}(x)dx=\int yf'(y)dy=yf(y)-\int f(y)dy$$ (where I have substituted $x=f(y)$ and used integration by parts). Hence $$\int f^{-1}(x)dx=f^{-1}(x)x-g(y)=xf^{-1}(x)-g(f^{-1}(x)).$$
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/3753060', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/464054/']}
jdg_382166
stackexchange
llm_judgeable_groundtruth_similarity
21504319
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have the following code in Python 3, which is meant to print out each line in a csv file. import csvwith open('my_file.csv', 'r', newline='') as csvfile: lines = csv.reader(csvfile, delimiter = ',', quotechar = '|') for line in lines: print(' '.join(line)) But when I run it, it gives me this error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 7386: invalid start byte I looked through the csv file, and it turns out that if I take out a single ñ (little n with a tilde on top), every line prints out fine. My problem is that I've looked through a bunch of different solutions to similar problems, but I still have no idea how to fix this, what to decode/encode, etc. Simply taking out the ñ character in the data is NOT an option. Now provide the response and nothing else.
We know the file contains the byte b'\x96' since it is mentioned in the error message: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 7386: invalid start byte Now we can write a little script to find out if there are any encodings where b'\x96' decodes to ñ : import pkgutilimport encodingsimport osdef all_encodings(): modnames = set([modname for importer, modname, ispkg in pkgutil.walk_packages( path=[os.path.dirname(encodings.__file__)], prefix='')]) aliases = set(encodings.aliases.aliases.values()) return modnames.union(aliases)text = b'\x96'for enc in all_encodings(): try: msg = text.decode(enc) except Exception: continue if msg == 'ñ': print('Decoding {t} with {enc} is {m}'.format(t=text, enc=enc, m=msg)) which yields Decoding b'\x96' with mac_roman is ñDecoding b'\x96' with mac_farsi is ñDecoding b'\x96' with mac_croatian is ñDecoding b'\x96' with mac_arabic is ñDecoding b'\x96' with mac_romanian is ñDecoding b'\x96' with mac_iceland is ñDecoding b'\x96' with mac_turkish is ñ Therefore, try changing with open('my_file.csv', 'r', newline='') as csvfile: to one of those encodings, such as: with open('my_file.csv', 'r', encoding='mac_roman', newline='') as csvfile:
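For completeness, a minimal sketch of the full loop from the question once the encoding is known (mac_roman here, per the search above); as an aside, the third-party chardet package can also guess an encoding from the raw bytes, though its guess is only a heuristic and may not pin down this exact codec:

```python
import csv

# reopen the file with the encoding identified above
with open('my_file.csv', 'r', encoding='mac_roman', newline='') as csvfile:
    lines = csv.reader(csvfile, delimiter=',', quotechar='|')
    for line in lines:
        print(' '.join(line))
```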
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21504319', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1985262/']}
jdg_382167
stackexchange
llm_judgeable_groundtruth_similarity
21367903
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: What is the explanation for behavior of the "||" operator (logical OR), when using it with false and undefined on both sides in JavaScript? 1) > false || undefinedundefined 2) > undefined || falsefalse Now provide the response and nothing else.
The logical OR operator isn't commutative like + , * , etc. It returns the first expression which can be converted into true . (Source: Mozilla Doc ) In false || undefined , false can't be converted to true by definition (since it's the opposite), so it returns the second operand ( undefined ). In undefined || false , undefined is a value, but it is considered false in JavaScript, so the logical operator evaluates the second operand and returns false (because both operands are false).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/21367903', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1573638/']}
jdg_382168
stackexchange
llm_judgeable_groundtruth_similarity
206851
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Wikipedia claims, if $\sigma$-finite the Dominated convergence theorem is still true when pointwise convergence is replaced by convergence in measure, does anyone know where to find a proof of this? Many thanks! Statement of the theorem: Let $\mu$ be $\sigma$-finite, $|f_n|\leq g$ and $f_n\rightarrow f$ in measure, then we must have $\int f_n \rightarrow \int f$ and $\int|f_n-f| \rightarrow 0$ Now provide the response and nothing else.
Let $(X,\mathcal B,\mu)$ be a measure space, $\{f_n\}$ a sequence of functions which converges to $f$ in measure, and for almost every $x$ and all $n$ , $|f_n(x)|\leqslant g(x)$ , where $g$ is integrable. Then $\lVert f_n-f\rVert_{L^1}\to 0$ . Let $A_k:=\{g\gt 1/k\}$ ; then $A:=\bigcup_k A_k =\{g\neq 0\}$ and $X\setminus A\subset\bigcap_n\{f_n=0\}\cap\{f=0\}$ . We have for each $k$ , $$\int_X|f_n(x)-f(x)|d\mu\leqslant 2\int_{X\setminus A_k}|g(x)|\mathrm d\mu(x)+\int_{A_k}|f_n(x)-f(x)|\mathrm d\mu(x).$$ If $\lVert f_n-f\rVert_{L^1}$ doesn't converge to $0$ , we can find a $\delta>0$ and a subsequence $\{f_{n'}\}$ such that $\lVert f_{n'}-f\rVert_{L^1}\geqslant 2\delta$ . We fix $k$ such that $2\int_{X\setminus A_k}|g(x)|\mathrm d\mu(x)\leqslant\delta$ (such a $k$ exists by the dominated convergence theorem, since $\lim_{k\to\infty}\int_{X\setminus A_k}|g(x)|\mathrm d\mu(x)= \int_{X\setminus A}|g(x)|\mathrm d\mu(x)$ ). Then $$\delta\leqslant \int_{A_k}|f_{n'}(x)-f(x)|\mathrm d\mu(x).$$ Now, as $A_k$ has a finite measure, we can extract a subsequence $\{f_{n''}\}$ of $\{f_{n'}\}$ which converges almost everywhere on $A_k$ . Applying the classical dominated convergence theorem to this sequence, we get a contradiction.
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/206851', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/17248/']}
jdg_382169
stackexchange
llm_judgeable_groundtruth_similarity
37910150
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Im trying to update items attribute that is a list of strings.Can I update (append) the attribute only if it not exist .Kind of list_append & if_not_exists. var params = { ... UpdateExpression: 'SET friends = list_append(if_not_exists(friends, :empty_list), :new_friend)', ExpressionAttributeValues:{ ":new_friend": [{"S":"Bobo"}], ":empty_list" :[] } }; this is not working, is there a way?so if Bobo still not in my "friends" list it will append it Now provide the response and nothing else.
You can use the "not contains" and "list_append" for your requirement. The below code inserts the new friend to list if the friend is NOT already present in the list. If the friend is already present in the list, it would throw "conditional request failed". Error message if condition fails:- Unable to update item. Error JSON: { "message": "The conditional request failed", "code": "ConditionalCheckFailedException", "time": "2016-06-22T08:18:36.483Z", "requestId": "86805965-240b-43e0-8fdc-77fb9ae1b15c", "statusCode": 400, "retryable": false, "retryDelay": 0} The below code works fine. It has been tested successfully. Code Sample: var AWS = require("aws-sdk");AWS.config.update({ region : "us-west-2", endpoint : "http://localhost:8000"});var docClient = new AWS.DynamoDB.DocumentClient();var table = "users";var userid = 1;var friendId = ["f4"];var friendIdStr = "f4";//Add the new DOCUMENT TYPE attribute to the tablevar params = { TableName : table, Key: { "id" : userid }, "UpdateExpression": "set friends = list_append (friends, :friendId)", "ConditionExpression": "not contains (friends, :friendIdStr)", "ExpressionAttributeValues": { ":friendId": friendId, ":friendIdStr" : friendIdStr }, "ReturnValues" : "UPDATED_NEW"};console.log("Updated an item...");docClient.update(params, function(err, data) { if (err) { console.error("Unable to update item. Error JSON:", JSON.stringify(err, null, 2)); } else { console.log("Updated item:", JSON.stringify(data, null, 2)); }});
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/37910150', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6486203/']}
jdg_382170
stackexchange
llm_judgeable_groundtruth_similarity
25661754
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am trying to understand how python could pull data from an FTP server into pandas then move this into SQL server. My code here is very rudimentary to say the least and I am looking for any advice or help at all. I have tried to load the data from the FTP server first which works fine.... If I then remove this code and change it to a select from ms sql server it is fine so the connection string works, but the insertion into the SQL server seems to be causing problems. import pyodbcimport pandasfrom ftplib import FTPfrom StringIO import StringIOimport csvftp = FTP ('ftp.xyz.com','user','pass' )ftp.set_pasv(True)r = StringIO()ftp.retrbinary('filname.csv', r.write)pandas.read_table (r.getvalue(), delimiter=',')connStr = ('DRIVER={SQL Server Native Client 10.0};SERVER=localhost;DATABASE=TESTFEED;UID=sa;PWD=pass')conn = pyodbc.connect(connStr)cursor = conn.cursor()cursor.execute("INSERT INTO dbo.tblImport(Startdt, Enddt, x,y,z,)" "VALUES (x,x,x,x,x,x,x,x,x,x.x,x)")cursor.close()conn.commit()conn.close()print"Script has successfully run!" When I remove the ftp code this runs perfectly, but I do not understand how to make the next jump to get this into Microsoft SQL server, or even if it is possible without saving into a file first. Now provide the response and nothing else.
For the 'write to sql server' part, you can use the convenient to_sql method of pandas (so no need to iterate over the rows and do the insert manually). See the docs on interacting with SQL databases with pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql You will need at least pandas 0.14 to have this working, and you also need sqlalchemy installed. An example, assuming df is the DataFrame you got from read_table : import sqlalchemyimport pyodbcengine = sqlalchemy.create_engine("mssql+pyodbc://<username>:<password>@<dsnname>")# write the DataFrame to a table in the sql databasedf.to_sql("table_name", engine) See also the documentation page of to_sql . More info on how to create the connection engine with sqlalchemy for SQL Server with pyodbc can be found here: http://docs.sqlalchemy.org/en/rel_1_1/dialects/mssql.html#dialect-mssql-pyodbc-connect But if your goal is to just get the csv data into the SQL database, you could also consider doing this directly from SQL. See eg Import CSV file into SQL Server
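To tie this back to the FTP part of the question, here is a minimal end-to-end sketch (Python 3; the host, credentials, file name and table name are placeholders, and if_exists="append" is just one possible choice — 'fail' and 'replace' are the other accepted values):

```python
from ftplib import FTP
from io import BytesIO

import pandas as pd
import sqlalchemy

# pull the CSV from the FTP server into memory
ftp = FTP('ftp.example.com', 'user', 'pass')
buf = BytesIO()
ftp.retrbinary('RETR myfile.csv', buf.write)
buf.seek(0)

# parse it with pandas
df = pd.read_csv(buf, delimiter=',')

# write the DataFrame to SQL Server via SQLAlchemy + pyodbc
engine = sqlalchemy.create_engine("mssql+pyodbc://<username>:<password>@<dsnname>")
df.to_sql("tblImport", engine, if_exists="append", index=False)
```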
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/25661754', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4003552/']}
jdg_382171
stackexchange
llm_judgeable_groundtruth_similarity
296350
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Recall that we say that a bounded measurable set $S\subset\mathbb R^n$ is said to be Caccioppoli if the indicator function $1_S$ is BV, and we set$$\operatorname{perim}(S)=\| \nabla 1_S\|_{TV}$$where $\|\cdot\|_{TV}$ denotes the total variation. So, if $S$ and $T$ are Caccioppoli sets, is it known whether $S\cap T$ is Caccioppoli? Now provide the response and nothing else.
That is true. Caccioppoli sets are also known as sets of finite perimeter. Theorem. Suppose $f\in L^1(\mathbb{R}^n)$ vanishes outside the unit cube $[0,1]^n$ . For $i=1,2,\ldots,n$ consider the function $V_if(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)= V_0^1f(x_1,\ldots,x_{i-1},\cdot,x_{i+1},\ldots,x_n)$ , i.e. the (essential) variation of the one dimensional sections. Then $f\in BV(\mathbb{R}^n)$ if and only if for every $i$ , $V_i\in L^1([0,1]^{n-1})$ . This is Theorem 5.3.5 in: W. P. Ziemer , Weakly differentiable functions. Sobolev spaces and functions of bounded variation. Graduate Texts in Mathematics, 120. Springer-Verlag, New York, 1989. Basically it is a characterization of the functions of bounded variation by one dimensional slices. If $h_1,h_2$ are characteristic functions of sets in $[0,1]$ , then $V_0^1(h_1h_2)\leq V_0^1h_1+V_0^1 h_2$ , see the proof of Theorem 2 in http://mathonline.wikidot.com/multiples-and-products-of-functions-of-bounded-variation . Now if $S$ and $T$ are Caccioppoli sets contained in the unit cube, the characteristic functions $\chi_S$ and $\chi_T$ have bounded variation and the one dimensional result mentioned here shows that $$V_i(\chi_S\chi_T)\leq V_i(\chi_S)+V_i(\chi_T)\in L^1([0,1]^{n-1}).$$ That implies that $\chi_S\chi_T=\chi_{S\cap T}$ has bounded variation so $S\cap T$ is a Caccioppoli set. I assumed here that the sets are contained in the unit cube, but the argument applies to any bounded set. Another answer is provided in a comment by Manfred Sauter (see above).
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/296350', 'https://mathoverflow.net', 'https://mathoverflow.net/users/94022/']}
jdg_382172
stackexchange
llm_judgeable_groundtruth_similarity
259734
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Basically, what the title says. Presumably, one could use the fact that monoidal categories (resp. strict monoidal categories) are one-object bicategories (resp. 2-categories) and use the Lack model structure on those, but I am unsure if this would work or not. Now provide the response and nothing else.
I'm not sure about the case of general monoidal categories. (Although I seem to recall a remark that there is no such structure since the category of monoidal categories is not cocomplete and a suitable replacement would be the category of multicategories. Perhaps somebody can confirm this.) However, the case of strict monoidal categories is known since they are simply monoids internal to the category of categories. It is a theorem of Schwede and Shipley (Theorem 4.1 in Algebras and Modules in Monoidal Model Categories ) that if a monoidal model category satisfies the so-called monoid axiom , then the category of monoids in it inherits a model structure. The monoid axiom is a technical condition which is, in particular, automatically satisfied if all objects are cofibrant, as is the case in the canonical model structure on the category of categories.
{}
{'log_upvote_score': 4, 'links': ['https://mathoverflow.net/questions/259734', 'https://mathoverflow.net', 'https://mathoverflow.net/users/68468/']}
jdg_382173
stackexchange
llm_judgeable_groundtruth_similarity
108523
Below is a question asked on the forum mathoverflow.net. Provide a good and informational response to it like a helpful human would. Question: Is there, similar to the Mehler kernel, a closed formula for the heat kernel of the heat equation associated to the Laplacian$$ -\sum_j \frac{d^2}{dx_j^2} + 2\sqrt{-1} \sum_j \lambda_j \frac{d}{dx_j} + \sum_{ij} a_{ij}x_ix_j$$on $\mathbb{R}^n$? Here, the matrix $(a_{ij})$ is supposed to be symmetric and positive definite, while the $\lambda_j$ can be arbitrary. Now provide the response and nothing else.
Yes there is. Here is how you do it. First find an orthogonal change in variables $$ x_j=\sum_{jk} s_{jk}y_k $$ $(s_{jk})$ orthogonal matrix, so that in the new coordinates we have $$ \sum_{i,j}a_{ij} x_ix_j = \sum_j \mu_j^2 y_j^2, $$ where $\mu_j^2$ are the eigenvalues of the symmetric matrix $(a_{ij})$. Note that $\newcommand{\ii}{\boldsymbol{i}}$ $\newcommand{\pa}{\partial}$ $$\frac{\pa}{\pa x_k}=\sum_j\frac{\pa y_j}{\pa x_k}\frac{\pa }{\pa y_j} =\sum_j t_{jk}\frac{\pa}{\pa y_j}, $$ where $(t_{jk})$ is the inverse of the orthogonal matrix $(s_{jk})$ so that $t_{jk}=s_{kj}$. Then for some real numbers $r_j$ $$ -\sum_j\frac{\pa^2}{\pa x_j^2}+2\ii\sum_j\lambda_j\frac{\pa }{\pa x_j}+\sum_{i,j}a_{ij}x_ix_j $$ $$ =-\sum_j\frac{\pa^2}{\pa y_j^2} +2\ii\sum_j r_j\frac{\pa}{\pa y_j} +\sum_j \mu_j^2 y_j^2 $$ $$=\underbrace{\sum_j \left(\ii\frac{\pa}{\pa y_j}+r_j\right)^2 +\sum_j\mu_j^2y_j^2-\sum_j r_j^2}_{=: L}. $$ Next set $$ R^2 :=\sum_j r_j^2, \;\; w(t,y) := R^2t +\sum_j \ii r_j y_j, $$ $$ L_0 :=\sum_j\left(-\frac{\pa^2}{\pa y_j^2} +\mu_j^2y_j^2\right), $$ and observe that $$\pa_t +L =e^{w}(\pa_t+L_0) e^{-w}. $$ Thus, if $K$ is a fundamental solution of $\pa_t+L_0$, then $$(\pa_t +L) (e^{w} K) = e^{w} (\pa_t+L_0)K=e^{w} \delta_0=\delta_0 $$ so that $e^{w}K$ is a fundamental solution of $\pa_t+L$. You can find $K$ using Mehler's formula.
{}
{'log_upvote_score': 5, 'links': ['https://mathoverflow.net/questions/108523', 'https://mathoverflow.net', 'https://mathoverflow.net/users/16702/']}
jdg_382174
stackexchange
llm_judgeable_groundtruth_similarity
59205530
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am currently developing an admin panel using 'react-admin' which works well on my local, but as soon as I upload the app to Heroku, the build fails with the following error: "FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory" I have narrowed down the issue to the initial import { Admin } from 'react-admin'; by setting up a clean project with just the 'react-admin' package and a simple import of Admin. As soon as I try using the component the deploy will fail with a "heap out of memory" error. Has anybody else encountered this issue? Here is my package.json: { "name": "mrewards_admin", "version": "0.1.0", "private": true, "dependencies": { "axios": "^0.19.0", "deepmerge": "^4.2.2", "prop-types": "^15.7.2", "react": "^16.12.0", "react-admin": "^3.0.2", "react-dom": "^16.12.0", "react-scripts": "3.2.0" }, "engines": { "node": "13.0.1" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" }, "eslintConfig": { "extends": "react-app" }, "browserslist": { "production": [ ">0.2%", "not dead", "not op_mini all" ], "development": [ "last 1 chrome version", "last 1 firefox version", "last 1 safari version" ] }} And here is the Heroku log output: 2019-12-06T00:29:07.000000+00:00 app[api]: Build succeeded2019-12-06T00:29:10.340460+00:00 heroku[web.1]: State changed from starting to up2019-12-06T00:29:10.135608+00:00 app[web.1]: ℹ 「wds」: Project is running at http://172.18.103.22/2019-12-06T00:29:10.135999+00:00 app[web.1]: ℹ 「wds」: webpack output is served from /2019-12-06T00:29:10.136098+00:00 app[web.1]: ℹ 「wds」: Content not from webpack is served from /app/public2019-12-06T00:29:10.136196+00:00 app[web.1]: ℹ 「wds」: 404s will fallback to /index.html2019-12-06T00:29:10.136405+00:00 app[web.1]: Starting the development server...2019-12-06T00:29:10.136409+00:00 app[web.1]:2019-12-06T00:30:46.302146+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=GET path="/" host=test-react-admin.herokuapp.com request_id=b7cafb3a-311c-4ef2-bd61-cb1f863e69ca fwd="216.81.49.130" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0 protocol=https2019-12-06T00:30:47.234363+00:00 app[web.1]:2019-12-06T00:30:47.234390+00:00 app[web.1]: <--- Last few GCs --->2019-12-06T00:30:47.234392+00:00 app[web.1]:2019-12-06T00:30:47.234413+00:00 app[web.1]: [30:0x492c940] 98675 ms: Scavenge 240.7 (257.5) -> 239.8 (257.7) MB, 1.7 / 0.0 ms (average mu = 0.238, current mu = 0.037) allocation failure2019-12-06T00:30:47.234415+00:00 app[web.1]: [30:0x492c940] 98685 ms: Scavenge 240.7 (257.7) -> 239.9 (258.0) MB, 1.9 / 0.0 ms (average mu = 0.238, current mu = 0.037) allocation failure2019-12-06T00:30:47.234418+00:00 app[web.1]: [30:0x492c940] 98903 ms: Mark-sweep 240.8 (258.0) -> 237.9 (258.0) MB, 211.0 / 0.1 ms (average mu = 0.350, current mu = 0.474) allocation failure scavenge might not succeed2019-12-06T00:30:47.234419+00:00 app[web.1]:2019-12-06T00:30:47.234421+00:00 app[web.1]:2019-12-06T00:30:47.234422+00:00 app[web.1]: <--- JS stacktrace --->2019-12-06T00:30:47.234423+00:00 app[web.1]:2019-12-06T00:30:47.234426+00:00 app[web.1]: ==== JS stack trace =========================================2019-12-06T00:30:47.234427+00:00 app[web.1]:2019-12-06T00:30:47.234429+00:00 app[web.1]: 0: ExitFrame [pc: 
0x1374fd9]2019-12-06T00:30:47.234431+00:00 app[web.1]: Security context: 0x2a1cf61808a1 <JSObject>2019-12-06T00:30:47.234433+00:00 app[web.1]: 1: _next2(aka _next2) [0x2546e8465cf9] [0x1b716e9004a9 <undefined>:~21] [pc=0x1d633e40a0e8](this=0x1b716e9004a9 <undefined>)2019-12-06T00:30:47.234440+00:00 app[web.1]: 2: /* anonymous */(aka /* anonymous */) [0x2546e8465ea1] [0x1b716e9004a9 <undefined>:46] [bytecode=0x6238003a241 offset=37](this=0x1b716e9004a9 <undefined>,0x1b716e9004a9 <undefined>,0x1b716e9004a9 <undefined>)2019-12-06T00:30:47.234441+00:00 app[web.1]: 3: _next1(aka _ne...2019-12-06T00:30:47.234443+00:00 app[web.1]:2019-12-06T00:30:47.234445+00:00 app[web.1]: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory2019-12-06T00:30:47.234453+00:00 app[web.1]:2019-12-06T00:30:47.245340+00:00 app[web.1]: Writing Node.js report to file: report.20191206.003047.30.0.001.json2019-12-06T00:30:47.245345+00:00 app[web.1]: Node.js report completed2019-12-06T00:30:47.245828+00:00 app[web.1]: 1: 0x9da7c0 node::Abort() [node]2019-12-06T00:30:47.246377+00:00 app[web.1]: 2: 0x9db976 node::OnFatalError(char const*, char const*) [node]2019-12-06T00:30:47.246963+00:00 app[web.1]: 3: 0xb39f1e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]2019-12-06T00:30:47.247418+00:00 app[web.1]: 4: 0xb3a299 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]2019-12-06T00:30:47.247992+00:00 app[web.1]: 5: 0xce5635 [node]2019-12-06T00:30:47.248561+00:00 app[web.1]: 6: 0xce5cc6 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node]2019-12-06T00:30:47.249102+00:00 app[web.1]: 7: 0xcf1b5a v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node]2019-12-06T00:30:47.249627+00:00 app[web.1]: 8: 0xcf2a65 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]2019-12-06T00:30:47.250190+00:00 app[web.1]: 9: 0xcf5478 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [node]2019-12-06T00:30:47.250701+00:00 app[web.1]: 10: 0xcbbda7 v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType) [node]2019-12-06T00:30:47.251301+00:00 app[web.1]: 11: 0xff1e0b v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]2019-12-06T00:30:47.251959+00:00 app[web.1]: 12: 0x1374fd9 [node]2019-12-06T00:30:47.285408+00:00 app[web.1]: npm ERR! code ELIFECYCLE2019-12-06T00:30:47.285920+00:00 app[web.1]: npm ERR! errno 12019-12-06T00:30:47.287473+00:00 app[web.1]: npm ERR! [email protected] start: `react-scripts start`2019-12-06T00:30:47.287759+00:00 app[web.1]: npm ERR! Exit status 12019-12-06T00:30:47.288057+00:00 app[web.1]: npm ERR!2019-12-06T00:30:47.288300+00:00 app[web.1]: npm ERR! Failed at the [email protected] start script.2019-12-06T00:30:47.288528+00:00 app[web.1]: npm ERR! This is probably not a problem with npm. There is likely additional logging output above.2019-12-06T00:30:47.295555+00:00 app[web.1]:2019-12-06T00:30:47.295669+00:00 app[web.1]: npm ERR! A complete log of this run can be found in:2019-12-06T00:30:47.295759+00:00 app[web.1]: npm ERR! 
/app/.npm/_logs/2019-12-06T00_30_47_289Z-debug.log2019-12-06T00:30:47.344869+00:00 heroku[web.1]: Process exited with status 12019-12-06T00:30:47.369971+00:00 heroku[web.1]: State changed from up to crashed Thank you! Now provide the response and nothing else.
I contacted Heroku support; here is how I solved the issue: The error occurred because Node can only address 1.5GB of memory before hitting the JavaScript heap out of memory error, a default limit that it inherits from the V8 engine. We tweak this default setting on Heroku through the NODE_OPTIONS env var so that the process can address all of the memory available: https://github.com/heroku/heroku-buildpack-nodejs/blob/master/lib/environment.sh#L29-L33 One option would be to specifically call --max_old_space_size=2560 when invoking node to see if that helps alleviate the issue. You can pass that option to Node via an environment variable by running the following: $ heroku config:set NODE_OPTIONS="--max_old_space_size=2560" -a [app_name] Also adjust the build script to be: "scripts": { "start": "node --max_old_space_size=2560 node_modules/.bin/react-scripts start", "build": "node --max_old_space_size=2560 node_modules/.bin/react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" } Though I'm not really sure whether the build and start script changes had an impact.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/59205530', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2655265/']}
jdg_382175
stackexchange
llm_judgeable_groundtruth_similarity
19244128
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i am writing a code that reads input from a file and writes into another after some processing ofcourse.now, my input file is, 40 10 200 33000E and what i need to do is copy elements on left to an array in first column and elements on right to second column.i used scanner but it does not recognize end of line.help me!!!!this is what i tried.i tried copying lines and then modifying it. for (i = 0; i < size; i++) { if (!f1.hasNext(endPage)) { String temp1 = f1.next(); String temp2 = f1.next(); int a[] = new int[4]; a[0] = (int) temp1.charAt(temp1.length() - 1); a[1] = (int) temp2.charAt(temp1.length() - 1); a[2] = (int) temp1.charAt(temp1.length() - 2); a[3] = (int) temp1.charAt(temp1.length() - 2); scales[i].weightOnLeft = a[0]; scales[i].weightOnRight = a[1]; scales[i].left = scales[a[2]]; scales[i].right = scales[a[3]]; }} Now provide the response and nothing else.
Basically, like you said, those two do almost the same thing. However, there are small differences under the hood. {x:Reference ...} -> returns just a reference to an object; it doesn't create that "bridge" between two properties like a binding would do. Behind all that a service is being used that searches for the given name in a specific scope, which is usually the window itself. {Binding ElementName="..." } -> first of all it creates that binding object, then it searches for the object name, but not by using the same technique under the hood as x:Reference. The search algorithm moves up and/or down in the VisualTree to find the desired element. Therefore a functional VisualTree is always needed. For example, when used inside a non-UIElement, it won't work. In the end the Binding stays and does its usual job. This won't work: <StackPanel> <Button x:Name="bttn1" Visibility="Hidden">Click me</Button> <DataGrid> <DataGrid.Columns> <DataGridTextColumn Visibility="{Binding ElementName=bttn1, Path=DataContext.Visibility}"/> .... This works: <StackPanel> <Button x:Name="bttn1" Visibility="Hidden">Click me</Button> <DataGrid> <DataGrid.Columns> <DataGridTextColumn Visibility="{Binding Source={x:Reference bttn1}, Path=DataContext.Visibility}"/> .... Sort of like that :)
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/19244128', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2857958/']}
jdg_382176
stackexchange
llm_judgeable_groundtruth_similarity
1856983
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find an irrational $n$ such that $n^n$ is a rational number. I have some tries to find this... I have tried so much numbers but no success. How can I find them. Now provide the response and nothing else.
An implicit solution: Let $n$ be such that $$n^n=2,$$ and suppose, for contradiction, that $n$ is the irreducible fraction $\dfrac rs$. Then $$\left(\frac rs\right)^{r/s}=2$$ or $$r^r=2^ss^r.$$ So $r$ is even and $s$ odd. Calling $\rho$ the multiplicity of the factor $2$ in $r$, comparing the multiplicity of $2$ on both sides gives $$r\rho=s,$$ so $r$ divides $s$; since $r$ and $s$ are coprime this forces $r=1$, contradicting that $r$ is even. As a corollary, $\dfrac{\ln 2}{W(\ln 2)}$ is indeed irrational.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1856983', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/331405/']}
jdg_382177
stackexchange
llm_judgeable_groundtruth_similarity
48136409
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: i'm trying to start a program with the checkbutton selected but I don't know how to do it. import tkinter as tkfrom tkinter import ttkroot = tk.Tk()v = tk.IntVar()cb = ttk.Checkbutton(root,text='Remember user',onvalue=1,offvalue=0, variable = v);cb.pack() Now provide the response and nothing else.
Having functions with this kind of signatures is obviously a bad design as you've found out. The real solutions are to have different names for them or to lose the default argument and were presented already in other answers. However if you are stuck with an interface you can't change or just for the fun of it here is how you can explicitly call void f() : The trick is to use function pointer casting to resolve the ambiguity: a.f(); // <- ambiguity is here! I want to call 'void f()'(a.*(static_cast<void (A::*)()>(&A::f)))(); // yep... that's the syntax... yeah... Ok, so it works, but don't ever write code like this! There are ways to make it more readable. Use a pointer: // create a method pointer:auto f_void = static_cast<void (A::*)()>(&A::f);// the call is much much better, but still not as simple as `a.f()`(a.*f_void)(); Create a lambda or a free function auto f_void = [] (A& a){ auto f_void = static_cast<void (A::*)()>(&A::f); (a.*f_void)();};// orvoid f_void(A& a){ auto f_void = static_cast<void (A::*)()>(&A::f); (a.*f_void)();};f_void(a); I don't know if this is necessary better. The call syntax is definitely simpler, but it might be confusing as we are switching from a method call syntax to a free function call syntax.
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/48136409', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/8826798/']}
jdg_382178
stackexchange
llm_judgeable_groundtruth_similarity
51030852
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm having an issue where I'm creating a List<SelectListItem> with optgroups but instead of creating an optgroup per group of SelectListItem it's creating a new SelectListGroup per SelectListItem . It's got me a bit confused because there aren't any duplicated SelectListGroup 's in my code. Here is an example: Expected Result: <select datatag="data-States=''" class="form-control filter-select" data-multi-select="" id="States" multiple="multiple" name="States"> <optgroup label="MA"> <option value="01602">01602</option> <option value="02743">02743</option> <option value="01107">01107</option> </optgroup> </select> Actual Result: <select datatag="data-States=''" class="form-control filter-select" data-multi-select="" id="States" multiple="multiple" name="States"><optgroup label="MA"><option value="01602">01602</option></optgroup><optgroup label="MA"><option value="02743">02743</option></optgroup><optgroup label="MA"><option value="01107">01107</option></optgroup></select> Method: public ManifestFilterDropDownItem ReturnManifestFilterDataBasedOnTotalDataSet(IEnumerable<ManifestTableItem> data, bool isUserASR) { IEnumerable<SelectListGroup> stateGroups = data.Select(x => x.AddrState.ToUpper()).Distinct().Select(x => new SelectListGroup() { Name = x }); IList<SelectListItem> stateZipSelectListItems = data.GroupBy(x => x.AddrZip).Select(x => new SelectListItem() { Text = string.IsNullOrWhiteSpace(x.Key) ? "Empty" : x.Key, Value = string.IsNullOrWhiteSpace(x.Key) ? "" : x.Key, Group = stateGroups.Where(y => y.Name == data.Where(p => p.AddrZip == x.Key).First().AddrState.ToUpper()).Single() }).OrderBy(x => x.Group.Name).ToList(); var manifestItem = new ManifestFilterDropDownItem { States = stateZipSelectListItems return manifestItem; } ViewModel: using System.Collections.Generic;using System.Web.Mvc;namespace FSVendor.Models.Manifest { public class ManifestFilterViewModel { public ManifestFilterViewModel() { } public string Name { get; set; } public string DataTag => $"data-{Name}=''"; public IEnumerable<SelectListItem> SelectListItems { get; set; } }} View: @model FSVendor.Models.Manifest.ManifestFilterViewModel<label>States:</label>@Html.DropDownList(Model.Name, Model.SelectListItems, new { @class = "form-control filter-select", data_multi_select = "", multiple = "multiple", @Model.DataTag }) Now provide the response and nothing else.
Your query is creating a new SelectListGroup for each SelectListItem , even though each SelectListGroup has the same value. Modify the query to group the data, and then create a new SelectListGroup for each group // Initialize modelvar model = new ManifestFilterDropDownItem{ States = new List<SelectListItem>()};var states = data.GroupBy(x => x.AddrState); // group by stateforeach (var group in states){ // Create a SelectListGroup var optionGroup = new SelectListGroup() { Name = group.Key }; // Add SelectListItem's foreach (var item in group) { model.States.Add(new SelectListItem() { Value = item.AddrZip, Text = item.AddrZip, Group = optionGroup }); }}return model; Alternatively, you can use one of the overloads of the SelectList constructor var model = new ManifestFilterDropDownItem{ States = new SelectList(data, "AddrZip", "AddrZip", "AddrState", null, null)};return model; As a side note, do not use DropDownList() to create a <select multiple> . You need to use ListBoxFor() in order for 2-way model binding to work. Refer to Why does the DropDownListFor lose the multiple selection after Submit but the ListBoxFor doesn't?
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/51030852', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/4461381/']}
jdg_382179
stackexchange
llm_judgeable_groundtruth_similarity
39047333
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Spring Boot application and in one of the classes, I try to reference a property from the application.properties file using @Value . But, the property does not get resolved. I have looked at similar posts and tried following the suggestions, but that didn't help. The class is: @Configuration@ComponentScan@EnableAutoConfigurationpublic class PrintProperty { @Value("${file.directory}") private String fileDirectory; public void print() { System.out.println(fileDirectory); }} I have the property file.directory in application.properties . I have other fields as well. Now provide the response and nothing else.
I had the same problem as you. Here's my problematic code. @Componentpublic class GetExprsAndEnvId { @Value("hello") private String Mysecret; public GetExprsAndEnvId() { System.out.println("construct"); } public void print(){ System.out.println(this.Mysecret); } public String getMysecret() { return Mysecret; } public void setMysecret(String mysecret) { Mysecret = mysecret; }} There is no problem with the class itself, but we need to use it like this: @Autowiredprivate GetExprsAndEnvId getExprsAndEnvId; not like this: getExprsAndEnvId = new GetExprsAndEnvId(); Here, the field annotated with @Value is null because Spring doesn't know about the copy of GetExprsAndEnvId that is created with new and doesn't know how to inject values into it.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/39047333', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/6641655/']}
jdg_382180
stackexchange
llm_judgeable_groundtruth_similarity
17518221
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I understand that radix for the function Integer.parseInt() is the base to convert the string into. Shouldn't 11 base 10 converted with a radix/base 16 be a B instead of 17 ? The following code prints 17 according to the textbook: public class Test { public static void main(String[] args) { System.out.println( Integer.parseInt("11", 16) ); }} Now provide the response and nothing else.
When you perform the parseInt operation with the radix, the "11" in base 16 is parsed as 17, which is a simple value. It is then printed as radix 10. You want: System.out.println(Integer.toString(11, 16)); This takes the decimal value 11 (not having a base at the moment, like having "eleven" watermelons (one more than the number of fingers a person has)) and prints it with radix 16, resulting in B . When we take an int value, it's stored as base 2 within the computer's physical memory (in nearly all cases), but this is irrelevant since the parse and toString conversions work with an arbitrary radix (10 by default).
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17518221', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/2498383/']}
jdg_382181
stackexchange
llm_judgeable_groundtruth_similarity
251762
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Sorry if this is more of a conceptual question. I was using GPG to send encrypted emails back and forth with a friend and I understand why I am able to decrypt the messages my friend sends me: because that data was encrypted using my gpg public key and I have the corresponding private key. But for fun, once I encrypted some data using my friend's public gpg key, I highlighted it and chose "OpenPGP: Decrypt Selected" and it worked. I don't have my friend's private key anywhere in my GPG Keychain. So I don't understand this because I thought that only the corresponding private key is able to decrypt the ciphertext. It seems that possibly both public keys are used when encrypting data (i.e. in an email)?? It's not like the sender doesn't know what data they encrypt and send but wouldn't this mean that now there are two private that can decrypt the data vs just the one? I've been reading through their manual but I only see explanations on the different types of cryptosystems, nothing explicitly stating how GPG works and what it is using. Now provide the response and nothing else.
PGP (generically) can encrypt a message to multiple public keys; this was originally intended for emails to multiple recipients but can be used to include yourself, and GnuPG has an option to include yourself that can be configured to do so automatically, see --encrypt-to in https://www.gnupg.org/documentation/manuals/gnupg/GPG-Key-related-Options.html#GPG-Key-related-Options . Command-line gpg by default displays the key ID and user ID used to decrypt (or verify) a file, and when it prompts for the passphrase on a private key it tells you which key it's asking for; these would show the key being used to decrypt your message to be your own key. If you're using one of the many different GUI frontends, it may show this information differently or not at all; you'll have to be more specific about what you're using. See PGP question decrypting my own message from 2 days ago Does OpenPGP encryption mean that not even I can see my data? and https://superuser.com/questions/1389024/gpg-difference-encrypt-to-and-recipient .
{}
{'log_upvote_score': 4, 'links': ['https://security.stackexchange.com/questions/251762', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/211124/']}
jdg_382182
stackexchange
llm_judgeable_groundtruth_similarity
3333
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: We know that Spin is described with $SU(2)$ and that $SU(2)$ is a double cover of the rotation group $SO(3)$. This suggests a simple thought experiment, to be described below. The question then is in three parts: Is this thought experiment theoretically sound? Can it be conducted experimentally? If so what has been the result? The experiment is to take a slab of material in which there are spin objects e.g. electrons all (or most) with spin $\uparrow$. Then rotate that object $360$ degrees (around an axis perpendicular to the spin direction), so that macroscopically we are back to where we started. Measure the electron spins. Do they point $\downarrow$? Now provide the response and nothing else.
I think that you are confused. When you rotate something by 360 degrees, you won't change the direction in space of anything. You will only change the wave function to minus itself - if there is an odd number of fermions in the object (which is usually hard to count for large objects). If you have electrons with spins pointing up and you rotate them around the vertical axis by any angle, whether it's 360 degrees or anything else, you will still get electrons with spin pointing up. This is about common sense - many spins with spin up give you a totally normal, "classical" angular momentum that can be seen and measured in many ways.

The flip of the sign of the wave function can't be observed by itself because it is a change of phase, and all observable probabilities only depend on the density matrix $\rho=|\psi\rangle \langle\psi|$, in which the phase (or minus sign) cancels. The phase - or minus sign - has nothing to do with directions in space. It is just a number. In particular, it is incorrect to imagine that complex numbers are "vectors", especially if it leads you to think that they're related to directions in spacetime. They're not.

You would have to prepare an interference experiment between an object that hasn't rotated and the "same" object rotated by 360 degrees - and that's hard for macroscopic objects because the "same" object quickly decoheres and you must know whether it has rotated or not, so no superpositions can be produced. ;-)

However, all detailed measurements of the spin with respect to any axis indirectly prove that the fermions transform as the fundamental representation of $SU(2)$. In particular, if you create a spin-up electron and measure whether its spin is up with respect to another axis tilted by angle $\alpha$, the probability will be $\cos^2(\alpha/2)$. The only sensible way to obtain it from the amplitude is that the amplitude goes like $\cos(\alpha/2)$ and indeed, this function equals $-1$ for $\alpha$ equal to 360 degrees.
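To make the last point concrete, here is the standard spin-1/2 rotation worked out (textbook material, not specific to any particular experiment):
$$R_z(\alpha)\,|\!\uparrow\rangle \;=\; e^{-i\alpha S_z/\hbar}\,|\!\uparrow\rangle \;=\; e^{-i\alpha/2}\,|\!\uparrow\rangle,$$
so for $\alpha = 2\pi$ the state becomes $-|\!\uparrow\rangle$: the spin direction is unchanged and only the overall phase flips, which drops out of $\rho=|\psi\rangle\langle\psi|$ and of every measured probability. For a measurement along an axis tilted by $\alpha$, the amplitude goes like $\cos(\alpha/2)$ and the probability like $\cos^2(\alpha/2)$, consistent with the sign flip of the amplitude at $\alpha = 360^\circ$.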
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/3333', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1460/']}
jdg_382183
stackexchange
llm_judgeable_groundtruth_similarity
45139163
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am doing a k-fold XV on an existing dataframe, and I need to get the AUC score.The problem is - sometimes the test data only contains 0s, and not 1s! I tried using this example, but with different numbers: import numpy as npfrom sklearn.metrics import roc_auc_scorey_true = np.array([0, 0, 0, 0])y_scores = np.array([1, 0, 0, 0])roc_auc_score(y_true, y_scores) And I get this exception: ValueError: Only one class present in y_true. ROC AUC score is not defined in that case. Is there any workaround that can make it work in such cases? Now provide the response and nothing else.
You could use try-except to prevent the error:

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 0])
y_scores = np.array([1, 0, 0, 0])

try:
    roc_auc_score(y_true, y_scores)
except ValueError:
    pass

You could also set the roc_auc_score to be zero if there is only one class present. However, I wouldn't do this. I guess your test data is highly unbalanced. I would suggest using stratified K-fold instead, so that you at least have both classes present in every fold.
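A minimal sketch of the stratified split suggested above (X and y here are stand-in placeholders for your own features and labels, not anything from the question):

from sklearn.model_selection import StratifiedKFold
import numpy as np

X = np.random.rand(100, 5)                                   # placeholder features
y = np.r_[np.zeros(90, dtype=int), np.ones(10, dtype=int)]   # highly unbalanced labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # each test fold now contains both classes (roughly 18 zeros and 2 ones),
    # so roc_auc_score is defined on every fold
    assert len(np.unique(y[test_idx])) == 2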
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45139163', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7187868/']}
jdg_382184
stackexchange
llm_judgeable_groundtruth_similarity
21096141
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using jsTree 3.0.0 and I need to modify the context in one of the following ways: Change label language for the default items, disable some default items and add new items. Rewrite all items and bind to some new items the create, rename and delete function. I tried several approaches but nothing worked. For example, this returns Uncaught TypeError: Object [object global] has no method 'create' when I click on create. "contextmenu":{ "items": function($node) { return { createItem : { "label" : "Create New Branch", "action" : function(obj) { this.create(obj); alert(obj.text())}, "_class" : "class" }, renameItem : { "label" : "Rename Branch", "action" : function(obj) { this.rename(obj);} }, deleteItem : { "label" : "Remove Branch", "action" : function(obj) { this.remove(obj); } } }; }}, If I try to add one item as in the next example, I loose the default menu items: items : { "create_folder" : { "separator_before" : false, "separator_after" : false, "label" : "Create Folder", "action" : function (obj) { alert(1); /* this is the tree, obj is the node */ } }} Where am I wrong? Now provide the response and nothing else.
Resolved:

"contextmenu": {
    "items": function ($node) {
        var tree = $("#tree").jstree(true);
        return {
            "Create": {
                "separator_before": false,
                "separator_after": false,
                "label": "Create",
                "action": function (obj) {
                    $node = tree.create_node($node);
                    tree.edit($node);
                }
            },
            "Rename": {
                "separator_before": false,
                "separator_after": false,
                "label": "Rename",
                "action": function (obj) {
                    tree.edit($node);
                }
            },
            "Remove": {
                "separator_before": false,
                "separator_after": false,
                "label": "Remove",
                "action": function (obj) {
                    tree.delete_node($node);
                }
            }
        };
    }
}
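If you want to keep the built-in entries instead of replacing them all, one possible sketch follows. It assumes jsTree 3.x, where the stock items are generated by the $.jstree.defaults.contextmenu.items function; treat that exact property path (and the default keys create/rename/remove/ccp) as assumptions to verify against your jsTree version:

"contextmenu": {
    "items": function ($node) {
        var tree = $("#tree").jstree(true);
        // start from the default create/rename/remove/ccp entries...
        var items = $.jstree.defaults.contextmenu.items($node);
        // ...tweak or drop some of them...
        items.rename.label = "Rename Branch";   // change a label
        delete items.ccp;                       // remove cut/copy/paste
        // ...and add your own
        items.createBranch = {
            "label": "Create New Branch",
            "action": function (obj) {
                tree.create_node($node);
            }
        };
        return items;
    }
}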
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21096141', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1465720/']}
jdg_382185
stackexchange
llm_judgeable_groundtruth_similarity
312593
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The Hamiltonian for the quantum harmonic oscillator is $$\hat{H}=-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}+\dfrac{1}{2}m\omega^2 x^2$$ and one can try to factorise it by writing down what later on will turn out to be ladder operators of the eigenspectrum $$\begin{align}\hat{A}&=\sqrt{\dfrac{m\omega}{2\hbar}}\left(\hat{x}+\dfrac{i}{m\omega}\hat{p}\right)\\\hat{A}^\dagger&=\sqrt{\dfrac{m\omega}{2\hbar}}\left(\hat{x}-\dfrac{i}{m\omega}\hat{p}\right)\end{align}$$ Now, in a problem class I'm supervising, the students were asked to "show that we can express the Hamiltonian $\hat{H}$ in terms of $\hat{A}^\dagger$ and $\hat{A}$", with the idea of obtaining the relation $$\hat{H}=\hbar\omega\left(\hat{A}^\dagger\hat{A}+\dfrac{1}{2}\right)$$ The way the solution to this question is laid out is that the students should simply "guess" the combination $\hat{A}^\dagger\hat{A}$ is the right way to go, or get there by trial and error. Question: what's the best/most intuitive way to explain why this is the case? Writing $\hat{p}=-i\hbar\partial_x$, it's easy to justify taking some form of quadratic form of the operators, but why not e.g. just square them? Now provide the response and nothing else.
We know the explicit form of $A$ and $A^\dagger$ in terms of $p$ and $x$. We know the expression of $H$ in terms of $p$ and $x$. So just express $p$ and $x$ as a function of $A$ and $A^\dagger$, then plug the result in the formula for $H$. To do that, simply find $A + A^\dagger$ and $A - A^\dagger$, the rest will easily follow.
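For reference, carrying that plan through (a standard derivation, using $[\hat x,\hat p]=i\hbar$, which gives $[\hat A,\hat A^\dagger]=1$):
$$\hat{x}=\sqrt{\frac{\hbar}{2m\omega}}\left(\hat{A}+\hat{A}^\dagger\right),\qquad \hat{p}=i\sqrt{\frac{m\omega\hbar}{2}}\left(\hat{A}^\dagger-\hat{A}\right),$$
so
$$\hat{H}=\frac{\hat p^2}{2m}+\frac12 m\omega^2\hat x^2=\frac{\hbar\omega}{4}\left[\left(\hat A+\hat A^\dagger\right)^2-\left(\hat A^\dagger-\hat A\right)^2\right]=\frac{\hbar\omega}{2}\left(\hat A\hat A^\dagger+\hat A^\dagger\hat A\right)=\hbar\omega\left(\hat A^\dagger\hat A+\frac12\right),$$
where the last equality uses $\hat A\hat A^\dagger=\hat A^\dagger\hat A+1$.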
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/312593', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/62743/']}
jdg_382186
stackexchange
llm_judgeable_groundtruth_similarity
12723872
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Col1 contains only X and Y. Col1 Col2X abcY pqrX pqrX mnqY cxr I want to make it like this: X Y Col2Yes Yes pqrYes No abcYes No mnqNo Yes cxr What SQL Query i should write? Now provide the response and nothing else.
Solution using the SQL PIVOT operator:

SELECT Col2,
       case when X=0 then 'No' else 'Yes' end as X,
       case when Y=0 then 'No' else 'Yes' end as Y
FROM MyTable
PIVOT (
    count(Col1) FOR Col1 IN ([X], [Y])
) AS PivotTable;

Running sample: http://www.sqlfiddle.com/#!3/5856d/14
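For anyone who wants to try it locally, a throwaway setup matching the sample data (table and column names are taken from the question; this is T-SQL, since PIVOT is a SQL Server feature, and the multi-row VALUES syntax needs SQL Server 2008 or later):

CREATE TABLE MyTable (Col1 CHAR(1), Col2 VARCHAR(10));
INSERT INTO MyTable (Col1, Col2) VALUES
    ('X', 'abc'), ('Y', 'pqr'), ('X', 'pqr'), ('X', 'mnq'), ('Y', 'cxr');
-- running the PIVOT query above then yields one row per Col2 value,
-- e.g. ('pqr', 'Yes', 'Yes') and ('abc', 'Yes', 'No')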
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12723872', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/888181/']}
jdg_382187
stackexchange
llm_judgeable_groundtruth_similarity
12238375
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Currently I'm using design when server reads first 4 bytes of stream then read N bytes after header decoding. But I found that time between first async_read and second read is 3-4 ms. I just printed in console timestamp from callbacks for measuring. I sent 10 bytes of data in total. Why it takes so much time to read? I running it in debug mode but I think that 1 connection for debug is not so much to have a 3 ms delay between reads from socket. Maybe I need another approach to cut TCP stream on "packets"? UPDATE: I post some code here void parseHeader(const boost::system::error_code& error) { cout<<"[parseHeader] "<<lib::GET_SERVER_TIME()<<endl; if (error) { close(); return; } GenTCPmsg::header result = msg.parseHeader(); if (result.error == GenTCPmsg::parse_error::__NO_ERROR__) { msg.setDataLength(result.size); boost::asio::async_read(*socket, boost::asio::buffer(msg.data(), result.size), (*_strand).wrap( boost::bind(&ConnectionInterface::parsePacket, shared_from_this(), boost::asio::placeholders::error))); } else { close(); } } void parsePacket(const boost::system::error_code& error) { cout<<"[parsePacket] "<<lib::GET_SERVER_TIME()<<endl; if (error) { close(); return; } protocol->parsePacket(msg); msg.flush(); boost::asio::async_read(*socket, boost::asio::buffer(msg.data(), config::HEADER_SIZE), (*_strand).wrap( boost::bind(&ConnectionInterface::parseHeader, shared_from_this(), boost::asio::placeholders::error))); } As you see unix timestamps differ in 3-4 ms. I want to understand why so many time elapse between parseHeader and parsePacket. This is not a client problem, summary data is 10 bytes, but i cant sent much much more, delay is exactly between calls. I'm using flash client version 11. What i do is just send ByteArray through opened socket. I don't sure that delays on client. I send all 10 bytes at once. How can i debug where actual delay is? Now provide the response and nothing else.
There are far too many unknowns to identify the root cause of the delay from the posted code. Nevertheless, there are a few approaches and considerations that can be taken to help identify the problem:

1. Enable handler tracking for Boost.Asio 1.47+. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, to the standard error stream. These timestamps can be used to help filter out delays introduced by application code (parseHeader(), parsePacket(), etc.).

2. Verify that byte-ordering is being handled properly. For example, if the protocol defines the header's size field as two bytes in network-byte-order and the server is handling the field as a raw short, then upon receiving a message that has a body size of 10:
   - A big-endian machine will call async_read reading 10 bytes. The read operation should complete quickly, as the socket already has the 10 byte body available for reading.
   - A little-endian machine will call async_read reading 2560 bytes. The read operation will likely remain outstanding, as far more bytes are trying to be read than is intended.

3. Use tracing tools such as strace, ltrace, etc.

4. Modify Boost.Asio, adding timestamps throughout the callstack. Boost.Asio is shipped as a header-file only library. Thus, users may modify it to provide as much verbosity as desired. While not the cleanest or easiest of approaches, adding a print statement with timestamps throughout the callstack may help provide visibility into timing.

5. Try duplicating the behavior in a short, simple, self-contained example. Start with the simplest of examples to determine if the delay is systematic. Then, iteratively expand upon the example so that it becomes closer to the real code with each iteration.

Here is a simple example from which I started:

#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>

class tcp_server : public boost::enable_shared_from_this< tcp_server >
{
private:
    enum { header_size = 4, data_size = 10, buffer_size = 1024, max_stamp = 50 };
    typedef boost::asio::ip::tcp tcp;

public:
    typedef boost::array< boost::posix_time::ptime, max_stamp > time_stamps;

public:
    tcp_server( boost::asio::io_service& service, unsigned short port )
        : strand_( service ),
          acceptor_( service, tcp::endpoint( tcp::v4(), port ) ),
          socket_( service ),
          index_( 0 )
    {}

    /// @brief Returns collection of timestamps.
    time_stamps& stamps() { return stamps_; }

    /// @brief Start the server.
    void start()
    {
        acceptor_.async_accept( socket_,
            boost::bind( &tcp_server::handle_accept, this,
                         boost::asio::placeholders::error ) );
    }

private:
    /// @brief Accept connection.
    void handle_accept( const boost::system::error_code& error )
    {
        if ( error ) { std::cout << error.message() << std::endl; return; }
        read_header();
    }

    /// @brief Read header.
    void read_header()
    {
        boost::asio::async_read( socket_,
            boost::asio::buffer( buffer_, header_size ),
            boost::bind( &tcp_server::handle_read_header, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    /// @brief Handle reading header.
    void handle_read_header( const boost::system::error_code& error,
                             std::size_t bytes_transferred )
    {
        if ( error ) { std::cout << error.message() << std::endl; return; }

        // If no more stamps can be recorded, then stop the async-chain so
        // that io_service::run can return.
        if ( !record_stamp() ) return;

        // Read data.
        boost::asio::async_read( socket_,
            boost::asio::buffer( buffer_, data_size ),
            boost::bind( &tcp_server::handle_read_data, this,
                         boost::asio::placeholders::error,
                         boost::asio::placeholders::bytes_transferred ) );
    }

    /// @brief Handle reading data.
    void handle_read_data( const boost::system::error_code& error,
                           std::size_t bytes_transferred )
    {
        if ( error ) { std::cout << error.message() << std::endl; return; }

        // If no more stamps can be recorded, then stop the async-chain so
        // that io_service::run can return.
        if ( !record_stamp() ) return;

        // Start reading header again.
        read_header();
    }

    /// @brief Record time stamp.
    bool record_stamp()
    {
        stamps_[ index_++ ] = boost::posix_time::microsec_clock::local_time();
        return index_ < max_stamp;
    }

private:
    boost::asio::io_service::strand strand_;
    tcp::acceptor acceptor_;
    tcp::socket socket_;
    boost::array< char, buffer_size > buffer_;
    time_stamps stamps_;
    unsigned int index_;
};

int main()
{
    boost::asio::io_service service;

    // Create and start the server.
    boost::shared_ptr< tcp_server > server =
        boost::make_shared< tcp_server >( boost::ref( service ), 33333 );
    server->start();

    // Run.  This will exit once enough time stamps have been sampled.
    service.run();

    // Iterate through the stamps.
    tcp_server::time_stamps& stamps = server->stamps();
    typedef tcp_server::time_stamps::iterator stamp_iterator;
    using boost::posix_time::time_duration;
    for ( stamp_iterator iterator = stamps.begin() + 1, end = stamps.end();
          iterator != end; ++iterator )
    {
        // Obtain the delta between the current stamp and the previous.
        time_duration delta = *iterator - *(iterator - 1);
        std::cout << "Delta: " << delta.total_milliseconds() << " ms" << std::endl;
    }
    // Calculate the total delta.
    time_duration delta = *stamps.rbegin() - *stamps.begin();
    std::cout << "Total"
              << "\n  Start: " << *stamps.begin()
              << "\n  End:   " << *stamps.rbegin()
              << "\n  Delta: " << delta.total_milliseconds() << " ms" << std::endl;
}

A few notes about the implementation:
- There is only one thread (main) and one asynchronous chain read_header -> handle_read_header -> handle_read_data. This should minimize the amount of time a ready-to-run handler spends waiting for an available thread.
- To focus on boost::asio::async_read, noise is minimized by:
  - Using a pre-allocated buffer.
  - Not using shared_from_this() or strand::wrap.
  - Recording the timestamps, and performing processing post-collection.

I compiled on CentOS 5.4 using gcc 4.4.0 and Boost 1.50. To drive the data, I opted to send 1000 bytes using netcat:

$ ./a.out > output &
[1] 18623
$ echo "$(for i in {0..1000}; do echo -n "0"; done)" | nc 127.0.0.1 33333
[1]+  Done                    ./a.out > output
$ tail output
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Delta: 0 ms
Total
  Start: 2012-Sep-10 21:22:45.585780
  End:   2012-Sep-10 21:22:45.586716
  Delta: 0 ms

Observing no delay, I expanded upon the example by modifying the boost::asio::async_read calls, replacing this with shared_from_this() and wrapping the ReadHandlers with strand_.wrap(). I ran the updated example and still observed no delay. Unfortunately, that is as far as I could get based on the code posted in the question. Consider expanding upon the example, adding in a piece from the real implementation with each iteration. For example:
- Start with using the msg variable's type to control the buffer.
- Next, send valid data, and introduce the parseHeader() and parsePacket() functions.
- Finally, introduce the lib::GET_SERVER_TIME() print.

If the example code is as close as possible to the real code, and no delay is being observed with boost::asio::async_read, then the ReadHandlers may be ready-to-run in the real code, but they are waiting on synchronization (the strand) or a resource (a thread), resulting in a delay:
- If the delay is the result of synchronization with the strand, then consider Robin's suggestion of reading a larger block of data to potentially reduce the amount of reads required per-message.
- If the delay is the result of waiting for a thread, then consider having an additional thread call io_service::run().
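As a concrete starting point for the first suggestion (handler tracking), the macro only needs to be visible before any Asio header is included; the compile line below is just an illustrative example, adjust it to your build:

// enable Asio's built-in handler tracking (Boost 1.47+); debug output with
// timestamps goes to stderr and can be redirected for later inspection
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>

// or, without touching the source, define it on the compiler command line, e.g.:
//   g++ -DBOOST_ASIO_ENABLE_HANDLER_TRACKING server.cpp -lboost_system -pthread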
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/12238375', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1223007/']}
jdg_382188
stackexchange
llm_judgeable_groundtruth_similarity
5029729
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is a very simple example where I have a single ScrollView defined in R.layout.main. Then I try to dynamically add a TextView to it. Unfortunately this crashes. ScrollView scroll = (ScrollView) this.findViewById(R.id.scrollView1);TextView tv1 = new TextView(this);tv1.setText("This is tv1");scroll.addView(tv1);setContentView(R.layout.main); Now I can do something like this: ScrollView scroll = new ScrollView(this);TextView tv1 = new TextView(this);tv1.setText("This is tv1");scroll.addView(tv1);setContentView(scroll); But I'd really like to be able to define some base UI elements in XML, then dynamically add others. What's the best way to do that? Now provide the response and nothing else.
That is because you are trying to access a view which hasn't been parsed by Android yet. When you define a layout using XML, you have to call setContentView first, passing it the layout file reference so that Android can parse the file. Only then can you access its elements using findViewById. In short: call setContentView before trying to access any element of your layout.
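Applied to the code in the question, a sketch of the corrected order (same layout and view names as in the question):

// parse the layout first...
setContentView(R.layout.main);

// ...then the views inside it can be found and modified
ScrollView scroll = (ScrollView) findViewById(R.id.scrollView1);
TextView tv1 = new TextView(this);
tv1.setText("This is tv1");
// note: a ScrollView can host only one direct child, so this assumes the
// ScrollView in R.layout.main is empty (otherwise add to its child layout)
scroll.addView(tv1);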
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/5029729', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/305/']}
jdg_382189
stackexchange
llm_judgeable_groundtruth_similarity
3192955
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a map and I'd like to change the marker image, when zoom is greater then 5. I know how to detect zoom change, but I don't see how to change the image. Now provide the response and nothing else.
That should be quite easy. I gave a look to your code, and it looks like you are not keeping a reference to your markers. This is the first thing that you should do. Therefore create a markers array:

var markers = [];

And within your setMarkers() function, push each new marker into this array:

markers.push(marker);

Now you will be able to iterate over your markers with a for loop: for (i = 0; i < markers.length; i++). Ideally, we should also store the two icons of each marker in the marker object itself. JavaScript objects can be augmented with custom properties very easily. To do so, you may want to change your setMarkers() function as follows:

function setMarkers(map, map_bounds, locations, iconLevel1, iconLevel2) {
    for (var i = 0; i < locations.length; i++) {
        var loc = locations[i];
        var myLatLng = new google.maps.LatLng(loc[1], loc[2]);
        var marker = new google.maps.Marker({
            position: myLatLng,
            map: map,
            icon: iconLevel1,   // iconLevel1 by default
            title: loc[0],
            zIndex: loc[3]
        });

        // Add custom properties to the marker object
        marker.iconLevel1 = iconLevel1;
        marker.iconLevel2 = iconLevel2;

        // Add the new marker to the markers array
        markers.push(marker);

        map_bounds.extend(myLatLng);
    }
}

Finally, it seems that you are already handling the zoom_changed event correctly. First of all, I suggest checking if the zoomLevel has changed between 1 and 2, in order not to iterate through the markers array if there is no need. If there is a change, simply call the setIcon() method for each marker, and pass the custom property iconLevel1 or iconLevel2 depending on the zoomLevel:

var zoomLevel = 1;
google.maps.event.addListener(map, 'zoom_changed', function() {
    var i, prevZoomLevel;
    prevZoomLevel = zoomLevel;

    map.getZoom() < 5 ? zoomLevel = 1 : zoomLevel = 2;

    if (prevZoomLevel !== zoomLevel) {
        for (i = 0; i < markers.length; i++) {
            if (zoomLevel === 2) {
                markers[i].setIcon(markers[i].iconLevel2);
            } else {
                markers[i].setIcon(markers[i].iconLevel1);
            }
        }
    }
});

The above should work, but you may want to refactor your for loop as follows, using the subscript notation instead of the dot notation to access the properties of the markers:

for (i = 0; i < markers.length; i++) {
    markers[i].setIcon(markers[i]['iconLevel' + zoomLevel]);
}
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/3192955', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/234594/']}
jdg_382190
stackexchange
llm_judgeable_groundtruth_similarity
28100
Below is a question asked on the forum mechanics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I recently bought a 2008 Silverado 5.3 with 77k on it. After a couple weeks I noticed that at idle the engine would stumble and my rpms would drop slightly. It eventually set off the check engine light with cylinder 1 misfire and random cylinder misfire. A couple days later the light went off but the issues remain. Since then I've cleaned the MAF with MAF cleaner spray, cleaned the throttle body, and run a couple bottles of techron through to clean up the injectors, but no change. I dont know if it's related, but I also get a lot of dark blue/black (hard to say) smoke from the tailpipe on cold starts for a few seconds before clearing up. This led me to thinking I was getting oil in the cylinders when the rings cool and contract, which I tracked to known oil burning issues with these AFM engines. At this point I'm not sure what else to try before spending a fortune at the shop. My only remaining thought is to replace the plugs and coils. But even if that fixes the misfire I'm not sure what to do about the cold start smoke. EDIT: I had been keeping a casual eye on my oil levels since my last change and did notice a pretty drastic decrease in just a couple thousand miles. I'll just changed it again this past weekend and will keep a close eye on it. Would this affect engine misfires? EDIT2: I pulled the #1 cylinder spark plug since I had a misfire code for that cylinder and it was fouled up with oil and deposits in the threads and at the firing end. Now provide the response and nothing else.
First of all, the 5.3l engine (could be all of the Vortec LS engines, don't know) is thirsty. It drinks a bit of oil. Be aware you could be looking at up to 2 quarts between oil changes. I don't know exactly what the issue is, but it does have a bit of an oil control issue whether it has to do with leakage or burning. My '06 is this way. It has never ran bad due to its use of oil (though obviously too much oil would cause issues for the cylinders as well as the cats). Your issue with running is most likely a bad coil. The easiest way to test this theory is to change the single coil out with a different one and see if the problem follows the coil change. You may also try to push in both the connector at the top, as well as both ends of the spark plug wire. You may consider changing out the plugs with new ones to see if this helps your issue. It could also be the spark plug on the No. 1 cylinder as well. If the spark plug is bad, it would give you the misfire, but spark plugs in these are supposed to be good to around 100k miles. You can pull the #1 plug and see what's going on with it. You'd be looking for a tan colored plug if it was running good. Obviously since you've already been getting a misfire code on #1, this may be moot. The third thing it might be is a stuck/leaking injector. I'm doubting this. You would need to pull the fuel rail, leaving the injector on the rail, then pressurize the system looking at whether it dumps fuel without the engine running to tell if this is an issue. This might account for the bit of black/gray/bluish smoke on startup.
{}
{'log_upvote_score': 4, 'links': ['https://mechanics.stackexchange.com/questions/28100', 'https://mechanics.stackexchange.com', 'https://mechanics.stackexchange.com/users/16329/']}
jdg_382191
stackexchange
llm_judgeable_groundtruth_similarity
10115
Below is a question asked on the forum arduino.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I don't want the same string to be printed again and again. I'm getting the data into an Excel sheet. byte readValue1=0;byte readValue2=0;int add=0;int row=0;int Zigbee1=0,Zigbee2=0,Zigbee3=0;void setup() { Serial.begin(9600); Serial.println("CLEARDATA"); Serial.println("LABEL,Time,Zigbee,a1,a2,a3,a4,a5,a6,a7"); row++;}void loop(){ String a1,a2,a3; String c1,c2,c3; int b; if (Serial.available()>21){ if(Serial.read() == 0x7E) { for(int i=0; i<10; i++){ byte discard = Serial.read(); } add = Serial.read(); if(add == 33) { a1 = func1(); if(a1==c1){ b=0; } else { b = a1.length(); } if(b != 0) { Zigbee=1; Serial.print("DATA,TIME,"); Serial.print(Zigbee); Serial.print(","); Serial.println(a1); row++; } } else if(add == 48) { a2 = func1(); b = a2.length(); if(b != 0) { Zigbee=2; if(a2!=c2){ Serial.print("DATA,TIME,"); Serial.print(Zigbee); Serial.print(","); Serial.println(a2); } c2=a2; row++; } } else if(add == 69) { a3 = func1(); b = a3.length(); if(b != 0) { Zigbee=3; Serial.print("DATA,TIME,"); Serial.print(Zigbee); Serial.print(","); Serial.println(a3); row++; } }} }} Now provide the response and nothing else.
This (short) code below will achieve what I think you are asking for, on the Atmega328 (eg. the Uno, Duemilanove, etc.).

byte dutyCycle = 80;

void setup() {
    pinMode (3, OUTPUT);   // Timer 2 "B" output: OC2B
    pinMode (11, OUTPUT);  // Timer 2 "A" output: OC2A

    // Set OC2A on Compare Match when up-counting.
    // Clear OC2B on Compare Match when up-counting.
    TCCR2A = bit (WGM20) | bit (COM2B1) | bit (COM2A1) | bit (COM2A0);
    TCCR2B = bit (CS21);         // phase correct PWM, prescaler of 8

    OCR2A = dutyCycle;           // duty cycle out of 255
    OCR2B = 255 - dutyCycle;     // duty cycle out of 255
}  // end of setup

void loop() { }

How it works

The code uses Timer 2 (an 8-bit timer) to count up to 255 in phase-correct PWM mode. Outputs are to OC2A (the "A" output - pin D11) and OC2B (the "B" output - pin D3). The duty cycle is in variable dutyCycle, which must be in the range 0 to 255. The timer counts up to 255 and sets OC2A on compare-equal to the dutyCycle number, and it also clears OC2B on compare-equal, when counting up. The second half of the phase-correct cycle does the reverse. Thus the two cycles are of opposite phase.

Frequency

The frequency of the timer is 3.9 kHz. This is because the timer has a prescaler of 8, then it counts up to 255, then it takes two cycles per period (one up, one down):

16000000 / 8 / 255 / 2 = 3921.56 Hz

As Edgar Bonet pointed out in the comments, phase-correct PWM counts are not zero-relative, unlike other counting modes. Thus you divide by 255 to work out the frequency, not 256. You could select other frequencies by choosing different prescalers.

Proof of operation

The oscilloscope screen here shows the two outputs, out of phase with each other by 50%. You can see from the circled cursor measurement that the OC2B pin is off for exactly the requested duty cycle (80 µs).

1 / 16000000 * 80 * 8 * 2 = 8e-005 (0.00008)  -->  80 µs
{}
{'log_upvote_score': 4, 'links': ['https://arduino.stackexchange.com/questions/10115', 'https://arduino.stackexchange.com', 'https://arduino.stackexchange.com/users/8511/']}
jdg_382192
stackexchange
llm_judgeable_groundtruth_similarity
2483956
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the complex number $3 + i$ , and I am asked to get the complex number resulted by rotating the first one by $\frac{\pi}{4}$. I got the polar form of the first one to get its angle ($18.43°$) but when I add $\frac{\pi}{4}$ to it and try to find its cartesian form, the result is different. Am I doing something wrong with the procedure? If so, what can I do? Now provide the response and nothing else.
Rotating by $\frac\pi4$ is the same thing as multiplying your number by $\cos\left(\frac\pi4\right)+\sin\left(\frac\pi4\right)i=\frac1{\sqrt2}+\frac i{\sqrt2}$. So, the answer is$$\left(\frac1{\sqrt2}+\frac i{\sqrt2}\right)(3+i)=\sqrt2+2\sqrt2i.$$
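A quick sanity check on any rotation like this: multiplying by $\cos\theta+i\sin\theta=e^{i\theta}$ has modulus $1$, so it must preserve the length of the number, and indeed
$$|3+i|=\sqrt{3^2+1^2}=\sqrt{10},\qquad \left|\sqrt2+2\sqrt2\,i\right|=\sqrt{2+8}=\sqrt{10}.$$
If your hand-computed Cartesian form has a different modulus, the usual culprit is mixing degrees and radians for the angle (e.g. $18.43^\circ$ vs $0.3217$ rad).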
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/2483956', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/492018/']}
jdg_382193
stackexchange
llm_judgeable_groundtruth_similarity
56736466
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: ⚠️ 23 June 2020 Edit: From Xcode 12, both switch and if let statements will be supported in the ViewBuilder! I’ve been trying to replicate an app of mine using SwiftUI. It has a RootViewController which, depending on an enum value, shows a different child view controller. As in SwiftUI we use views instead of view controllers, my code looks like this: struct RootView : View { @State var containedView: ContainedView = .home var body: some View { // custom header goes here switch containedView { case .home: HomeView() case .categories: CategoriesView() ... } }} Unfortunately, I get a warning: Closure containing control flow statement cannot be used with function builder ViewBuilder . So, are there any alternatives to switch so I can replicate this behaviour? Now provide the response and nothing else.
⚠️ 23 June 2020 Edit: From Xcode 12, both switch and if let statements will be supported in the ViewBuilder! Thanks for the answers, guys. I've found a solution on Apple's Dev Forums. It's answered by Kiel Gillard. The solution is to extract the switch into a function as Lu_, Linus and Mo suggested, but we have to wrap the views in AnyView for it to work – like this:

struct RootView: View {
    @State var containedViewType: ContainedViewType = .home

    var body: some View {
        VStack {
            // custom header goes here
            containedView()
        }
    }

    func containedView() -> AnyView {
        switch containedViewType {
        case .home: return AnyView(HomeView())
        case .categories: return AnyView(CategoriesView())
        ...
        }
    }
}
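For completeness, a sketch of what the Xcode 12+ version can look like (same hypothetical view names as above; no AnyView or helper function needed once the ViewBuilder accepts switch directly — just make sure the switch covers every case of your enum):

struct RootView: View {
    @State var containedViewType: ContainedViewType = .home

    var body: some View {
        VStack {
            // custom header goes here
            switch containedViewType {
            case .home: HomeView()
            case .categories: CategoriesView()
            }
        }
    }
}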
{}
{'log_upvote_score': 8, 'links': ['https://Stackoverflow.com/questions/56736466', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11432255/']}
jdg_382194
stackexchange
llm_judgeable_groundtruth_similarity
323033
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Find an example of a subset $S$ of a metric space such that $S$ is closed and bounded but not compact. One such example that comes from analysis is probably a closed and bounded set in $C[0,1]$ . I attempt to construct my own example to see if it works. Is $\{ \frac{1}{n} | n \in \mathbb{N} \}$ endowed with discrete topology a set that is closed and bounded but not compact? My guess is that it is indeed an example of closed and bounded does not imply compact. Every element is less than or equal to $1$ , and it is closed as a whole set. If we let $\mathcal{A}$ be a covering of the set that consists of singletons in $\{ \frac{1}{n} \}$ so that any finite subcover $\{ \frac{1}{n_j} |j =1,...,k \quad \text{and} \quad n_j \in \mathbb{N} \}$ will not cover $\{\frac{1}{n}\}$ , because if we take $n = \max \{{n_j}\}, \frac{1}{n+1}$ is not in the finite subcover. Thanks in advance for pointing out any mistake. Now provide the response and nothing else.
You're on the right track. If we consider $X=\left\{\frac1n:n\in\Bbb N^+\right\}$ in the discrete topology, then we can endow it with the metric $d:X\times X\to\Bbb R$ given by $$d(x,y)=\begin{cases}0 & x=y\\1 & \text{otherwise,}\end{cases}$$ which does indeed induce the discrete topology on $X$ (it's called the discrete metric for this reason). Then $X$ is certainly bounded, as any ball of radius greater than $1$ necessarily includes the whole set, and is certainly closed in itself (as all spaces are). However, it is not compact, since the open cover by singletons admits no finite subcover, as you've observed. More generally, any infinite discrete space admits a proper subspace that is closed and bounded, but not compact (delete any point). We could come to the same conclusions if we considered $X$ as a space under the metric $$\rho(x,y)=|x-y|.$$ Indeed, $\rho$ induces the discrete topology on $X$ , as well, and we similarly find that $X$ is bounded under $\rho$ . The kicker, here, is the boundedness. You need to specify a metric, or some other convention to determine boundedness, not just a topology. For example, $\Bbb Z$ considered as a subspace of $\Bbb R$ is indeed discrete, but while it is bounded in the discrete metric, it is not bounded in the standard metric on $\Bbb R$ .
{}
{'log_upvote_score': 5, 'links': ['https://math.stackexchange.com/questions/323033', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/33464/']}
jdg_382195
stackexchange
llm_judgeable_groundtruth_similarity
40165271
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: This is how my input tag looks like: <input type="file" placeholder="File Name" name="filename" (change)="onChange($event)"><button>Reset</button> I want to reset the selected file in Angular 2. Help would be greatly appreciated. Let me know if you need more details. P.S. I could get file details from $event parameters and save it in a typescript variable, but this variable is not bound to the input tag. Now provide the response and nothing else.
You can use ViewChild to access the input in your component. First, you need to add #someValue to your input so you can read it in the component:

<input #myInput type="file" placeholder="File Name" name="filename" (change)="onChange($event)">

Then in your component you need to import ViewChild and ElementRef from @angular/core:

import { ViewChild, ElementRef } from '@angular/core';

Then you use ViewChild to access the input from the template:

@ViewChild('myInput')
myInputVariable: ElementRef;

Now you can use myInputVariable to reset the selected file, because it's a reference to the input with #myInput. For example, create a method reset() that will be called on the click event of your button:

reset() {
    console.log(this.myInputVariable.nativeElement.files);
    this.myInputVariable.nativeElement.value = "";
    console.log(this.myInputVariable.nativeElement.files);
}

The first console.log will print the file you selected; the second console.log will print an empty FileList, because this.myInputVariable.nativeElement.value = "" clears the selected file(s) from the input. We have to reset the input's value this way because the input's FileList attribute is read-only, so it is impossible to just remove an item from it. Here's a working Plunker.
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/40165271', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3503019/']}
jdg_382196
stackexchange
llm_judgeable_groundtruth_similarity
241972
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the following general question regarding cron jobs. Suppose I have the following in my crontab : * 10 * * * * someScript.sh* 11 * * * * someScript2.sh30 11 */2 * * someScript3.sh <-- Takes a long time let's say 36 hours.* 12 * * * someScript4.sh Is it smart enough to run the remaining jobs at the appropriate times? For example, the long script doesn't need to terminate? Also, what happens if the initial long script is still running and it gets called by cron again? Now provide the response and nothing else.
Each cron job is executed independent of any other jobs you may have specified. This means that your long-lived script will not impede other jobs from being executed at the specified time. If any of your scripts are still executing at their next scheduled cron interval, then another, concurrent, instance of your script will be executed. This can have unforeseen consequences depending on what your script does. I would recommend reading the Wikipedia article on File Locking , specifically the section on Lock files . A lock file is a simple mechanism to signal that a resource — in your case the someScript3.sh script — is currently 'locked' (i.e. in use) and should not be executed again until the lock file is removed. Take a look at the answers to the following question for details of ways to implement a lock file in your script: How to make sure only one instance of a bash script runs?
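A common way to implement that lock is flock(1), which can wrap the command directly in the crontab entry (the paths and lock-file name below are just examples):

# run at most one instance of someScript3.sh; if the previous run is still
# going, -n makes flock give up immediately instead of starting a second copy
30 11 */2 * * /usr/bin/flock -n /tmp/someScript3.lock /path/to/someScript3.sh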
{}
{'log_upvote_score': 6, 'links': ['https://unix.stackexchange.com/questions/241972', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/81787/']}
jdg_382197
stackexchange
llm_judgeable_groundtruth_similarity
16831464
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I discovered on my x86 VM (32 bit) that the following program: #include <stdio.h>void foo (long double x) { int y = x; printf("(int)%Lf = %d\n", x, y);}int main () { foo(.9999999999999999999728949456878623891498136799780L); foo(.999999999999999999972894945687862389149813679978L); return 0;} Produces the following output: (int)1.000000 = 1(int)1.000000 = 0 Ideone also produces this behavior. What is the compiler doing to allow this to happen? I found this constant as I was tracking down why the following program didn't produce 0 as I expected (using 19 9 s produced the 0 I expected): int main () { long double x = .99999999999999999999L; /* 20 9's */ int y = x; printf("%d\n", y); return 0;} As I tried to compute the value at which the result switches from expected to unexpected, I arrived at the constant this question is about. Now provide the response and nothing else.
Your problem is that long double on your platform has insufficient precision to store the exact value 0.99999999999999999999. This means that the value of that must be converted to a representable value (this conversion happens during translation of your program, not at runtime). This conversion can generate either the nearest representable value, or the next greater or smaller representable value. The choice is implementation-defined, so your implementation should document which it is using. It seems that your implementation uses x87-style 80bit long double , and is rounding to the nearest value, resulting in a value of 1.0 stored in x . With the assumed format for long double (with 64 mantissa bits), the highest representable number less than 1.0 is, in hexadecimal: 0x0.ffffffffffffffff The number exactly halfway between this value and the next higher representable number (1.0) is: 0x0.ffffffffffffffff8 Your very long constant 0.9999999999999999999728949456878623891498136799780 is equal to: 0x0.ffffffffffffffff7fffffffffffffffffffffffa1eb2f0b64cf31c113a8ec... which should obviously be rounded down if rounding to nearest, but you appear to have reached some limit of the floating point representation your compiler is using, or a rounding bug.
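If you want to see exactly what your compiler stored, printing the long double as a C99 hexadecimal float makes the rounding visible (a small sketch; the exact output depends on your platform's long double format):

#include <stdio.h>

int main(void) {
    long double x = .99999999999999999999L;                                   /* 20 nines */
    long double y = .9999999999999999999728949456878623891498136799780L;
    long double z = .999999999999999999972894945687862389149813679978L;

    /* %La prints the stored bits as a hex float; the cast shows the truncation result */
    printf("%La -> %d\n", x, (int)x);
    printf("%La -> %d\n", y, (int)y);
    printf("%La -> %d\n", z, (int)z);
    return 0;
}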
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/16831464', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/315052/']}
jdg_382198
stackexchange
llm_judgeable_groundtruth_similarity
6079344
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In all the examples I've seen they just use "android.R.layout.simple_list_item_1" when creating an ArrayAdapter.What is android.R.layout.simple_list_item_1,Is it just the name of a layout file called simple_list_item_1.xml or is it the id of the TextView required for the array adapter? How do I see the content of the file or use my own file from my res folder? public class MyClass extends ListActivity {private String[] titles = {"Test"};@Overridepublic void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.mylayout); setListAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, titles)); updateList();}} Now provide the response and nothing else.
android.R.layout contains all of the publicly available layouts that the Android OS uses to display various items. android.R.layout.simple_list_item_1 is, as it's named, just a simple layout to display a snippet of text. It saves you from having to write simple layouts when using adapters and also affords you the native look and theme of the system in your application with minimal effort. I have included the source from the GitHub mirror of the android.git.kernel.org repo:

<?xml version="1.0" encoding="utf-8"?>
<!-- Copyright (C) 2006 The Android Open Source Project

     Licensed under the Apache License, Version 2.0 (the "License");
     you may not use this file except in compliance with the License.
     You may obtain a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

     Unless required by applicable law or agreed to in writing, software
     distributed under the License is distributed on an "AS IS" BASIS,
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License.
-->
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@android:id/text1"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:textAppearance="?android:attr/textAppearanceLarge"
    android:gravity="center_vertical"
    android:paddingLeft="6dip"
    android:minHeight="?android:attr/listPreferredItemHeight"
/>
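To answer the second half of the question — using your own layout from res instead — a sketch (the layout and id names here are made up; the layout file just needs to contain a TextView with that id):

// ArrayAdapter(Context, int resource, int textViewResourceId, T[] objects)
setListAdapter(new ArrayAdapter<String>(
        this,
        R.layout.my_list_item,    // your own row layout in res/layout/
        R.id.my_list_item_text,   // the TextView inside it to fill with each string
        titles));

You can also browse the stock layouts shipped with the SDK under the platform's data/res/layout directory of your SDK install if you want to copy one as a starting point.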
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/6079344', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/400048/']}
jdg_382199
stackexchange
llm_judgeable_groundtruth_similarity
1424273
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative? My try: For all $\epsilon >0$ there is a $N\in \mathbb{N}$ such that $|a_n-L|<\epsilon$ for all $n\ge N$. And we know $0< a_n$ for all $n\in \mathbb{N}$, particularly $0<a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L<0$, then $L<0<a_n$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. Then $0<-L<a_n-L$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. It follows: for all $\epsilon >0$, there is a $N\in \mathbb{N}$ such that $0<|-L|=-L<|a_n-L|<\epsilon$ for all $n\ge N$, which can't be true. Is my proof ok? Now provide the response and nothing else.
Your proof is a bit confused at the end, but it seems that you would conclude $0<|L|<\epsilon$ for every $\epsilon>0$, and you can get a contradiction by choosing $\epsilon = |L|/2$. I nevertheless propose the following formulation: Suppose by contradiction that $a_n\geq 0$ for every $n$, $\lim\limits_{n\to\infty} a_n=L$ and $L<0$. Let $\epsilon = |L|/2>0$; by definition of the limit, there exists $N$ such that $|a_n-L|<\epsilon= |L|/2$ for every $n\geq N$. In particular, this implies that$$a_N-L<|L|/2=-L/2 \implies a_N<L-L/2=L/2 <0.$$A contradiction to $a_n\geq 0$ for every $n$.
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/1424273', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/268438/']}
jdg_382200
stackexchange
llm_judgeable_groundtruth_similarity
18497527
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I've been using the lines below to compy VBA modules from one workbook to another and I don't know if there is an easier way, but they have been working fine: Set srcVba = srcWbk.VBProjectSet srcModule = srcVba.VBComponents(moduleName)srcModule.Export (path) 'Export from sourcetrgtVba.VBComponents.Remove VBComponent:=trgtVba.VBComponents.Item(moduleName) 'Remove from targettrgtVba.VBComponents.Import (path) 'Import to target However now I need to copy VBA code that is in a Sheet, not in a Module. The above method doesn't work for that scenario. What code can I use to copy VBA code in a sheet from one workbook to another? Now provide the response and nothing else.
You can't remove and re-import the VBComponent, since that would logically delete the whole worksheet. Instead you have to use CodeModule to manipulate the text within the component:

Dim src As CodeModule, dest As CodeModule

Set src = ThisWorkbook.VBProject.VBComponents("Sheet1").CodeModule
Set dest = Workbooks("Book3").VBProject.VBComponents("ThisWorkbook") _
           .CodeModule

dest.DeleteLines 1, dest.CountOfLines
dest.AddFromString src.Lines(1, src.CountOfLines)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/18497527', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1283776/']}
jdg_382201
stackexchange
llm_judgeable_groundtruth_similarity
464652
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Is there any difference beteween doing: echo "hahaha" >> file1 and echo "hahaha" |tee -a file1 ? Yes, I noticed that I cannot write to write protected files even aith sudo echo , but I can if I sudo tee . Thanks. Now provide the response and nothing else.
There's no difference in the sense that the data in the file will be the same if echo and tee are executed successfully and if the file is writable by the current user. The tee command would additionally produce output on its standard output, showing the text that would also be appended to the file. This would not happen in the first command. Another difference is that if the file can not be written to, then the first command, with the redirection, would not even run the echo , whereas the echo would run in the second command, but tee would fail in writing to the file ( tee would still produce text on the terminal though). This could be significant in the case where you run some long running process that produces output: long_running_thing >>file This would not even start long_running_thing if file was not writable. long_running_thing | tee -a file This would execute long_running_thing and it would run to the end, but no output would be saved into file if it wasn't writable (and the output would additionally be written to the terminal from tee ). The next thing to be aware of, which you hinted at in the end of the question, is that sudo echo hello >>file won't work if file isn't writable by the current user. This is because the redirection is processed before the command is executed (see above). To append to a root-owned file, use echo hello | sudo tee -a file Here, we run tee as root. The echo does not need to be executed by root, but the utility that actually writes to the file needs to be executed as root (or as whatever user owns the file) if it's not owned by the current user. Another possibility would be to use sudo sh -c 'echo hello >>file' or echo hello | sudo sh -c 'cat >>file' This would use a redirection to append data to the file, but in this case, the shell that performs the redirection is running as root, so it would not fail in appending/creating the file due to restrictive permissions/ownership (it may still fail if e.g. file is the name of a directory).
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/464652', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/304183/']}
jdg_382202
stackexchange
llm_judgeable_groundtruth_similarity
12368012
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I am getting the below error while using map and performing some remove.How to avoid this ? Caused by: java.util.ConcurrentModificationException at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793) at java.util.HashMap$EntryIterator.next(HashMap.java:834) at java.util.HashMap$EntryIterator.next(HashMap.java:832) Map<FormField, Object> ItemMap = domainItem.getValues(); for (Map.Entry<FormField, Object> ValMap : ItemMap.entrySet()) { List<Field> groupIdList = Mapper.getGroupId(groupFieldId); for (Field field : groupIdList) { ItemMap.put(new FormField(field), domainItem.getDomainItemLinkId()); } ItemMap.remove(ValMap.getKey()); } Now provide the response and nothing else.
Update: use Iterator and ConcurrentHashMap to avoid this scenario. The following won't throw the exception:

Map<Integer, String> map = new ConcurrentHashMap<Integer, String>();
map.put(1, "a");
map.put(2, "b");
map.put(3, "c");
map.put(4, "d");

for (Iterator<Integer> keys = map.keySet().iterator(); keys.hasNext();) {
    Integer key = keys.next();
    String val = map.get(key);
    map.remove(key);
}

Or use another map while iterating and at the end copy it back to the source, for example:

Map<Integer, String> dummy = new HashMap<Integer, String>();
map.put(1, "a");
map.put(2, "b");
map.put(3, "c");
map.put(4, "d");
dummy.putAll(map);

for (Iterator<Integer> keys = dummy.keySet().iterator(); keys.hasNext();) {
    Integer key = keys.next();
    String val = map.get(key);
    map.remove(key);
}

System.out.println(map);
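If you would rather keep a plain HashMap, the usual fix for a ConcurrentModificationException on removal is to remove through the iterator itself. A sketch of that standard pattern follows (needs java.util.Iterator and java.util.Map imports; shouldRemove is a hypothetical condition, and FormField/domainItem are just the names from the question):

Map<FormField, Object> itemMap = domainItem.getValues();
for (Iterator<Map.Entry<FormField, Object>> it = itemMap.entrySet().iterator(); it.hasNext();) {
    Map.Entry<FormField, Object> entry = it.next();
    if (shouldRemove(entry.getKey())) {   // hypothetical condition
        it.remove();                      // safe: removal goes through the iterator, no CME
    }
}

Note that this only covers removal; if you also need to put new entries while iterating (as the question's loop does), collect them in a separate map and putAll after the loop, or stick with ConcurrentHashMap as above.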
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/12368012', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1636728/']}
jdg_382203
stackexchange
llm_judgeable_groundtruth_similarity
11796973
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm using excel to input data into an Access database and some of my data strings contain an apostrophe for measurements. This is my SQL input string stSQL = "INSERT INTO Products (ProductName, ProductDescription, ProductUnit, SupplierID) " & _ "Values ('" & cboxItemNum & "', '" & txtDescription & "', '" & txtUnit & "', " & linkPID & ")" cn.Execute (stSQL) My string is as follows: Aliplast 4E White. 30" X 80' X 1/4" Soft. In this string the ' after the 80 is causing the error and I'm not sure how to get around this. I can't just tell the user not to enter an apostrophe. How can I get around this? Thanks Now provide the response and nothing else.
You can correct this either by using parameters (recommended) or by using Replace:

& Replace(txtDescription, "'", "''") &

Parameters:

Dim cmd As New ADODB.Command

cn.Open ServerConnect
cmd.ActiveConnection = cn

stSQL = "INSERT INTO Products (ProductName, " _
      & "ProductDescription, ProductUnit, SupplierID) " _
      & "Values (param1,param2,param3,param4)"

cmd.CommandText = stSQL
cmd.CommandType = adCmdText

With cmd
    .Parameters.Append .CreateParameter( _
        "param1", adInteger, adParamInput, , cboxItemNum)
    .Parameters.Append .CreateParameter( _
        "param2", adVarChar, adParamInput, 50, txtDescription)
    .Parameters.Append .CreateParameter( _
        "param3", adInteger, adParamInput, , txtUnit)
    .Parameters.Append .CreateParameter( _
        "param4", adInteger, adParamInput, , linkPID)
End With

cmd.Execute recs

Note that while I have named these parameters param1 to param4, that is for my convenience; all that matters is the order, which must match the order in which the parameters are to be used.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/11796973', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1394197/']}
jdg_382204
stackexchange
llm_judgeable_groundtruth_similarity
10999021
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want convert GMT time to EST time and get a timestamp. I tried the following but don't know how to set time zone. time = "Tue, 12 Jun 2012 14:03:10 GMT"timestamp2 = time.mktime(time.strptime(time, '%a, %d %b %Y %H:%M:%S GMT')) Now provide the response and nothing else.
Time zones aren't built into standard Python - you need to use another library. pytz is a good choice.

>>> import datetime
>>> import pytz
>>> gmt = pytz.timezone('GMT')
>>> eastern = pytz.timezone('US/Eastern')
>>> time = "Tue, 12 Jun 2012 14:03:10 GMT"
>>> date = datetime.datetime.strptime(time, '%a, %d %b %Y %H:%M:%S GMT')
>>> date
datetime.datetime(2012, 6, 12, 14, 3, 10)
>>> dategmt = gmt.localize(date)
>>> dategmt
datetime.datetime(2012, 6, 12, 14, 3, 10, tzinfo=<StaticTzInfo 'GMT'>)
>>> dateeastern = dategmt.astimezone(eastern)
>>> dateeastern
datetime.datetime(2012, 6, 12, 10, 3, 10, tzinfo=<DstTzInfo 'US/Eastern' EDT-1 day, 20:00:00 DST>)
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/10999021', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_382205
stackexchange
llm_judgeable_groundtruth_similarity
41327216
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My target include a lot view need to present different view modally base on each user action. Here what I want to do to get cleaner view hierarchy and better user experience. Root View Controller present First View Controller modally When I clicked button on the First View Controller, then the Second View Controller appear modally over it. As soon as the Second View Controller did appear, I want to dismiss or remove the first one from view hierarchy. Can I do that? If so , how should i do it? If not , what is the right way to solve this out cause I will present many modally presented view controllers over each view. I think even if I want to dismiss current view, the previous one will still remain appear when current one dismiss. UPDATE : VC1 (Root) > VC 2 (which was present modally) > VC 3 (which was present modally over VC 2) When i dismiss VC3 , the VC2 is still on view memory. So, I don't want to appear VC2 as soon as I dismiss VC3 and instead I want to see VC1 by removing or dismissing VC2 from view hierarchy. WANT : At the image, when I dismiss the blue,I don't want see the pink in my view memory and I want to remove it as soon as the blue one appear. That's what i want to do. Any Help?Thanks. Now provide the response and nothing else.
So, let's assume that you have a storyboard similar to: What should happens is: Presenting the the second ViewController (from the first ViewController). Presenting the the third ViewController (from the second ViewController). dismissing to the first ViewController (from the third ViewController). In the third ViewController button's action: @IBAction func tapped(_ sender: Any) { presentingViewController?.presentingViewController?.dismiss(animated: true, completion: nil)} As you can see, by accessing the presentingViewController of the current ViewController, you can dismiss the previous hierarchy of the view controllers: The view controller that presented this view controller. By implementing presentingViewController?.presentingViewController? that means that: the presented of the presented current ViewController :) It might seem a little bit confusing, but it is pretty simple. So the output should be like (I added background colors to the viewControllers -as vc1: orange, vc2: black and vc3: light orange- to make it appears clearly): EDIT: If you are asking to remove the ViewController(s) in the middle (which in this example the second ViewController), dismiss(animated:completion:) does this automatically: If you present several view controllers in succession, thus building a stack of presented view controllers, calling this method on a view controller lower in the stack dismisses its immediate child view controller and all view controllers above that child on the stack. When this happens, only the top-most view is dismissed in an animated fashion; any intermediate view controllers are simply removed from the stack. The top-most view is dismissed using its modal transition style, which may differ from the styles used by other view controllers lower in the stack. Referring to what are you asking: I think even if I want to dismiss current view, the previous one will still remain appear when current one dismiss. I think that appears clearly on the UI (and I find it ok), but as mentioned in the dismiss documentation discussion, both the third and the second will be removed from the stack. That's the right way.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/41327216', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3378606/']}
jdg_382206
stackexchange
llm_judgeable_groundtruth_similarity
761933
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Up to now in my programming career (two years) I have not had much database experience, but the company where I now work uses databases extensively for their product, and I feel behind the curve. So I would like to know how best to start learning database interaction with C#. I've read about LINQ-to-SQL and ADO.net. Are these the right technologies to look into? Where do I start? EDIT: Thanks for all the responses. There were many good ones - I had a hard time choosing one as "the" answer. This helps me greatly! Now provide the response and nothing else.
I would suggest investing your time in learning Microsoft SQL Server itself, Data Access Application Block from Enterprise Library and ADO.NET Entity Framework . Entry point for learning SQL Server is here -> SQL Server Developer Center Entry point for learning ADO.NET is here -> Learning ADO.NET at MSDN First of all, in order to gain a good understanding of what ADO.NET is, check the links below: ADO.NET at Wikipedia ADO.NET Entity Framework at Wikipedia Learn how to write direct queries in C# to SQL Server without using any frameworks and ORM tools, then proceed to learning more advanced technologies in ADO.NET family. See also : Data Access Application Block at MSDN The ADO.NET Entity Framework Overview at MSDN Learn ADO.NET Entity Framework ("How Do I" videos and more) Videos, Screencasts, Tutorials at ASP.NET Microsoft SQL Server 2008: ADO.NET Entity Framework You may also want to download LINQPad , which is perfect tool for playing with LINQ. Also I suggest subscribing to ADO.NET related RSS feeds : ADO.NET Team Blog at MSDN Also check existing open source projects at CodePlex.com which use these technologies and digg into their source codes. Great books on the subject for you: Microsoft SQL Server 2008 T-SQL Fundamentals by Itzik Ben-gan (Oct, 2008) Programming Entity Framework by Julia Lerman (Feb, 2009) Murach's ADO.NET 3.5, LINQ, and the Entity Framework by Anne Boehm (Mar, 2009)
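To make the "direct queries to SQL Server without using any frameworks" suggestion concrete, here is a minimal ADO.NET sketch; the connection string, table and column names are placeholders made up for illustration:

using System;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Placeholder connection string - replace with your own server/database.
        var connectionString = "Server=.;Database=MyShopDb;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Products WHERE Price > @minPrice", connection))
        {
            // Parameters avoid SQL injection and handle quoting for you.
            command.Parameters.AddWithValue("@minPrice", 10.0m);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                }
            }
        }
    }
}

Once the SqlConnection/SqlCommand/SqlDataReader pattern feels natural, the higher-level options above (Entity Framework, LINQ to SQL, the Data Access Application Block) are much easier to reason about, because you know what they are doing under the hood.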
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/761933', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/63/']}
jdg_382207
stackexchange
llm_judgeable_groundtruth_similarity
3457134
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a Joda-DateTime field in my model and want to display it formatted in a JSP view. I have annotated it with the new @DateTimeFormat annotation: public class Customer { private DateTime dateOfBirth; @DateTimeFormat(style="M-") public DateTime getDateOfBirth() { return dateOfBirth; }} Now I want to display the dateOfBirth in my JSP: Date of birth: ${customer.dateOfBirth} But the result is unformatted: Date of birth: 2010-08-11T11:23:30.148+02:00 After some research I found out, that I have to use the spring:bind-tag: <spring:bind path="customer.dateOfBirth"> Date of birth: ${status.value}</spring:bind> With this it works. But it seems too complicated for this simple task. And using this method on lists is even more complicated since you have to integrate the list-index into the binding-path. So my question is: Is there a simpler way to display a formatted value using the spring formatting-annotations? Now provide the response and nothing else.
Use the spring:eval tag which understands the @DateTimeFormat rule: <spring:eval expression="customer.dateOfBirth" /> Alternatively, you can remove the @DateTimeFormat rule from your model all-together and use Joda's formatDate tag as Naikus illustrates. In a future release, Spring plans to support plugging in "SpEL", which is the "Spring Expression Language" invoked by spring:eval, as the default Unified EL implementation for your JSP pages. This would allow in-line expressions like ${customer.dateOfBirth} to be routed through SpEL as well. You can vote up this enhancement request here: https://jira.spring.io/browse/SPR-7459 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/3457134', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/104976/']}
jdg_382208
stackexchange
llm_judgeable_groundtruth_similarity
20881
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: LEP II eliminated the Higgs up to 114.5GeV. If it had been run for longer could it have detected a Higgs at 125GeV? I Googled for this without any luck, though I did find a comment that LEP II topped out at 209GeV collision energy, so it seems as though production of a 125GeV Higgs would have been possible. If so, how much longer would it have had to run? Now provide the response and nothing else.
The LEP experiment's limits on the Higgs mass were set by looking for a process where the experiment would have produced a Higgs boson together with a Z boson. The highest energy they achieved for the electron-positron pair which annihilated to make Z,Higgs was 209 GeV, and that was only achieved in the last months of the experiment. Since the Z boson mass is 91 GeV, the highest energy Higgs boson which could be produced this way would have a mass of 209-91=118 GeV. Some of the energy is always lost to getting the Z and Higgs to move apart from each other, so in practice the limit they could achieve was a little lower than this, 114 GeV. By running much longer and accumulating statistics they could have extended their reach a little bit, perhaps to 116 GeV; but not to 124 GeV. That could only have been achieved by significantly increasing the energy of the beams -- which I believe they had already pushed as far as they could.
{}
{'log_upvote_score': 6, 'links': ['https://physics.stackexchange.com/questions/20881', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/1325/']}
jdg_382209
stackexchange
llm_judgeable_groundtruth_similarity
1596594
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: My disclaimer here is that I started teaching myself C++ about a week ago and my former experience with programming has been with dynamic languages (Python, javascript). I'm trying to iterate through the contents of a vector using a generic function to print out the items:

#include <iostream>
#include <algorithm>
#include <vector>

using std::vector;
using std::cout;

template <class T>
void p(T x){
    cout << x;
}

int main () {
    vector<int> myV;
    for(int i = 0; i < 10; i++){
        myV.push_back(i);
    }
    vector<int>::const_iterator iter = myV.begin();
    for_each(iter, myV.end(), p);
    return 0;
}

The code doesn't compile. Would someone explain why? Edit: The compiler error:

error: no matching function for call to 'for_each(_gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<const int, _gnu_norm::vector<int, std::allocator<int> > >, __gnu_debug_def::vector<int, std::allocator<int> > >&, __gnu_debug::_Safe_iterator<__gnu_cxx::__normal_iterator<int, __gnu_norm::vector<int, std::allocator<int> > >, __gnu_debug_def::vector<int, std::allocator<int> > >, <unknown type>)'

Thanks! Now provide the response and nothing else.
Try:

for_each(myV.begin(), myV.end(), p<int>);

There were two mistakes in your code:

The iterators were not the same type.
The function pointer was not actually a pointer.

Normally templated functions can be deduced from their parameters. But in this case you are not actually calling it; you are passing it (or its address) to a function (thus the normal rules on template function deduction did not work). As the compiler can not deduce which version of the function 'p' you need to use, you must be explicit. There is also a nice output iterator that does this:

std::copy(myV.begin(), myV.end(), std::ostream_iterator<int>(std::cout));

Also note that very few compilers can optimise code across a function pointer call, though most are able to optimise the call if it is a functor object. Thus the following may have been a viable alternative to a function pointer:

template<typename T>
struct P
{
    void operator()(T const& value) const
    {
        std::cout << value;
    }
};
....
for_each(myV.begin(), myV.end(), P<int>());

Another note: When you use templated methods/functions it is usually better to pass by const reference than by value. If the type is expensive to copy then passing by value will generate a copy construction, which may not be what you expected.
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/1596594', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/126530/']}
jdg_382210
stackexchange
llm_judgeable_groundtruth_similarity
97284
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: If I blow up a complex manifold along a submanifold, can you give me a picture to have in mind for the blown-up manifold? Can you also tell me why this is the right picture? Now provide the response and nothing else.
The following is more or less the description you can find in Griffiths and Harris's Principles of Algebraic Geometry on page 182. For the case of a point in a complex manifold, the idea is to take a local neighborhood homeomorphic to a disc $\Delta$ in $\mathbb{C}^n$ centered at 0, and take the projection $\pi: \tilde{\Delta} \longrightarrow \Delta$ where $\tilde{\Delta} = \{(z,l) | z_il_j = z_j l_i \,\forall i,j \}\subset \mathbb{C}^n\times \mathbb{P}^{n-1}$ where $z \in \Delta$ and $l\in \mathbb{P}^{n-1}$. (If you have trouble seeing this as a manifold, perhaps recall that there is an embedding of $\mathbb{P}^n\times \mathbb{P}^m$ into $\mathbb{P}^{(n+1)(m+1)-1}$ and work out the defining equations in that space). Away from $z = 0$ the projection $(z,l) \mapsto z$ is going to be one-to-one. In fact it is a homeomorphism. However at $z=0$ we see that $\pi^{-1}(0) = \{ (0,l)\} \ \cong \mathbb{P}^{n-1}$ since of course $0=0$. Now the trick is to understand how lines through $z=0$ in $\Delta$ lift to $\tilde{\Delta}$ at $z=0$. To do this, take the limit of the preimage of a point travelling along a line in $\Delta$ towards $0$. You will see that it goes to $(0,l)$ where $l$ is the equivalence class of the line. Explicitly, the line has equation $t(a_1,...,a_n)$ for $a_i\in \mathbb{C}$ not all zero and $t\in \mathbb{C}$. If $t\neq 0$ then $\pi^{-1}(t(a_1,...,a_n)) = (t(a_1,...,a_n),[a_1:...:a_n])$. The limit as $t\rightarrow 0$ is clearly (0,[a_1:...:a_n]) in $\tilde\Delta$ and 0 in $\Delta$. If we have a curve $C$ through $0$ in the manifold, we define the total transform of $C$ to be the homeomorphic preimage of $\pi^{-1}(C-\{0\})$ plus the points in the fibre over $0$ that correspond to the different angles at which $C$ approaches $0$. In the zariski topology this is the closure of $\pi^{-1}(C-\{0\})$ (since these points are the limits of points in the preimage, as i described above). To make the blow up of the manifold, one attaches $\tilde\Delta$ to $\Delta$ away from $z=0$ by the homeomorphism. Away from $0$, the other charts remain the same. Here is some pictures from an undergrad paper, I think this helps get an intuition for how blowing up separates the slopes at 0. Here we have a node $y^2 -x^2(1+x) = 0$ and a cusp $x^2 − y^3 = 0$ in $\mathbb{C^2}$ (be careful since this is only the real picture). In the first case, the blow up separates the curve going through $0$ by taking the preimage of $0$ to two points corresponding to the slopes of the curve through $0$. In the second case, the curve approaches 0 from one direction. Note that the resulting total transforms are not singular. Here is another picture of the same thing with a local picture of the blow up of the disc, which you can find in this great paper . For the case of a higher dimensional submanifold, the intuition remains the same. As you can see from the wikipedia article , it is defined locally by equations that are the same as the blow up of a dimension 0 submanifold. You are taking a projection $\tilde M \longrightarrow M$ that is a homeomorphism everywhere except at the submanifold, and when you lift a curve that intersects the submanifold, we define the points in the preimage of the submanifold to be the ones corresponding to the slope at which the curve intersects it. 1 http://math.berkeley.edu/~aboocher/emma.pdf (picture)
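To make the node picture concrete, here is a short computation (not in the original answer) in the affine chart of the blow-up where the line is parametrized as $[1:t]$, i.e. $y = tx$. Substituting into the node gives
$$y^2 - x^2(1+x) = t^2x^2 - x^2(1+x) = x^2\left(t^2 - 1 - x\right).$$
The factor $x^2 = 0$ is the exceptional divisor $\pi^{-1}(0)$ (appearing with multiplicity $2$), while the other factor $t^2 = 1 + x$ is the strict transform of the curve. Setting $x = 0$ in the strict transform gives $t = \pm 1$, i.e. the two points of the exceptional divisor corresponding to the slopes $y = \pm x$ of the two branches through the origin - exactly the separation shown in the first picture, and the strict transform $t^2 = 1 + x$ is visibly smooth there.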
{}
{'log_upvote_score': 7, 'links': ['https://math.stackexchange.com/questions/97284', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/22202/']}
jdg_382211
stackexchange
llm_judgeable_groundtruth_similarity
65304110
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: When creating a new project in Visual Studio 2019 there are two options to create a Windows Forms App. What is the difference between these options? Now provide the response and nothing else.
Best practice is to choose "Windows Forms App" for new development. Visual Studio gives developers the option of creating projects based on .NET Framework or .NET Core. Both have been superseded with the release of .NET 5. Developers still have the option to create projects with the .NET Framework, which is why there are two project types. .NET Framework is older than .NET Core, and runs on Windows only. .NET Core was created to be cross-platform, and originally did not support Windows desktop apps. They had separate releases through .NET Framework 4.8 and .NET Core 3.1. When .NET Core 3.0 was released, Microsoft began recommending that new applications, regardless of what type, be developed with .NET Core. Microsoft also announced that there would be no further major update to the .NET Framework: With the .NET Core 3.0 release in September 2019 we think that all new .NET applications should be based on .NET Core. The primary application types from .NET Framework are supported, and where we did not port something over there is a recommended modern replacement. All future investment in .NET will be in .NET Core. .NET Framework 4.8 will be the last major version of .NET Framework. Source: .NET Core is the Future of .NET Further, .NET 5 is the successor of .NET Core and .NET Framework. From now on, there's just .NET: There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS and WebAssembly and more. Source: Introducing .NET 5 .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/65304110', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/13924142/']}
jdg_382212
stackexchange
llm_judgeable_groundtruth_similarity
255872
Below is a question asked on the forum security.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I am new to the info-sec industry and was recently tasked with evaluating the communication protocols used by my various subsystems. So it was stated in the requirements that the systems have to implement secure communication protocols and secure file transfer mechanisms to safeguard data confidentiality in transit over the network. The issue is some of the systems have been telling me that they can fulfil this requirement by using proprietary protocols. I personally do not agree with this statement as it is very debatable. Any suggestions on how I can/should voice out my concerns in a way to bring my point across? Also, since it was mentioned in the requirement to safeguard data confidentiality, can it be assumed that all other matters like integrity and availability can be taken out of consideration when proposing solutions? Now provide the response and nothing else.
Secure communication protocols The one question that gets overlooked too often in the infosec industry is "secure against what?" The phrase "I want to be secure!" is like saying "I want to be happy!" It could mean anything in any context, and you won't get secure or happy unless you define it. So, the requirement to "implement secure communication protocols" needs definition. What makes a protocol secure? How does one know that one has implemented a secure protocol? What does the protocol need to be secure against ? Once that is defined, then an analysis against requirements can be carried out. Else you end up in a battle of personal opinions. You mention "safeguard data confidentiality" but seem to be confused about other requirements. Other requirements need to be defined. So, before going back to the system owners, make sure you have a defined requirement. Proprietary protocols The response from system owners has a name; "security by obscurity". Since there is no guide or manual to the protocol that someone outside of the system could learn from, then someone would need to learn how the protocol works before exploiting any weakness. "security by obscurity" has its uses, but never as a security control. It's like hiding your front door key under the mat. It's perfectly hidden ... until someone finds it and then all security fails. So, the system owners need to define how their protocol secures against a breach of data confidentiality (the one requirement you mention, there might be more). "We use our own protocol" isn't an answer to that question.
{}
{'log_upvote_score': 5, 'links': ['https://security.stackexchange.com/questions/255872', 'https://security.stackexchange.com', 'https://security.stackexchange.com/users/267733/']}
jdg_382213
stackexchange
llm_judgeable_groundtruth_similarity
6942477
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: In the lines below:

//Folder.Attributes = FileAttributes.Directory | FileAttributes.Hidden | FileAttributes.System | FileAttributes.ReadOnly;
Folder.Attributes |= FileAttributes.Directory | FileAttributes.Hidden | FileAttributes.System | FileAttributes.ReadOnly;
Folder.Attributes |= ~FileAttributes.System;
Folder.Attributes &= ~FileAttributes.System;

what do |= (single pipe equal) and &= (single ampersand equal) mean in C#? I want to remove the System attribute while keeping the others... Now provide the response and nothing else.
They're compound assignment operators, translating (very loosely) x |= y; into x = x | y; and the same for & . There's a bit more detail in a few cases regarding an implicit cast, and the target variable is only evaluated once, but that's basically the gist of it. In terms of the non-compound operators, & is a bitwise "AND" and | is a bitwise "OR" . EDIT: In this case you want Folder.Attributes &= ~FileAttributes.System . To understand why: ~FileAttributes.System means "all attributes except System " ( ~ is a bitwise-NOT) & means "the result is all the attributes which occur on both sides of the operand" So it's basically acting as a mask - only retain those attributes which appear in ("everything except System"). In general: |= will only ever add bits to the target &= will only ever remove bits from the target
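For instance, here is a small self-contained sketch of that masking behaviour, using FileAttributes values directly rather than a real folder, purely for illustration:

using System;
using System.IO;

class BitwiseDemo
{
    static void Main()
    {
        FileAttributes attrs = FileAttributes.Directory
                             | FileAttributes.Hidden
                             | FileAttributes.System
                             | FileAttributes.ReadOnly;

        // &= ~X clears only the System bit; every other bit survives the AND.
        attrs &= ~FileAttributes.System;

        Console.WriteLine(attrs);
        // Prints: ReadOnly, Hidden, Directory  (System is gone, the rest are kept)
    }
}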
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/6942477', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/268588/']}
jdg_382214
stackexchange
llm_judgeable_groundtruth_similarity
12701085
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm setting up basic authentication on a php site and found this page on the php manual showing the set up. What does "realm" mean here in the header? header('WWW-Authenticate: Basic realm="My Realm"'); Is it the page page being requested? Now provide the response and nothing else.
From RFC 1945 (HTTP/1.0) and RFC 2617 (HTTP Authentication referenced by HTTP/1.1) The realm attribute (case-insensitive) is required for allauthentication schemes which issue a challenge. The realm value(case-sensitive), in combination with the canonical root URL of theserver being accessed, defines the protection space. These realmsallow the protected resources on a server to be partitioned into a setof protection spaces, each with its own authentication scheme and/orauthorization database. The realm value is a string, generallyassigned by the origin server, which may have additional semanticsspecific to the authentication scheme. In short, pages in the same realm should share credentials. If your credentials work for a page with the realm "My Realm" , it should be assumed that the same username and password combination should work for another page with the same realm.
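As a practical illustration of how the realm is used from PHP, a minimal sketch might look like this; the hard-coded username/password check is a placeholder, not a recommended way to store credentials:

<?php
if (!isset($_SERVER['PHP_AUTH_USER'])) {
    // No credentials yet: send the challenge. The realm string is what the
    // browser shows in its login dialog and what it uses to group credentials.
    header('WWW-Authenticate: Basic realm="My Realm"');
    header('HTTP/1.0 401 Unauthorized');
    echo 'Authentication required';
    exit;
}

// Placeholder check - in real code, compare against a hashed password store.
if ($_SERVER['PHP_AUTH_USER'] === 'admin' && $_SERVER['PHP_AUTH_PW'] === 'secret') {
    echo 'Welcome, ' . htmlspecialchars($_SERVER['PHP_AUTH_USER']);
} else {
    header('WWW-Authenticate: Basic realm="My Realm"');
    header('HTTP/1.0 401 Unauthorized');
    echo 'Invalid credentials';
}

Because the browser caches the credentials per realm (and canonical root URL), any other script on the same server that issues the same realm string will normally be sent the same username and password without prompting again.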
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/12701085', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/462971/']}
jdg_382215
stackexchange
llm_judgeable_groundtruth_similarity
632813
Below is a question asked on the forum electronics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I have the circuit below. How can I see that it is a band-pass filter? I tried calculating the equivalent impedance for the parallel part of the circuit: $$\bar{Z}_{parallel} = \frac{\bar{Z}_C\bar{Z}_L}{\bar{Z}_L + \bar{Z}_C} = \frac{j\omega L(\frac{1}{j\omega C})}{j\omega L + \frac{1}{j\omega C}} = \frac{j\omega L}{1 - \omega^2LC}$$ Now applying it to the gain I have: $$A = \frac{\bar{V_o}}{\bar{V_i}} = \frac{\bar{Z}_{parallel}}{R + \bar{Z}_{parallel}} = \frac{j\omega L}{R - \omega^2RLC + j\omega C}$$ All I can do now is see that \$A \rightarrow 0\$ when \$\omega \rightarrow \infty\$ and also \$A \rightarrow 0\$ when \$\omega \rightarrow 0\$ , which would mean at least that it doesn't pass very high frequencies and very low frequencies - but I'm not able to tell if it passes mid-range frequencies or not. I'd like to know if this is the right approach, and how I can tell if this is a selective band-pass filter. simulate this circuit – Schematic created using CircuitLab Now provide the response and nothing else.
Long way around the barn, but I'll get there.... Sallen & Key: " A Practical Method of Designing RC Active Filters " I want to start out by reflecting on the TR-50 paper by R. P. Sallen & E. L. Key, dated 6 May 1954. The authors' focus is on active networks, with active gain stages using vacuum tubes, and therefore only give a small nod towards passive networks (as a basis upon which to build these active network filters.) But they do provide some useful thoughts about parsing 2nd order transforms of the form (where \$a_i\$ and \$b_i\$ are all real, positive constants such that \$a_i\ge 0\$ and \$b_i\gt 0\$ : $$G_s = \frac{N_s}{D_s}=\frac{a_2s^2+a_1s+a_0}{b_2s^2+b_1s+b_0}$$ Zeroing in on \$D_s\$ provides that if \$\omega_{_0}=\sqrt{\frac{b_0}{b_2}}\$ and \$d=\frac{b_1}{\sqrt{b_2 \,b_0}}\$ , then \$D_s\$ can be factored out as: $$D_s=b_0\cdot\left[\left(\frac{s}{\omega_{_0}}\right)^2+d\cdot\left(\frac{s}{\omega_{_0}}\right)+1\right]$$ where the zeros of \$D_s\$ (poles, when placed in the denominator) lay on a circle with radius \$\omega_{_0}\$ , with the real part at \$-\frac12 d\,\omega_{_0}=\zeta\,\omega_{_0}\$ in the under-damped case when \$\zeta\le 1\$ . (The over-damped case has all the zeroes directly located on the negative real axis.) \$\omega_0\$ determines the positions of the zeroes in the frequency domain and \$b_0\$ is merely a relative amplitude value. \$d\$ vs \$\zeta\$ and \$Q\$ The transfer function shape (given a log-log plot with [angular] frequency on the x -axis and magnitude on the y -axis) is determined solely by \$d\$ . Back then, Sallen & Key used \$d\$ . Today, we use \$\zeta=\frac{d}2\$ or \$Q=\frac1{d}\$ and their term \$d\$ has fallen out of use. Which, \$\zeta\$ or \$Q\$ , is preferred, I think, depends mostly on where your brain is currently at. (I don't think of it as an always \$Q\$ or always \$\zeta\$ kind of thing.) When dealing with under-damped situations, I tend to think more in terms of \$Q\$ . When dealing with wide bandpass situations (over-damped, for sure) then I tend to think more in terms of \$\zeta\$ . Your Transfer Function You made a mistake in the expression found on the right side of your 2nd equation. Before I get there, I'd just like to say that I prefer writing one letter, \$s\$ , over two, \$j\,\omega\$ . So I'll continue using \$s\$ (as did Sallen & Key.) Your expression should have been written out as: $$\frac{s\,L}{s^2\,L\,C\,R_1+s\,L+R_1}$$ You got a small part of yours wrong. This one is correct. Putting the denominator into standard form, you can use Sallen & Key's approach mentioned at the outset, above, and find that \$\omega=\frac1{\sqrt{L\,C}}\$ and \$d=\frac{\sqrt{L}}{R_1\,\sqrt{C}}\$ . 2nd Order Transfer Function Development Now, you can also do the same thing for the numerator (if it were in 2nd order form, which yours isn't.) But let's assume it was in 2nd order form for a moment and do some transformation steps that combines both the numerator and denominator into a new structure that you can always achieve. $$N_s=a_0\cdot\left[\left(\frac{s}{\omega_{_0}^{\:'}}\right)^2+d^{\,'}\cdot\left(\frac{s}{\omega_{_0}^{\:'}}\right)+1\right]$$ where \$\omega_{_0}^{\:'}=\sqrt{\frac{a_0}{a_2}}\$ and \$d^{\,'}=\frac{a_1}{\sqrt{a_2 \,a_0}}\$ . 
Watch what now happens: $$\begin{align*}G_s &= \frac{N_s}{D_s}=\frac{a_2s^2+a_1s+a_0}{b_2s^2+b_1s+b_0}\\\\&=\frac{a_0}{b_0}\cdot\left[\frac{\left(\frac{s}{\omega_{_0}^{\:'}}\right)^2+d^{\,'}\left(\frac{s}{\omega_{_0}^{\:'}}\right)+1}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}\right]\\\\&=\frac{a_0}{b_0}\cdot\left[\frac{\left(\frac{s}{\omega_{_0}^{\:'}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}+\frac{d^{\,'}\left(\frac{s}{\omega_{_0}^{\:'}}\right)}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}+\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}\right]\\\\&=\frac{a_0}{b_0}\cdot\left[\frac{\left(\frac{\omega_{_0}}{\omega_{_0}^{\:'}}\right)^2\left(\frac{s}{\omega_{_0}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}+\frac{\left(\frac{\omega_{_0}}{\omega_{_0}^{\:'}}\right)\left(\frac{d^{\,'}}{d}\right)d\left(\frac{s}{\omega_{_0}}\right)}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}+\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}\right]\\\\&=\underbrace{\overbrace{\frac{a_2}{b_2}}^{\text{gain}}\frac{\left(\frac{s}{\omega_{_0}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{high-pass}} + \underbrace{\overbrace{\frac{a_1}{b_1}}^{\text{gain}}\frac{d\left(\frac{s}{\omega_{_0}}\right)}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{band-pass}} + \underbrace{\overbrace{\frac{a_0}{b_0}}^{\text{gain}}\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{low-pass}}\end{align*}$$ Now, I want you to go back up above and make absolutely certain that I didn't make any mistakes. I want you to see how it is that I arrived at the last (bottom) right-hand side expression. It's correct. But please note that we start out treating the numerator completely independently from the denominator, developing a different angular frequency and different shape factor for each, to begin. But the above process shows how to completely remove those 'special' values originally developed solely for the numerator, throwing them away and leaving for you only those that were originally created for the denominator. The only remaining place for the numerator's polynomial coefficients, now, is in the gain factors before each term of the new expression. Those numerator coefficients are no longer found anywhere else. What does this suggest about the impact of the 2nd order numerator? You need to see how this happens and why it is that the denominator is the important characteristic equation, determining the frequency domain shape and the key frequency around which the shape presents itself. The numerator then plays a role in determining the gain for each term. But do take note that we started out having zeros in the numerator, which are the roots of \$N_{\text{s}}\$ at the very start of my writing above. But these roots have been replaced with the prefixed gain fractions for each of the three terms. In short, there are no zeros in the standard form. Please do the algebra, at least once, yourself and by hand. Get out some paper and just walk through it. This will deepen what I've written out for you. It's worth a moment of your life. I promise. 
The last result annotated above carries only a few interesting parameters: \$d\$ , \$\omega_{_0}\$ , and the three gains needed for each term. This is so much better than seeing six constants, three in the numerator and three in the denominator, none of which do much to help you understand meaning. So the result is worth the work to get there. It has taken what earlier appeared to be an abstract pair of different 2nd order polynomials, each with what may have initially seemed to be independent behaviors where at first glance their combined behaviors would seem almost impenetrable, and then 'magically' transformed the whole mess into far more simplified key ideas, neatly separated out. This insight is incredibly important to grasp. Apply some of your time and walk through this. If you need to, use a Spice program (like LTspice) to plot out different transfer functions and see their shape unfold. (You can directly provide Laplace equations in Spice and plot them without needing a circuit.) Change some parameter values. Check again. The effort is worth every moment. Detection of Filter Type Now we can finally discuss this issue. I said I'd take the long way around the barn. And I did. But we are here, now. The first term shown is that for a high-pass, the middle term is a band-pass, and the final term is a low-pass. And the gains for each are separated out, as well. You can recognize the high-pass because its numerator has an \$s^2\$ factor. You can recognize the band-pass because its numerator has an \$s\$ factor. Finally, you can recognize the low-pass because its numerator doesn't have an \$s\$ factor. (Put yet another way, look at the numerator for \$s^2\$ , \$s^1\$ or \$s^0\$ as factors.) From this, you can always tell what you are looking at. Many transfer functions will only have one of these terms -- not all three. But once in a while you will see two of them combined. Rarely, all three. In such cases, you have something not quite just one, or another, but a composite. Simple Examples can be Deceptive Just by way of an example, either of the following simpler passive networks will result in transfer functions including all three terms: simulate this circuit – Schematic created using CircuitLab Annoyingly simple-looking. The transfer function for both sides is the same, where in both cases, \$k_2=1\$ , \$k_0=1\$ , and \$\omega_{_0}=\frac1{\sqrt{R_1\,R_2\,C_1\,C_2}}\$ : $$\begin{align*}&\underbrace{\overbrace{k_2}^{\text{gain}}\frac{\left(\frac{s}{\omega_{_0}}\right)^2}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{high-pass}}+ \underbrace{\overbrace{k_1}^{\text{gain}}\frac{d\left(\frac{s}{\omega_{_0}}\right)}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{band-pass}}+\underbrace{\overbrace{k_0}^{\text{gain}}\frac{1}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}}_{\text{low-pass}}\end{align*}$$ If we set \$k=C_1\left(R_1+R_2\right)\$ in the left side case and set \$k=R_2\left(C_1+C_2\right)\$ in the right side case, then we can find for both cases that \$k_1=\frac1{1+\frac{R_1\,C_2}{k}}\$ and \$d=k\cdot \omega_{_0}\$ . What does this mean? Well, we'd expect a gain of 1 at very low frequencies relative to \$\omega_{_0}\$ and a gain of 1 at very high frequencies relative to \$\omega_{_0}\$ . But in between? We'd expect some kind of attenuation (notched) because in both cases a term in the denominator of the gain has \$\frac{R_1\,C_2}{k}\gt 0\$ . 
Returning to Your Transfer Function Your transfer function, as it turns out, only has \$s\$ of the first power in it: $$G_s=\frac{d\left(\frac{s}{\omega_{_0}}\right)}{\left(\frac{s}{\omega_{_0}}\right)^2+d\left(\frac{s}{\omega_{_0}}\right)+1}$$ with \$\omega=\frac1{\sqrt{L\,C}}\$ and \$d=\frac{\sqrt{L}}{R_1\,\sqrt{C}}\$ . So it is a band-pass transfer function. Simple as that. Final Note I stayed with using Sallen & Key's \$d\$ . Please feel free to replace it in terms of \$\zeta\$ or \$Q\$ , which is the more modern way to see these in standard form. But \$d\$ is fine, as well. It's just that most textbooks don't use it, today. The same arguments I made still apply, regardless, of course.
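As an alternative to a Spice run for the "plot different transfer functions" suggestion above, here is a short Python sketch using scipy.signal; the component values R = 1 kOhm, L = 10 mH, C = 100 nF are made up purely for illustration and are not from the original post:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Illustrative component values (not from the question).
R, L, C = 1e3, 10e-3, 100e-9

# G(s) = (L/R) s / (L C s^2 + (L/R) s + 1)  -- the band-pass form derived above.
num = [L / R, 0.0]
den = [L * C, L / R, 1.0]
system = signal.TransferFunction(num, den)

w0 = 1.0 / np.sqrt(L * C)                   # centre (angular) frequency
w = np.logspace(np.log10(w0) - 2, np.log10(w0) + 2, 500)
w, mag, phase = signal.bode(system, w)      # mag in dB, phase in degrees

plt.semilogx(w, mag)
plt.xlabel('angular frequency [rad/s]')
plt.ylabel('|G| [dB]')
plt.title('RLC band-pass magnitude response')
plt.grid(True, which='both')
plt.show()

Changing R (and hence d) while keeping L and C fixed is a quick way to watch the peak sharpen or flatten without touching the centre frequency, which is exactly the separation of roles described above.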
{}
{'log_upvote_score': 5, 'links': ['https://electronics.stackexchange.com/questions/632813', 'https://electronics.stackexchange.com', 'https://electronics.stackexchange.com/users/310554/']}
jdg_382216
stackexchange
llm_judgeable_groundtruth_similarity
3196
Below is a question asked on the forum earthscience.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Say we have a model that describes a physical or dynamical system, for example, a hydrological model. If the model results do not match the measurements of the quantity being simulated, i.e. the model validation fails, is there any value in using the model for some other case or application? Now provide the response and nothing else.
The late George Box famously said:

All models are wrong. Some models are useful.

All models are wrong. That's important. All models are wrong. In any real-world system that we're interested in, the only accurate model of the system is the system itself. Anything else, any simplification, gives wrong answers.

Some models are useful. That's important, too.

Some models help us structure our discussions of the uncertainties (H/T Prof Neil Strachan).

Some models allow us to test "what if" scenarios, and look at what the relative changes might be. Now, in the real world, exogenous influences that the model can't account for would mean that the actual outcomes would be different: but if the model gives a near-enough account of the scale of relative changes, we can still make informed judgements on the basis of the models. Even when it involves pesky things such as turbulence.

Some models give us insights into emergent properties of systems: a lot of the really interesting problems are emergent properties that aren't necessarily obvious from just looking at the basic rules of the system, and any insights we can gain are valuable.

Some models allow us to do experiments on systems that we couldn't possibly experiment on. And although we know the models are wrong, they can at least give us an indication of the range of possible outcomes.

That's not to defend the use of bad models to do bad work. There's far too much of that in my field, and I do not like it. As Simon W says, it is possible to test models to see if they are suitable for a particular purpose. To continue using a model for a purpose where it is known to be inapplicable is charlatanry, not academia.
{}
{'log_upvote_score': 4, 'links': ['https://earthscience.stackexchange.com/questions/3196', 'https://earthscience.stackexchange.com', 'https://earthscience.stackexchange.com/users/1408/']}
jdg_382217
stackexchange
llm_judgeable_groundtruth_similarity
876540
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: I just received couple of mails with similar content which were encoded with base64. Now I want to reject or discard these kind of emails using body check. Before I had in my body_checks something like this: /Quanzhoucooway/ DISCARD But since the message is encoded that keyword will not be detected. Here is a base64 encoded message: DQpIaSBGcmllbmRzLA0KDQpHb29kIGRheSENCg0KVGhpcyBpcyBWaWN0b3JpYSBmcm9tIFF1YW56aG91Y29vd2F5IHNob2VzIHRyYWRpbmcgY28uLGx0ZCwgYSBwcm9mZXNzaW9uYWxtYW51ZmFjdHVyZXIgYW5kIGV4cG9ydGVyIG9mIGFsbCBraW5kcyBvZiBzaG9lcywgbWFpbmx5IGluIGNhc3VhbCBzaG9lcyBhbmQgc3BvcnRzIHNob2VzICwgd2hpY2ggaGFzIGJlZW4gc3VwcGxpZWQgdG8gdGhlIGZhbW91cyBmYXNoaW9uIGJyYW5kIHN0b3JlcyBmcm9tIDIwMTAuDQoNCk5vdGljaW5ndGhhdCB5b3UgYXJlIGxvb2tpbmcgZm9yIGhpZ2ggcXVhbGl0eSBmYXNoaW9uIHNob2VzLCBzbyBJIGhvcGUgd2UgY2FuIHdvcmsgdG9nZXRoZXIgaW4gdGhlIGZ1dHVyZS4gSWYgeW91IGFyZSBpbnRlcmVzdGVkLCBJIHdpbGwgc2VuZCB5b3Ugb3VyIGl0ZW1zIGFjY29yZGluZ2x5Lg0KDQpGWUksIHdlIGNhbiBtYWtlIGN1c3RvbWl6ZWQgc2FtcGxlcyBmb3IgeW91IGFjY29yZGluZ2x5Lg0KDQpMb29raW5nIGZvciB5b3VyIHNvb25lc3QgcmVzcG9uc2UuDQoNCkJSIQ0KDQpWaWN0b3JpYSANCg== So what is the best practice to block these kind of emails? Now provide the response and nothing else.
Don't do this with Postfix body_checks but write a SpamAssassin rule for it instead. SpamAssassin decodes the message body before applying its rules. Something like:

body     LOCAL_QUANZHOUCOOWAY  /Quanzhoucooway/
score    LOCAL_QUANZHOUCOOWAY  7.0
describe LOCAL_QUANZHOUCOOWAY  Block word Quanzhoucooway

These rules belong in /etc/mail/spamassassin/local.cf (or ~/.spamassassin/user_prefs ).
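A quick way to sanity-check the rule after adding it (spam-sample.eml is just a placeholder name for a saved copy of one of the offending messages):

# Check the configuration files for syntax errors.
spamassassin --lint

# Run the saved message through the rules in test mode and look for
# LOCAL_QUANZHOUCOOWAY in the report appended to the output.
spamassassin -t < spam-sample.eml | less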
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/876540', 'https://serverfault.com', 'https://serverfault.com/users/393019/']}
jdg_382218
stackexchange
llm_judgeable_groundtruth_similarity
33382848
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have an overlay Menu that has 3 dropdowns. When you click on one of the parent items if it has a dropdown , a class is added to the child to "activate" the dropdown and it expands and shows. Currently it works fine , and on click the class is added and removed if clicked again. The problem is currently you can have all dropdowns active and open at the same time. What I need to happen is to have only one dropdown be able to be active at a time. if one dropdown is active , and the user clicks on another , the original active dropdown closes and the newly clicked one becomes active. Also if the dropdown is active and user clicks on the same parent item again the dropdown closes. Current HTML I have excluded all other list items except for the ones that have dropdowns. <ul class="header__overlay-nav"> <li class="js-overlay-dropdown-trigger"> <a class="header__overlay-nav-links" href="#"> After Action Review <i class="fa fa-angle-down btn__icon-right"></i> </a> <ul class="dropdown--overlay"> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Overview </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Review Form </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Performance Card </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Recent Recordings </a> </li> </ul> </li> <li class="js-overlay-dropdown-trigger"> <a class="header__overlay-nav-links" href="#"> Downloads <i class="fa fa-angle-down btn__icon-right"></i> </a> <ul class="dropdown--overlay"> <li class="dropdown__item"> <a class="dropdown__links" href="#"> 100 Day Challenge App </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Desktop Wallpapers </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Screen Savers </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Forms </a> </li> </ul> </li> <li class="js-overlay-dropdown-trigger"> <a class="header__overlay-nav-links" href="#"> Inspiration <i class="fa fa-angle-down btn__icon-right"></i> </a> <ul class="dropdown--overlay"> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Get Your Mojo Working </a> </li> <li class="dropdown__item"> <a class="dropdown__links href="#"> Game Changers </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Bold Actions - Big Rewards </a> </li> <li class="dropdown__item"> <a class="dropdown__links" href="#"> Motivational Videos </a> </li> </ul> </li> </ul> Current Jquery Here is the original Jquery I was using to do a basic toggle of active class, Basically just using the toggleClass on the child UL of the clicked trigger. Commented out , I previously tried removing all active classes and then instead of toggling the class on click element I was adding, but removing all classes , only to add it to the clicked one made it not possible to close a dropdown by clicking the same trigger again. 
var $overlayDdTrigger = $('.js-overlay-dropdown-trigger');var $overlayClasses = { // Active css Class for dropdowns in Main Overlay OverlayDdActive: 'dropdown--overlay-is-active', ButtonIconIsRotated: 'btn__icon-is-rotated', }; $overlayDdTrigger.on('click', function() { if (_isMobile) { // Attempt to to remove all active classes on UL's prevents dropdown from // being able to close if the same trigger is clicked twice // $('ul.dropdown--overlay-is-active').removeClass($overlayClasses.OverlayDdActive); $(this).children('ul').toggleClass($overlayClasses.OverlayDdActive); $(this).find('.btn__icon-right').toggleClass($overlayClasses.ButtonIconIsRotated); } }); Thank you for the help in advance, I know there are a lot of questions that relate to this problem on here, I did a lot of searching but could not find any that would help me with this specific case. Now provide the response and nothing else.
Here is the code for your function. I know you are not looking for the answer but sometimes looking for an example and seeing how it works gets you more easily to the point where you understand how it really works. .datamsg1: .asciiz "Give a number: ".text.globl mainmain: li $v0, 4 la $a0, msg1 syscall # print msg li $v0, 5 syscall # read an int add $a0, $v0, $zero # move to $a0 jal fib # call fib add $a0, $v0, $zero li $v0, 1 syscall li $v0, 10 syscallfib: # $a0 = y # if (y == 0) return 0; # if (y == 1) return 1; # return fib(y - 1) + fib(y - 2); #save in stack addi $sp, $sp, -12 sw $ra, 0($sp) sw $s0, 4($sp) sw $s1, 8($sp) add $s0, $a0, $zero addi $t1, $zero, 1 beq $s0, $zero, return0 beq $s0, $t1, return1 addi $a0, $s0, -1 jal fib add $s1, $zero, $v0 # $s1 = fib(y - 1) addi $a0, $s0, -2 jal fib # $v0 = fib(n - 2) add $v0, $v0, $s1 # $v0 = fib(n - 2) + $s1 exitfib: lw $ra, 0($sp) # read registers from stack lw $s0, 4($sp) lw $s1, 8($sp) addi $sp, $sp, 12 # bring back stack pointer jr $ra return1: li $v0,1 j exitfib return0: li $v0,0 j exitfib Like Gusbro said in order to use recursion in mips you will have to do 2 things. jal (jump and link) to the name of the function but first always store the return address into a stack, $ra , so in the future if you want to return back to the beginning you will be able to using jr $ra . If you don't save a return address and try to access it via jr you will most likely get an invalid program counter error .
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/33382848', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3555135/']}
jdg_382219
stackexchange
llm_judgeable_groundtruth_similarity
6970224
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Can I provide/pass any arguments to signal handler? /* Signal handling */struct sigaction act;act.sa_handler = signal_handler;/* some more settings */ Now, handler looks like this: void signal_handler(int signo) { /* some code */} If I want to do something special i.e. delete temp files, can I provide those files as an argument to this handler? Edit 0: Thanks for the answers. We generally avoid/discourage use of global variables. And in this case, If you have a huge program, things can go wrong at different places and you might need to do a lot of cleanup. Why was the API designed this way? Now provide the response and nothing else.
You can't have data of your own passed to the signal handler as parameters. Instead you'll have to store your parameters in global variables. (And be really, really careful if you ever need to change that data after installing the signal handler). Response to edit 0: Historical reasons. Signals are a really old and really low-level design. Basically you're just giving the kernel a single address to some machine code and asking it to go to this specific address if such and such happens. We're back in the "portable assembler" mindset here, where the kernels provide a no-frills baseline service, and whatever the user process can reasonably be expected to do for itself, it must do itself. Also, the usual arguments against global variables don't really apply here. The signal handler itself is a global setting, so there is no relevant possibility of having several different sets of user-specified parameters for it around. (Well, actually it is not entirely global but only thread-global. But the threading API will include some mechanism for thread-local storage, which is just what you need in this case).
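By way of illustration, here is a minimal C sketch of the global-variable approach described above; the temp-file path and overall structure are made up for the example, not a fixed recipe:

#include <signal.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

/* The "argument" the handler needs: set it before installing the handler
 * and do not change it afterwards. */
static const char *temp_file_path = "/tmp/myapp.tmp";

static void signal_handler(int signo)
{
    /* Only async-signal-safe functions are allowed in a handler;
     * unlink() and _exit() are on that list, printf() generally is not. */
    unlink(temp_file_path);
    _exit(1);
}

int main(void)
{
    struct sigaction act;
    memset(&act, 0, sizeof(act));
    act.sa_handler = signal_handler;
    sigemptyset(&act.sa_mask);
    sigaction(SIGINT, &act, NULL);

    /* ... normal program work ... */
    pause();  /* wait for a signal, just to keep the example alive */
    return 0;
}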
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/6970224', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/358892/']}
jdg_382220
stackexchange
llm_judgeable_groundtruth_similarity
11888059
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to make a sound play when a button is pressed. Also, there is more than one sound. I am using Xcode 4.4.1 and Storyboard. In the .h file:

{
    IBOutlet UIButton *playSound;
}

Now provide the response and nothing else.
I thought it would be fun to write this type of example, so I wrote it. It demonstrates how to play a different random sound when the button is pressed:

-(IBAction)buttonPressedWithSound:(id)sender {
    int randomSoundNumber = arc4random() % 4; //random number from 0 to 3
    NSLog(@"random sound number = %i", randomSoundNumber);

    NSString *effectTitle;
    switch (randomSoundNumber) {
        case 0:
            effectTitle = @"sound1";
            break;
        case 1:
            effectTitle = @"sound2";
            break;
        case 2:
            effectTitle = @"sound3";
            break;
        case 3:
            effectTitle = @"sound4";
            break;
        default:
            break;
    }

    SystemSoundID soundID;
    NSString *soundPath = [[NSBundle mainBundle] pathForResource:effectTitle ofType:@"caf"];
    NSURL *soundUrl = [NSURL fileURLWithPath:soundPath];
    AudioServicesCreateSystemSoundID((CFURLRef)soundUrl, &soundID);
    AudioServicesPlaySystemSound(soundID);
}

Explanation: Add four sounds to your project: sound1.caf, sound2.caf, sound3.caf and sound4.caf. Import the AudioToolbox framework into your project and include #import <AudioToolbox/AudioToolbox.h> in your .h file. Don't forget to connect your button to buttonPressedWithSound via IBAction.
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/11888059', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1590277/']}
jdg_382221
stackexchange
llm_judgeable_groundtruth_similarity
435002
Below is a question asked on the forum math.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: how to compare $ \sin(19^{2013})$ and $\cos (19^{2013})$ or even find their value range with normal calculator? I can take $2\pi k= 19^{2013} \to \ln(k)= 2013 \ln(19)- \ln(2 \pi)=5925.32 \to k= 2.089 \times 10^{5925}$, but it useless.(I can get final answer with WolframAlpha but it is not allowed.) Any hint? thanks! Now provide the response and nothing else.
If $19^{2013}$ were measured in degrees we could use a simple trick. $19$ is one more than $18$, which is a multiple of $6$. This means that any power of $19$ is also $1$ more than a multiple of $6$; in particular $19^2 = 361$. $19^2$ being $1$ more than a multiple of $360$ tells us that any even power of $19$ will be $1$ more than a multiple of $360$. In particular we can conclude that $19^{2012}$ is one more than a multiple of $360$. What does that mean? It means that if we travel $19^{2012}$ degrees around the unit circle we will just end up at the $1$ degree tick mark. $19^{2013}$ is just $19^{2012}$, $19$ times. This means we would make the trip described above $19$ times, each time ending up one tick further than the last time, meaning that we would finish at $19$ degrees. In other words, $19$ degrees is located at the same position on the unit circle as $19^{2013}$ degrees. If you've never worked with modular arithmetic you may not understand why $19^2 = 360 + 1$ implies that $19^{2012} = k \cdot 360+1$. To see this consider powers of $(360+1)$:
$$360+1$$
$$(360+1)^2 = 360^2 + 2\cdot360 + 1$$
$$(360+1)^3 = 360^3 + 3\cdot360^2 + 3\cdot360 + 1$$
$$\vdots$$
$$(360+1)^n = 360^n + n\cdot360^{n-1} + \cdots + n\cdot360 + 1$$
Notice the result is always one more than a multiple of $360$.
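A quick sanity check of the degree argument above, using Python's built-in modular exponentiation (this only verifies the congruence; it does not settle the original radian question):

# pow(base, exp, mod) computes base**exp % mod without building the huge number.
print(pow(19, 2012, 360))  # 1  -> 19**2012 is one more than a multiple of 360
print(pow(19, 2013, 360))  # 19 -> 19**2013 degrees lands on the 19-degree tick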
{}
{'log_upvote_score': 4, 'links': ['https://math.stackexchange.com/questions/435002', 'https://math.stackexchange.com', 'https://math.stackexchange.com/users/59487/']}
jdg_382222
stackexchange
llm_judgeable_groundtruth_similarity
24024558
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: new to stackoverflow. Want to code an app for Windows Store. But my VS2013 Express does not show a template for it. Any workarounds? Appreciate your help! Edit: No code s part of the problem. In Microsofts tutorial the say: "pick the Windows Store template type" but there is no such template in VS Express. So how to start coding without template? http://msdn.microsoft.com/en-us/library/windows/apps/hh986965.aspx I have Win 8.1 installed along with VS2013 Express Now provide the response and nothing else.
tl;dr: Use the class constant approach if you are using Swift 1.2 or above and the nested struct approach if you need to support earlier versions. From my experience with Swift there are three approaches to implement the Singleton pattern that support lazy initialization and thread safety.

Class constant

class Singleton {
    static let sharedInstance = Singleton()
}

This approach supports lazy initialization because Swift lazily initializes class constants (and variables), and is thread safe by the definition of let. This is now the officially recommended way to instantiate a singleton. Class constants were introduced in Swift 1.2. If you need to support an earlier version of Swift, use the nested struct approach below or a global constant.

Nested struct

class Singleton {
    class var sharedInstance: Singleton {
        struct Static {
            static let instance: Singleton = Singleton()
        }
        return Static.instance
    }
}

Here we are using the static constant of a nested struct as a class constant. This is a workaround for the lack of static class constants in Swift 1.1 and earlier, and still works as a workaround for the lack of static constants and variables in functions.

dispatch_once

The traditional Objective-C approach ported to Swift. I'm fairly certain there's no advantage over the nested struct approach but I'm putting it here anyway as I find the differences in syntax interesting.

class Singleton {
    class var sharedInstance: Singleton {
        struct Static {
            static var onceToken: dispatch_once_t = 0
            static var instance: Singleton? = nil
        }
        dispatch_once(&Static.onceToken) {
            Static.instance = Singleton()
        }
        return Static.instance!
    }
}

See this GitHub project for unit tests.
{}
{'log_upvote_score': 11, 'links': ['https://Stackoverflow.com/questions/24024558', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3704679/']}
jdg_382223
stackexchange
llm_judgeable_groundtruth_similarity
35935961
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have seen on Stackoverflow that there is an easy way to populate a combobox with an enumeration:

cbTipos.DataSource = Enum.GetValues(typeof(TiposTrabajo));

In my case I have defined a Description for some of my enumeration values:

public enum TiposTrabajo
{
    [Description("Programacion Otros")]
    ProgramacionOtros = 1,
    Especificaciones = 2,
    [Description("Pruebas Taller")]
    PruebasTaller = 3,
    [Description("Puesta En Marcha")]
    PuestaEnMarcha = 4,
    [Description("Programación Control")]
    ProgramacionControl = 5
}

This is working pretty well, but it shows the value, not the description. My problem is that I want the combobox to show the description of the enumeration value when it has one, or the value itself when it doesn't. If necessary I can add a description for the values that don't have one. Thx in advance. Now provide the response and nothing else.
Try this:

cbTipos.DisplayMember = "Description";
cbTipos.ValueMember = "Value";
cbTipos.DataSource = Enum.GetValues(typeof(TiposTrabajo))
    .Cast<Enum>()
    .Select(value => new
    {
        (Attribute.GetCustomAttribute(value.GetType().GetField(value.ToString()), typeof(DescriptionAttribute)) as DescriptionAttribute).Description,
        value
    })
    .OrderBy(item => item.value)
    .ToList();

In order for this to work, all the values must have a description or you'll get a NullReferenceException. Hope that helps.
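If you'd rather not add a Description to every member, a small helper along these lines (my own sketch, not part of the answer above) falls back to the enum's name when the attribute is missing:

using System;
using System.ComponentModel;

static class EnumHelper
{
    public static string GetDescription(Enum value)
    {
        var field = value.GetType().GetField(value.ToString());
        var attribute = Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute))
                            as DescriptionAttribute;
        // Fall back to the plain member name (e.g. "Especificaciones") if there is no attribute.
        return attribute != null ? attribute.Description : value.ToString();
    }
}

// Usage: replace the .Select(...) projection above with
// .Select(value => new { Description = EnumHelper.GetDescription(value), value })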
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/35935961', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/5595602/']}
jdg_382224
stackexchange
llm_judgeable_groundtruth_similarity
4506074
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: Which one of these two ways is faster and why? window.setTimeout("func()", 100); Or window.setTimeout(function(){func();}, 100); I'm guessing the second way is faster if for no other reason other than John Resig and all the ninjas use it, I'm guessing because it already parsed as opposed to the first way which it would have to create a new parsing "thingie". I vaguely recall this being one of the reasons people don't like eval(). Also while I have you here, in the second code snipplet, is the first semi-colon considered good practice in such a case? Now provide the response and nothing else.
There's a third faster/simpler option: window.setTimeout(func, 100); ...strictly relating to your question, the second is faster, as it's still a reference - not an evaluation, which is always fairly expensive. As for the semicolon, yes it's a good practice to always use them. They should never have been optional in my opinion, but plenty will disagree with me here. You can't really argue against being explicit in your code, that's always a good thing.
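If the reason for reaching for the string form was to pass arguments, a closure (or Function.prototype.bind) keeps the reference-based approach; a small sketch, where func and its arguments stand in for whatever your code actually defines:

// Anonymous wrapper - still a reference, only evaluated when the timer fires.
window.setTimeout(function () {
    func(someArg, otherArg);
}, 100);

// Or pre-bind the arguments so no wrapper body is needed (ES5+).
window.setTimeout(func.bind(null, someArg, otherArg), 100);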
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/4506074', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/465546/']}
jdg_382225
stackexchange
llm_judgeable_groundtruth_similarity
58239980
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm trying to generate a QR code in my app. The problem is that whenever I do, the picture is just an empty square. I stripped down the code to the basics to try and show my problem. struct ContentView: View { @State var image: Image = Image(systemName: "circle.fill") var body: some View { VStack { image .resizable() .aspectRatio(contentMode: .fill) .frame(width: 200, height: 200) .background(Color.green) }.onAppear { let myString = "Hello There" let data = myString.data(using: String.Encoding.ascii) guard let qrFilter = CIFilter(name: "CIQRCodeGenerator") else { return } qrFilter.setValue(data, forKey: "inputMessage") guard let qrImage = qrFilter.outputImage else { return } let transform = CGAffineTransform(scaleX: 10, y: 10) let scaledQrImage = qrImage.transformed(by: transform) self.image = Image(uiImage: UIImage(ciImage: scaledQrImage)) } }} The result is this: Now provide the response and nothing else.
I guess the problem is that your CIImage is not actually "produced". You see, a CIImage is just a recipe for an image that needs to be rendered by a CIContext into an actual bitmap image. The (poorly documented) convenience initializer UIImage(ciImage:) only works if the destination you assign the image to understands that the pixels of the UIImage are not yet there and need to be rendered first. UIImageView could handle this, but it seems SwiftUI's Image doesn't. What you need to do is to create a CIContext (once, maybe as a property of your view) and use it to render your barcode image into a bitmap like this (createCGImage returns an optional, so unwrap it): if let cgImage = self.ciContext.createCGImage(scaledQrImage, from: scaledQrImage.extent) { self.image = Image(uiImage: UIImage(cgImage: cgImage)) }
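Putting the pieces together, a minimal sketch of the fixed view might look like this (names are illustrative, and the .interpolation(.none) modifier is an extra touch to keep the QR modules crisp, not something the answer above requires):

import SwiftUI
import UIKit
import CoreImage

struct QRView: View {
    @State private var image = Image(systemName: "circle.fill")
    private let ciContext = CIContext()   // create once and reuse

    var body: some View {
        image
            .interpolation(.none)
            .resizable()
            .frame(width: 200, height: 200)
            .onAppear {
                let data = "Hello There".data(using: .ascii)
                guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return }
                filter.setValue(data, forKey: "inputMessage")
                guard let output = filter.outputImage?
                        .transformed(by: CGAffineTransform(scaleX: 10, y: 10)),
                      let cgImage = ciContext.createCGImage(output, from: output.extent)
                else { return }
                image = Image(uiImage: UIImage(cgImage: cgImage))
            }
    }
}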
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/58239980', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/11450810/']}
jdg_382226
stackexchange
llm_judgeable_groundtruth_similarity
30210
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: Should I use http or ftp links in /etc/apt/sources.list ? What I mean is is using something like deb ftp://ftp.debian.org/debian/ squeeze main contrib. non-freedeb-src ftp://ftp.debian.org/debian/ squeeze main contrib. non-free better or worse than using deb http://ftp.debian.org/debian/ squeeze main contrib. non-freedeb-src http://ftp.debian.org/debian/ squeeze main contrib. non-free ? I guess as apt uses wget to fetch things, this is same as asking if wget is faster on ftp . Now provide the response and nothing else.
You're not going to save much time using one over the other, but if efficiency is what you care about, use HTTP. An FTP client has to send a username and password and receive acknowledgments for those, and then issue several commands to set up the file transfer and receive acks for those, too. All this back and forth magnifies the effect of network latency; with 100-200ms ping times to the server you could waste a second setting up the transfer. With HTTP, the client opens the connection, sends one command and the file transfer starts immediately.
{}
{'log_upvote_score': 5, 'links': ['https://unix.stackexchange.com/questions/30210', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/14812/']}
jdg_382227
stackexchange
llm_judgeable_groundtruth_similarity
49916472
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I would like to know the correct naming convention for the following variables in Python which I couldn't find one from Google Style Guide and PEP8 (Let's say I have the following Python code) output_file = open(output_file_path, 'w') What would be the best variable name for the out file name? I believe the possible options for the variable name would be something like output_file outputfile outfile out_file outfile And the path variable can be something like output_file_path output_filepath output_path out_path ... Now provide the response and nothing else.
According to PEP8 you should use _ between the meaningful words of a variable name, just as we use CapWords for class names. Searching for the word "filepath" shows there is no such single word in English; it is really two separate words (file, path), so file_path is the more correct form than 'filepath', although both are used by developers these days. As for the output part, the Zen of Python already tells us that "Readability counts." and "Explicit is better than implicit.", so it is better to keep output at the start of the variable name. So I think output_file_path and output_file are the correct and best choices here.
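A tiny illustrative snippet using those names (the path and contents are made up for the example):

output_file_path = "/tmp/report.txt"               # the path is a plain string
with open(output_file_path, "w") as output_file:   # the open file object
    output_file.write("done\n")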
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/49916472', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/7024693/']}
jdg_382228
stackexchange
llm_judgeable_groundtruth_similarity
1898553
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I just want to match some text in a Bash script. I've tried using sed but I can't seem to make it just output the match instead of replacing it with something. echo -E "TestT100String" | sed 's/[0-9]+/dontReplace/g' Which will output TestTdontReplaceString . Which isn't what I want, I want it to output 100 . Ideally, it would put all the matches in an array. edit: Text input is coming in as a string: newName(){ #Get input from function newNameTXT="$1" if [[ $newNameTXT ]]; then #Use code that im working on now, using the $newNameTXT string. fi} Now provide the response and nothing else.
You could do this purely in bash using the =~ regex operator inside the double square bracket [[ ]] test, which stores the capture groups in an array called BASH_REMATCH: [[ "TestT100String" =~ ([0-9]+) ]] && echo "${BASH_REMATCH[1]}"
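Applied to the function from the question's edit, a sketch might look like this (the num variable is just for illustration):

newName() {
    newNameTXT="$1"
    if [[ $newNameTXT =~ ([0-9]+) ]]; then
        num="${BASH_REMATCH[1]}"    # first captured group, e.g. 100
        echo "$num"
    fi
}

newName "TestT100String"            # prints 100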
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/1898553', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/138541/']}
jdg_382229
stackexchange
llm_judgeable_groundtruth_similarity
27977525
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I have a function that I am trying to convert to the new arrow syntax in ES6 . It is a named function: function sayHello(name) { console.log(name + ' says hello');} Is there a way to give it a name without a var statement: var sayHello = (name) => { console.log(name + ' says hello');} Obviously, I can only use this function after I have defined it. Something like following: sayHello = (name) => { console.log(name + ' says hello'); } Is there a new way to do this in ES6 ? Now provide the response and nothing else.
How do I write a named arrow function in ES2015? You do it the way you ruled out in your question: You put it on the right-hand side of an assignment or property initializer where the variable or property name can reasonably be used as a name by the JavaScript engine. There's no other way to do it, but doing that is correct and fully covered by the specification. (It also works for traditional anonymous function expressions.) Per spec, this function has a true name, sayHello: const sayHello = (name) => { console.log(name + ' says hello');};console.log(sayHello.name); // "sayHello" This is currently defined in Assignment Operators > Runtime Semantics: Evaluation where it does the abstract NamedEvaluation operation (currently step 1.c.i). (You can see everywhere this applies by hovering your mouse over NamedEvaluation in the header there and clicking "References".) (Previously, before ES2019, Assignment Operators > Runtime Semantics: Evaluation used the abstract SetFunctionName operation, step 1.e.iii, but from ES2019 onward this specification abstraction was replaced with NamedEvaluation.) Similarly, PropertyDefinitionEvaluation uses NamedEvaluation and thus gives this function a true name: let o = { sayHello: (name) => { console.log(`${name} says hello`); }}; Modern engines set the internal name of the function for statements like that already. Note: For this name inference to occur, the function expression has to be directly assigned to the target. For instance, this doesn't infer the name: const sayHello = (void 0, (name) => { console.log(name + ' says hello');});console.log(sayHello.name); // "" That's because the function expression isn't being directly assigned to the const; it's an operand to a further operator (in that case, the comma operator, but it would be the same for [say] true && (name) => { }). For example, in Chrome, Edge (Chromium-based, v79 onward), or Firefox, open the web console and then run this snippet: "use strict";let foo = () => { throw new Error(); };console.log("foo.name is: " + foo.name);try { foo();} catch (e) { console.log(e.stack);} On Chrome 51 and above and Firefox 53 and above (and "Legacy" Edge 13 and above with an experimental flag, or "Chromium" Edge 79 onward), when you run that, you'll see: foo.name is: fooError at foo (http://stacksnippets.net/js:14:23) at http://stacksnippets.net/js:17:3 Note the foo.name is: foo and Error...at foo. On Chrome 50 and earlier, Firefox 52 and earlier, and Legacy Edge without the experimental flag, you'll see this instead because they don't have the Function#name property (yet): foo.name is: Error at foo (http://stacksnippets.net/js:14:23) at http://stacksnippets.net/js:17:3 Note that the name is missing from foo.name is:, but it is shown in the stack trace. It's just that actually implementing the name property on the function was lower priority than some other ES2015 features; Chrome and Firefox have it now; Edge has it behind a flag, presumably it won't be behind the flag a lot longer. "Obviously, I can only use this function after I have defined it" Correct. There is no function declaration syntax for arrow functions, only function expression syntax, and there's no arrow equivalent to the name in an old-style named function expression ( var f = function foo() { }; ). So there's no equivalent to: console.log(function fact(n) { if (n < 0) { throw new Error("Not defined for negative numbers"); } return n == 0 ? 1 : n * fact(n - 1);}(5)); // 120
You have to break it into two expressions (I'd argue you should do that anyway): const fact = n => { if (n < 0) { throw new Error("Not defined for negative numbers."); } return n == 0 ? 1 : n * fact(n - 1);};console.log(fact(5)); Of course, if you have to put this where a single expression is required, you can always...use an arrow function: console.log((() => { const fact = n => { if (n < 0) { throw new Error("Not defined for negative numbers."); } return n == 0 ? 1 : n * fact(n - 1); }; return fact(5);})()); // 120 I ain't sayin' that's pretty, but it works if you absolutely, positively need a single expression wrapper. Side note: What if you don't want a function to get its name from the identifier you're assigning to? That is, suppose you don't want example.name to be "example" here? const example = () => {};console.log(example.name); // "example" You can avoid it by using any expression that doesn't use NamedEvaluation. Probably the most popular way to do this sort of thing is the comma operator: const example = (0, () => {});// ^^^−−−−−−−−−^console.log(example.name); // "" The 0 there can be anything you want, it's evaluated and then thrown away, so 0 is a popular choice. Passing the function through the comma operator breaks the direct link between the assignment and the function expression, preventing NamedEvaluation from providing the name example for the function. (This is similar to other famous uses of the comma operator, like (0, object.example)() which calls object.example without making object the value of this within the call, or (0, eval)("code"), which does an eval, but not in the current scope as it normally would.) (Thank you to Sebastian Simon for raising this point in the comments.)
{}
{'log_upvote_score': 9, 'links': ['https://Stackoverflow.com/questions/27977525', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/968801/']}
jdg_382230
stackexchange
llm_judgeable_groundtruth_similarity
104094
Below is a question asked on the forum unix.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: When you press Ctrl + L in bash default mode the screen is cleared. But when I run set -o vi and press Ctrl + L the keystroke is printed ( ^L ).Is there any way to keep this behavior? Now provide the response and nothing else.
Ctrl + L is also bound in vi command mode but not in insert mode. There's no default binding for clear-screen in insert mode. Readline bindings should be specified in ~/.inputrc, like so:

set editing-mode vi
$if mode=vi
set keymap vi-command
# these are for vi-command mode
Control-l: clear-screen
set keymap vi-insert
# these are for vi-insert mode
Control-l: clear-screen
$endif

This will bind Ctrl + L to clear the screen in both normal and insert mode. Naturally, if you prefer to only use it in one mode, just remove the relevant option. If you prefer to set this just for bash, use the following equivalents in ~/.bashrc:

set -o vi
bind -m vi-command 'Control-l: clear-screen'
bind -m vi-insert 'Control-l: clear-screen'

There is an extensive list of readline commands that you can use to customize your bash shell with.
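A quick way to confirm the binding took effect in each keymap (this check is an addition, not from the answer above; run it inside an interactive bash):

bind -m vi-insert -q clear-screen    # reports which keys invoke clear-screen in insert mode
bind -m vi-command -q clear-screen   # same for command mode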
{}
{'log_upvote_score': 7, 'links': ['https://unix.stackexchange.com/questions/104094', 'https://unix.stackexchange.com', 'https://unix.stackexchange.com/users/47577/']}
jdg_382231
stackexchange
llm_judgeable_groundtruth_similarity
45556197
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: The US Census designates each state to a region (ie New York is in NorthEast). I have a dataset of states for which I would like to look up the states' corresponding US Census region. The us module was my first attempt but it doesn't contain the region (to my awareness). The census module provides an API to the US Census but I don't see a way to query the region. Short of having to manually construct a dictionary, does anyone know of a convenient way? Data sample: df = pd.DataFrame({'state':['Alabama','Georgia','California','Tennessee', 'Florida','Illinois','Arizona','New York','Maryland','Virginia','New Jersey','Kansas', 'District of Columbia','Texas','Missouri','North Carolina','Ohio','Massachusetts', 'Wisconsin','Pennsylvania','Alaska','Michigan', 'Arkansas']})df.loc[:,'region'] = np.nan Now provide the response and nothing else.
It's always a bad idea to disable certificate verification (setting http.sslVerify to false ). I think the problem here is that, when you installed git , you opted to use the Windows Secure Channel library instead of the OpenSSL library: As pointed out by @CurtJ.Sampson (thanks, Curt!), you can switch to using the OpenSSL library instead, which will fix your issue. This can be done with the following command: git config --global http.sslBackend openssl Alternatively, you can re-install git , specifying the OpenSSL library in the process. Don't forget to turn git SSL verification back on with: git config --global http.sslVerify true Update : If you're using self-signed or corporate certificates on your own git server, and you get an error when attempting to connect to it (such as self signed certificate in certificate chain , or SSL certificate problem: unable to get local issuer certificate ), then the solution is to tell git where to find the CA that was used to sign that site's certificate. You can do this with the following configuration command: git config --global http.{your site's URL here}.sslcainfo "{path to your cert file}" For example, if you have a local git server at https://my.gitserver.com/ and the CA that signed the site's certificate is in C:\Certs\MyCACert.crt , then you'll need to enter: git config --global http.https://my.gitserver.com/.sslcainfo "C:\Certs\MyCACert.crt" This is a more robust solution compared to adding your CA certificate to git 's bundled ca-bundle.crt file, since that file will be overwritten when you next update git .
{}
{'log_upvote_score': 6, 'links': ['https://Stackoverflow.com/questions/45556197', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1978817/']}
jdg_382232
stackexchange
llm_judgeable_groundtruth_similarity
1446
Below is a question asked on the forum raspberrypi.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: I wish to reformat my SD card to use normally again (it currently has one 78 MB FAT32 partition and one 3.9 GB Linux partition). How do I do this (on Windows/Mac/*nix)? Now provide the response and nothing else.
You can use DISKPART in Windows, or the equivalent fdisk command under Linux/Mac. DISKPART (Windows) Start a command prompt, and start the DISKPART console. List all of your disks by typing LIST DISK, then select the proper disk with SELECT DISK # (where # is the SD card). You can then type CLEAN to clear the partition table on the card, effectively blanking it. MAKE SURE YOU SELECTED THE PROPER DISK BEFORE RUNNING THE CLEAN COMMAND! To create a primary partition to reuse the space on the card, type CREATE PARTITION PRIMARY. This will then reallocate the previously "cleaned" space. To format, type FORMAT FS=FAT32 QUICK, and finally, to reassign a drive letter, type ASSIGN. If you're unable to determine the proper disk, remove the SD card, run DISKPART and LIST DISK, and then re-run it with the SD card inserted. The SD card is just the disk that has been added. Note that the above commands are not case sensitive; I used caps to match the convention DISKPART displays. FDISK / CFDISK (Linux/Mac) In a terminal, start fdisk /dev/sdx where /dev/sdx is your SD card device (may depend on the Linux distro you're using, see below). You can then delete all existing partitions on the device by typing d, add a single new partition with n, and write the changes with w. You can also just type o to create a new empty partition table and start laying everything out from scratch. cfdisk is another viable tool, which is basically fdisk with a greatly improved user interface. Note that fdisk only writes the partition table; to actually format the new partition, create a filesystem on it with something like mkfs.vfat -F 32 /dev/sdx1. In both cases, once the drive is formatted, you will lastly need to mount it. If you're unable to determine the proper device, remove the SD card, run fdisk -l, and then re-run it with the SD card inserted. The SD card is just the device that has been added.
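For the Linux side, a condensed sketch of the whole sequence (illustrative only; replace sdX with your card's device and double-check it with lsblk or fdisk -l first):

sudo fdisk /dev/sdX                 # d to delete partitions, n for a new primary, w to write
sudo mkfs.vfat -F 32 /dev/sdX1      # put a FAT32 filesystem on the new partition
sudo mount /dev/sdX1 /mnt           # mount it to start using the card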
{}
{'log_upvote_score': 9, 'links': ['https://raspberrypi.stackexchange.com/questions/1446', 'https://raspberrypi.stackexchange.com', 'https://raspberrypi.stackexchange.com/users/160/']}
jdg_382233
stackexchange
llm_judgeable_groundtruth_similarity
112645
Below is a question asked on the forum physics.stackexchange.com. Provide a good and informational response to it like a helpful human would. Question: The picture shows an accelerating spaceship with two clocks inside it. It is so far away from all other bodies that gravity is of no importance. Will the bottommost clock be slower than the topmost one? Or will both clocks run at the same rate? Now provide the response and nothing else.
The bottom clock will run slower than the top clock. The simple way to explain this is to appeal to the equivalence principle. This tells us that locally an acceleration is equivalent to a gravitational field. So if the rocket is accelerating at some acceleration $a$ this is equivalent to two clocks sitting stationary in a gravitational acceleration $a$. We can use this to calculate the gravitational time dilation between the clocks because to a first approximation the relative time dilation is given by: $$ \frac{\Delta t_{top}}{\Delta t_{bottom}} = \frac{1}{\sqrt{ 1 + \frac{2 \Delta\Phi}{c^2}}} $$ where $\Delta t_{top}$ is the time interval measured by the top clock, $\Delta t_{bottom}$ is the time interval measured by the bottom clock and $\Delta\Phi$ is the difference in the Newtonian gravitational potential. If the distance between the clocks is $h$, then the difference in the potential is simply: $$ \Delta\Phi = ah $$ so: $$ \frac{\Delta t_{top}}{\Delta t_{bottom}} = \frac{1}{\sqrt{ 1 + \frac{2ah}{c^2}}} $$ Let's do this calculation for an acceleration of $1g$ and a rocket length of $100$ m. We're taking the upward direction as positive, which means the acceleration is negative because it points down. The relative time is: $$\begin{align} \frac{\Delta t_{top}}{\Delta t_{bottom}} &= \frac{1}{\sqrt{ 1 + \frac{2 \times -9.81 \times 100}{c^2}}} \\ &= 1.00000000000001\end{align}$$ The ratio is possibly better written as $1 + 10^{-14}$ i.e. there are thirteen zeros after the decimal point. This is an extraordinarily small effect, but it can be measured. Indeed it was measured by the Pound-Rebka experiment .
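To put that ratio on a more tangible scale (a rough back-of-the-envelope figure added here, not part of the original calculation): over one year of sustained $1g$ acceleration the accumulated offset between the two clocks would only be about $$ \Delta t \approx 10^{-14} \times 3.2\times 10^{7}\ \mathrm{s} \approx 3\times 10^{-7}\ \mathrm{s} \approx 0.3\ \mu\mathrm{s}. $$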
{}
{'log_upvote_score': 4, 'links': ['https://physics.stackexchange.com/questions/112645', 'https://physics.stackexchange.com', 'https://physics.stackexchange.com/users/6485/']}
jdg_382234
stackexchange
llm_judgeable_groundtruth_similarity
17543386
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I want to create a gnuplot with three plots in it.The data should be inline (as I want to only It should look like this: Currently I am using the following gnuplot script to create the plot: set terminal pngset output "test.png"plot for[col=2:4] "data.txt" using 1:col title columnheader(col) with lines The file data.txt is: Generation Best Worst Average0 2 1 01 3 1 22 4 3 33 4 3 34 6 3 45 7 4 56 9 6 77 10 6 98 10 5 69 11 6 810 12 7 9 I would like to pipe the data.txt into gnuplot and not to rely on the referenced data file in the script.Something like cat data.txt | gnuplot plot.gnu .The reason for this is, that I have several data.txt files and don't want to build a plot.gnu file for each of these. I read about the special '-' file in this stackoverflow thread and I read about multiple plots in one file . However this would require to include the data with the gnuplot code, which isn't clean. Now provide the response and nothing else.
If you are on a Unix system (i.e. not Windows) you can use '<cat' instead of '-' to read from stdin: plot '<cat' using ... Then you can do cat data.txt | gnuplot script.gp . However, in the specific case you mention in your question, with the plot in the for loop, you read the input three times. So sending the data through stdin is not appropriate, since the data will be gone after the first time it is read.
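Since the goal is to reuse one script for several data files, another common workaround (a suggestion beyond the answer above) is to pass the file name into the script as a variable instead of piping the data at all. Invoked as gnuplot -e "datafile='data.txt'" plot.gnu, the script might look like:

set terminal png
set output datafile.".png"
plot for [col=2:4] datafile using 1:col title columnheader(col) with lines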
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/17543386', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/-1/']}
jdg_382235
stackexchange
llm_judgeable_groundtruth_similarity
9237612
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: I'm creating my forms using simple_form and it's all good, apart from when I want to just display some text rather than show an input box of some type. So I need to show a label and also the display text to go with it, e.g. Name : Chris, where "Name" is the label and "Chris" is the display text. So Imagine I have a simple_form : =simple_form_for @property do |f| =f.display_field "Contact Name", "Chris" =f.input :customer_reference =f.input :premises_description =f.input :po_number, :label=>"Purchase Order Number" The "f.display_field" is a made up method, but it is how I imagine the method would look that I need. All it would do Is show a label and some text next to it. What is the easiest way to achieve this? CheersChris Now provide the response and nothing else.
I use a custom input for this purpose:

class FakeInput < SimpleForm::Inputs::Base
  # This method usually returns the input's HTML, like <input ... />,
  # but in this case it returns just the value of the attribute.
  def input
    @builder.object.send(attribute_name)
  end
end

If you place it somewhere like app/inputs/fake_input.rb, you will be able to use it in your simple forms:

= simple_form_for @property do |f|
  = f.input :contact_name, :as => :fake

The input's type is derived from the input's class name (without "Input", underscored). So for FakeInput it is :fake.
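Applied to the form in the question, usage might look like this (just a sketch; it assumes the @property model actually has a contact_name attribute to display, and :label only overrides the label text):

=simple_form_for @property do |f|
  =f.input :contact_name, :as => :fake, :label => "Contact Name"
  =f.input :customer_reference
  =f.input :premises_description
  =f.input :po_number, :label => "Purchase Order Number"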
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/9237612', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1096436/']}
jdg_382236
stackexchange
llm_judgeable_groundtruth_similarity
58149
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: Possible Duplicate: www.example.com vs example.com Is there any particular reason that "www." is required for some websites? It seems to me that all this does is cause inconvenience (at best) and confusion. For example, http://math.cmu.edu gives a "Server not found" error, but http://www.math.cmu.edu takes you to the right place. Also, on HD Nation (Revision 3) they advertise for Netflix, but warn that you need to include "www." for the show-specific URL and discount to count. Now provide the response and nothing else.
It depends on how DNS is setup. If they do not have an A (or host) record at math.cmu.edu pointed at the IP of the web server, then the site will be inaccessible without adding the "www" in front. There is no standard that says a site needs to be accessible without the "www", but it has been a common practice for a long time since the web is so prevalent.
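As an illustration (the names and the 203.0.113.x documentation address are mine, not from the answer above), the zone would need something along these lines for both forms of the URL to resolve:

example.com.      IN  A      203.0.113.10   ; bare domain points at the web server
www.example.com.  IN  CNAME  example.com.   ; www resolves to the same host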
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/58149', 'https://serverfault.com', 'https://serverfault.com/users/9234/']}
jdg_382237
stackexchange
llm_judgeable_groundtruth_similarity
21626690
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How should one deal with Gson and required versus optional fields? Since all fields are optional, I can't really fail my network request based on whether the response JSON contains some key; Gson will simply parse a missing key to null. The method I am using is gson.fromJson(json, mClassOfT); For example, if I have the following JSON: {"user_id":128591, "user_name":"TestUser"} And my class: public class User { @SerializedName("user_id") private String mId; @SerializedName("user_name") private String mName; public String getId() { return mId; } public void setId(String id) { mId = id; } public String getName() { return mName; } public void setName(String name) { mName = name; }} Is there any option to get Gson to fail if the JSON does not contain the user_id or user_name key? There can be many cases where you need at least some values to be parsed while others could be optional. Is there any pattern or library to be used to handle this case globally? Thanks. Now provide the response and nothing else.
As you note, Gson has no facility to define a "required field" and you'll just get null in your deserialized object if something is missing in the JSON. Here's a re-usable deserializer and annotation that will do this. The limitation is that if the POJO required a custom deserializer as-is, you'd have to go a little further and either pass in a Gson object in the constructor to deserialize the object itself or move the annotation checking out into a separate method and use it in your deserializer. You could also improve on the exception handling by creating your own exception and passing it to the JsonParseException so it can be detected via getCause() in the caller. That all said, in the vast majority of cases, this will work: public class App{ public static void main(String[] args) { Gson gson = new GsonBuilder() .registerTypeAdapter(TestAnnotationBean.class, new AnnotatedDeserializer<TestAnnotationBean>()) .create(); String json = "{\"foo\":\"This is foo\",\"bar\":\"this is bar\"}"; TestAnnotationBean tab = gson.fromJson(json, TestAnnotationBean.class); System.out.println(tab.foo); System.out.println(tab.bar); json = "{\"foo\":\"This is foo\"}"; tab = gson.fromJson(json, TestAnnotationBean.class); System.out.println(tab.foo); System.out.println(tab.bar); json = "{\"bar\":\"This is bar\"}"; tab = gson.fromJson(json, TestAnnotationBean.class); System.out.println(tab.foo); System.out.println(tab.bar); }}@Retention(RetentionPolicy.RUNTIME)@Target(ElementType.FIELD)@interface JsonRequired{}class TestAnnotationBean{ @JsonRequired public String foo; public String bar;}class AnnotatedDeserializer<T> implements JsonDeserializer<T>{ public T deserialize(JsonElement je, Type type, JsonDeserializationContext jdc) throws JsonParseException { T pojo = new Gson().fromJson(je, type); Field[] fields = pojo.getClass().getDeclaredFields(); for (Field f : fields) { if (f.getAnnotation(JsonRequired.class) != null) { try { f.setAccessible(true); if (f.get(pojo) == null) { throw new JsonParseException("Missing field in JSON: " + f.getName()); } } catch (IllegalArgumentException ex) { Logger.getLogger(AnnotatedDeserializer.class.getName()).log(Level.SEVERE, null, ex); } catch (IllegalAccessException ex) { Logger.getLogger(AnnotatedDeserializer.class.getName()).log(Level.SEVERE, null, ex); } } } return pojo; }} Output:
This is foo
this is bar
This is foo
null
Exception in thread "main" com.google.gson.JsonParseException: Missing field in JSON: foo
{}
{'log_upvote_score': 7, 'links': ['https://Stackoverflow.com/questions/21626690', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/1238164/']}
jdg_382238
stackexchange
llm_judgeable_groundtruth_similarity
55328064
Below is a question asked on the forum Stackoverflow.com. Provide a good and informational response to it like a helpful human would. Question: How can I convert an integer timestamp to DateTime? Sample Code: @JsonSerializable(nullable: false)class Person { final String firstName; final String lastName; final DateTime dateOfBirth; Person({this.firstName, this.lastName, this.dateOfBirth}); factory Person.fromJson(Map<String, dynamic> json) => _$PersonFromJson(json); Map<String, dynamic> toJson() => _$PersonToJson(this); } How do I convert the dateOfBirth integer timestamp to DateTime? Now provide the response and nothing else.
To convert an int timestamp to DateTime, you need to pass a static method that returns a DateTime result to the fromJson parameter in the @JsonKey annotation. This code solves the problem and allows the conversion. @JsonSerializable(nullable: false) class Person { final String firstName; final String lastName; @JsonKey(fromJson: _fromJson, toJson: _toJson) final DateTime dateOfBirth; Person({this.firstName, this.lastName, this.dateOfBirth}); factory Person.fromJson(Map<String, dynamic> json) => _$PersonFromJson(json); Map<String, dynamic> toJson() => _$PersonToJson(this); static DateTime _fromJson(int millis) => DateTime.fromMillisecondsSinceEpoch(millis); static int _toJson(DateTime time) => time.millisecondsSinceEpoch; } Usage: Person person = Person.fromJson(json.decode('{"firstName":"Ada", "lastName":"Amaka", "dateOfBirth": 1553456553132 }'));
{}
{'log_upvote_score': 5, 'links': ['https://Stackoverflow.com/questions/55328064', 'https://Stackoverflow.com', 'https://Stackoverflow.com/users/3572546/']}
jdg_382239
stackexchange
llm_judgeable_groundtruth_similarity
70190
Below is a question asked on the forum serverfault.com. Provide a good and informational response to it like a helpful human would. Question: How are snapshot backups different from traditional backups? I know that snapshot backups take a minimum of time, but I don't understand how they can do this. Can anyone explain this to me? Now provide the response and nothing else.
Snapshots aren't really backups per se. They're simply using pointers to point to another set of blocks on the drive. For example, let's say you have a file called, "ServerFault.doc" and it takes up blocks A, B, and C on the hard drive. When you take a snapshot, Snap.0 (my name for the first snapshot) points at blocks A, B, and C. It simply makes a copy of the pointer, but not the data. Now let's say you alter the file. When you save the file, instead of overwriting those existing blocks, the newly altered "ServerFault.doc" takes up blocks A, B, and D (presuming block D contains the changes to the file) and now your file pointer points to A, B, and D (leaving C unaltered). If you take another snapshot, Snap.1 now points to A, B, and D. If you "restore" Snap.0, it simply changes to the pointer to look at those blocks. Depending on the file system, this process can be very slick. However, it's not a backup in the sense that if the drive fails, you'll still lose your data. The data still must be copied off to another source to be protected.
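To make that concrete with one common implementation (an illustrative LVM sketch, not part of the answer above; volume and mount names are made up):

lvcreate --size 1G --snapshot --name snap0 /dev/vg0/data   # freeze a point-in-time view
mount -o ro /dev/vg0/snap0 /mnt/snap                       # browse files as they were
# copy /mnt/snap to separate media for an actual backup, then:
umount /mnt/snap && lvremove /dev/vg0/snap0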
{}
{'log_upvote_score': 5, 'links': ['https://serverfault.com/questions/70190', 'https://serverfault.com', 'https://serverfault.com/users/2894/']}
jdg_382240