Q: Event-Driven Microservice Architecture & Dealing with Synchronous External Neighboring Systems

Let's say I have an event-driven microservice system (a microservice cluster) that takes orders and works with them. Orders are placed by an external neighbor system that works synchronously via a REST endpoint provided by one of my microservices. This service acts as a facade to keep all the synchronous stuff on the outside of my cluster, since every communication on the inside is message based (asynchronous). Incoming orders are stored in a database by another of my microservices, which reacts to an event message sent by the facade microservice.

Now suppose there is a requirement to deal with incoming cancellations for orders, and these cancellations also come in via REST. For a valid cancellation, an order has been placed before and is therefore stored in the database of my cluster.

If I weren't trying to design event driven, I would think of something like: if a cancellation comes in via REST, I synchronously check whether a corresponding order for that cancellation is in my database. If so, my synchronous response to the REST call would be a 200; if not, I would reject the invalid cancellation.

But since I am trying to design an event-driven system, I don't know how to deal with this. I think maybe I would have another facade microservice that provides the REST endpoint for the cancellation, and on reception of a cancellation it would send a message into a specific queue/topic to start working asynchronously from there. But how do I deal with the fact that I need information from the database to check whether a cancellation is invalid, when this new facade service has no access to my cluster's database? Do I always have to accept cancellations and tell the outside world that everything is OK, even if I don't know whether that is true?

And if the other system needs to know about invalid cancellations, do I have to give them access so that they can listen to one of my topics/queues and react to an event message that travels through my cluster carrying the information whether a cancellation was valid? Or do I have to set up another service where the outside world can ask whether a cancellation was valid? And what if I can't talk to the maintainers of the foreign system, or they don't want to change their system's API? Or do I have to give my database-accessing microservice an internally accessible REST endpoint that my facade microservice calls synchronously? How do I design this while keeping my microservices scalable, resilient, loosely coupled, etc.?

I hope I asked my question in a comprehensible way. Maybe I am thinking completely the wrong way and/or missing something about event-driven architecture. Hopefully someone here can give me a solution, an idea, or a hint in the right direction. Many thanks in advance!

A: Synchronous calls are only a problem when the work required is slow. Since you just have to check a DB, you can have a queue for cancellation-validation requests with an RPC-style return queue and call it from your REST service, while keeping the incoming request waiting.

Where you might end up with issues is when the order-placement message is still on the queue to be processed and you get a cancel for it. It doesn't seem unsolvable, though. If the cancel-request processing were slow, then yes, you would have no choice but to adapt your workflow to an async one.

My general advice would be: if you are using message queues, make your microservices a bit bigger than you would otherwise, just to keep the number of internal message types down to a manageable level. I probably would have the REST service check the DB directly. If you need a message to trigger other stuff, you can always fire off a CancellationRequestProcessed event or something at the same time as doing the actual work.
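The RPC-over-a-queue flow the answer describes (publish a validation request, keep the incoming REST call waiting on a reply channel) can be sketched in-process. This is only an illustration of the pattern, not a real broker setup: the queue objects, the `KNOWN_ORDERS` set standing in for the cluster database, and all names are hypothetical.

```python
import queue
import threading

# Hypothetical stand-in for the cluster database of placed orders.
KNOWN_ORDERS = {"order-1", "order-2"}

# Stand-in for the broker: one shared queue for validation requests;
# each request carries its own reply queue (the RPC return channel).
request_queue = queue.Queue()

def validation_worker():
    """Internal service: consumes validation requests and replies
    whether the order exists."""
    while True:
        order_id, reply_queue = request_queue.get()
        reply_queue.put(order_id in KNOWN_ORDERS)

def handle_cancellation(order_id, timeout=1.0):
    """Facade: publish a validation request, then keep the incoming
    REST request waiting until the reply arrives (or times out)."""
    reply_queue = queue.Queue(maxsize=1)
    request_queue.put((order_id, reply_queue))
    try:
        valid = reply_queue.get(timeout=timeout)
    except queue.Empty:
        return 504  # validation service did not answer in time
    return 200 if valid else 404

threading.Thread(target=validation_worker, daemon=True).start()

print(handle_cancellation("order-1"))  # 200: order exists, accept
print(handle_cancellation("order-9"))  # 404: unknown order, reject
```

In a real system the two queues would live in a broker such as RabbitMQ, but the shape of the interaction (request, correlation to a reply channel, timeout) is the same.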
Q: How to turn off pluralization for a specific table in Entity Framework 6.0?

You can write this code in the OnModelCreating method:

```csharp
modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
```

But this applies to all tables. Is it possible to turn it off for a specific table by passing the table name to such a method? Thanks.

A: Yes:

```csharp
modelBuilder.Entity<Foo>().ToTable("Whatever");
```

See "Entity Mappings using Fluent API".
Q: How to use .NET to add checkboxes to an Excel spreadsheet?

I have carefully researched this and not found the answer yet. I received a task that required me to export some tabular data to Excel in .NET. I was pleasantly surprised to find the "ClosedXML" library, which does most of what I want with relative ease. However, when I sent my work to my client, he said that he would like each row of data to have a checkbox next to it. The idea is that the user can check and uncheck the boxes, upload the file to a web server, and then have the web server make changes to a database based on which boxes are checked.

Unfortunately, I cannot find any reference to checkboxes in ClosedXML, and I saw at least one comment in a discussion somewhere saying (without any further explanation) that ClosedXML doesn't do checkboxes. I tried creating an Excel spreadsheet with nothing on it except a single checkbox, then browsed the OpenXML object model and found what seems to be the reference to the checkbox:

```csharp
document.WorkbookPart.WorksheetParts.Single()
        .ControlPropertiesParts.Single().FormControlProperties
```

Yeah, all that to find a single checkbox, much less consider adding one for every row! If I have code that is writing rows of data in Excel, how can I add checkboxes to the rows? If I have to use a different library, that's okay.

Update: As @Mike suggested, it is easy to set this up as a drop-down menu where the user can pick from two different options using data validation. The code looks like this:

```csharp
cell.SetValue(item.isOn ? "On" : "Off");
cell.DataValidation.List("\"On,Off\"", true);
```

A: I'm not familiar with ClosedXML, but I think I understand your client's intent. Instead of offering checkboxes, you could create a blank column that the client can type an 'X' in. You could even use data validation to allow the user to pick the 'X' from a drop-down menu.
Q: Garbage characters in SqlDataReader.GetString() of converted TIMESTAMP

I've got a SELECT that gets, among other things, `CONVERT(varchar(10), TIMESTAMP)`, where TimeStamp is defined as:

```sql
[TIMESTAMP] [binary](8) NULL
```

Some of the timestamps have bad data: instead of something like 0x30332F31372F3131, which converts to "03/17/11", they have only the bottom four bytes, as in 0x0000000002F09ADD. When I do the SELECT in an SSMS query window, those bad ones come out blank, which is fine; but when I retrieve them in my program using an SqlDataReader, the strings for the bad timestamps come out with garbage characters, as in " ðšÝ". Any ideas as to what I can do about this?

A:

```sql
SELECT CASE WHEN ISDATE(CONVERT(varchar(10), TIMESTAMP)) = 1
            THEN CONVERT(varchar(10), TIMESTAMP)
            ELSE NULL
       END
```
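The same guard can also be applied on the client side if changing the SQL is not an option. A sketch (the function name is made up, and it assumes the MM/DD/YY layout shown in the question): decode the bytes, then keep the text only if it actually parses as a date, mirroring the ISDATE() check.

```python
from datetime import datetime

def clean_timestamp(raw):
    """Return the decoded text only if it parses as MM/DD/YY,
    mirroring the ISDATE() guard on the SQL side; else None."""
    text = raw.decode("ascii", errors="replace")
    try:
        datetime.strptime(text, "%m/%d/%y")
    except ValueError:
        return None
    return text

good = bytes.fromhex("30332F31372F3131")  # the ASCII bytes of "03/17/11"
bad = bytes.fromhex("0000000002F09ADD")   # truncated/garbage value

print(clean_timestamp(good))  # 03/17/11
print(clean_timestamp(bad))   # None
```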
Q: EntityFramework Include and possibly join?

I have the following table structure as shown in the picture (see: Table structure). Both tables ("Batches" and "Methods") reference a "Project" table. When I create a new Project, I would like to get all children created as well. To do so, I did the following:

```csharp
_dbContext.Projects.Where(x => x.Id == prjId)
    .Include(x => x.Batches)
    .Include(x => x.Batches.Select(y => y.Measurements))
    .Include(x => x.Methods)
    .AsNoTracking().FirstOrDefault();
```

Now the problem is the following: new Batch and Method instances are created, and thus they get a new ID (PK). The referenced Project_Id (FK) is set correctly. But in my new Measurement instance only the Batch_Id (FK) is set correctly; the Method_Id remains unchanged (has the old value) (see: result). What I need is for Measurements.Method_Id to be set from the Methods table. Is there any suitable solution for that?

My entities look like the following:

```csharp
public class Project
{
    [Key]
    public long Id { get; set; }
    public string Name { get; set; }
    public bool IsActive { get; set; }
    public virtual List<Batch> Batches { get; set; }
    public virtual List<Method> Methods { get; set; }
}

public class Batch : BaseObject
{
    public Batch()
    {
        BatchFiles = new List<FileAttachment>();
        Measurements = new List<Measurement>();
    }
    public long Id { get; protected set; }
    public long Project_Id { get; set; }
    public virtual Project Project { get; set; }
    public virtual List<Measurement> Measurements { get; set; }
}

public class Method : BaseObject
{
    public Method()
    {
        Parameters = new List<Parameter>();
    }
    public long Id { get; protected set; }
    public long Project_Id { get; set; }
    public virtual Project Project { get; set; }
    public virtual List<Measurement> Measurements { get; set; }
}

public class Measurement
{
    public int Id { get; protected set; }

    [ForeignKey("Batch")]
    public long? Batch_Id { get; set; }
    [Required]
    public virtual Batch Batch { get; set; }

    [ForeignKey("Method")]
    public long? Method_Id { get; set; }
    public virtual Method Method { get; set; }
}

// creation code (just a copy with new IDs for all children)
Project newProjectVersion = _dbContext.Projects.Where(x => x.Id == prjId)
    .Include(x => x.Batches)
    .Include(x => x.Batches.Select(y => y.Measurements))
    .Include(x => x.Methods)
    .AsNoTracking().FirstOrDefault();

_dbContext.Projects.Add(newProjectVersion);
_dbContext.SaveChanges();
```

Thanks for any help!

A: The first problem is that your query doesn't connect Measurements to Methods, because of the AsNoTracking() addition. Only Projects and Methods are connected, because they are explicitly Included off of the Project entity. The Measurements have a Method_Id, but this value is not accompanied by a Method in their Method property. You can check that in the debugger if you walk through the object graph (with lazy loading disabled, though!). Because of this, when all entities are Add-ed to the context, EF won't notice that measurements receive new methods.

You might be tempted to fix that by Include-ing Measurement.Method as well:

```csharp
...
.Include(x => x.Batches.Select(y => y.Measurements.Select(m => m.Method)))
...
```

Now you'll see that Measurement.Method is populated everywhere in the object graph. However, there's a gotcha here. When using AsNoTracking, EF6 doesn't keep track of the entities it materializes (duh). This means that for each Measurement it creates a new Method instance, even if an identical Method (by id) was materialized before for another Measurement. (And in this case it will always materialize duplicates, because you already include Project.Methods.) That's why you can't do this the quick way, with AsNoTracking and Add, using one context instance: you'll get an error that EF tries to attach duplicate entities. You must build the object graph using one context, with tracking, so EF will not materialize duplicates. Then you must Add this object graph to a new context. Which will look like this:

```csharp
Project project;
using (var db = new MyContext())
{
    db.Configuration.ProxyCreationEnabled = false;
    project = db.Projects.Where(x => x.Id == prjId)
        .Include(x => x.Batches)
        .Include(x => x.Batches.Select(y => y.Measurements))
        .Include(x => x.Methods)
        .FirstOrDefault();
}
using (var db = new MyContext())
{
    db.Projects.Add(project);
    db.SaveChanges();
}
```

Three remarks:

1. Proxy creation is disabled, because you can't attach a proxy to another context without explicitly detaching it first.
2. No, I didn't forget to include Measurement.Method. All methods are loaded by including them in the Project, and now (because of tracking, and assuming that measurements only have methods of the project they belong to) EF connects them with the Measurements by relationship fixup.
3. EF Core is smarter here: when adding AsNoTracking it won't track materialized entities, but still, it won't create duplicates either. It seems to have some temporary tracking during the construction of an object graph.
Q: Exception in CURSOR or other solution

I have a DB for selling tickets. I have this procedure, which sums all the money from tickets sold for some race:

```sql
CREATE OR REPLACE PROCEDURE Total_money(
    depart IN RACE.DEPART_PLACE%TYPE,
    dest   IN RACE.DESTINATION_PLACE%TYPE,
    total  OUT TICKET.PRICE%TYPE)
IS
  CURSOR tickets IS
    SELECT t.CLIENT_ID, t.PRICE
    FROM TICKET t
    JOIN VAGON v ON t.VAGON_ID = v.VAGON_ID
    JOIN RACE r ON v.RACE_ID = r.RACE_ID
    WHERE r.DEPART_PLACE = depart
      AND r.DESTINATION_PLACE = dest;
BEGIN
  FOR t IN tickets LOOP
    IF t.CLIENT_ID IS NOT NULL THEN
      total := total + t.PRICE;
    END IF;
  END LOOP;
END;
```

First question: can I place an exception into the CURSOR declaration? Or what can I do when I pass a wrong departure name or destination name for the train, or these names don't exist in the DB? Then it will create an empty cursor and return 0 money. How do I handle this?

Second question: after the procedure declaration, I run these commands:

```sql
DECLARE t TICKET.PRICE%TYPE;
t:=0;
execute total_money('Kyiv', 'Warsaw', t)
```

But there is an error (PLS-00103: Encountered the symbol...). How do I fix it?

A: A simple check is just to test that total is non-zero after the loop:

```sql
  ...
  END LOOP;
  IF total <= 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Total zero, invalid arguments?');
  END IF;
END;
```

If the total could legitimately be zero (which seems unlikely here, apart from the client ID check), you could have a counter or a flag and check that:

```sql
CREATE ... IS
  found BOOLEAN := false;
  CURSOR ...
BEGIN
  total := 0;
  FOR t IN tickets LOOP
    found := true;
    IF t.CLIENT_ID IS NOT NULL THEN
      total := total + t.PRICE;
    END IF;
  END LOOP;
  IF NOT found THEN
    RAISE_APPLICATION_ERROR(-20001, 'No records, invalid arguments?');
  END IF;
END;
```

execute is a SQL*Plus command, so I'm not sure which way you want this to work. You can use an anonymous block like this:

```sql
DECLARE
  t TICKET.PRICE%TYPE;
BEGIN
  total_money('Kyiv', 'Warsaw', t);
  -- do something with t
END;
/
```

Or using a SQL*Plus (or SQL Developer) bind variable you can do:

```sql
variable t number;
execute total_money('Kyiv', 'Warsaw', :t);
print t
```

I'd change it from a procedure to a function, though: declare a total within it, initialise it to zero, and return that instead of having an OUT parameter. Then you can call it from PL/SQL or from SQL, within a simple SELECT. And as ElectricLlama points out, you don't need a cursor, and don't need to do this in PL/SQL at all: just use an aggregate SUM(). I assume this is an exercise to learn about cursors, though?
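The closing suggestion, replacing the whole cursor loop with an aggregate SUM(), can be illustrated with plain SQL. Here is a sketch against SQLite with a deliberately collapsed single-table schema (the real schema joins TICKET, VAGON and RACE; the table and column names below are simplified assumptions):

```python
import sqlite3

# Hypothetical flattened schema standing in for TICKET/VAGON/RACE.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ticket (
    client_id INTEGER, price REAL, depart TEXT, destination TEXT)""")
db.executemany(
    "INSERT INTO ticket VALUES (?, ?, ?, ?)",
    [(1, 50.0, "Kyiv", "Warsaw"),
     (2, 70.0, "Kyiv", "Warsaw"),
     (None, 30.0, "Kyiv", "Warsaw"),   # unsold seat: client_id IS NULL
     (3, 40.0, "Kyiv", "Lviv")])

# One aggregate replaces the whole cursor loop; COALESCE turns the
# "no matching rows" case into 0 so the caller can detect it.
row = db.execute(
    """SELECT COALESCE(SUM(price), 0) FROM ticket
       WHERE client_id IS NOT NULL AND depart = ? AND destination = ?""",
    ("Kyiv", "Warsaw")).fetchone()
print(row[0])  # 120.0
```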
Q: Elasticsearch 1.0 and pyelasticsearch

Does anyone know if pyelasticsearch (currently v0.6.1) works with Elasticsearch v1.0? Has anyone tried using these together yet (yes, I know Elasticsearch v1.0 was just released)? I'm using both in a Django application, and while I can't say for certain, it certainly looks as though pyelasticsearch is causing the internal server error I'm currently getting. The application functioned as intended with Elasticsearch v0.90.11 and pyelasticsearch v0.6.1.

A: The last commit I see on pyelasticsearch is from several months ago. There are breaking API changes from 0.90 to 1.0 for ES, so I would guess pyelasticsearch is not going to work against it.
Q: RequireJS order plugin starts fetching in correct order but doesn't wait for the files to download?

I'm trying to implement RequireJS in a project and I'm having some issues getting it to work correctly. If I've understood this correctly (and otherwise the plugin would be rather pointless), the order plugin downloads the scripts in the correct order, and waits for each module to download before executing the next one. Example:

```javascript
requirejs.config({
    paths: {
        'jquery': 'http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min'
    }
});

require(['order!jquery', 'order!models/flyInModal'], function () {
    $('.fly-in-modal').flyInModal();
});
```

That should first download jQuery from the path, and after jQuery has loaded, continue to load flyInModal.js. Correct? As of now, this is what's happening:

1. require.js loads
2. init-front.js loads
3. order.js loads
4. jquery starts loading
5. flyInModal.js loads
6. jquery finishes loading after flyInModal.js has loaded, causing errors because jQuery is missing

Screenshot of Chrome dev tools: http://i.imgur.com/pdpBbak.png

Have I misunderstood this, or is it working as intended? I find order.js pretty pointless if it doesn't wait for the script to finish loading before continuing. Some scripts have higher latency than others; that's just how it is.

A: In RequireJS 2.x, order has been deprecated in favour of shim: http://requirejs.org/docs/api.html#config-shim

Details on why this was removed: https://github.com/jrburke/requirejs/wiki/Upgrading-to-RequireJS-2.0#wiki-shim
Q: How to define a derived property in object-oriented MATLAB

I want a read-only field that I can access as fv = object.field, but where the value that is returned is computed from other fields of the object (i.e. the return value satisfies fv == f(object.field2)). The desired functionality is the same as for the property function/decorator in Python. I recall seeing a reference that this is possible by setting the parameters of the properties block, but the MATLAB OOP documentation is so scattered that I can't find it again.

A: This is called a "dependent" property. A quick example of a class using a derived property is below:

```matlab
classdef dependent_properties_example < handle
    % Note: Deriving from handle is not required for this example.
    % It's just how I always use classes.
    properties (Dependent = true, SetAccess = private)
        derivedProp
    end
    properties (SetAccess = public, GetAccess = public)
        normalProp1 = 0;
        normalProp2 = 0;
    end
    methods
        function out = get.derivedProp(self)
            out = self.normalProp1 + self.normalProp2;
        end
    end
end
```

With this class defined, we can now run:

```matlab
>> x = dependent_properties_example;
>> x.normalProp1 = 3;
>> x.normalProp2 = 10;
>> x

x =

  dependent_properties_example handle

  Properties:
    derivedProp: 13
    normalProp1: 3
    normalProp2: 10
```
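Since the question names Python's property decorator as the model, here is the direct Python counterpart of the MATLAB class above for comparison (the class and attribute names simply mirror the example):

```python
class DependentPropertiesExample:
    """Python equivalent of the MATLAB class above: derived_prop is
    computed on access from the two normal attributes."""
    def __init__(self):
        self.normal_prop1 = 0
        self.normal_prop2 = 0

    @property
    def derived_prop(self):  # read-only: no setter is defined
        return self.normal_prop1 + self.normal_prop2

x = DependentPropertiesExample()
x.normal_prop1 = 3
x.normal_prop2 = 10
print(x.derived_prop)  # 13
```

As in the MATLAB version with SetAccess = private, assigning to the derived attribute fails (here with an AttributeError), because only a getter exists.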
Q: Django -- Pre-populating Hidden Fields

In my user signup process the user first creates their account, then creates a LIST object, and then is directed to the dashboard of the LIST object they just created. To achieve this I need to be able to access the list_id of the recently created LIST object so I can pass it into the next view/template. list_id is the primary key for the LIST table/class, and as such I don't want a user to create a value for it; it auto-increments.

Here is the field from the model:

```python
class List(models.Model):
    list_id = models.IntegerField(primary_key=True)
```

Description of list_id from MySQL:

```
+---------+---------+------+-----+---------+----------------+
| Field   | Type    | Null | Key | Default | Extra          |
+---------+---------+------+-----+---------+----------------+
| list_id | int(11) | NO   | PRI | NULL    | auto_increment |
```

Here is the view:

```python
@login_required
@permission_required('social_followup.add_list')
def user_create(request):
    if request.method == 'POST':
        list_form = forms.ListForm(request.POST)
        if list_form.is_valid():
            list_create = list_form.save()
            messages.success(request, 'List {0} created'.format(list_create.list_id))
            up, _ = UserProfile.objects.get_or_create(user=request.user, list_id=list_create)
            return redirect(reverse('user_dashboard', args=(2,)))  # 2 is just an example
    else:
        list_form = forms.ListForm()
    return TemplateResponse(request, 'dashboard/create.html', {'list_form': list_form, })
```

However, I've found that when I don't include list_id as part of the form, I am not able to access that attribute (list_create.list_id returns None when list_id is not a field). In my template I have the list_id field hidden like this:

```html
<div class="control-group {% if list_form.list_id.errors %}error{% endif %}">
    <div class="controls">
        {{ list_form.list_id.as_hidden }}
    </div>
</div>
```

Since list_id is required and hidden, the form won't validate, because list_id has no value. However, if I remove list_id as a field, I cannot seem to access the list_id attribute in my view. Is there a way to auto-populate this field with whatever the next list_id should be, so that I can grab the value with list_create.list_id and keep the field hidden? Or is there another way to achieve this?

A: Primary keys do not exist before .save() has been called. If you want to access the PK later, after the form has been saved, you can access it by passing the saved instance to your template:

```python
list_form = forms.ListForm(instance=list_create)
```

And then in your template:

```django
{{ list_form.instance.id }}
```
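The key point of the answer, that an auto-increment primary key only exists after the row has actually been saved, holds at the database layer too. A sketch with SQLite (the schema is a simplified, hypothetical analog of the model above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE list (list_id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# Before the INSERT there is no id to read; after it, the database
# reports the generated key (roughly what Django exposes as instance.pk
# after save()).
cur = db.execute("INSERT INTO list (name) VALUES (?)", ("my first list",))
print(cur.lastrowid)  # 1

cur = db.execute("INSERT INTO list (name) VALUES (?)", ("my second list",))
print(cur.lastrowid)  # 2
```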
Q: Bat file to run a .exe at the command prompt

I want to create a .bat file so I can just click on it to run:

```
svcutil.exe /language:cs /out:generatedProxy.cs /config:app.config http://localhost:8000/ServiceModelSamples/service
```

Can someone help me with the structure of the .bat file?

A: To start a program and then close the command prompt without waiting for the program to exit:

```
start /d "path" file.exe
```

A: You can use:

```
start "windowTitle" fullPath/file.exe
```

Note: the first set of quotes must be there, but you don't have to put anything in them, e.g.:

```
start "" fullPath/file.exe
```

A: Here is a very simple example that runs Notepad. Type the code below into Notepad and save it with the .bat extension, for example notepad.bat:

```
start "c:\windows\system32" notepad.exe
```

(In the code above, "c:\windows\system32" is the path where you keep your .exe program, and notepad.exe is your .exe program file.) Enjoy!
Q: Is it possible for an applet to install another applet or to send APDUs?

Is it possible for a Java Card applet to download and install another applet? Is it possible for an applet to send APDUs (information) to another applet? If so, can anyone point me to proper documentation to begin?

A: Is it possible for a Java Card applet to download and install another applet?

No, that's not possible; there is simply no API for it. In all the examples from GlobalPlatform (which is probably more relevant than the Java Card specifications) the applet data is loaded through APDU commands. There is an Applet.install method in the Java Card API of course, but it gets called by the system and cannot be used from another applet, not even a security domain, as far as I know.

Is it possible for an applet to send APDUs (information) to another applet?

Yes: you can have one class implement the Shareable interface and share it through the getShareableInterfaceObject method. All Java Card tutorials will include this. The APDU buffer cannot be shared, but it doesn't need to be; you can simply access it through the APDU methods. From the API:

"The Java Card runtime environment designates the APDU object as a temporary Java Card runtime environment Entry Point Object (See Runtime Environment Specification, Java Card Platform, Classic Edition, section 6.2.1 for details). A temporary Java Card runtime environment Entry Point Object can be accessed from any applet context. References to these temporary objects cannot be stored in class variables or instance variables or array components."

Please do read tutorials or buy the old but still valid "Java Card Technology for Smart Cards". It's old, but the core principles are still completely valid, and most other basic things can be learned by studying the API.
Q: Extract text in a ... paragraph element

I have a string like `some text <p>any text</p>` from which I need to remove the part `<p>any text</p>`, so that as a result I get the string `some text`. I found some sample code from a tutorial for working with strings, but I don't understand how it works. I'm a newbie in coding, and it's hard because I don't know English.

```java
private String description;

public void setDescription(String description) {
    this.description = description;
    if (description.contains("<p>")) {
        String musor = description.substring(description.indexOf("<p>"));
        String cleanUp = musor.substring(0, musor.indexOf("</p>") + 1);
        musor = musor.substring(musor.indexOf("<p>"));
        this.description = this.description.replace(cleanUp, "");
    }
}
```

A: You can use regular expressions, which can do the trick:

```java
String regexp = "<p>.*?</p>";
String replace = "";
myString = myString.replaceAll(regexp, replace);
```

This replaces all `<p>` tags and their contents with an empty string. (Note that Java strings are immutable: replaceAll returns a new string, so remember to assign the result.) See also http://www.regular-expressions.info/. I guess that there are a lot of libraries which can do the same or even more.
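The same non-greedy pattern works in other regex flavors. Here it is in Python, which also shows why the lazy `.*?` matters (the sample strings are made up):

```python
import re

# re.DOTALL lets the pattern also strip paragraphs spanning line breaks.
pattern = re.compile(r"<p>.*?</p>", re.DOTALL)

print(pattern.sub("", "some text <p>any text</p>").strip())  # some text

# With a greedy ".*" the single match would run from the first <p> to
# the last </p>, deleting the text in between as well.
print(pattern.sub("", "<p>a</p> keep me <p>b</p>").strip())  # keep me
```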
Q: How to add a class after the count exceeds the limit

I want to add the class text-danger when I type more than the maximum number of characters in a textarea. Here is what I tried:

```javascript
var max_chars = 230;

document.getElementById('description').onkeyup = function () {
    document.getElementById('count_char').innerHTML = (max_chars - this.value.length) + "/";
};

$('#description').on('textarea', function () {
    checkWord();
});

function checkWord() {
    var maxword = 230;
    var note = document.getElementById("descnote");
    if (minword < Number($('#description').val().length)) {
        note.classList.add("text-danger");
    }
}
```

My form:

```html
<textarea class="form-control" name="description" id="description" form="tambah_post" rows="3" required></textarea>
<small id="count_char"></small><small id="descnote">230 Sisa Rekomendasi.</small>
```

A: You can use the change event for this:

```javascript
var max_chars = 230;
$(document).on('change', '#description', function () {
    if (max_chars < $(this).val().trim().length) {
        $(this).addClass("text-danger");
    } else {
        $(this).removeClass('text-danger');
    }
});
```

You can also do it as follows:

```javascript
$(document).on('keyup keydown change', '#description', function () {
    if (max_chars < $(this).val().trim().length) {
        $(this).addClass("text-danger");
    } else {
        $(this).removeClass('text-danger');
    }
});
```

You can refer to the jQuery keyup method for more details.

A: You can do it using only jQuery. Add a keyup event listener to the textarea and check the length of its value:

```javascript
var maxword = 10;
$('#description').on('keyup', function () {
    // remove surrounding whitespace and check if the length exceeds the limit
    if ($(this).val().trim().length > maxword) {
        // use addClass to add a class
        $(this).addClass('text-danger');
    } else {
        $(this).removeClass('text-danger');
    }
});
```

```css
.text-danger {
    border: 1px solid red;
}
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<textarea class="form-control" name="description" id="description" form="tambah_post" rows="3" required></textarea>
<small id="count_char"></small><small id="descnote">230 Sisa Rekomendasi.</small>
```
Q: What happens if MySQL connections continually aren't closed on PHP pages?

At the beginning of each PHP page I open up the connection to MySQL, use it throughout the page, and close it at the end of the page. However, I often redirect in the middle of the page to another page, and so in those cases the connection does not get closed. I understand that this is not bad for the performance of the web server, since PHP automatically closes all MySQL connections at the end of each page anyway. Are there any other issues here to keep in mind, or is it really true that you don't have to worry about closing your database connections in PHP?

```php
$mysqli = new mysqli("localhost", "root", "", "test");
// ...do stuff, perhaps redirect to another page...
$mysqli->close();
```

A: From http://us3.php.net/manual/en/mysqli.close.php:

"Open connections (and similar resources) are automatically destroyed at the end of script execution. However, you should still close or free all connections, result sets and statement handles as soon as they are no longer required. This will help return resources to PHP and MySQL faster."

A: Just because you redirect doesn't mean the script stops executing. A redirect is just a header being sent; if you don't exit() right after, the rest of your script will continue running. When the script does finish running, it will close off all open connections (or release them back to the pool if you're using persistent connections). Don't worry about it.
Q: I can't see module files in the Style Library list

When my web part is deployed with Visual Studio, or when I install the WSP in the test environment, I don't see the module file in the list sharepointServer/Style%20Library/Indices/CSS. But I can access the file if I type the file name (for example sharepointServer/Style%20Library/Indices/CSS/commonStyle.css).

Elements.xml sample:

```xml
<Module Name="Module_Indices" Url="Style Library">
    <File Path="Module_Indices\Indices\CSS\commonStyle.css" Url="Watson_Indices/CSS/commonStyle.css" />
</Module>
```

Any idea why I can't see the module files?

A: Refer to this MSDN article and the comment in there. You need to add Type="GhostableInLibrary" to the File tag; it is mandatory, and because your CAML is missing it, you are not seeing the file in the library.

```xml
<File Path="Module_Indices\Indices\CSS\commonStyle.css" Url="Watson_Indices/CSS/commonStyle.css" Type="GhostableInLibrary" />
```
Q: Why does Sin(30) = -0.9880316240928618 in Java?

I am trying to show the value of sin 30 in Java (Eclipse), but it's not the value I expect. My code:

```java
public class sin30 {
    public static void main(String[] args) {
        System.out.println("Sin 30 = " + Math.sin(30));
    }
}
```

It prints Sin 30 = -0.9880316240928618, but sin 30 should be 0.5. What's wrong with my code?

A: Math.sin expects a value in radians:

```java
System.out.println("Sin 30 = " + Math.sin(Math.toRadians(30)));
```

A: You should first convert the angle to radians, like this:

```java
double radians = Math.toRadians(30);
System.out.println("Sin 30 = " + Math.sin(radians));
```
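The same radians-vs-degrees behavior shows up in any language with a C-style math library. For instance, Python's math.sin reproduces the exact value from the question title:

```python
import math

# Like Java's Math.sin, Python's math.sin expects radians, not degrees.
print(math.sin(30))                # -0.9880316240928618 (30 is read as radians)
print(math.sin(math.radians(30)))  # 0.49999999999999994 (i.e. ~0.5)
```

The second result is not exactly 0.5 because pi/6 cannot be represented exactly as a double; that tiny rounding error is expected.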
Q: How to call individual form elements from the fieldset of a child entity in ZF2

In a view script, the syntax to call a form element from a fieldset is something like:

```php
echo $this->formRow($form->get('member')->get('firstName'));
```

This script calls the single element firstName from the fieldset member. However, if the fieldset element refers to an alias for an associative fieldset entity, this script calls all of the elements in the child table. What is the syntax to call a single element from the child table?

In other words, let's say we have a members entity and a members fieldset that include elements like firstName and lastName. We also have an address entity and an address fieldset that include elements like address, city, state, and zipcode. For the ORM associations, the members entity includes an addressInfo element that establishes the link to the address entity, and the address entity includes a memberItem element that establishes the link back to the members entity.

In this case,

```php
echo $this->formRow($form->get('member')->get('firstName'));
```

produces a single firstName form element in the view script, while

```php
echo $this->formRow($form->get('member')->get('addressInfo'));
```

produces form elements for address, city, state, and zipcode. I would assume that if we want to produce a form element only for city, the script might want to be something like

```php
echo $this->formRow($form->get('member')->get('addressInfo')->get('city'));
```

or

```php
echo $this->formRow($form->get('member')->get('addressInfo', 'city'));
```

but neither of these works. What is the syntax?

A: Since the addressInfo element is a collection, it is necessary to identify which "row" of the collection you want before identifying the "field":

```php
$member = $form->get('member');
$addressFieldsets = $member->get('addressInfo');
$addressInfo = $addressFieldsets[0];
$city = $addressInfo->get('city');
echo $this->formRow($city);
```
Q: Codeigniter email validator always returns null to database I have some code that no matter what the email submits a NULL value. I have played with every validation rule I can and it still submits a NULL value for email when submitted. View <?php php echo form_label('First Name :'); ?> <?php echo form_error('dfirstName'); ?><br> <?php echo form_input(array('id' => 'dfirstName', 'First Name' => 'dfirstName')); ?><br> <?php echo form_label('Last Name :'); ?> <?php echo form_error('dlastName'); ?><br /> <?php echo form_input(array('id' => 'dlastName', 'Last Name' => 'dlastName')); ?><br /> <?php echo form_label('E-mail :'); ?> <?php echo form_error('demail'); ?><br /> <?php echo form_input(array('id' => 'demail', 'e-mail' => 'demail')); ?><br /> Controller $this->load->library('form_validation'); $this->form_validation->set_error_delimiters('<div class="error">', '</div>'); //Validating firstName Field $this->form_validation->set_rules('dfirstName', 'FirstName', 'required|min_length[4]|max_length[15]i'); //Validating lastName Field $this->form_validation->set_rules('dlastName', 'LastName', 'required|min_length[4]|max_length[15]'); //Validating Email Field $this->form_validation->set_rules('demail', 'e-mail', 'trim|alpha_numeric|max_length[30]'); if ($this->form_validation->run() == FALSE) { $this->load->view('schedule_submit'); } else { //Setting values for tabel columns $data = array( 'e-mail' => $this->input->post('demail'), 'LastName' => $this->input->post('dlastName'), 'FirstName' => $this->input->post('dfirstName') ); //Transfering data to Model $this->acom_insert->form_insert($data); $data['message'] = 'Data Inserted Successfully'; //Loading View $this->load->view('acom_success', $data); } A: Here your view's form input parameters are wrong, for all 3 fields you are not using name parameters. 
You can follow this code - <?php echo form_label('First Name :'); ?> <?php echo form_error('dfirstName'); ?><br> <?php echo form_input(array('id' => 'dfirstName', 'name' => 'dfirstName')); ?><br> <?php echo form_label('Last Name :'); ?> <?php echo form_error('dlastName'); ?><br /> <?php echo form_input(array('id' => 'dlastName', 'name' => 'dlastName')); ?><br /> <?php echo form_label('E-mail :'); ?> <?php echo form_error('demail'); ?><br /> <?php echo form_input(array('id' => 'demail', 'name' => 'demail')); ?><br />
Q: Simple Route to an action in another controller I have an MVC .NET C# project with a Plans action under the Home controller, but I don't want to access this page as http://....../Home/Plans; I want to access it as http://....../Plans without creating a Plans controller, so I don't want to do a RedirectToAction. I am trying to use the Route annotation as follows: [Route("plans/")] public ActionResult Plans() [Route("plans/{actions}")] public ActionResult Plans() [Route("plans/index")] public ActionResult Plans() but none of the above worked for me. Can you help me with this? Updated: this is my action under HomeController [Route("plans")] public ActionResult Plans() { var servicePlansDto = SubscriberApiManager.SubscriptionSellingService.GetServicePlans(ServiceId).FindAll(sp => !sp.HasPromotionCode); List<ServicePlanVm> servicePlansVm = Mapper.Map<List<ServicePlanDto>, List<ServicePlanVm>>(servicePlansDto); return View(servicePlansVm); } and this is my configuration public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } A: First of all, remember to configure attribute routing: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } Then take care that each controller function has to have a different name; in the example they have the same name, which is not accepted by the compiler. Last but not least, what's the need of the {actions} parameter in the routing attribute?
When you define an attribute routing you don't need to define an action, as your attribute is already decorating an action / method. You can have required / optional parameters in your routing, but they usually correspond to a matching parameter in the method's signature: //Example http://www.domain.com/plans/123 [Route("plans/{productId}")] public ActionResult Plans(string productId) { ... }
Q: "certain" in "This guy has certain reputation in this community" Is the word certain appropriate in this sentence? This guy has certain reputation in this community? Is that alright? Or is there a better way to put it? A: This guy has a certain reputation in this community. Reputation is a noun, so an article is needed. Otherwise it is OK, and the use of certain is correct. You would say "This guy has a (certain) reputation in this community." because it is indefinite what kind of reputation it is. You can't use the for this reason. However, you could say "This guy has the best reputation in this community." as this defines the kind of reputation he has.
Q: Combobox pre-selected with a value from the database I have some comboboxes in the user-information editing area and I need to fetch the value previously selected by the user at registration time. I tried doing it with a foreach but it didn't work. <div class="form-group col-md-5" > <label for="inputSexo">Sexo</label> <select name="sexo_cliente" id="sexo_cliente" class="form-control" disabled> <option selected disabled="">Sexo</option> <?php require_once "api/conexao.php"; try { $prepared3 = $conexao_pdo->prepare("select * from sexo"); $prepared3->execute(); $result3 = $prepared3->fetchAll(); foreach($result3 as $resultado3) { echo "<option value='". $resultado3["cod"] ."'>". $resultado3["sexo"] ."</option>"; } } catch (PDOException $e) { echo "<option></option>"; } ?> </select> </div> A: You can solve this by checking the value registered by the user and comparing it with the values that will be placed in the options, defining as selected (the option marked by default) the sex of the user being edited. <div class="form-group col-md-5" > <label for="inputSexo">Sexo</label> <select name="sexo_cliente" id="sexo_cliente" class="form-control" disabled> <option selected disabled="">Sexo</option> <?php require_once "api/conexao.php"; try { $prepared3 = $conexao_pdo->prepare("select * from sexo"); $prepared3->execute(); $result3 = $prepared3->fetchAll(); foreach($result3 as $resultado3) { // $varSexoUsuario is the variable holding the user's sex value $selected = ($resultado3["cod"] == $varSexoUsuario) ? 'selected' : ''; echo "<option value='". $resultado3["cod"] ."' ".$selected.">". $resultado3["sexo"] ."</option>"; } } catch (PDOException $e) { echo "<option></option>"; } ?> </select> </div>
Q: spring autowire is not working returning null I'm learning Spring now; here's my sample code. I'm using Jersey, Spring, Hibernate, and MySQL for my REST service. CustomerServiceImpl.java, which is the REST endpoint (partial code) package com.samples.service.impl; @Path("customers") public class CustomerServiceImpl implements CustomerService { private static Logger logger = Logger.getLogger(CustomerServiceImpl.class); @Autowired CustomerBO customerBO; Here is CustomerBOImpl.java (partial code) package com.samples.BO.impl; @Component public class CustomerBOImpl implements CustomerBO { @Autowired CustomerDAO customerDAO; @Autowired CustomerAdapter customerAdapter; CustomerDAOImpl.class package com.samples.DAO.impl; @Repository public class CustomerDAOImpl implements CustomerDAO { This is applicationContext.xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.1.xsd"> <context:property-placeholder location="classpath*:database.properties"/> <context:component-scan base-package="com.samples"/> <context:annotation-config /> </beans> These are the first few lines of the exception I'm getting.
http-bio-8090-exec-1] [class: CustomerServiceImpl] INFO - list all customers [http-bio-8090-exec-1] [class: CustomerServiceImpl] INFO - customerBO is null May 08, 2014 10:55:29 AM com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container java.lang.NullPointerException at com.samples.service.impl.CustomerServiceImpl.getAllCustomers(CustomerServiceImpl.java:40) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvo this is web.xml <?xml version="1.0" encoding="UTF-8" standalone="no"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5"> <display-name>Employee Service</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value> classpath*:applicationContext.xml </param-value> </context-param> <context-param> <param-name>initializeContextOnStartup</param-name> <param-value>true</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>jersey-serlvet</servlet-name> <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class> <init-param> <param-name>com.sun.jersey.config.property.packages</param-name> <param-value>com.samples.service</param-value> </init-param> <init-param> 
<param-name>com.sun.jersey.api.json.POJOMappingFeature</param-name> <param-value>true</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>jersey-serlvet</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> So if I'm understanding how this works correctly, I'm configuring the XML to have my components auto-scanned by providing the package under which I want the scan to run, and autowiring the objects I need: in the CustomerServiceImpl class, I use @Autowired for the customerBO object, which should have been picked up via the @Component annotation on the CustomerBOImpl class definition. Can you please help me figure out why my auto-scan is not picking up the autowired customerBO object? Thanks. A: I suspect the problem is your CustomerServiceImpl class is being instantiated outside of Spring, either by the servlet container or explicitly in your code. You need to either have Spring instantiate it (by making it a bean, etc) or use <context:spring-configured/>. I'm not sure if the latter will work if the servlet container instantiates the object as it will likely depend on the order in which things occur...
Q: Primefaces ConfirmDialog not vertically centered in form updated with ajax I have a form that "grows" through Ajax+Rendered when some operations are clicked. The problem is that my p:confirmDialog is not vertically centered when the form grows. When in "normal size", the dialog is correctly centered. I have already tried: Remove or add "appendToBody" attribute Change the position of the declaration in the page (before and after h:form) Overwrite the "Top" in CSS Is that an issue from Primefaces (currently using v4) or am I doing something wrong? Since I use lots of "rendered" attributes, should I "rerender" the dialog? Here follows a snippet of my code. <h:form id="myForm"> <p:fieldset legend="Hello"> <!-- lots of things here --> </p:fieldset> <p:spacer height="20px" /> <p:fieldset legend="Dashboard" id="thisOneMakesTheFormGrows" rendered="#{bean.include or bean.edit}"> <!-- this one has lots of items, making the page grow when the 'rendered' attribute is true --> </p:fieldset> </h:form> <p:confirmDialog global="true" id="meuConfirmDlg" appendToBody="true" showEffect="fade" width="500px" hideEffect="fade" widgetVar="confirmDlg" closable="false"> <p:commandButton value="Yes" type="button" styleClass="ui-confirmdialog-yes" icon="ui-icon-check" /> <p:commandButton value="No" type="button" styleClass="ui-confirmdialog-no" icon="ui-icon-close" /> </p:confirmDialog> A: Have you overwritten the TOP attribute of the dialog box by percentage? If not please overwrite the css of the dialog box div. #dialogBoxId{ top: 50% !important; left:50% !important; }
Q: Why is the negative root of the initial value problem $y'=\frac{2x}{1+2y}$, $y(2)=0$ not valid? The solution is $x=\pm\frac{\sqrt{15}}{2}$, but why is the negative root not valid? I searched online and someone says it does not satisfy the initial condition. But why? Why could the initial condition not be satisfied? Thank you! A: Solving the ode $$ y' = \frac{2x}{1+2y}$$ we get: $$ \int (1+2y)\,dy = \int 2x\,dx$$ $$ y+y^2 = x^2+c$$ plugging in the initial condition we get: $$ 0+0 = 2^2+c \implies c = -4$$ Solving for y, the final solutions are: $$ y = \frac{-1\pm \sqrt{4x^2-15}}{2}$$. But the important thing here is to see which of these solutions, $ y = \frac{-1+ \sqrt{4x^2-15}}{2}$ or $ y = \frac{-1- \sqrt{4x^2-15}}{2}$, satisfies the initial condition we have. It is the first one (because for the second one, by plugging in $2$ for x and $0$ for y we get $0=-1$, which is not true). Thus the unique solution to the differential equation is: $$ y = \frac{-1+ \sqrt{4x^2-15}}{2}$$
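A quick numerical check of the answer above (a Python sketch, not part of the original post) shows why only the positive branch satisfies the initial condition:

```python
import math

# The two candidate branches from the general solution y + y^2 = x^2 - 4.
def y_plus(x):
    return (-1 + math.sqrt(4 * x * x - 15)) / 2

def y_minus(x):
    return (-1 - math.sqrt(4 * x * x - 15)) / 2

# At the initial point x = 2: sqrt(4*4 - 15) = sqrt(1) = 1.
print(y_plus(2))   # 0.0  -> satisfies y(2) = 0
print(y_minus(2))  # -1.0 -> does not satisfy the initial condition
```

Note also that both branches require 4x^2 - 15 >= 0, i.e. |x| >= sqrt(15)/2, which is where the boundary values x = +-sqrt(15)/2 in the question come from; the initial point x = 2 lies on the positive side of that interval.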
Q: How are stages split into tasks in Spark? Let's assume for the following that only one Spark job is running at every point in time. What I get so far Here is what I understand what happens in Spark: When a SparkContext is created, each worker node starts an executor. Executors are separate processes (JVM), that connects back to the driver program. Each executor has the jar of the driver program. Quitting a driver, shuts down the executors. Each executor can hold some partitions. When a job is executed, an execution plan is created according to the lineage graph. The execution job is split into stages, where stages containing as many neighbouring (in the lineage graph) transformations and action, but no shuffles. Thus stages are separated by shuffles. I understand that A task is a command sent from the driver to an executor by serializing the Function object. The executor deserializes (with the driver jar) the command (task) and executes it on a partition. but Question(s) How do I split the stage into those tasks? Specifically: Are the tasks determined by the transformations and actions or can be multiple transformations/actions be in a task? Are the tasks determined by the partition (e.g. one task per per stage per partition). Are the tasks determined by the nodes (e.g. one task per stage per node)? What I think (only partial answer, even if right) In https://0x0fff.com/spark-architecture-shuffle, the shuffle is explained with the image and I get the impression that the rule is each stage is split into #number-of-partitions tasks, with no regard for the number of nodes For my first image I'd say that I'd have 3 map tasks and 3 reduce tasks. For the image from 0x0fff, I'd say there are 8 map tasks and 3 reduce tasks (assuming that there are only three orange and three dark green files). Open questions in any case Is that correct? But even if that is correct, my questions above are not all answered, because it is still open, whether multiple operations (e.g. 
multiple maps) are within one task or are separated into one tasks per operation. What others say What is a task in Spark? How does the Spark worker execute the jar file? and How does the Apache Spark scheduler split files into tasks? are similar, but I did not feel that my question was answered clearly there. A: You have a pretty nice outline here. To answer your questions A separate task does need to be launched for each partition of data for each stage. Consider that each partition will likely reside on distinct physical locations - e.g. blocks in HDFS or directories/volumes for a local file system. Note that the submission of Stages is driven by the DAG Scheduler. This means that stages that are not interdependent may be submitted to the cluster for execution in parallel: this maximizes the parallelization capability on the cluster. So if operations in our dataflow can happen simultaneously we will expect to see multiple stages launched. We can see that in action in the following toy example in which we do the following types of operations: load two datasources perform some map operation on both of the data sources separately join them perform some map and filter operations on the result save the result So then how many stages will we end up with? 1 stage each for loading the two datasources in parallel = 2 stages A third stage representing the join that is dependent on the other two stages Note: all of the follow-on operations working on the joined data may be performed in the same stage because they must happen sequentially. There is no benefit to launching additional stages because they can not start work until the prior operation were completed. 
Here is that toy program val sfi = sc.textFile("/data/blah/input").map{ x => val xi = x.toInt; (xi,xi*xi) } val sp = sc.parallelize{ (0 until 1000).map{ x => (x,x * x+1) }} val spj = sfi.join(sp) val sm = spj.mapPartitions{ iter => iter.map{ case (k,(v1,v2)) => (k, v1+v2) }} val sf = sm.filter{ case (k,v) => v % 10 == 0 } sf.saveAsTextFile("/data/blah/out") And here is the DAG of the result Now: how many tasks? The number of tasks should be equal to the sum, over all stages, of the number of partitions in each stage. A: This might help you better understand different pieces: Stage: is a collection of tasks. Same process running against different subsets of data (partitions). Task: represents a unit of work on a partition of a distributed dataset. So in each stage, number-of-tasks = number-of-partitions, or as you said "one task per stage per partition". Each executor runs on one YARN container, and each container resides on one node. Each stage utilizes multiple executors, each executor is allocated multiple vcores. Each vcore can execute exactly one task at a time. So at any stage, multiple tasks could be executed in parallel. number-of-tasks running = number-of-vcores being used. A: If I understand correctly there are 2 ( related ) things that confuse you: 1) What determines the content of a task? 2) What determines the number of tasks to be executed? Spark's engine "glues" together simple operations on consecutive rdds, for example: rdd1 = sc.textFile( ... ) rdd2 = rdd1.filter( ... ) rdd3 = rdd2.map( ... ) rdd3RowCount = rdd3.count so when rdd3 is (lazily) computed, spark will generate a task per partition of rdd1 and each task will execute both the filter and the map per line to result in rdd3. The number of tasks is determined by the number of partitions. Every RDD has a defined number of partitions. For a source RDD that is read from HDFS ( using sc.textFile( ... ) for example ) the number of partitions is the number of splits generated by the input format.
Some operations on RDD(s) can result in an RDD with a different number of partitions: rdd2 = rdd1.repartition( 1000 ) will result in rdd2 having 1000 partitions ( regardless of how many partitions rdd1 had ). Another example is joins: rdd3 = rdd1.join( rdd2 , numPartitions = 1000 ) will result in rdd3 having 1000 partitions ( regardless of partitions number of rdd1 and rdd2 ). ( Most ) operations that change the number of partitions involve a shuffle, When we do for example: rdd2 = rdd1.repartition( 1000 ) what actually happens is the task on each partition of rdd1 needs to produce an end-output that can be read by the following stage so to make rdd2 have exactly 1000 partitions ( How they do it? Hash or Sort ). Tasks on this side are sometimes referred to as "Map ( side ) tasks". A task that will later run on rdd2 will act on one partition ( of rdd2! ) and would have to figure out how to read/combine the map-side outputs relevant to that partition. Tasks on this side are sometimes referred to as "Reduce ( side ) tasks". The 2 questions are related: the number of tasks in a stage is the number of partitions ( common to the consecutive rdds "glued" together ) and the number of partitions of an rdd can change between stages ( by specifying the number of partitions to some shuffle causing operation for example ). Once the execution of a stage commences, its tasks can occupy task slots. The number of concurrent task-slots is numExecutors * ExecutorCores. In general, these can be occupied by tasks from different, non-dependent stages.
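The rule that runs through these answers (tasks per stage equals partitions in that stage) can be sketched as plain arithmetic. The stage names and partition counts below are made-up illustrative values, not output from the toy program above:

```python
# Each stage launches one task per partition of the data it processes.
# Partition counts here are hypothetical, chosen only for illustration.
stage_partitions = {
    "load_sfi": 8,           # e.g. 8 HDFS input splits
    "load_sp": 4,            # e.g. 4 partitions from sc.parallelize
    "join_map_filter": 1000, # e.g. join(..., numPartitions = 1000)
}

# Total tasks = sum over stages of the partitions in each stage.
total_tasks = sum(stage_partitions.values())
print(total_tasks)  # 1012
```

Changing any stage's partition count (say, via repartition) changes that stage's task count one-for-one, while the other stages are unaffected.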
Q: How to recruit a PhD student without a strong connection to teaching? As a postdoc, I'm considering to apply to research-only positions (for instance in a research institute, and not a university), and I know that one of the responsibilities of being a researcher is to recruit PhD students. I personally only know people (including me) who have been "recruited" for a PhD by one of their teacher (usually at Master level), and so I wouldn't be sure of how to proceed in order to recruit a PhD student as a young researcher: Are there some specialized websites where to post ads? Is it better to contact some teachers to see if they have good students to recommend? Moreover, in this case, which criteria can one use? If I were to teach, I would have a whole semester to know a student, and to decide whether it would make a good fit for a PhD, but how to do that during a one-hour interview? A: PhD studentships are quite often advertised like "normal" jobs, i.e. on general job boards/recruting websites. If you have contacts, by all means use them. As for evaluating the candidates, similar guidelines as for evaluating applicants for any jobs apply. I don't think there's a one-fits-all answer. Note that the hiring process may also depend on what institution you'd be working for. They might have an HR department that screens/selects the candidates. A: There are a couple of ways to determine whether a particular graduate student is a good candidate for your lab. Subject knowledge. While this is often unfair to the student, I know many researchers who will only accept students who are familiar with their area of research. This saves time in getting the student up to speed, which can take many months, as you probably know. Simple personality matches. 
By the time they're looking into your lab, they've already been accepted into the graduate program and kept their grades up high enough to be applying to labs, which means that (assuming you agree with the standards of the program) they are fairly smart. Your job is to determine whether the student would be a good match for your lab in particular, and whether you want to work with them on a daily basis for the next 5+ years. Rotations. Many programs have graduate student rotations, which will give you an opportunity to interact with many students, and get the chance to know them better than the one-hour interview you mentioned. Aside from this, read up on general interviewing tips. Almost all the articles you'll find discussing general hiring advice are applicable to recruiting graduate students/postdocs as well. A: I am presuming by your question that you are talking about working in Europe; in North America, scientists at non-academic research labs are not generally expected to recruit PhD students. Unfortunately, I don't think there is an easy method for applying for positions outside of posting announcements on sites like academia.edu or TIPTOP. However, you will need to make sure that you are clear on your future workplace's requirements and regulations regarding the recruiting of graduate students. Many such institutes do not have PhD-granting programs of their own; in that case, you would need to make sure you were affiliated with a program that does grant doctoral degrees before you begin recruitment.
Q: Is it safe to "use" to disk if extended smart check is in progress? As simple as the title suggests. /dev/sda is a unit from /dev/md127 - RAID1 drive. I'm currently running a test - started a smartctl -t long /dev/sda. Afterwards, I started to copy files over WinSCP (SFTP) to the mounted mdadm array. I didn't notice any performance drops, but I'm kind of not sure - not a system administrator, am just developing an automated system. Is it safe to "use" (write/read) disk while the drive is undergoing a smartcheck? What can go wrong? Are there any drawbacks? A: Yes, it is safe, as the checks do not modify anything on your disk and thus will not interfere with your normal usage (I happen to be researching the topic right now). I have managed servers with smart checks running and never had to worry about using the disks at the same time.
Q: How to refresh a DataGridView continuously after updating I am using this code to update my DataGridView after the data is updated in an Access database. The data is updated every second; I kept this code in a loop in a background worker, but when I start the background worker a big X is displayed. try { OleDbDataAdapter dAdapter; OleDbCommandBuilder cBuilder; DataTable dTable; BindingSource bSource = new BindingSource(); dAdapter = new OleDbDataAdapter("Select * from data", cls_rt.con); //create a command builder cBuilder = new OleDbCommandBuilder(dAdapter); //create a DataTable to hold the query results dTable = new DataTable(); //fill the DataTable dAdapter.Fill(dTable); //BindingSource to sync DataTable and DataGridView bSource = new BindingSource(); //set the BindingSource DataSource bSource.DataSource = dTable; DataGridView.DataSource = dTable; } catch (Exception) { } Then I used this code try { this.dataTableAdapter.Fill(this.rTDataSet.data); } and kept this in a loop: dataDataGridView.Update(); then dataDataGridView.Refresh(); then dataDataGridView.RefreshEdit(); but it didn't work for me. I want my DataGridView to update every second, and one more thing: when it gets updated I don't want the whole grid view to update, I just want the particular cell to be updated. I would greatly appreciate it if someone could help me. Thanks in advance. A: Almost all DataGridView refreshing/updating approaches will lead you down the same path, and the easiest way to "refresh" your DataGridView is to put this line wherever you need to refresh the values: yourDataGridview.DataSource = yourDataRetrievingMethod // in your situation, your dataset and/or table
Q: Possible to programmatically change IIS's SMTP server "Smart Host" I recently discovered a way to audit SMTP emails prior to them actually leaving the SMTP server. This is accomplished by changing the "Smart Host" value to something that is: named after a host that doesn't exist, less than 15 characters, and has no periods in the name. This allows me to view the messages with Outlook Express and check the file attachments and other programmatically generated content produced through System.Net.Mail. I release the messages by changing this to a valid value and restarting the SMTP service. Question How can I programmatically change this value so I can allow for the controlled queueing, audit, and release of these email messages? A: You can programmatically do this using the IIS WMI Provider. The SmartHost property can be found on either the IIsSmtpService or IIsSmtpServer object. On Windows Server 2008 you will need to install IIS 6.0 WMI compatibility. Something like the following should work. public static void ConfigureSmtpHost() { DirectoryEntry smtpServer = new DirectoryEntry("IIS://LOCALHOST/SMTPSVC/1"); smtpServer.Properties["SmartHost"].Value = "myNewSmartHost"; smtpServer.CommitChanges(); }
Q: Obtaining Drawable object from string in Android Studio I am trying to find a way to set an ImageView via a string. It seems that getResources() is deprecated and I can't seem to find a fast way to pass a string and load an image into my ImageView. I do not have any working or useful code to present; I am simply asking for methods to look into. Thanks! A: The answer by theMatus is correct. Drawables are accessed via an int identifier (i.e. R.drawable.some_image will have a corresponding int value). To convert a String to such a value, you would use Resources.getIdentifier(): String mDrawableName = "some_image"; int resId = getResources().getIdentifier(mDrawableName, "drawable", getPackageName()); To access the Drawable, you would use getResources().getDrawable(resId). getResources() is not deprecated by the way, although getDrawable(int id) technically is as of API 22 (it will still work). If you want to approach this correctly and get around the deprecation, you can use the following: @TargetApi(Build.VERSION_CODES.LOLLIPOP_MR1) private Drawable getDrawableForSdkVersion(int resId, Resources res) { Drawable drawable = null; if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP_MR1) { drawable = res.getDrawable(resId, null); } else { drawable = res.getDrawable(resId); } return drawable; }
Q: Why are Internet hosts "not required" to receive an IPv4 datagram larger than 576 bytes? I'm currently studying Internet protocols and had a question regarding the IP datagram. Within the IP header I am aware there is a field called "total length" which specifies the total length of the particular fragmented datagram in bytes. However, while reading the textbook ("TCP/IP Illustrated Vol. 1") I read that "a host is not required to be able to receive an IPv4 datagram larger than 576 bytes." If it says that it's "not required," then doesn't it mean that it technically would be able to transport it? Why is there such a limit in terms of the IP MTU? Edit One thing that I came across while studying TCP reminded me of this question I asked previously. TCP is a transport layer (layer 4 in the conventional OSI model) protocol that runs encapsulated inside the lower network layer (layer 3) protocol. This is also where the Internet "power horse" known as IP is used. All protocols have a specific kind of header, and in the case of IP and TCP, both of their headers have a minimum length of 20 bytes (in TCP's case, the maximum length is 60 bytes including options added at the end). The TCP protocol uses things called "segments", which are equivalent to packets for other protocols. The maximum segment size (MSS) is "the largest segment that a TCP is willing to receive from its peer and, consequently, the largest size its peer should ever use when sending." (TCP/IP Illustrated, Vol. 1, 2e p. 606). The MSS is usually specified as an option in the TCP header, but if it's not specified then the default size is 536 bytes. Recall that the IP header and TCP basic header are both a minimum of 20 bytes. This means: 20 (IP header) + 20 (TCP header) + 536 (default MSS) = 576 bytes. Thus, the minimum required packet size that IPv4 hosts should be able to process is 576 bytes.
A: Well to take an analogy, it's like your city council mandating that "every parking spot in the city must be big enough to accommodate a Prius". That is, it is illegal to build a parking spot that is too small to accommodate a Prius. This rule of course has nothing to do with the size of vehicles allowed on the highway. A: IP datagrams can be up to 64K in length but it would be completely unreasonable back in 1981 to require every host to allocate 64K buffers. That could be your entire addressable memory on a 16 bit computer! The numbers are essentially arbitrary but that memory was too expensive to just throw at the problem is a factor. From RFC 791: All hosts must be prepared to accept datagrams of up to 576 octets (whether they arrive whole or in fragments). It is recommended that hosts only send datagrams larger than 576 octets if they have assurance that the destination is prepared to accept the larger datagrams. The number 576 is selected to allow a reasonable sized data block to be transmitted in addition to the required header information. For example, this size allows a data block of 512 octets plus 64 header octets to fit in a datagram.
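The 576-octet figure from RFC 791 lines up with the TCP default MSS arithmetic mentioned in the question's edit; a quick sanity check in Python:

```python
# Minimum IPv4 reassembly buffer accounting.
IP_HEADER_MIN = 20    # bytes, IPv4 header without options
TCP_HEADER_MIN = 20   # bytes, TCP header without options
DEFAULT_MSS = 536     # TCP default when no MSS option is exchanged

total = IP_HEADER_MIN + TCP_HEADER_MIN + DEFAULT_MSS
print(total)  # 576

# RFC 791's own framing: a 512-octet data block plus 64 octets of headers.
print(512 + 64)  # 576
```

Both decompositions land on the same 576-byte minimum, which is why the default MSS is exactly 576 minus the two minimum headers.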
Q: File download in Angular and JWT TL;DR How to download/save a file using Angular and JWT authentication without leaving a token trail in the browser? My Angular/Node app is secured over HTTPS and uses JWT for authentication. The JWT is stored in sessionStorage and passed in the Authorization header field for all AJAX requests to the server. I need functionality in the app to download a file so that it's automatically saved by the browser (or a popup displayed where to save etc.). It should work ideally in any browser that can run Angular. I have looked at the following: AJAX requests. This doesn't work because of inherent security measures preventing a browser from saving a file locally. Pass the JWT in a Cookie - cookies are something I want to avoid using, hence the reason for using sessionStorage. Pass the JWT in a query string but this means it will be logged in the server logs, and more importantly can be seen in browser history. iframe that contains a form that POSTS the data. Can't set a header with this method. Any other options? A: The iframe method was close. Just need to set the server up to accept the JWT from the body of a POST rather than the query string. It's not the most elegant solution having to use an iframe, but it seems to meet the requirements. 
Here's the directive I used:

.directive('fileDownload', function () {
    return {
        restrict: 'A',
        replace: false,
        template: "<iframe style='position:fixed;display:none;top:-1px;left:-1px;' />",
        link: function (scope, element) {
            element.click(function() {
                var iframe = element.find('iframe'),
                    iframeBody = $(iframe[0].contentWindow.document.body),
                    form = angular.element("<form method='POST' action='/my/endpoint/getFile'><input type='hidden' name='foo' value='bar' /><input type='hidden' name='access_token' value='" + scope.access_token + "' /></form>");
                iframeBody.append(form);
                form.submit();
            });
        }
    }
});

And in Express middleware, pop the JWT into the header before validating it with the express-jwt module:

if (req.body && req.body.hasOwnProperty('access_token')) {
    req.headers.authorization = 'Bearer ' + req.body.access_token;
}
Q: JSF IllegalException: Component ID form: `xyz` has already been found in the view I am getting an IllegalStateException saying a component ID has already been found in the view. I'm not sure what is causing this issue, and I do not want this exception in the first place. Exception:

Servlet.service() for servlet Faces Servlet threw exception: java.lang.IllegalStateException: Component ID form:_captureFileOnsubmit has already been found in the view.
at com.sun.faces.util.Util.checkIdUniqueness(Util.java:846) [:2.1.7-SNAPSHOT]
at com.sun.faces.util.Util.checkIdUniqueness(Util.java:830) [:2.1.7-SNAPSHOT]
at com.sun.faces.util.Util.checkIdUniqueness(Util.java:830) [:2.1.7-SNAPSHOT]
at com.sun.faces.application.view.StateManagementStrategyImpl.saveView(StateManagementStrategyImpl.java:135) [:2.1.7-SNAPSHOT]
at com.sun.faces.application.StateManagerImpl.saveView(StateManagerImpl.java:133) [:2.1.7-SNAPSHOT]
at com.sun.faces.application.view.WriteBehindStateWriter.flushToWriter(WriteBehindStateWriter.java:225) [:2.1.7-SNAPSHOT]
at com.sun.faces.application.view.FaceletViewHandlingStrategy.renderView(FaceletViewHandlingStrategy.java:419) [:2.1.7-SNAPSHOT]
at com.sun.faces.application.view.MultiViewHandler.renderView(MultiViewHandler.java:125) [:2.1.7-SNAPSHOT]
at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:121) [:2.1.7-SNAPSHOT]
at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) [:2.1.7-SNAPSHOT]
at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:139) [:2.1.7-SNAPSHOT]
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:594) [:2.1.7-SNAPSHOT]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:324) [:6.0.0.Final]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:242) [:6.0.0.Final]

My xhtml page looks like: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:icecore="http://www.icefaces.org/icefaces/core"
      xmlns:ace="http://www.icefaces.org/icefaces/components"
      xmlns:ice="http://www.icesoft.com/icefaces/component">
<script type="text/javascript" src="/js/icefaces/ace-jquery.js" />
<script type="text/javascript" src="/js/icefaces/ace-components.js" />
<script type="text/javascript" src="/js/icefaces/icepush.js" />
<script type="text/javascript" src="/js/icefaces/bridge.js" />
<script type="text/javascript" src="/js/icefaces/compat.js" />
<script type="text/javascript" src="/js/icefaces/fileEntry.js" />
<script type="text/javascript" src="/js/icefaces/jsf.js" />
<script type="text/javascript" src="/js/icefaces/icefaces-compat.js" />
<h:head>
    <title>ICEfaces 3</title>
    <link rel="stylesheet" type="text/css" href="/xmlhttp/css/rime/rime.css"/>
</h:head>
<h:body>
    <h:form id="form">
        <h:outputText value="Welcome to ICEfaces 3, select current date: "/>
        <ace:dateTimeEntry renderAsPopup="true">
            <f:convertDateTime pattern="MM/dd/yyyy" timeZone="Canada/Mountain"/>
        </ace:dateTimeEntry>
        <ace:fileEntry id="fileUpload" label="File Entry" relativePath="uploaded" fileEntryListener="#{fileUpload.uploadFile}"/>
        <h:commandButton value="Upload File" />
    </h:form>
</h:body>
</html>

All I am trying to do is get the file upload feature working. The weird part is that the JavaScript and CSS resources present in the javax.faces.resources folder are not referenced in the application, so I get some weird errors. Another thing to note: if I use Eclipse with the ICEfaces plugins, the file upload feature works fine, but if I build without the plugins, using only the standard set of required jars, I get the component-ID-related IllegalStateException. Any thoughts or suggestions?
Updates

<context-param>
    <param-name>javax.faces.FACELETS_SKIP_COMMENTS</param-name>
    <param-value>true</param-value>
</context-param>
<context-param>
    <param-name>javax.faces.VALIDATE_EMPTY_FIELDS</param-name>
    <param-value>false</param-value>
</context-param>
<context-param>
    <param-name>com.icesoft.faces.concurrentDOMViews</param-name>
    <param-value>false</param-value>
</context-param>
<context-param>
    <param-name>com.icesoft.faces.synchronousUpdate</param-name>
    <param-value>false</param-value>
</context-param>
<context-param>
    <param-name>com.icesoft.faces.blockingRequestHandler</param-name>
    <param-value>icefaces</param-value>
</context-param>
<context-param>
    <param-name>com.icesoft.faces.checkJavaScript</param-name>
    <param-value>false</param-value>
</context-param>

A: I get errors like this using Mojarra when I have javax.faces.PROJECT_STAGE set to Development. If you have this set, change it to Production and see if the errors go away. I thought it was a known bug but can't find anything.
Q: reinterpret_cast changes type from `const float *` to `float` unexpectedly I have a pointer:

const float *m_posBufferPtr_float;

I assign to this variable with:

m_posBufferPtr_float = reinterpret_cast<const float *>(buffer()->data().constData());

where the constData() function returns const char *:

inline const char *QByteArray::constData() const { return d->data(); }

Therefore my reinterpret_cast should convert const char * to const float *. But to my surprise, exactly before the reinterpret_cast the debugger shows my pointer as const float *, and exactly after it the debugger shows it as float. I wonder why reinterpret_cast is converting const char * to float rather than const float *. A: If you take this snippet:

int main(int argc, char *argv[])
{
    float* pointer = nullptr;
    float value = 12.34;
    pointer = &value;
    qDebug() << *pointer;
}

and execute it step by step, you will notice in your debugger that the displayed type becomes float as soon as the pointer is initialized. It's due to the configuration of the debugger. In Qt Creator, right-click on your pointer and uncheck Dereference pointers automatically.
Q: Can I do this with Boost interval_map? What I want to do is handle intervals efficiently. For example, in my case, intervals are like the following: [10, 20], [15, 25], [40, 100], [5, 14]. Intervals are closed and over the integers, and some of the intervals may overlap. I want to find the overlapping intervals for a given query efficiently. For example, if [16, 22] is given, then [10, 20] and [15, 25] should be reported as overlapping intervals. I'm currently writing an interval tree based on a red-black tree (reference: CLRS, Introduction to Algorithms). Although finding all overlapping intervals can take O(n), the running time should typically be faster. Note that intervals can be deleted and inserted. However, I just found that Boost has interval_map and interval_set: http://www.boost.org/doc/libs/1_46_1/libs/icl/doc/html/index.html I tried it, but the behavior is very strange to me. For example, if [2, 7] is inserted first and then [3, 8] is inserted, the resulting map will have [2, 3), [3, 7], and (7, 8]. That is, when a new interval is inserted, splitting is done automatically. Can I turn off this feature? Or is Boost's interval_map right for my purpose? A: You asked for a data structure that could find overlaps efficiently. This does so, by storing overlaps in the data structure. Now you seem to be complaining that it has done so. This example explains the logic:

typedef std::set<string> guests;
interval_map<time, guests> party;
party += make_pair(interval<time>::right_open(time("20:00"), time("22:00")), guests("Mary"));
party += make_pair(interval<time>::right_open(time("21:00"), time("23:00")), guests("Harry"));
// party now contains
// [20:00, 21:00) -> {"Mary"}
// [21:00, 22:00) -> {"Harry","Mary"}   // guest sets aggregated on overlap
// [22:00, 23:00) -> {"Harry"}

When you add two overlapping intervals, you actually create three intervals with distinct properties.
The overlap is in both original intervals, making it a logically distinct interval from either of the originals. And the two original intervals now span times with different properties (some that overlap the original, some that don't). This splitting makes it efficient to find overlaps, since they are their own intervals in the map. In any event, Boost does allow you to select the interval combining style. So if you want to force a structure that makes it harder to find overlaps, you can do so.
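For the original goal of reporting which stored closed intervals overlap a closed query interval, the logic can be sketched independently of Boost. A minimal illustration in Python (a linear scan; the asker's interval tree would replace this for large inputs), using the intervals from the question:

```python
def overlapping(intervals, query):
    """Return the closed intervals [lo, hi] that overlap the closed query interval.

    Two closed intervals [a, b] and [c, d] overlap iff a <= d and c <= b.
    """
    qlo, qhi = query
    return [(lo, hi) for lo, hi in intervals if lo <= qhi and qlo <= hi]

intervals = [(10, 20), (15, 25), (40, 100), (5, 14)]
print(overlapping(intervals, (16, 22)))  # -> [(10, 20), (15, 25)]
```

This is O(n) per query; the point of the red-black-tree interval tree (or of letting Boost split on insert) is to beat that when n is large.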
Q: Merkle–Hellman knapsack cryptosystem - My exam I'm learning about the Merkle-Hellman cryptosystem. Here is my question: why is q chosen the way it is? https://en.wikipedia.org/wiki/Merkle–Hellman_knapsack_cryptosystem Thanks all. A: The answer is in the next few sentences of that same Wikipedia article: q is chosen this way to ensure the uniqueness of the ciphertext. If it is any smaller, more than one plaintext may encrypt to the same ciphertext. Since q is larger than the sum of every subset of w, no sums are congruent mod q and therefore none of the private key's sums will be equal. So in short, q is chosen to ensure the uniqueness of the ciphertext, which is important. If I have a message a which encrypts to b, and a message c which also encrypts to b, then there is no unique decryption for b: b could be either a or c. It is important that encryption/decryption algorithms are one-to-one from plaintext to ciphertext, otherwise it becomes difficult to encrypt/decrypt - there would be an element of guessing involved.
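The uniqueness claim is easy to check numerically. A quick sketch (using the superincreasing sequence from the Wikipedia article's example; any superincreasing w works): with q greater than the sum of w, every subset sum of w is distinct mod q, so each ciphertext has exactly one plaintext.

```python
from itertools import combinations

# Superincreasing private key: each element exceeds the sum of all before it.
w = [2, 7, 11, 21, 42, 89, 180, 354]
assert all(w[i] > sum(w[:i]) for i in range(len(w)))

q = sum(w) + 1  # q is larger than the sum of every subset of w

# Enumerate every subset sum mod q (2^8 = 256 subsets, including the empty one).
subset_sums = [sum(c) % q for r in range(len(w) + 1) for c in combinations(w, r)]

# No two distinct subsets collide, so decryption is unambiguous.
assert len(subset_sums) == len(set(subset_sums))
print(len(subset_sums))  # -> 256
```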
Q: How much do C/C++ compilers optimize conditional statements? I recently ran into a situation where I wrote the following code:

for(int i = 0; i < (size - 1); i++) {
    // do whatever
}
// Assume 'size' will be constant during the duration of the for loop

When looking at this code, it made me wonder how exactly the for loop condition is evaluated on each iteration. Specifically, I'm curious whether the compiler would 'optimize away' any additional arithmetic that has to be done for each iteration. In my case, would this code get compiled such that (size - 1) would have to be evaluated on every loop iteration? Or is the compiler smart enough to realize that the 'size' variable won't change, and thus could precalculate the value before the loop? This then got me thinking about the general case where you have a conditional statement that specifies more operations than necessary. As an example, how would the following pieces of code compile:

if(6)
if(1+1+1+1+1+1)

int foo = 1;
if(foo + foo + foo + foo + foo + foo)

How smart is the compiler? Will the 3 cases listed above be converted into the same machine code? And while I'm at it, why not list another example. What does the compiler do if you are doing an operation within a conditional that won't have any effect on the end result? Example:

if(2*(val)) // Assume val is an int that can take on any value

In this example, the multiplication is completely unnecessary. While this case seems a lot stupider than my original case, the question still stands: will the compiler be able to remove this unnecessary multiplication? Question: How much optimization is involved with conditional statements? Does it vary based on compiler? A: Short answer: the compiler is exceptionally clever, and will generally optimise those cases that you have presented (including utterly ignoring irrelevant conditions).
One of the biggest hurdles language newcomers face in terms of truly understanding C++, is that there is not a one-to-one relationship between their code and what the computer executes. The entire purpose of the language is to create an abstraction. You are defining the program's semantics, but the computer has no responsibility to actually follow your C++ code line by line; indeed, if it did so, it would be abhorrently slow as compared to the speed we can expect from modern computers. Generally speaking, unless you have a reason to micro-optimise (game developers come to mind), it is best to almost completely ignore this facet of programming, and trust your compiler. Write a program that takes the inputs you want, and gives the outputs you want, after performing the calculations you want… and let your compiler do the hard work of figuring out how the physical machine is going to make all that happen. Are there exceptions? Certainly. Sometimes your requirements are so specific that you do know better than the compiler, and you end up optimising. You generally do this after profiling and determining what your bottlenecks are. And there's also no excuse to write deliberately silly code. After all, if you go out of your way to ask your program to copy a 50MB vector, then it's going to copy a 50MB vector. But, assuming sensible code that means what it looks like, you really shouldn't spend too much time worrying about this. Because modern compilers are so good at optimising, that you'd be a fool to try to keep up. A: The C++ language specification permits the compiler to make any optimization that results in no observable changes to the expected results. If the compiler can determine that size is constant and will not change during execution, it can certainly make that particular optimization. 
Alternatively, if the compiler can also determine that i is not used in the loop (and its value is not used afterwards), i.e. that it is used only as a counter, it might very well rewrite the loop to:

for(int i = 1; i < size; i++)

because that might produce smaller code. Even if this i is used in some fashion, the compiler can still make this change and then adjust all other usage of i so that the observable results are still the same. To summarize: anything goes. The compiler may or may not make any optimization change, as long as the observable results are the same.
Q: commutator subgroup and semidirect product Suppose $G$ is a solvable group such that $G = N \rtimes H$. Then I can show that $G' = M \rtimes H'$, where $G'$ is the commutator subgroup of $G$ and $M = N \cap G'$, $H' = H \cap G'$. I can also show that $H'$ is indeed the commutator subgroup of $H$. So $H$ and $H'$ are related without this relation explicitly depending on $G'$. Is this also true for $M$ and $N$, i.e. is there a relation between $M$ and $N$ that does not hinge on $G'$? In the case of $N$ abelian of maximal order it might hold that $N \cong Z(G) \times M$, where $Z(G)$ is the center of $G$. At least it holds for the few groups I checked. I'm equally interested in the general and this special case. A: Here is a fundamental example (if $G$ is finite solvable, then $G/\Phi(G)$ is of the form described here). Let $H$ be a group of invertible $n \times n$ matrices over the field $\mathbb{Z}/p\mathbb{Z}$. Let $N$ be the elementary abelian group of order $p^n$, $N \cong C_p^n$. Then $H$ acts on $N$ by matrix multiplication. Let $G = N \rtimes H$. Then $G' = [G,G] = [N,N][N,H][H,H] = [N,H] \rtimes H'$. $[N,H] = \langle n^{-1} n^h : n \in N, h \in H \rangle$ in multiplicative notation, but in matrix notation, we just get $$[N,H] = \langle -n + n \cdot h : n \in N, h \in H \rangle = \langle n\cdot (h-1): n \in N, h \in H \rangle = \sum_{h \in H} \newcommand{\im}{\operatorname{im}}\im(h-1)$$ Coprime action If $H$ has order coprime to $p$, then some fancy linear algebra shows that $\ker((h-1)^n) = \ker(h-1)$ and $\im(h-1)$ is a direct complement. 
In other words, by Fitting's lemma (applied to the semisimple operator $h-1$), we get $$N = \ker(h-1) \oplus \im(h-1) = C_{h}(N) \times [N,h]$$ Using some slightly fancier versions of these linear algebra ideas we even get $$N=\left( \bigcap_{h \in H} \ker(h-1) \right) \oplus \left(\sum_{h \in H} \im(h-1) \right) = C_H(N) \times [H,N]$$ Even if $N$ is not abelian similar ideas give: $N=C_H(N)[H,N]$, though the intersection may be non-identity. Defining characteristic If $H$ has order a power of $p$, then one gets sort of the opposite behavior. The minimum polynomial of $h$ divides $x^{p^n}-1 = (x-1)^{p^n}$, so every eigenvalue of $h-1$ is 0, and $h-1$ is nilpotent. Hence Fitting's lemma tells us that $N=\ker((h-1)^{p^n}) \times \im((h-1)^{p^n})$, but that is useless since $(h-1)^{p^n}=0$ and so the kernel is all of $N$ and the image is $1$. If we try to apply this to $h-1$ directly without raising to the $p^n$th power, then things go very weird. Take $h=\begin{bmatrix}1&1\\0&1\end{bmatrix}$. Then $\im(h-1) = \{ (0,x) : x \in C_p \}$ but also $\ker(h-1) = \{ (0,x) :x \in C_p \}$. When $p=2$, this is the $D_8$ example. If one wants larger $N$, then one can take $H=\langle h_1,h_2\rangle$ with $$ h_1=\begin{bmatrix}1&0&1\\0&1&0\\0&0&1\end{bmatrix}, \qquad h_2=\begin{bmatrix}1&0&0\\0&1&1\\0&0&1\end{bmatrix} $$ Then $\im(h_i-1)=\{ (0,0,x) :x \in C_p \}$ but $\ker(h_1-1) = \{ (0,y,z) : y,z \in C_p \}$ and $\ker(h_2-1) = \{ (x,0,z) : x,z \in C_p \}$ so $$\bigcap_{h \in H} \ker(h-1) = \{ (0,0,x) : x \in C_p \}$$ and $$\sum_{h \in H} \im(h-1) = \{ (0,0,x) : x \in C_p \}$$ When $p=2$, this is the $D_8 \operatorname{\sf Y} D_8$ example. Notice how broken the decomposition is here. References Kurzweil–Stellmacher, Theory of Finite Groups, Chapter 8, is where this really started to make sense for me.
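The coprime-case decomposition N = ker(h-1) ⊕ im(h-1) from Fitting's lemma can be verified by brute force on a small example. A sketch in Python (my own toy parameters, not taken from the text above): p = 3 and h of order 2 acting on N = C_3 × C_3, so the order of ⟨h⟩ is coprime to p.

```python
from itertools import product

p = 3
h = [[2, 0], [0, 1]]  # h^2 = I over Z/3Z, so h generates a group of order 2

def act(v):
    """Apply the matrix h to a vector of N = C_p^2, written additively."""
    return tuple(sum(h[i][j] * v[j] for j in range(2)) % p for i in range(2))

vectors = list(product(range(p), repeat=2))

# (h - 1) sends v to v*h - v; compute its image and kernel by enumeration.
image = {tuple((act(v)[i] - v[i]) % p for i in range(2)) for v in vectors}
kernel = {v for v in vectors if act(v) == v}  # fixed points of h

# Direct-sum check: trivial intersection and complementary sizes,
# i.e. N = C_N(h) x [N, h] as claimed in the coprime case.
assert kernel & image == {(0, 0)}
assert len(kernel) * len(image) == p ** 2
```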
Q: grep: argument list too long I need to delete 2 lines above and 4 lines below the lines starting with 'Possible'. This line should also be erased. I'm not used to working in the terminal, but it seems that for what I want the solution below is the most straightforward. The problem is that my file has over 70000 lines, and it seems to be too much for grep:

$ grep -v "$(grep -E -a -B 2 -A 3 'Possible' structure)" structure >final
-bash: /bin/grep: Argument list too long

Is there any other way to accomplish this? A snippet of the input file, with a part to be erased:

gi|41|gb|JH9|.1(59-594) Length: 73 bp
Type: Glu Anticodon: CTC at 33-35 (59424-59426) Score: 22.64
Possible pseudogene: HMM Sc=43.51 Sec struct Sc=-20.87
    * | * | * | * | * | * | * |
Seq: GCCCGTTTGGCTCAGTGGAtAGAGCATCGGCCCTCAgACCGTAGGGtCCTGGGTTCAGTTCTGGTCAAGGGCA
Str: >>>>.>...>>>>........<<<<.>>>>........<<<.<......>.>>.......<<.<..<.<<<<.

A: The problem is that my file has over 70000 lines, and it seems to be too much for grep: No, the fact is that grep -E -a -B 2 -A 3 'Possible' structure expands into something that causes the argument list to be too large. You can use process substitution instead:

grep -v -f <(grep -E -a -B 2 -A 3 'Possible' structure) structure >final

A: I think that you should split your command into two stages. In the first stage you select the strings which you don't wish to see in the output (the inner grep) and save the result into a file. In the second stage you check the input using the -f grep flag (-f allows you to specify patterns in a file instead of on the command line). A: You can try this sed:

sed 'N;/^[^\n]*\n[^\n]*$/N; /.*\n.*\n.*Possible/{$q;N;N;N;d};P;D;' structure > final
Q: An efficient way to check an IP address if it is a proxy of any sort? How do I check whether an IP address is a proxy of some sort (including TOR and possibly TOR alternatives)? Any ideas about an elegant and reliable way? Maybe some online listing, tool, service, or local method? P.S. I'm asking this because I am kind of baffled whether to scan for specific ports in an attempt to be serviced, or just rely on some online listings/DNSBLs of TOR nodes, or both; or maybe there is a more universal approach, since there may be TOR alternatives as well? I've seen at least one. A: You can view the TOR list here: TOR (The Onion Router) Servers - IP List or here: Torstatus You'll have to scan every port and attempt to make a connection through it for the other part of your question; a proxy can serve on any port at all. So no, it's not really possible.
Q: Group a List I have a question that's similar to yesterday's question. I've got this List<object[]>:

List<object[]> olst = new List<object[]>();
olst.Add(new object[] { "AA1", "X", 1, 3.50 });
olst.Add(new object[] { "AA2", "Y", 2, 5.20 });
olst.Add(new object[] { "AA2", "Y", 1, 3.50 });
olst.Add(new object[] { "AA1", "X", 1, 3.20 });
olst.Add(new object[] { "AA1", "Y", 2, 5.30 });

I need to produce a List<object[]> to hold this:

"AA1", "X", 2, 6.70
"AA2", "Y", 3, 8.70
"AA1", "Y", 2, 5.30

In other words, I need to group olst by the 1st and 2nd elements of each object[] and sum the 3rd and 4th. I could use a for loop, but I was hoping someone could help me using lambda expressions and/or LINQ to accomplish this.

A:

List<object[]> olst = new List<object[]>();
olst.Add(new object[] { "AA1", "X" });
olst.Add(new object[] { "AA2", "Y" });
olst.Add(new object[] { "AA2", "Y" });
olst.Add(new object[] { "AA1", "X" });
olst.Add(new object[] { "AA1", "Y" });

var result = from ol in olst
             group ol by new { p1 = ol[0], p2 = ol[1] } into g
             select g.First();

Something like this?

A: You need to group by an anonymous type, then sum the third and fourth columns:

List<object[]> grouped = olst
    .GroupBy(o => new { Prop1 = o[0].ToString(), Prop2 = o[1].ToString() })
    .Select(o => new object[]
    {
        o.Key.Prop1,
        o.Key.Prop2,
        o.Sum(x => (int)x[2]),
        o.Sum(x => (double)x[3])
    })
    .ToList();
Q: System time is off by up to hundreds of milliseconds despite NTP sync before boot I'm running a couple of servers which need a pretty tight time sync (<50ms) as they are running a Paxos algorithm. The servers are running NTP and are successfully synced at one point. According to hwclock the 11-minute mechanism is enabled, so the system time should be copied to the hardware clock. However, I see that after a reboot the system time can be off by as much as 300ms compared to the time just before the reboot. Is it unreasonable to think that after a reboot the time should be within 50ms of the time just before reboot? A: I do not have numbers to produce, but it seems probable that the interface used to set the clock at boot only has precision down to the second. You do not state your OS, but on all Unix-like systems it is possible to insert a dependency on NTP time in the boot process. The NTP daemon is started at boot, but often it immediately backgrounds itself and boot continues while the NTP daemon looks for servers to sync to -- this is so that boot is not delayed in case the machine is not connected to the network. In this case, you will want to make sure that the NTP daemon is started in a way that will correct an offset by stepping at boot. This can be, for example, ntpd -gx or chronyd -q. You may also wish to insert a check that the offset is acceptable before starting your workload. A: My initial reaction was that 300ms seems like an awful lot, but I do have numbers to produce, and they show that @Law29 is right: frequency and system peer offset graphs for one of my machines over a normal week, the same graphs for a shorter period with a reboot involved, and a scatter plot of the peers. (Hope you can read all the numbers on the graphs OK - drop me a comment if not.) As you can see, there's a rather large discrepancy.
It surprised me how much it was, and also how long it took to get back on track with the frequency correction, considering that there's a stratum 1 GPS source on my local network. And given that the peer samples are fairly tightly clustered on the plot, it's clearly a problem with the local clock, not inconsistent network delay during startup. (For the record, the hardware is a Shuttle DS437 fanless mini-PC with a dual-core Celeron 1037U @ 1.8 GHz.) So the takeaways are probably: make sure ntpd is successfully writing the NTP drift file, make sure the kernel's 11-minute timer to update the hardware clock is on (See "Automatic Hardware Clock Synchronization by the Kernel" in man hwclock for details), or your shutdown process is updating the hardware clock, make sure ntpd has 4-10 reachable sources (in iburst mode), and configure your startup dependencies so that ntpd has a chance to fix the clock before Paxos starts.
Q: Particle beam in infinite magnetic field In a physics exam at university I had the following problem: A proton-deuteron beam is accelerated by a $\Delta V = 10^7\,\mathrm{V}$ difference of potential. At some point, the particles enter a uniform, infinite, magnetic field with $B = 2\,\mathrm{T}$, perpendicular to the beam's direction. Calculate, at the exit of the magnetic field, the distance between protons and deuterons. My approach was that, in the exam's short available time, since the magnetic field is so strong, the forces exerted by the particles on one another were negligible; also that the beam was accelerated before entering the field, so it moves with constant speed. So the only force I thought would be significant was the Lorentz force; since this force does no work on the particles, it may only change the direction of their velocity, not its modulus, and so they start rotating in a cycloidal motion along the direction of the beam. My result was:

$x_A(t) = \frac{v_0}{\omega} \left[1-\cos\left(\frac{\omega t}{A}\right)\right]$
$y_A(t) = \frac{v_0}{\omega} \sin\left(\frac{\omega t}{A}\right)$
$z_A(t) = 0$
$\omega = \frac{qB}{m}$
$v_0 = \sqrt{\frac{2q\Delta V}{m}}$

with y being the direction of the beam, z the position along the direction of B, the x axis perpendicular to both, and A the atomic mass. My professor didn't tell me what was wrong, but implied that it was all wrong, and I have no idea where. I don't want to make wrong assumptions that may bring me even further from the right solution. What's wrong in my approach, and what is the right solution/way to solve this problem? A: It looks like you were on the right track but weren't careful to label the velocities and frequencies with the associated particle species. $v_0$ and $\omega$ have different values for each beam. Additionally, the atomic mass $A$ shouldn't be in your formula for $x_A(t)$ because it's already in $\omega$. These may be the problems your professor saw. Presumably the infinite magnetic field only occupies the $y>0$ region.
So the beams enter the field at the same point, follow two semicircles of different radii, and leave. You just need to find the distance between these exit points, and you've already calculated their paths $x_A(t), y_A(t)$. The distance is $$D = x_p(\pi/\omega_p) - x_d(\pi/\omega_d) = 2\left(\frac{v_p}{\omega_p} - \frac{v_d}{\omega_d}\right) = \frac{2(m_p v_p - m_d v_d)}{qB} $$ It's also possible your professor was expecting you to "just know" the gyroradius formula $R= |\frac{mv}{qB}|$ leading to $D = 2(R_p - R_d)$, and wasn't looking for more detailed answers.
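Plugging numbers into that last expression (a quick check with rough SI constants; classical kinematics is a fair approximation here, since the 10 MeV of kinetic energy is small against the proton's 938 MeV rest energy):

```python
import math

q = 1.602e-19      # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
m_d = 3.344e-27    # deuteron mass, kg (about twice m_p)
dV = 1e7           # accelerating potential, V
B = 2.0            # magnetic field, T

def gyroradius(m):
    v = math.sqrt(2 * q * dV / m)  # speed gained falling through dV
    return m * v / (q * B)         # R = m v / (q B)

D = 2 * (gyroradius(m_p) - gyroradius(m_d))
print(abs(D))  # roughly 0.19 m; the negative sign of D means the deuteron arc is wider
```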
Q: React .map built from 1 of 2 arrays So in short, I'm trying to use an array .map that can be called on one of two arrays depending on an if statement. A bit like so:

class App extends Component {
  state = {
    foo: [
      {number:"0"},
      {number:"1"},
      {number:"2"},
      {number:"3"}
    ],
    bar: [
      {number:"9"},
      {number:"8"},
      {number:"7"},
      {number:"6"}
    ],
    fooBar: "foo"
  }

  test = () => {
    if(this.state.fooBar === "foo") {
      return this.state.foo
    } else {
      return this.state.bar
    }
  }

  onChangeFooBar = () => {
    this.setState({fooBar: "bar"})
  }

  render(){
    return (
      <div>
        <Button onClick={this.onChangeFooBar}>Set to bar</Button>
        <p>Here are some lines of text</p>
        {this.test.map(list => <p> {list.text} </p>)}
      </div>
    )
  }
}

However, it throws an error saying it's not a function. What is it I'm not getting here? I'm almost certain my issue is in the .test

A: this.test is a function, not an array. Try to call it first:

{this.test().map(list => <p> {list.text} </p>)}
Q: Code is working fine in my system but coursera autograder is giving me unknown signal Task -- The goal in this code problem is to implement the binary search algorithm. Input Format -- The first line of the input contains an integer n and a sequence a0 < a1 < ... < an−1 of n pairwise distinct positive integers in increasing order. The next line contains an integer k and k positive integers b0,b1,...,bk−1. Constraints -- 1 ≤ n,k ≤ 10^4; 1 ≤ a[i] ≤ 10^9 for all 0 ≤ i < n; 1 ≤ b[j] ≤ 10^9 for all 0 ≤ j < k. Output Format -- For all i from 0 to k−1, output an index 0 ≤ j ≤ n−1 such that aj = bi, or −1 if there is no such index. I am using Code::Blocks with a C++11 compiler. I have already tried stress testing and got correct results on my system, but the Coursera autograder gives me "unknown signal 11". In some problems it gives "unknown signal 8". Can anyone tell me the possible reason behind this?

int binary_search(const vector<long long> &a, long long x) {
    size_t left = 0, right = (size_t)a.size()-1;
    size_t mid = 0;
    while(left<=right){
        mid = (left+right)/2;
        if(x < a[mid]){
            right = mid-1;
        } else if(x > a[mid]){
            left = mid+1;
        } else return mid;
    }
    return -1;
}

int main() {
    size_t n;
    std::cin >> n;
    vector<long long> a(n);
    for (size_t i = 0; i < a.size(); i++) {
        std::cin >> a[i];
    }
    size_t m;
    std::cin >> m;
    vector<long long> b(m);
    for (size_t i = 0; i < m; ++i) {
        std::cin >> b[i];
    }
    for (size_t i = 0; i < m; ++i) {
        //replace with the call to binary_search when implemented
        std::cout << binary_search(a, b[i]) << ' ';
    }
}

Actual result that I got in the autograder: Failed case #4/36: unknown signal 11 (Time used: 0.00/1.00, memory used: 40071168/536870912.)

A: If the vector has e.g. size 2, then you initialize left = 0, right = 1 and mid = 0. left <= right so you calculate mid = 0 and check if x < a[0]. If that happens, you now set right = -1. In unsigned arithmetic, that is a really large number.
Your loop continues because 0 <= really large number, you calculate mid = half of really large number and access the vector there. That's UB and gets your program killed. Switching to signed types means right = -1 is indeed smaller than left = 0 and terminates the loop.
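The size_t wraparound is easy to see in isolation. A sketch in Python, simulating the failing update right = mid - 1 under 64-bit unsigned arithmetic (every result reduced mod 2^64, which is what size_t does on a typical 64-bit platform):

```python
U64 = 2 ** 64  # size_t values wrap modulo 2**64

left, right = 0, 1           # searching a two-element vector
mid = (left + right) // 2    # mid = 0
right = (mid - 1) % U64      # 0 - 1 wraps to the largest size_t value

print(right)  # -> 18446744073709551615

# The guard left <= right still holds, so the C++ loop keeps running and
# indexes the vector at mid = half of this huge number: undefined behavior.
assert left <= right
```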
Q: C# Reflection - Find if a class has a defined destructor I am trying to find out, using reflection, whether a class has a destructor. I do see methods to get constructors in System.Reflection. Is there a way to find out if a class has defined a custom destructor in C#? A: The destructor method seems to be called Finalize(). All objects have this, so you want to check if it's explicitly defined on that object by trying to get it with the DeclaredOnly binding flag. It's also private and non-static, so you need the other two flags as well.

myObj.GetType().GetMethod("Finalize", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.DeclaredOnly)

This will return null if the object doesn't have a defined destructor.
Q: Source files not displaying in Visual Studio for System.Data.Sqlite I've downloaded the System.Data.Sqlite source code (version 1.0.0.102) and want to build it in Visual Studio 2013. For some reason all the C# projects show no files. I can see in each project file lines similar to:

<Import Project="$(MSBuildProjectDirectory)\Targets\System.Data.SQLite.Files.targets" />

which I guess should add the files into the project, but it doesn't seem to. The .targets file is found and contains a list of all the files. How can I get the files to appear so I can build the solution? Thanks, Oliver A: The only solution I could find was to manually merge the contents of each .targets file into the corresponding project file using a text editor. I found that the System.Data.Sqlite FAQ page (https://system.data.sqlite.org/index.html/doc/trunk/www/faq.wiki) mentions that, due to limitations in Visual Studio, the files won't show up but will still build correctly. I, however, wanted to actually see and step through the source code. It was a bit tedious to merge but seemed to work fine. Shame there's not a way for Visual Studio to handle it.
Q: PHP - Is there any way for this login to be bypassed (Without bruteforcing) <?php $post = $_POST["post"]; $pass = "1234"; if ($post == $pass) { echo "Success!"; }else{ echo "Failed."; } ?> I know all about preparing statements and hashing etc. I always have security in mind, but I was thinking to myself one day, how would someone actually break into something like this? I understand you can SQL inject something database driven, but this of course is not database driven. But I was wondering how someone could actually break into this really simple login without just bruteforcing it? Is there any kind of attack that could be done here to get into this login without any bruteforcing? Or would it be impossible otherwise. Note: I am of course excluding things such as putting a virus on the server, I mean within reason can a user with advanced technical knowledge somehow break into this? A: See timing attack. With PHP always use the password_verify() function or another timing attack safe string comparer such as hash_equals().
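The constant-time comparison the answer recommends is not PHP-specific. The same principle looks like this in Python's standard library, where `hmac.compare_digest` is the counterpart of PHP's `hash_equals` (an illustrative sketch, not the PHP API):

```python
import hmac

stored = "1234"  # the hardcoded password from the question

def check(candidate):
    # compare_digest examines the full inputs regardless of where the
    # first mismatch occurs, so response time leaks no prefix information,
    # unlike a naive == on some runtimes.
    return hmac.compare_digest(stored, candidate)

print(check("1234"), check("1235"))
```

A naive early-exit string comparison lets an attacker measure how many leading characters matched; the constant-time version removes that side channel.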
Q: Factory method only called once Following is my register code container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new InjectionParameter<IBar2>(CreateBar2(container))) Problem is CreateBar2(container) is only called once, at program startup; I need it called every time IFoo is resolved Another question, which one is best practice container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new InjectionParameter<IBar2>(CreateBar2(container))) or container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, CreateBar2(container)) A: First, you need to use a different LifetimeManager. TransientLifetimeManager will resolve a new instance every time. container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new InjectionParameter<IBar2>(CreateBar2(container)), new TransientLifetimeManager()) This means that every time IFoo is injected or resolved, the constructor will be called. However, it seems as if you're injecting a method call, which will be executed at registration - CreateBar2(container). It's the same thing as writing: var bar2 = CreateBar2(container); // Called once. container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new InjectionParameter<IBar2>(bar2)) I recommend abstracting this into a class instead and injecting that. This way you can control the calls to it as well. public interface ICreateBar2 { IBar CreateBar2(); } public class CreateBar2 : ICreateBar2 { private IUnityContainer _container; public CreateBar2(IUnityContainer container) { _container = container; } public IBar CreateBar2() { // Do stuff. return CreateBar2(_container); // Or what you need to do?
} } And change your registration to container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new ResolvedParameter<ICreateBar2>), new TransientLifetimeManager()) container.RegisterType<ICreateBar2, CreateBar2>(new TransientLifetimeManager()); Or possibly RegisterInstance, if it better suits your needs? Remember to change the constructor of Foo to accept ICreateBar2 instead. The best thing with this approach is that you don't need the InjectionConstructor anymore, since all parameters can be resolved by Unity. container.RegisterType<IFoo, Foo>(new TransientLifetimeManager()); If you REALLY need to keep the CreateBar2()-method in the current scope, you can inject a Func which actually returns the same value as CreateBar2(). I do not know the complete signature of CreateBar2(), but you can do something like this: container.RegisterType<IFoo, Foo>( new InjectionConstructor(typeof (IEnumerable<int>), new ResolvedParameter<IBar1>, new InjectionParameter<Func<IBar2>>( new Func<IBar2>(() => CreateBar2(container))))); But now you need to inject Func<IBar2> into the Foo constructor. This will cause it to execute whenever you use it in the constructor. public class Foo : IFoo { IBar1 _bar1; IBar2 _bar2; public Foo(IBar1 bar1, Func<IBar2> bar2Func) { _bar1 = bar1; _bar2 = bar2Func(); } }
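The underlying pitfall, a value computed once at registration versus a factory invoked on every resolve, is language-independent. A minimal Python sketch of the difference (no Unity involved; all names are illustrative):

```python
calls = {"count": 0}

def create_bar2():
    """Stand-in for the CreateBar2(container) factory method."""
    calls["count"] += 1
    return object()

# "InjectionParameter" style: the factory runs once, at registration time,
# and every resolve hands back that same cached value.
registered_value = create_bar2()
resolve_value = lambda: registered_value

# "Func<IBar2>" style: the factory itself is registered,
# so it runs again on every resolve.
resolve_factory = lambda: create_bar2()

a, b = resolve_value(), resolve_value()
c, d = resolve_factory(), resolve_factory()
print(a is b, c is d)
```

The first pair is the same object (one factory call total); the second pair is two distinct objects, which is the per-resolve behaviour the question is after.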
Q: How to use a loop to execute a stored procedure in Cosmos-DB/Document-DB? I have JSON like { "id": "58d99ca3231f13b9ecbbbca4", "50records": [ { "aomsLineNbr": 1, "licenses": [ { "productKey": "84fc2cde-9735-4cea-b97a-3cd627d3d0a5", "aid": "someAid" } ] } ] } I want to fetch a record on the basis of aid. 50records can have multiple objects and licenses can also have multiple objects. I am constructing the query as "SELECT * FROM orders o WHERE o['50records'][0].licenses[0].aid='someAid'" how can I loop over those 50records and licenses to search for aid in all available objects? Below is my stored procedure: function getOrdersByAidCollection(aid){ var context = getContext(); var collection = context.getCollection(); var link = collection.getSelfLink(); var response = context.getResponse(); var query = "SELECT * FROM orders o WHERE o['50records'][0].licenses[0].aid='"+aid+"'"; var isAccepted = collection.queryDocuments(collection.getSelfLink(),query, function (err, feed, options) { if (err) { return errorResponse(400, err.message); } if (!feed || !feed.length){ return errorResponse(400, "no orders doc found"); }else { getContext().getResponse().setBody(JSON.stringify(feed)); } }); if (!isAccepted){ return errorResponse(400, "The query was not accepted by the server."); } } Where and how do I need to put a loop? Any help will be appreciated! Thanks A: Why do you need a loop? This looks like a query question. Can you try a query like this: SELECT VALUE r FROM orders c JOIN r in c["50records"] JOIN li in r.licenses WHERE li.aid = "someAid" Thanks!
Q: What are the benefits of stewing on the hob over cooking in the oven? When slow-cooking something like shin of beef in a gravy, what is the difference between cooking it on the hob and cooking in the oven? I tend to use the oven for slow-cooking but a lot of recipes tell you to stew on the hob. Is there any particular reason for this? Thanks A: The most obvious thing is it keeps your oven free; a range has only one oven, but four or so burners. It's also often easier to check on it, add ingredients, etc. stove-top. Stove-top also lets you quickly turn the heat up or down. Stirring is easy stove-top, more annoying in the oven. For what you're doing, the biggest difference is going to be where the heat comes from. Stove-top, of course, the heat comes almost entirely from the bottom of the pot; the lid, for example, can remain fairly cool, meaning evaporating moisture can condense on it and drip back in. In the oven, the heat surrounds the pot, so the lid will be just as hot—or hotter, even—than the base of the pot. To take a recipe designed for stovetop, you'd perform all the browning steps stovetop (of course). Maybe add a little more liquid. Then bring it to a simmer stovetop, and finally transfer to the oven. Putting a piece of aluminum foil over the pot (under the lid) can help keep moisture in.
Q: Collapsible Text Menu I'm making a website for my web dev class, and I had an idea to make the nav with single letters representing the menu items, but when you hover over the letter, they expand into full word(s). Example: If a menu item is shown as "[A]", when you hover over the letter, it expands into "[About]". Is this something that can be done in HTML5/CSS, or do I need JavaScript, jQuery, etc? A: You can achieve this effect with CSS. This code can probably be optimized a lot, but it's just to give you a rough idea of how your desired effect could look. HTML <nav> <a href="">About</a> <a href="">Test</a> <a href="">About</a> <a href="">Test</a> </nav> CSS body { margin: 0; } nav { height: 80px; align-items: center; background: #eee; display: flex; justify-content: space-around; } a { color: rgba(0, 0, 0, 0.8); font-size: 0; transition: 0.3s; text-decoration: none; } a:first-letter { font-size: initial; } a:hover { font-size: initial; }
Q: Why is my XSLT transformation outputting dates? XML is a SOAP response Does anyone know why an XSLT transformation would add something to the output (HTML) that's not in the XSL template? Check my example below; I'm trying to get rid of the dates (see output). xsl template <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:php="http://php.net/xsl" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:a="http://schemas.datacontract.org/2004/Common.Akol" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <!-- Edited by Adrian --> <xsl:output method="html" encoding="UTF-8" indent="yes"/> <xsl:template match="GetDriverDataResult"> <hr /> trans key: <xsl:value-of select="a:driverInformationResponse/a:TransKey" /> trans key 2: <xsl:value-of select="a:driverInformationResponse/a:TransKey" /> </xsl:template> </xsl:stylesheet> XML <?xml version="1.0"?> <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" s:mustUnderstand="1"> <u:Timestamp u:Id="_0"> <u:Created>2014-08-27T00:29:39.204Z</u:Created> <u:Expires>2014-08-27T00:34:39.204Z</u:Expires> </u:Timestamp> </o:Security> </s:Header> <s:Body> <GetDriverDataResponse> <GetDriverDataResult xmlns:a="http://schemas.datacontract.org/2004/Common.Akol" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <a:driverInformationResponse> <a:TransKey>C00029540</a:TransKey> <a:Status>Success</a:Status> </a:driverInformationResponse> </GetDriverDataResult> </GetDriverDataResponse> </s:Body> </s:Envelope> OUTPUT 2014-08-27T00:29:39.204Z 2014-08-27T00:34:39.204Z trans key: C00029540 trans key 2: C00029540 A: By default, text nodes are copied to the output XML document by the built-in template rules.
To suppress this behavior, add this template to match text nodes and do nothing with them: <xsl:template match="text()"/>
Q: How do we construct the LCP-LR array from the LCP array? To find the number of occurrences of a given string P (length m) in a text T (length N), we must use binary search against the suffix array of T. The issue with using standard binary search (without the LCP information) is that in each of the O(log N) comparisons you need to make, you compare P to the current entry of the suffix array, which means a full string comparison of up to m characters. So the complexity is O(m*log N). The LCP-LR array helps improve this to O(m+log N). How do we precompute the LCP-LR array from the LCP array? And how does LCP-LR help in finding the number of occurrences of a pattern? Please explain the algorithm with an example. Thank you A: // note that arrSize is O(n) // int arrSize = 2 * 2 ^ (log(N) + 1) + 1; // start from 1 // LCP = new int[N]; // fill the LCP... // LCP_LR = new int[arrSize]; // memset(LCP_LR, maxValueOfInteger, arrSize); // // init: buildLCP_LR(1, 1, N); // LCP_LR[1] == [1..N] // LCP_LR[2] == [1..N/2] // LCP_LR[3] == [N/2+1 .. N] // rangeI = LCP_LR[i] // rangeILeft = LCP_LR[2 * i] // rangeIRight = LCP_LR[2 * i + 1] // ..etc void buildLCP_LR(int index, int low, int high) { if(low == high) { LCP_LR[index] = LCP[low]; return; } int mid = (low + high) / 2; buildLCP_LR(2*index, low, mid); buildLCP_LR(2*index+1, mid + 1, high); LCP_LR[index] = min(LCP_LR[2*index], LCP_LR[2*index + 1]); } Reference: https://stackoverflow.com/a/28385677/1428052
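The pseudocode above translates almost line-for-line into runnable Python: a 1-indexed segment tree whose node i holds the minimum LCP value over its range (a sketch; 4*n nodes is a safe upper bound for the tree size):

```python
def build_lcp_lr(lcp):
    """Build a 1-indexed segment tree of range minima over lcp,
    i.e. the LCP-LR array from the answer above."""
    n = len(lcp)
    lcp_lr = [float('inf')] * (4 * n)  # O(n) nodes is enough

    def build(index, low, high):
        if low == high:
            lcp_lr[index] = lcp[low]
            return
        mid = (low + high) // 2
        build(2 * index, low, mid)          # left half  [low..mid]
        build(2 * index + 1, mid + 1, high) # right half [mid+1..high]
        lcp_lr[index] = min(lcp_lr[2 * index], lcp_lr[2 * index + 1])

    build(1, 0, n - 1)
    return lcp_lr

tree = build_lcp_lr([1, 3, 0, 2])
print(tree[1])  # minimum LCP over the whole range
```

During the binary search, node 1 covers the full suffix-array range, node 2 its left half, node 3 its right half, and so on, so the minimum LCP between the current midpoint and either boundary is available in O(1).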
Q: Bootstrap Modal Form: Labels don't right-align/getting widths correct for in-line sections I have a bootstrap 3 modal form that uses mixed form-horizontal and form-inline classes. I've fiddled around with the column widths but can't seem to get the form just right. There are two problems that I can't seem to get resolved: The labels don't right align. The State field is not the correct width. My Html: <div class="row"> <div class="col-md-7"> <h2>Agents</h2> </div> <div class="col-md-2"> <a id="addAgentButton" href="#" class="btn btn-primary">Add Agent</a> </div> </div> <div id="agentModal" data-bind="with:detailAgent" class="modal fade"> <div class="modal-dialog"> <div class="modal-content"> <div class="modal-body col-md-12"> <form role="form" data-bind="submit: save"> <div class="form-group col-md-12"> <label class="col-md-2 control-label" for="txtAgentName">Name: </label> <div class="col-md-6"><input class="form-control input-sm" id="txtAgentName" type="text" data-bind="value:Name" /></div> </div> <div class="form-group col-md-12"> <label class="col-md-2 control-label" for="txtAgentAddressLine1">Address 1: </label> <div class="col-md-6"> <input class="form-control input-sm" id="txtAgentAddressLine1" type="text" data-bind="value:Address1" /> </div> </div> <div class="form-group col-md-12 form-inline"> <label class="col-md-2 control-label" for="txtAgentCity">City: </label> <div class="col-md-2"> <input class="form-control input-sm" id="txtAgentCity" type="text" data-bind="value:City" /> </div> <label class="col-md-2 control-label" for="txtAgentState">State: </label> <div class="col-md-2"> <select class="form-control input-sm" id="txtAgentState" data-bind="options: $root.states, value: State, optionsCaption:'Choose a state...'"></select> </div> <label class="col-md-1 control-label" for="txtAgentZip">Zip: </label> <div class="col-md-2"> <input type="tel" class="form-control input-sm" id="txtAgentZip" data-bind="value:Zip" /> </div> </div> </form> </div> <div 
class="modal-footer"> <button type="button" class="btn btn-default" data-dismiss="modal">Close</button> <button type="button" class="btn btn-primary">Save changes</button> </div> </div><!-- /.modal-content --> </div><!-- /.modal-dialog --> </div><!-- /.modal --> My Javascript to show the modal: $("#addAgentButton").on("click", function() { $("#agentModal").modal("show"); }); My CSS: .modal-dialog { width: 800px;/* your width */ } #addAgentButton { margin-top: 15px; } And here's the jsfiddle. A: Ok, I hope I understood you correctly. Take a look at this fiddle I had to change your inline-form html part a bit: <div class="form-group col-md-12"> <label class="control-label col-md-2" for="txtAgentCity">City: </label> <div class="col-md-2"> <input class="form-control input-sm" id="txtAgentCity" type="text" data-bind="value:City" /> </div> <label class="col-md-2 control-label text-right" for="txtAgentState">State: </label> <div class="col-md-2"> <select class="form-control input-sm" id="txtAgentState" data-bind="options: $root.states, value: State, optionsCaption:'Choose a state...'"></select> </div> <label class="col-md-2 control-lntZip text-right">Zip: </label> <div class="col-md-2"> <input type="tel" class="form-control input-sm" id="txtAgentZip" data-bind="value:Zip" /> </div> </div> Just add the class .text-right to the label you want to be aligned right.
Q: Vacuously prove that the empty set is a subset of any set $A$ My proof is as follows: For any given set $A$, $A\cap \emptyset=\emptyset\implies \emptyset\subseteq A$ Is this a valid proof? Thanks. A: That makes sense, although I don't think that's "vacuous." What they want you to see is the following: In order to show $\emptyset\subseteq A$, you must show that if $x\in\emptyset$, then $x\in A$. But $\emptyset$ has no elements, so the statement is vacuously true.
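As a sanity check (not a proof), Python's built-in set type implements exactly this definition of subset, so the claim can be spot-checked computationally:

```python
# issubset asks: is every element of the left set in the right set?
# For the empty set there are no elements to check, so it holds vacuously.
empty = set()

for a in [set(), {1, 2, 3}, {"x"}, set(range(100))]:
    assert empty.issubset(a)

print("empty set is a subset of every tested set")
```

This mirrors the answer's point: the implication "if x is in the empty set, then x is in A" is never tested against an actual element, so it is true for every A.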
Q: How can I check my Android app's network consumption? I need to check my Android app's Internet consumption. In my app, numerous web service APIs are called. I want to know how much data my app consumes in kB/MB in one full go. How can I check that? Is there any tool to check that? A: Android Studio 2.0 introduced a new Network section in Android Monitor which can help you with your problem. Tx == Transmit Bytes Rx == Receive Bytes A: There are three ways... You can view it on the Device/Emulator. Go to Settings -> Data usage, and find your application in the list In Eclipse, select DDMS (perspective) -> Select your package from Devices (left side) -> Click on Network Statistics tab -> Click Start As already answered, in Android Studio, go to Android Monitor (bottom tab) -> Network (tab) -> look for Tx (Transmit Data) / Rx (Receive Data) A: For view purposes, you can check it in the monitor as mentioned by MD. To store it, you can do that programmatically int UID = android.os.Process.myUid(); rxBytes += getUidRxBytes(UID); txBytes += getUidTxBytes(UID); /** * Read UID Rx Bytes * * @param uid * @return rxBytes */ public Long getUidRxBytes(int uid) { BufferedReader reader; Long rxBytes = 0L; try { reader = new BufferedReader(new FileReader("/proc/uid_stat/" + uid + "/tcp_rcv")); rxBytes = Long.parseLong(reader.readLine()); reader.close(); } catch (FileNotFoundException e) { rxBytes = TrafficStats.getUidRxBytes(uid); //e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return rxBytes; } /** * Read UID Tx Bytes * * @param uid * @return txBytes */ public Long getUidTxBytes(int uid) { BufferedReader reader; Long txBytes = 0L; try { reader = new BufferedReader(new FileReader("/proc/uid_stat/" + uid + "/tcp_snd")); txBytes = Long.parseLong(reader.readLine()); reader.close(); } catch (FileNotFoundException e) { txBytes = TrafficStats.getUidTxBytes(uid); //e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return txBytes;
} }
Q: Perforce - How to get Label details using a Label name with a wildcard I am new to Perforce. I need a 'p4' client command to get Label(s) by providing a Label name. However, I don't know the complete Label name, so I want to use a wildcard in the Label name. In P4V client I do it this way: A: If you look in P4V's log pane you can see what command(s) it's running to get its data. For labels it's probably running something like: p4 labels -e "cobrands.razor*build_01"
Q: How does importing this submodule overwrite a value in the parent __init__.py module? I have a package with three files: testimport ├── __init__.py ├── logging.py └── util.py __init__.py contains: from __future__ import ( absolute_import, division, print_function, unicode_literals ) import logging # imports standard library module (because absolute_import is activated) _logging_file = logging.__file__ from .util import testlog if _logging_file != logging.__file__: # at this point, `logging` no longer points to the standard # library module, but the local logging module instead(!) raise AssertionError('`logging` overwritten; {!r} is not equal to {!r}'.format(_logging_file, logging.__file__)) LOGGER = logging.getLogger(__name__) logging.py contains: import sys __all__ = () SILENT = -(sys.maxsize) - 1 util.py contains: from __future__ import ( absolute_import, division, print_function, unicode_literals ) import logging # imports standard library module (because absolute_import is activated) from .logging import SILENT # this is (perversely) where the importing module's `logging` gets overridden __all__ = ( 'testlog' ) _LOGGER = logging.getLogger(__name__) def testlog(log_lvl=SILENT): _LOGGER.log(log_lvl, 'Hello!') The AssertionError is raised when importing testimport: % python Python 2.7.10 (default, Sep 24 2015, 10:13:45) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import testimport <function testlog at 0x10e86e1b8> Traceback (most recent call last): File "<stdin>", line 1, in <module> File "testimport/__init__.py", line ..., in <module> raise AssertionError('`logging` overwritten; {!r} is not equal to {!r}'.format(_logging_file, logging.__file__)) AssertionError: `logging` overwritten; '/.../lib/python2.7/logging/__init__.pyc' is not equal to 'testimport/logging.pyc' Why on earth is that happening? Test repo is here. Travis builds are here. 
Update, when stepping through this with pdb, it appears that the offending instruction is from .logging import SILENT in util.py, but I have no idea why. This is an abbreviated session from the repo version: % echo 'import testimport' >|testme.py % python -m pdb testme.py (Pdb) s --Call-- > /.../testimport/testimport/__init__.py(1)<module>() -> from __future__ import ( (Pdb) b 12 Breakpoint 1 at /.../testimport/testimport/__init__.py:12 (Pdb) c > /.../testimport/testimport/__init__.py(12)<module>() -> from testimport.util import testlog (Pdb) s --Call-- > /.../testimport/testimport/util.py(1)<module>() -> from __future__ import ( (Pdb) b 5 Breakpoint 2 at /.../testimport/testimport/util.py:5 (Pdb) c > /.../testimport/testimport/util.py(5)<module>() -> from .logging import SILENT (Pdb) u > /.../testimport/testimport/__init__.py(12)<module>() -> from testimport.util import testlog (Pdb) p logging <module 'logging' from '/.../lib/python2.7/logging/__init__.pyc'> (Pdb) d > /.../testimport/testimport/util.py(5)<module>() -> from .logging import SILENT (Pdb) s --Call-- > /.../testimport/testimport/logging.py(1)<module>() -> from __future__ import ( (Pdb) b 6 Breakpoint 3 at /.../testimport/testimport/logging.py:6 (Pdb) c > /.../testimport/testimport/logging.py(6)<module>() -> SILENT = -(sys.maxsize) - 1 (Pdb) u > /.../testimport/testimport/util.py(5)<module>() -> from .logging import SILENT (Pdb) u > /.../testimport/testimport/__init__.py(12)<module>() -> from testimport.util import testlog (Pdb) p logging <module 'logging' from '/.../lib/python2.7/logging/__init__.pyc'> (Pdb) s > /.../testimport/testimport/util.py(7)<module>() -> 'testlog', (Pdb) u > /.../testimport/testimport/__init__.py(12)<module>() -> from testimport.util import testlog (Pdb) p logging <module 'testimport.logging' from 'testimport/logging.pyc'> A: After some more research, I understand the above to be the correct behavior. I am embarrassed to say that I did not see this before. 
It makes total sense once one understands what's happening. From the docs: Submodules When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a binding is placed in the parent module’s namespace to the submodule object. ... Given Python’s familiar name binding rules this might seem surprising, but it’s actually a fundamental feature of the import system. This explanation seems to have been absent before Python 3.4, but affects all versions of Python. See Python issue #24029. In the example above, testimport.logging necessarily refers to the local module. Importing it (from anywhere) installs it in testimport (as should be expected). It should come as no surprise that this necessarily replaces any existing logging member of testimport.
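The documented behaviour can be reproduced in a few lines by building a throwaway package at runtime, mirroring the testimport/logging layout from the question (the package name demo_pkg is made up for this sketch):

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package "demo_pkg" with a local submodule "logging".
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "demo_pkg")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("import logging\n")  # absolute import: the stdlib module
with open(os.path.join(pkg_dir, "logging.py"), "w") as f:
    f.write("SILENT = -1\n")

sys.path.insert(0, root)
importlib.invalidate_caches()
pkg = importlib.import_module("demo_pkg")

before = pkg.logging.__name__           # stdlib module, bound by __init__.py
importlib.import_module("demo_pkg.logging")  # loading the submodule rebinds pkg.logging
after = pkg.logging.__name__

print(before, after)
```

Right after the package import, `pkg.logging` is the stdlib module; the moment the submodule is loaded by any mechanism, the import system rebinds the `logging` attribute on the parent package to the submodule, which is exactly the surprise described above.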
Q: Drop-down Selection Control - Windows 8 Metro - XAML I want the dropdowns like in the pictures below: I don't know how to get them. I suppose that these are some kind of comboboxes but I am not sure. Can anyone help me and provide the XAML code? Thanks. A: I suppose you are looking for a Combo box: Windows 8 store controls list (MSDN). To use: <ComboBox x:Name="comboBox1" SelectionChanged="ComboBox_SelectionChanged" Width="100"> <x:String>Item 1</x:String> <x:String>Item 2</x:String> <x:String>Item 3</x:String> </ComboBox>
Q: SharePoint - customized contact form I am working on WSS 3.0. I need to develop a custom contact form - the requirements are very specific, so I would like to code it. What options do I have? Could I use the Form Web Part? Thank you for your valuable help. A: It is probably best to create a custom web part, especially if you have some specific requirements. This will give you the option to develop it just the way you want it
Q: Sockets, their attributes and the SO_REUSEADDR option I have a few basic questions: 1. A socket is represented by a protocol, a local ip, local port, remote ip and remote port. Suppose such a connection exists between a client and a server. Now when I bind another client to the same local port and ip, it got bound (I used SO_REUSEADDR) but the connect operation by the second client to the same remote ip and port failed. So, is there no way a third process can share the same socket? 2. When we call listen() on a socket bound to a local port and ip, it listens for connections. When a client connects, it creates a socket (say A). It completes the 3-way handshake and then starts a different socket (say B) and also deletes socket A (Source). The new client is taken care of by the new socket B. So, what kind of a socket represents a listening socket, i.e. what is its remote ip and port, and is socket A different from that socket, or does just the addition of a remote ip and port to the listening socket form A? 3. I read that SO_REUSEADDR can establish a listening socket on a port if there is no socket listening on that port and ip and all sockets on that port and ip have the SO_REUSEADDR option set. But then I also came across a text which said that if a client is bound to a port and ip, another client can't bind to it (even if SO_REUSEADDR is used) unless the first client successfully calls connect(). There was no listening socket (it is a client, so there is no call to connect()) on that port and ip in this example. So, why isn't another client allowed? Thanks in advance. A: Correct: there is no way to create two different sockets with the same protocol, local port, local address, remote port, and remote address. There would be nothing to tell which packets belonged to which socket! A listening socket does not have a remote address and remote port. That's OK, because there are no packets on the wire associated with this socket (yet). Actually, all sockets start out with neither a local nor remote address or port.
These properties are only assigned later when bind() (for local) and connect()/accept() (for remote) are called. Until you call connect() or listen() on a socket, there isn't any different between a server (listening) or client socket. They're the same thing. So it would be more correct here to say that no two sockets are allowed to share the same protocol, local address, and local port if neither has a remote address or port. This isn't a problem in practice though, because you usually don't call bind() on a client socket, which means there is an implicit bind() to an ephemeral port at connect() time. These typical client sockets can't conflict with a listening socket because they go from having no addresses associated with them to having both local and remote addresses associated with them, skipping the state where they have only a local one.
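The local-address side of these uniqueness rules is easy to observe with two plain TCP sockets. This Python sketch shows a second bind to an occupied (address, port) pair failing when SO_REUSEADDR is not set (the exact errno can vary slightly by OS):

```python
import socket

# Bind one listening socket to an OS-assigned port on localhost.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
a.listen(1)
port = a.getsockname()[1]

# A second socket without SO_REUSEADDR cannot claim the same local
# (address, port) while the first is alive.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    conflict = False
except OSError:  # typically EADDRINUSE
    conflict = True
finally:
    b.close()
    a.close()

print(conflict)
```

Note that the second socket fails at bind() time, before any remote address exists; the full five-tuple only matters once both ends are assigned.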
Q: Plotting numpy array using Seaborn I'm using python 2.7. I know this will be very basic, however I'm really confused and I would like to have a better understanding of seaborn. I have two numpy arrays X and y and I'd like to use Seaborn to plot them. Here is my X numpy array: [[ 1.82716998 -1.75449225] [ 0.09258069 0.16245259] [ 1.09240926 0.08617436]] And here is the y numpy array: [ 1. -1. 1. ] How can I successfully plot my data points taking into account the class label from the y array? Thank you, A: You can use seaborn functions to plot graphs. Do dir(sns) to see all the plots. Here is your output in sns.scatterplot. You can check the api docs here or example code with plots here import seaborn as sns import pandas as pd df = pd.DataFrame([[ 1.82716998, -1.75449225], [ 0.09258069, 0.16245259], [ 1.09240926, 0.08617436]], columns=["x", "y"]) df["val"] = pd.Series([1, -1, 1]).apply(lambda x: "red" if x==1 else "blue") sns.scatterplot(df["x"], df["y"], c=df["val"]).plot() Gives Is this the exact input output you wanted? You can do it with pyplot, just importing seaborn changes pyplot color and plot scheme import seaborn as sns import matplotlib.pyplot as plt fig, ax = plt.subplots() df = pd.DataFrame([[ 1.82716998, -1.75449225], [ 0.09258069, 0.16245259], [ 1.09240926, 0.08617436]], columns=["x", "y"]) df["val"] = pd.Series([1, -1, 1]).apply(lambda x: "red" if x==1 else "blue") ax.scatter(x=df["x"], y=df["y"], c=df["val"]) plt.plot() Here is a stackoverflow post of doing the same with sns.lmplot
Q: XSL select first line plain text I have an xml file with content type plain text. What I want to do is use the first line as a headline in an xsl transformation. How should it be formed? I've tried various terms as a delimiter, like , but no result. the input <contentSet> <inlineData contenttype="text/plain" > Et, sent luptat luptat, commy nim zzriureet vendreetue modo dolenis ex euisis nosto et lan ullandit lum doloreet vulla feugiam coreet, cons eleniam il ute facin veril et aliquis ad minis et lor sum del iriure dit la feugiamcommy nostrud min ulla </inlineData> </contentSet> and want to have output something like this <doc> <story> <headline> Et, sent luptat luptat, commy nim zzriureet vendreetue modo </headline> <text> Et, sent luptat luptat, commy nim zzriureet vendreetue modo dolenis ex euisis nosto et lan ullandit lum doloreet vulla feugiam coreet, cons eleniam il ute facin veril et aliquis ad minis et lor sum del iriure dit la feugiamcommy nostrud min ulla </text> </story> </doc> A: Well, technically the first line in your inlineData element is empty and only the second line contains text. Assuming that structure is the same for all input you want to process, you can use <xsl:template match="contentSet"> <story> <headline><xsl:value-of select="substring-before(substring-after(inlineData, '&#10;'), '&#10;')"/></headline> <text><xsl:value-of select="."/></text> </story> </xsl:template> I realize that is brittle, but string processing in XSLT/XPath 1.0 is weak; in XSLT 2.0 you could use <xsl:value-of select="tokenize(inlineData, '\n')[normalize-space()][1]"/>.
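For comparison, the XSLT 2.0 expression at the end (split on newlines, keep tokens with content, take the first) maps directly onto a couple of lines of Python (illustrative only, not part of the XSLT answer):

```python
def first_headline(text):
    # tokenize(., '\n')[normalize-space()][1] in XPath terms:
    # split on newlines, drop whitespace-only tokens, take the first.
    return next(line.strip() for line in text.split("\n") if line.strip())

sample = """
Et, sent luptat luptat, commy nim zzriureet vendreetue modo
dolenis ex euisis nosto et lan ullandit lum doloreet
"""
print(first_headline(sample))
```

Like the XSLT 2.0 version, this skips the leading empty line that trips up naive substring-before/substring-after handling.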
Q: How can I specify the ROWGUIDCOL property on a Guid type column in code first or with ColumnBuilder? Consider this migration code: CreateTable( "dbo.Document", c => new { Id = c.Int(nullable: false, identity: true), Doc = c.String(), RowGuid = c.Guid(nullable: false), Person_Id = c.Int(), }) .PrimaryKey(t => t.Id) .ForeignKey("dbo.Person", t => t.Person_Id) .Index(t => t.Person_Id); I want the RowGuid to be ROWGUIDCOL, and be defined like this (SQL): [RowGuid] [UNIQUEIDENTIFIER] not null RowGuidCol Unique default newid() What is the equivalent code in EntityFramework/CodeFirst? What is the solution? Thanks. A: It does not appear that the ROWGUIDCOL property can be set directly via Entity Framework, but it might be possible to inject the property into the generated SQL by making "creative" ;-) use of the storeType parameter (assuming storeType truly allows you to override the default datatype). Starting with the code from the original question, try something like the following: CreateTable( "dbo.Document", c => new { RowGuid = c.Guid(nullable: false, identity: true, defaultValueSql: "newid()", storeType: "UNIQUEIDENTIFIER ROWGUIDCOL"), Person_Id = c.Int() }) .Index(t => t.RowGuid, true); Unfortunately, I do not have a way to test this, but given that the following SQL works, I think it is worth a shot: CREATE TABLE dbo.Test1 ( Col1 INT NOT NULL, Col2 UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID() ) The "UNIQUE" requirement is accomplished via a Unique Index created by the second parameter being "true" in the Index() method. Please note that there might be some issue using "identity: true" in the Guid() method if the table already has a column marked with IDENTITY. I found this related question which addresses that situation: Entity Framework Code First Using Guid as Identity with another Identity Column
Q: What are some practices for assigning unique identifiers to domain objects? Possible Duplicate: How to choose my primary key? I have a User class with the following attributes: Username (unique) Password Email (unique) First Name Last Name Age Username and Email uniquely identify an instance of User. In my database, should these be used as a primary key or should I generate a different unique identifier for each instance. As far as I know, on SELECT's, comparing strings is slower than comparing numbers. Shouldn't I then use a self-assigned int, long, double, etc. or use the AUTO_INCREMENT in a ID column for Users? What about using UUIDs (again the issue with long strings)? My question also applies to every other domain class I might add later. A: The primary key should be separate from any other field, should have no business value and should be a unique integer. Example: A company creates a 'user' table with social security numbers. These should be unique to each person. However, at some point a ssn is mis-keyed when being entered by a user. The mis-keyed number doesn't already exists and the record is saved. Some time later another user tries to enter their ssn, and it is actually the same number as was mis-keyed earlier. The user may not be able to actually save their profile and continue until the company itself finds and resolves this issue. The company decides to relax the 'unique' constraint on ssn at this point, given this issue. If the ssn is NOT the primary key this will be relatively easy to do - just drop the unique constraint. If the primary key is the ssn this will be harder. Define the field as auto_increment and mysql will handle it: CREATE TABLE Persons ( person_id int NOT NULL AUTO_INCREMENT, LastName varchar(255) NOT NULL, ...
Q: Mocking @UriInfo using EasyMock or PowerMock I have a REST service class in which uriInfo object is automatically injected through @UriInfo annotation. Now, while writing JUnit for this class, I want to get a mock object created for this UriInfo object without introducing any new setter methods into the tested class just for the sake of setting the mocked UriInfo into it. Kindly let me know if you have any suggestions. We are using EasyMock and PowerMock. A: You can use Powermock's Whitebox to modify the internal state of an object. One of the simplest invocations is: Whitebox.setInternalState(tested, myMock);
Q: Using rsync to back up /home folder with same permissions, recursive I would like to request information on using rsync. I tried reading the manuals, but the examples are few and confusing for me. I do not need advanced features or live sync or remote sources or remote destinations. Everything is with ext4, just using my laptop's HDD and an external HDD over USB, on Ubuntu. My ultimate objective is to move the contents of my /home to an external drive, wipe my laptop, switch it to LVM, re-install Ubuntu, update, install the same programs I had before, then boot a live USB and copy the contents of my backed up /home (now on my external HDD) onto the /home of the new installation (installed with the same username and UID as last time). I would like to keep all permissions and ownership the same. I tried copy-pasting everything onto the external drive, but I got error messages. I know that doing a copy-paste from the GUI on a live USB will change everything to root ownership (which would be double plus not good). I see all of these flags in the man page ... and all I understand is

rsync /home/jonathan /media/jonathan/external-drive/home/jonathan

from

rsync /source/file/path /destination/file/path

I already use this hard drive to back up most folders and big files like Movies, etc. Is there a way to copy-paste what I want, while saving permissions, and only adding the hitherto ignored .config files and only changing changed files? I would like to be able to do this manually about once a week to back up settings AND my personal files in case I ever need to reinstall in an emergency or my hard drive fails.

A: Here is a quick rsync setup and what it does.

rsync -avz /home/jonathan /media/jonathan/external-drive/home/jonathan

This will recursively copy the files and preserve attributes, permissions, ownership etc. from /home/jonathan to the external folder. For safe keeping you could also do a tar to get everything together and then send one file over:

tar zcvf /media/jonathan/external-drive/home/jonathan/jonathansFiles.tgz /home/jonathan

then uncompress later:

tar zxvf jonathansFiles.tgz
Q: Can I store TFS 2017 password for multiple build definitions in an encrypted file In TFS 2015 we used Machine groups and weren't required to enter a Admin Login and Password for the "Powershell on Target Machines" task. We are now using tfs 2017 and do not want to define the Admin Login and Password in every build definition since we have numerous. How can you overcome this? I was thinking about using a txt file with a SecureString password and in the build definition read it in, decrypt it and assign it to the Build definition variable A: Use variable groups. Create a variable group for your shared secrets, then link the variable group to any build or release definitions that need access to the secrets. Going a step further, you could store the secrets in Azure KeyVault to provide a single source of truth for secrets. Storing secrets in source control using reversible encryption is just obfuscation, and from a security standpoint is only slightly better than storing it in plaintext.
Q: How to tell normal Python function from generator by looking at the AST? I need to detect whether an ast.FunctionDef in the Python 3 AST is a normal function definition or a generator definition. Do I need to traverse the body and look for ast.Yield-s, or is there a simpler way?

A: There's a sneaky way to do this if you compile the AST instance with compile. The code object has a couple of flags attached to it, one of them being 'GENERATOR', that you can use to distinguish these. Of course this depends on certain compilation flags, so it isn't really portable across CPython versions or implementations. For example, with a non-generator function:

func = """
def spam_func():
    print("spam")
"""

# Create the AST instance for it
m = ast.parse(func)

# get the function code
# co_consts[0] is used because `m` is
# compiled as a module and we want the
# function object
fc = compile(m, '', 'exec').co_consts[0]

# get a string of the flags and
# check for membership
from dis import pretty_flags
'GENERATOR' in pretty_flags(fc.co_flags)  # False

Similarly, for a spam_gen generator, you'd get:

gen = """
def spam_gen():
    yield "spammy"
"""

m = ast.parse(gen)
gc = compile(m, '', 'exec').co_consts[0]
'GENERATOR' in pretty_flags(gc.co_flags)  # True

This might be more sneaky than what you need though; traversing the AST is another viable option that's probably more understandable and portable. If you have a function object instead of an AST, you can always perform the same check by using func.__code__.co_flags:

def spam_gen():
    yield "spammy"

from dis import pretty_flags
print(pretty_flags(spam_gen.__code__.co_flags))
# 'OPTIMIZED, NEWLOCALS, GENERATOR, NOFREE'

A: Traversing the AST would be harder than it seems -- using the compiler is probably the way to go. Here's an example of why looking for a Yield node isn't as simple as it sounds.

>>> s1 = 'def f():\n yield'
>>> any(isinstance(node, ast.Yield) for node in ast.walk(ast.parse(s1)))
True
>>> dis.pretty_flags(compile(s1, '', 'exec').co_consts[0].co_flags)
'OPTIMIZED, NEWLOCALS, GENERATOR, NOFREE'
>>> s2 = 'def f():\n def g():\n  yield'
>>> any(isinstance(node, ast.Yield) for node in ast.walk(ast.parse(s2)))
True
>>> dis.pretty_flags(compile(s2, '', 'exec').co_consts[0].co_flags)
'OPTIMIZED, NEWLOCALS, NOFREE'

The AST approach would probably require using NodeVisitor to exclude functions and lambda bodies.
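The NodeVisitor approach alluded to above can be sketched as follows. This is a hypothetical helper (not part of the ast module): it walks a FunctionDef's body looking for Yield/YieldFrom but refuses to descend into nested function and lambda definitions, so their yields don't count against the outer function.

```python
import ast

def is_generator_def(fn_def):
    """Return True if fn_def (an ast.FunctionDef) is itself a generator,
    ignoring yields that belong to nested scopes."""
    class Finder(ast.NodeVisitor):
        found = False
        def visit_Yield(self, node):
            self.found = True
        def visit_YieldFrom(self, node):
            self.found = True
        # stop at nested scopes: their yields belong to them, not to us
        def visit_FunctionDef(self, node):
            pass
        def visit_AsyncFunctionDef(self, node):
            pass
        def visit_Lambda(self, node):
            pass

    finder = Finder()
    # visit the body statements, not the FunctionDef node itself, so the
    # outer def is not swallowed by its own visit_FunctionDef override
    for stmt in fn_def.body:
        finder.visit(stmt)
    return finder.found

outer = ast.parse("def f():\n    def g():\n        yield\n").body[0]
print(is_generator_def(outer))  # False: only the nested g() is a generator
```

Unlike the co_flags trick, this stays within the documented ast API and does not depend on compiler internals.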
Q: Performing functions on multiindex in groupby I have a dataframe with a MultiIndex. Here's a minimal working example:

df = pd.DataFrame({'note': [1,1,1,2,2,2,2],
                   't': [0.5,0.7,1.2,0.3,0.9,1.3,1.7],
                   'val': [1,-1,0,0,1,0,0]})
dfs = df.set_index(['note','t'])

which gives

>>> dfs
          val
note t
1    0.5    1
     0.7   -1
     1.2    0
2    0.3    0
     0.9    1
     1.3    0
     1.7    0

what I want is to get (a) the minimum value and (b) the first value in the t index per group:

note  min  first
1     0.5  0.5
2     0.3  0.3

I could do a groupby on the original dataframe df where note and t are columns and not indices:

df.groupby('note').agg({'t': [min, lambda x: list(x)[0]]})

but I'd rather not do a reset_index() followed by another set_index() to restore the dataframe to the MultiIndex version. How do I do this? The agg function only works on columns and not the indices.

A: It is possible, but not very clean:

df = (dfs.index.get_level_values(1).to_series()
         .groupby(dfs.index.get_level_values(0))
         .agg(['min', 'first']))
print (df)

      min  first
note
1     0.5    0.5
2     0.3    0.3

df = dfs.reset_index('t').groupby(level=0)['t'].agg(['min', 'first'])
print (df)

      min  first
note
1     0.5    0.5
2     0.3    0.3
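Setting pandas aside, the per-group min/first aggregation itself can be sketched with just the standard library; the data below mirrors the question's example, grouped on 'note' over the 't' values:

```python
# Per-group aggregation with itertools.groupby: the rows must already be
# sorted by the group key, as they are here.
from itertools import groupby
from operator import itemgetter

rows = [(1, 0.5), (1, 0.7), (1, 1.2), (2, 0.3), (2, 0.9), (2, 1.3), (2, 1.7)]

result = {}
for note, grp in groupby(rows, key=itemgetter(0)):
    ts = [t for _, t in grp]
    result[note] = {"min": min(ts), "first": ts[0]}

print(result)
# {1: {'min': 0.5, 'first': 0.5}, 2: {'min': 0.3, 'first': 0.3}}
```

This reproduces the answer's output table and makes explicit what 'min' and 'first' mean for each group.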
Q: Does anybody know software for customising audio themes? I am interested in simple managing of audio themes in Ubuntu (10.10). Does anybody know a software for customising/managing/changing Ubuntu audio themes? Thanks for every suggestion. Regards, Vincenzo

A: Sound Theme Manager Sound Theme Manager is a program for managing freedesktop.org sound themes that can:

- Set the desktop sound theme.
- Create sound theme packages from existing themes.
- Install sound theme packages.
- Remove sound themes.
- Play a preview sound for a sound theme.
- Browse through the sounds in a theme and play them.
- Create new sound themes.
- Edit sound theme metadata.
- Edit the sound theme sounds.

It currently only supports a (large) subset of the sound theme specification but will eventually be fully compliant. Full Disclosure: I am the developer.
Q: 16 bit value in 2's complement, confusing I am working on reading data from a gyroscope and I am getting the data back just fine, but the values are not as expected. I am thinking this is due to an error in my coding. After searching around for a solution I came up with a post HERE that states:

Make sure you are reading the output registers correctly, the data is a 16-bit value in 2's complement (i.e. the MSB is the sign bit, then 15 bits for the value)

This is very confusing to me and I am unsure if my coding is reading the value as it's supposed to. I am using the wiringPi I2C library to convert existing code for the Arduino to run on the Raspberry Pi. I have my code below; I am hoping someone can tell me if they see the proper attempt to read the 16 bit value in 2's complement. Inside the getGyroValues function is the only place I could see this happening. Is my code reading the values correctly?

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>
#include <wiringPi.h>
#include <wiringPiI2C.h>

#define CTRL_REG1 0x20
#define CTRL_REG2 0x21
#define CTRL_REG3 0x22
#define CTRL_REG4 0x23

int fd;
int x = 0;
int y = 0;
int z = 0;

int main (){
    fd = wiringPiI2CSetup(0x69); // I2C address of gyro
    wiringPiI2CWriteReg8(fd, CTRL_REG1, 0x1F); //Turn on all axes, disable power down
    wiringPiI2CWriteReg8(fd, CTRL_REG3, 0x08); //Enable control ready signal
    wiringPiI2CWriteReg8(fd, CTRL_REG4, 0x80); // Set scale (500 deg/sec)
    delay(100); // Wait to synchronize

    void getGyroValues (){
        int MSB, LSB;

        LSB = wiringPiI2CReadReg16(fd, 0x28);
        MSB = wiringPiI2CReadReg16(fd, 0x29);
        x = ((MSB << 8) | LSB);

        MSB = wiringPiI2CReadReg16(fd, 0x2B);
        LSB = wiringPiI2CReadReg16(fd, 0x2A);
        y = ((MSB << 8) | LSB);

        MSB = wiringPiI2CReadReg16(fd, 0x2D);
        LSB = wiringPiI2CReadReg16(fd, 0x2C);
        z = ((MSB << 8) | LSB);
    }

    for (int i=0;i<10;i++){
        getGyroValues();
        // In following Dividing by 114 reduces noise
        printf("Value of X is: %d\n", x / 114);
        printf("Value of Y is: %d\n", y / 114);
        printf("Value of Z is: %d\n", z / 114);

        int t = wiringPiI2CReadReg8(fd, 0x26);
        t = (t*1.8)+32; //convert Celsius to Fahrenheit

        int a = wiringPiI2CReadReg16(fd,0x2B);
        int b = wiringPiI2CReadReg16(fd,0x2A);
        printf("Y_L equals: %d\n", a);
        printf("Y_H equals: %d\n", b);

        int c = wiringPiI2CReadReg16(fd,0x28);
        int d = wiringPiI2CReadReg16(fd,0x29);
        printf("X_L equals: %d\n", c);
        printf("X_H equals: %d\n", d);

        int e = wiringPiI2CReadReg16(fd,0x2C);
        int f = wiringPiI2CReadReg16(fd,0x2D);
        printf("Z_L equals: %d\n", e);
        printf("Z_H equals: %d\n", f);

        printf("The temperature is: %d\n\n\n", t);
        delay(500);
    }
};

A: Here's how you could easily combine the most and the least significant bytes, representing the halves of a 2's complement 16-bit integer, into an int:

int Bytes2Short(unsigned char msb, unsigned char lsb)
{
    long t = msb * 0x100L + lsb;
    if (t >= 32768) t -= 65536;
    return (int)t;
}
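The same byte-combining and sign handling can be sketched in Python, which may help when checking register values interactively on the Pi. This mirrors the answer's logic: combine MSB/LSB into a 16-bit raw value, then subtract 65536 when the sign bit (bit 15) is set.

```python
# Two's complement interpretation of a 16-bit value built from two bytes.

def bytes_to_short(msb: int, lsb: int) -> int:
    raw = ((msb & 0xFF) << 8) | (lsb & 0xFF)
    # if bit 15 is set, the value is negative in two's complement
    return raw - 0x10000 if raw >= 0x8000 else raw

print(bytes_to_short(0x00, 0x01))  # 1
print(bytes_to_short(0xFF, 0xFF))  # -1
print(bytes_to_short(0x80, 0x00))  # -32768
```

The result range is -32768..32767, exactly what a signed 16-bit gyroscope register should yield.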
Q: can we write program to get password of linux user can we write a program in java to get usernames and passwords A: You can get usernames by reading /etc/passwd file. You cannot get passwords unless you try to decrypt them, and that's not practical...
Q: Can I get away with adding a whitespace to a class definition? I'm working on some legacy PHP code that, depending on conditions, produces either the following output: <div class="foo bar">Bork bork bork</div> or: <div class="foo ">Bork bork bork</div> Note the whitespace following the class name in the second example. Which is decidedly ugly, but I'm under some time constraints right now and fixing this will take some major code surgery that I'd rather avoid for the moment. On the browsers I've tested it so far there doesn't seem to be a problem, but if I'm committing an outright standards violation here, I'd rather put in the overtime tonight. So. As per the standards, is this allowed? A: From HTML5 Compliant - Trailing Space in Class Attribute Yes, it is compliant From http://www.w3.org/html/wg/drafts/html/master/dom.html#classes: The attribute, if specified, must have a value that is a set of space-separated tokens representing the various classes that the element belongs to. From http://www.w3.org/html/wg/drafts/html/master/infrastructure.html#set-of-space-separated-tokens: A string containing a set of space-separated tokens may have leading or trailing space characters.
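The spec's "set of space-separated tokens" parsing is easy to demonstrate. A small Python sketch, using str.split() with no argument as a close approximation of the spec's whitespace handling (the spec enumerates a specific set of space characters; split() is slightly broader):

```python
# Parse a class attribute the way the HTML spec describes: split on runs
# of whitespace and ignore leading/trailing space, so "foo " and "foo"
# produce the same token set.

def class_tokens(attr: str) -> list:
    return attr.split()

print(class_tokens("foo bar"))  # ['foo', 'bar']
print(class_tokens("foo "))     # ['foo']
```

This is why the trailing space is harmless: both variants of the div carry the token "foo", and the second simply lacks "bar".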
Q: Lotus Notes: Dialog list multiple column display I'm trying to perform a multiple column view in a dialog list prompt using a @DbColumn command and did that successfully by adding another column in a view where my dialog list is looking up. The column value in it is something like this: SiteNum + SiteName + State, a combination of my first 3 columns. Now, is there any way that I can only select "SiteNum" column and have that as my dialog list field value? I got it working with the help of @Richard and @Ken. I made my formula like this, SiteNum + SiteName + State + "|" + SiteNum. But the problem is that when I select one, the SiteNum only stores on the backend and the field displays the whole selection (e.g 0006-USNY) it should only be the SiteNum (e.g 0006) I hope you can help me. Thanks! A: You should be able to set your formula to SiteNum + SiteName + State + "|" + SiteNum. That will tell the dialog lis to display SiteNum + SiteName + State but return the alias value SiteNum.
Q: Different iOS Direct Update Dialog Buttons from same code from SVN with two different build processes (WL Studio / Jenkins Xcode)? We are seeing a strange behavior in our iOS part of the App: We are using a Jenkins Build Server with an XCode plugin to compile the app on the command line At first we had an SVN ignore on the iPhone "native" folder in our SVN. So all was taken from common and the whole native project was generated by the worklight Ant scripts on the Jenkins server. This was then compiled by the xcode on jenkins and the App worked fine with Direct Update - the Dialog had a "UPDATE" and a "EXIT" button, both buttons worked as intended. Then we added a CustomWebView to our project and had to include the native folder into the SVN. We tried to SVN.ignore all generated files and the build processes in Worklight Studio and on the Jenkins xcode server work both fine. We did not change any of the generated code except we added two classes: WebViewOverlayPlugin.h WebViewOverlayPlugin.m and added the plugin to the config.xml: <plugin name="WebViewOverlayPlugin" value="WebViewOverlayPlugin"/> BUT: when we compile the same SVN code in the Worklight Studio, we have an iPhone app that has a Direct Update dialog with only the "UPDATE" button across the whole dialog. There is no "EXIT" button anymore at all. when we compile the same SVN on the Jenkins Xcode server, we have an iPhone app that has a Direct Update dialog with an "UPDATE" and an "EXIT" button but the "EXIT" button does not work as intended - when pressing it, the app stays open and the dialog closes, so the user can continue to use the old version of the app without updating. The "UPDATE" button works and updates the App. So my question is, what build setting, generated files or configuration etc. could have any influence on the behavior of the Direct Update dialog. Since we use the exact same SVN source it has to have something to do with something generated, or with some configuration. 
Is anything around such a strange behavior known? Is it known that one can "configure" the Direct Update dialog to have only an "UPDATE" button with no "EXIT"? Thank you all for any help or hint where we could look to further investigate. A: You didn't mention your Worklight version, but starting with Worklight 5.0.6.x: The Direct Update dialog will feature only 1 button - Update. The Exit button was removed. This was done in order to be inline with Apple's App Store submission guidelines. And more in general, the WL.App.close API method was rendered "non functional", because as per Apple's guidelines, it is the user that should be given the control on exiting the application and not the application to do so. Think of it as an "OK" button instead of "Exit". That said, you are not supposed to see an Exit button to begin with. But again, I do not know what is your Worklight version and what exactly is your build setup to understand what is going on there that does this sort of mixing. You are not supposed to be able to "configure" anything in Direct Update. It is non-configurable in this respect. As for why you have 2 buttons when building with Worklight Studio, but 1 button after adding the WebOverlayPlugin, it is interesting. Is there a chance the app is of a lower Worklight version and you are now building it with a newer version of Worklight, containing the changes made, as described above?
Q: How to adjust inner div container height to fit with others divs Please check the demo link I just want to adjust Geomap div's height to 100%. I have tried with this Javascript below but it's overflowing with the footer.

function handleResize() {
    var h = $('html').height();
    $('#map-geo').css({'height':h+'px'});
}

$(function(){
    handleResize();
    $(window).resize(function(){
        handleResize();
    });
});

Currently my Html and Body css is

html, body {
    font-family: 'Open Sans', sans-serif;
    -webkit-font-smoothing: antialiased;
    height:69% !important;
    width:100% !important;
}

Please do help me out to make it fit.

A: As far as I know you should replace:

$('#map-geo').css({'height':h+'px'});

with:

$('#map-geo').css('height',h+'px');

And you also need to put the whole function into:

$( document ).ready(function() {

});

You can try this, if it works for you:

$( document ).ready(function() {
    function handleResize() {
        var h = $('html').height();
        $('#map-geo').css('height',h+'px');
    }
    handleResize();
});

This alone should resize the element.
Q: ADB Crashing in Eclipse This question relates to this thread, however that thread has no answers so this it not technically a duplicate. I've got ADB 1.0.26 running on my Windows 7 x64 and Eclipse SDK 3.6.2 with ADT 10.0.1 SDK tools r10, and I've got all the Android SDK versions installed. When I connect my phone to the computer in debug mode, and type adb devices into the command prompt, my phone shows up. It's an Inspire 4G. I can adb shell into the device and ls, so I'm assuming that the adb/driver/phone part of the chain is working properly. Now, if I connect my phone and go into Eclipse, I get: [2011-03-28 17:46:33 - DeviceMonitor]Adb connection Error:An existing connection was forcibly closed by the remote host [2011-03-28 17:46:34 - DeviceMonitor]Connection attempts: 1 [2011-03-28 17:46:36 - DeviceMonitor]Connection attempts: 2 [...] [2011-03-28 17:47:53 - DeviceMonitor]Connection attempts: 10 [2011-03-28 17:46:54 - DeviceMonitor]Connection attempts: 11 [2011-03-28 17:47:00 - adb] [2011-03-28 17:47:00 - adb]This application has requested the Runtime to terminate it in an unusual way. [2011-03-28 17:47:00 - adb]Please contact the application's support team for more information. [2011-03-28 17:47:04 - DeviceMonitor]Adb connection Error:An existing connection was forcibly closed by the remote host [2011-03-28 17:47:05 - DeviceMonitor]Connection attempts: 1 [2011-03-28 17:47:07 - DeviceMonitor]Connection attempts: 2 etc, etc... This loops forever. It doesn't matter if I start Eclipse and then connect the phone, or if I connect the phone and then start Eclipse. I don't think it should matter, but my phone is an Inspire 4G which is rooted and running Revolution 4G 3.2 A: this patched adb version worked fine for me (HTC Desire HD): http://code.google.com/p/android/issues/detail?id=12141
Q: If a cell is blank, hide next n rows, VBA In a worksheet, "Scenarios," I'm trying to hide groups of entire rows based on whether or not the cell value in column B for that group of rows contains the text "not included". For instance, in the range B19:B77, I have sections for accounts in each 5 rows. The first row in each section has either the account name or "not included." If it says "not included," I would like to hide that row and the subsequent 4 rows (e.g. rows 19 through 23). I am aware of how to hide entire rows based on the value of a cell (code below), but I'd like to figure out how to hide the additional rows.

Private Sub Worksheet_Change(ByVal Target As Range)
    Application.ScreenUpdating = False
    For Each xRg In Range("B19:B77")
        If xRg.Value = "" Then
            xRg.EntireRow.Hidden = True
        Else
            xRg.EntireRow.Hidden = False
        End If
    Next xRg
    Application.ScreenUpdating = True
End Sub

Thank you for your help, in advance!

A: The For loop could look something like:

Dim r As Long
For r = 19 To 77 Step 5
    Rows(r & ":" & r + 4).Hidden = Cells(r, "B").Value = "not included"
Next

Note: That 77 looks strange. If everything is in groups of 5 rows, your last "account name" will be in row 74, which means the last group seems to only be 4 rows (74 to 77).
Q: Why is Rails displaying validation errors? I am having trouble figuring out why Rails is showing validation errors. The relevant details of my app are as follows:

- Programs are offered in Sessions (many to many) (ProgramSession associates Program and Session)
- Courses are offered by Instructors (many to many) (CourseInstructor associates Course and Instructors)
- Exams are conducted and each Exam has many Papers.

I have generated all resources using scaffold. Problem: when I try to create a new paper Rails shows

2 errors prohibited this paper from being saved
- Program session must exist
- Course instructor must exist

Entire code is available on the github repo, and has also been deployed on heroku. I would really appreciate all the help I can get.

A: Making the following changes in app/models/paper.rb fixed the problem:

belongs_to :program_session, foreign_key: 'program_sessions_id'
belongs_to :course_instructor, foreign_key: 'course_instructors_id'
Q: Calculate lookat vector from position and Euler angles I've implemented an FPS style camera, with the camera consisting of a position vector, and Euler angles pitch and yaw (x and y rotations). After setting up the projection matrix, I then translate to camera coordinates by rotating, then translating to the inverse of the camera position:

// Load projection matrix
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// Set perspective
gluPerspective(m_fFOV, m_fWidth/m_fHeight, m_fNear, m_fFar);

// Load modelview matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Position camera
glRotatef(m_fRotateX, 1.0, 0.0, 0.0);
glRotatef(m_fRotateY, 0.0, 1.0, 0.0);
glTranslatef(-m_vPosition.x, -m_vPosition.y, -m_vPosition.z);

Now I've got a few viewports set up, each with its own camera, and from every camera I render the position of the other cameras (as a simple box). I'd like to also draw the view vector for these cameras, except I haven't a clue how to calculate the lookat vector from the position and Euler angles. I've tried to multiply the original camera vector (0, 0, -1) by a matrix representing the camera rotations then adding the camera position to the transformed vector, but that doesn't work at all (most probably because I'm way off base):

vector v1(0, 0, -1);

matrix m1 = matrix::IDENTITY;
m1.rotate(m_fRotateX, 0, 0);
m1.rotate(0, m_fRotateY, 0);

vector v2 = v1 * m1;
v2 = v2 + m_vPosition; // add camera position vector

glBegin(GL_LINES);
glVertex3fv(m_vPosition);
glVertex3fv(v2);
glEnd();

What I'd like is to draw a line segment from the camera towards the lookat direction. I've looked all over the place for examples of this, but can't seem to find anything. Thanks a lot!

A: I just figured it out. When I went back to add the answer, I saw that Ivan had just told me the same thing :) Basically, to draw the camera vector, I do this:

glPushMatrix();

// Apply inverse camera transform
glTranslatef(m_vPosition.x, m_vPosition.y, m_vPosition.z);
glRotatef(-m_fRotateY, 0.0, 1.0, 0.0);
glRotatef(-m_fRotateX, 1.0, 0.0, 0.0);

// Then draw the vector representing the camera
glBegin(GL_LINES);
glVertex3f(0, 0, 0);
glVertex3f(0, 0, -10);
glEnd();

glPopMatrix();

This draws a line from the camera position for 10 units in the lookat direction.
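For computing the lookat vector numerically (rather than letting OpenGL apply the inverse rotations), a hedged Python sketch follows. It applies Ry(-yaw) * Rx(-pitch) to the default view vector (0, 0, -1), matching the glRotatef order above; the exact signs depend on your rotation conventions, so treat this as a starting point, not the one true formula.

```python
# Forward (lookat) vector from pitch/yaw, derived by rotating (0, 0, -1)
# by the inverse of the camera rotation Rx(pitch) then Ry(yaw).
import math

def lookat_vector(pitch_deg, yaw_deg):
    p = math.radians(pitch_deg)
    y = math.radians(yaw_deg)
    # expansion of Ry(-yaw) * Rx(-pitch) * (0, 0, -1)
    return (math.sin(y) * math.cos(p),
            -math.sin(p),
            -math.cos(y) * math.cos(p))

print(lookat_vector(0, 0))  # (0.0, -0.0, -1.0): straight down -Z
```

The segment to draw is then position to position + 10 * lookat_vector(pitch, yaw). Note the result is always unit length, since the components square-sum to cos²p + sin²p = 1.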
Q: Adhering to a Repository Pattern for a SharePoint list I'm creating a class using the Repository Pattern for SharePoint lists (as opposed to directly accessing an actual table in a database) on a collection of list items. The Repository Pattern designs that I've seen are constantly creating a new context for each individual repository, but I would like to use the same ClientContext object for each of the concurrent repositories (one for each list) that might be in existence. Assuming I don't want someone to have to create a new ClientContext instance themselves which would be passed into the SPRepository class as a constructor, what other options do I have as far as keeping it in static memory? References: https://msdn.microsoft.com/en-us/library/ff649690.aspx https://hendrikbulens.wordpress.com/2014/12/14/repository-pattern-sharepoint/ (This one makes use of separate repositories for each list whereas I want to make a more generic one which accepts a list name in the constructor) A: This is more complex subject than it seems, as there are many things to consider. Most of the solutions I've seen (including your links) are trying to make code generic with the cost of performance and resource consumption. There is no one-fits-all solution. Single responsibility You should inject ClientContext into your repository because there are many different authentication scenarios for CSOM (NTLM, FBA, ADFS, high trust app with user identity, High trust app with app identity, just to name few). Repository shouldn't be aware of the authentication method and means of creating the context. Resources Sharing one instance of ClientContext between multiple repositories may be good idea in simple scenarios when you want to avoid creating unnecessary objects. It implements IDisposable and we should assume there is a reason for that. 
Number of ExecuteQuery() calls If you use one ClientContext instance for multiple repositories you can schedule multiple operations and execute them all with one request to SharePoint. var result1 = repository1.ScheduleLoad(); var result2 = repository2.ScheduleLoad(); context.ExecuteQuery(); Remember that CSOM is remote API. Number of calls and amount of data retrieved will have significant impact on performance. Calls to ExecuteQuery() from other objects or threads Consider this: var web = StaticClass.Context.Web; var list = web.GetList('Lists/example'); var item = list.GetItemById(1); item["Title"] = "Test"; var result = repository.GetItemsAndExecute(); //or some other thread calls StaticClass.Context.ExecuteQuery(); item.Update(); StaticClass.Context.ExecuteQuery(); What will be the result? Value of Title field will remain empty. Here's why: First call gets the item by id and sets the field value, but it does not call Update(). Second call gets the item by id again and calls Update(), but it does not set the field value.
Q: How to use ModelCheckpoint with custom metrics in Keras? Is it possible to use custom metrics in the ModelCheckpoint callback?

A: Yes, it is possible. Define the custom metrics as described in the documentation:

import keras.backend as K

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', mean_pred])

To check all available metrics:

print(model.metrics_names)
> ['loss', 'acc', 'mean_pred']

Pass the metric name to ModelCheckpoint through monitor. If you want the metric calculated in the validation, use the val_ prefix.

ModelCheckpoint('weights.{epoch:02d}-{val_mean_pred:.2f}.hdf5',
                monitor='val_mean_pred',
                save_best_only=True,
                save_weights_only=True,
                mode='max',
                period=1)

Don't use mode='auto' for custom metrics. Understand why here. Why am I answering my own question? Check this.
Q: dojo: inheritance with default value - the mixin doesn't happen

I wish to declare a new dojo class inheriting from an existing dojo class, but with my own choice of default values for the class's properties. (The user can still override those values.) I am declaring my own version of the dijit.form.FilteringSelect such that: the hasDownArrow property defaults to false (rather than the standard true) and there's an extra possible property storeUrl which allows me to connect the FilteringSelect to the corresponding QueryReadStore. Here's what I did, without success:

    dojo.provide("my.FilteringSelect");
    dojo.require("dijit.form.FilteringSelect");
    dojo.require("dojox.data.QueryReadStore");

    dojo.declare(
        "my.FilteringSelect",
        [
            dijit.form.FilteringSelect, /* base superclass */
            { hasDownArrow:false, storeUrl:"/" } /* mixin */
        ],
        {
            constructor: function(params, srcNodeRef){
                console.debug("Constructing my.FilteringSelect with storeUrl "
                    + this.storeUrl);
                this.store = new dojox.data.QueryReadStore({url:this.storeUrl});
            }
        }
    );

Say, I try to generate declaratively in the HTML such a version of my.FilteringSelect:

    <input type="text" id="birthplace" name="birthplace"
           promptMessage="Start typing, and choose among the suggestions"
           storeUrl="/query/regions"
           dojoType="my.FilteringSelect" />

This will indeed create a FilteringSelect with the desired promptMessage (which means that the superclass is properly getting the params), but hasDownArrow is true (contrary to my default mixin) and the store is null (and the Firebug console reports that storeUrl is "undefined"). What am I doing wrong?

A: Oops! I really had things on their head. I found the right way around. The following works:

    dojo.provide("my.FilteringSelect");
    dojo.require("dijit.form.FilteringSelect");
    dojo.require("dojox.data.QueryReadStore");

    dojo.declare(
        "my.FilteringSelect",
        dijit.form.FilteringSelect,
        {
            hasDownArrow : false,
            storeUrl : "/",
            constructor: function(params, srcNodeRef){
                dojo.mixin(this, params);
                console.debug("Constructing my.FilteringSelect with storeUrl "
                    + this.storeUrl);
                this.store = new dojox.data.QueryReadStore({url:this.storeUrl});
            }
        }
    );
Q: Python 3.x - Create dataframe and specify column names I am new to pandas. I have to create a data frame where one column is 'Source' and second column is 'Amount'. Created a new data frame df=[] Now how can i add a columns 'Source' and 'Amount' to this dataframe. The end result is print(df) Source Amount S1 10 S2 12 S3 8 S4 5 The data will come from a for loop. each iteration generates the source and then amount. I want to create a data frame like - df=[] for str in some_variable: df['Source'].append(str[0]) #str[0] will contain source elements df['Amount'].append(str[1]) #str[1] will contain amount elements Code - import requests import pandas as pd import bs4 import string import matplotlib.pyplot as plt url = "https://www.revisor.mn.gov/laws/?year=2014&type=0&doctype=Chapter&id=294" result = requests.get(url) soup = bs4.BeautifulSoup(result.content) summary = soup.find("div", {"class":"bill_section","id": "laws.1.1.0"}) tables = summary.findAll('table') data_table = tables[1] df=pd.DataFrame(['Source','Amount']) #Trying to incorrectly add columns to df for row in data_table.findAll("tr"): cells = row.findAll("td") try : for char in cells[0].findAll("ins"): df['Source'] = df['Source'].append() #This is where the issue is for char in cells[2].findAll("ins"): df['Amount'] = df['Amount'].append() #And here except: pass A: I think you can use read_html with replace, str.strip, str.replace and last to_numeric: import pandas as pd import matplotlib.pyplot as plt url = "https://www.revisor.mn.gov/laws/?year=2014&type=0&doctype=Chapter&id=294" #read second table in url df = pd.read_html(url)[1] #replace texts to empty string df = df.replace('new text begin','', regex=True).replace('new text end','', regex=True) #set new columns names df.columns = ['Source','b','Amount'] #remove first row df = df[1:] #remove second column b df = df.drop('b', axis=1) #strip whitespaces df.Source = df.Source.str.strip() #strip whitespaces and remove (), df.Amount = 
df.Amount.str.strip().str.replace(r'[(),]','') #convert column Amount to numeric df.Amount = pd.to_numeric(df.Amount) #reset index df = df.reset_index(drop=True) print df Source Amount 0 University of Minnesota 119367000 1 Minnesota State Colleges and Universities 159812000 2 Education 7491000 3 Minnesota State Academies 11354000 4 Perpich Center for Arts Education 2000000 5 Natural Resources 63480000 6 Pollution Control Agency 2625000 7 Board of Water and Soil Resources 8000000 8 Agriculture 203000 9 Zoological Garden 12000000 10 Administration 127000000 11 Minnesota Amateur Sports Commission 7973000 12 Military Affairs 3244000 13 Public Safety 4030000 14 Transportation 57263000 15 Metropolitan Council 45968000 16 Human Services 86387000 17 Veterans Affairs 2800000 18 Corrections 11881000 19 Employment and Economic Development 92130000 20 Public Facilities Authority 45993000 21 Housing Finance Agency 20000000 22 Minnesota Historical Society 12002000 23 Bond Sale Expenses 900000 24 Cancellations 10849000 25 TOTAL 893054000 26 Bond Proceeds Fund (General Fund Debt Service) 814745000 27 Bond Proceeds Fund (User Financed Debt Service) 39104000 28 State Transportation Fund 36613000 29 Maximum Effort School Loan Fund 5491000 30 Trunk Highway Fund 7950000 31 Bond Proceeds Cancellations 10849000 print df.dtypes Source object Amount int64 dtype: object But if you need solution with parsing data with BeautifulSoup, first Source and Amount are appended by data and then is created DataFrame: import requests import pandas as pd import bs4 import string import matplotlib.pyplot as plt url = "https://www.revisor.mn.gov/laws/?year=2014&type=0&doctype=Chapter&id=294" result = requests.get(url) soup = bs4.BeautifulSoup(result.content) summary = soup.find("div", {"class":"bill_section","id": "laws.1.1.0"}) tables = summary.findAll('table') data_table = tables[1] Source, Amount = [], [] for row in data_table.findAll("tr"): cells = row.findAll("td") try : for char in 
cells[0].findAll("ins"): Source.append(char.text) #This is where the issue is for char in cells[2].findAll("ins"): Amount.append(char.text) #And here except: pass print Source [u'SUMMARY', u'University of Minnesota', u'Minnesota State Colleges and Universities', u'Education', u'Minnesota State Academies', u'Perpich Center for Arts Education', u'Natural Resources', u'Pollution Control Agency', u'Board of Water and Soil Resources', u'Agriculture', u'Zoological Garden', u'Administration', u'Minnesota Amateur Sports Commission', u'Military Affairs', u'Public Safety', u'Transportation', u'Metropolitan Council', u'Human Services', u'Veterans Affairs', u'Corrections', u'Employment and Economic Development', u'Public Facilities Authority', u'Housing Finance Agency', u'Minnesota Historical Society', u'Bond Sale Expenses', u'Cancellations', u'TOTAL', u'Bond Proceeds Fund (General Fund Debt Service)', u'Bond Proceeds Fund (User Financed Debt Service)', u'State Transportation Fund', u'Maximum Effort School Loan Fund', u'Trunk Highway Fund', u'Bond Proceeds Cancellations'] print Amount [u'119,367,000', u'159,812,000', u'7,491,000', u'11,354,000', u'2,000,000', u'63,480,000', u'2,625,000', u'8,000,000', u'203,000', u'12,000,000', u'127,000,000', u'7,973,000', u'3,244,000', u'4,030,000', u'57,263,000', u'45,968,000', u'86,387,000', u'2,800,000', u'11,881,000', u'92,130,000', u'45,993,000', u'20,000,000', u'12,002,000', u'900,000', u'(10,849,000)', u'893,054,000', u'814,745,000', u'39,104,000', u'36,613,000', u'5,491,000', u'7,950,000', u'(10,849,000)'] print len(Source) #33 print len(Amount) #32 #remove first element Source = Source[1:] df=pd.DataFrame({'Source':Source,'Amount':Amount}, columns=['Source','Amount']) print df Source Amount 0 University of Minnesota 119,367,000 1 Minnesota State Colleges and Universities 159,812,000 2 Education 7,491,000 3 Minnesota State Academies 11,354,000 4 Perpich Center for Arts Education 2,000,000 5 Natural Resources 63,480,000 6 Pollution 
Control Agency 2,625,000 7 Board of Water and Soil Resources 8,000,000 8 Agriculture 203,000 9 Zoological Garden 12,000,000 10 Administration 127,000,000 11 Minnesota Amateur Sports Commission 7,973,000 12 Military Affairs 3,244,000 13 Public Safety 4,030,000 14 Transportation 57,263,000 15 Metropolitan Council 45,968,000 16 Human Services 86,387,000 17 Veterans Affairs 2,800,000 18 Corrections 11,881,000 19 Employment and Economic Development 92,130,000 20 Public Facilities Authority 45,993,000 21 Housing Finance Agency 20,000,000 22 Minnesota Historical Society 12,002,000 23 Bond Sale Expenses 900,000 24 Cancellations (10,849,000) 25 TOTAL 893,054,000 26 Bond Proceeds Fund (General Fund Debt Service) 814,745,000 27 Bond Proceeds Fund (User Financed Debt Service) 39,104,000 28 State Transportation Fund 36,613,000 29 Maximum Effort School Loan Fund 5,491,000 30 Trunk Highway Fund 7,950,000 31 Bond Proceeds Cancellations (10,849,000)
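The string cleaning that the str.replace/to_numeric chain performs can be sanity-checked on its own, offline. A plain-Python sketch of the same transformation (the sample values are taken from the scraped output above; note that, exactly as in the answer, the accounting-style parentheses marking negatives are simply dropped, so those amounts come out positive):

```python
# Mirror of: df.Amount.str.strip().str.replace(r'[(),]','') + pd.to_numeric
# Strip whitespace, delete '(', ')' and ',', then convert to int.
raw = ['119,367,000 ', '(10,849,000)']

def clean(value):
    return int(value.strip().translate(str.maketrans('', '', '(),')))

print([clean(v) for v in raw])  # -> [119367000, 10849000]
```

If the parentheses should mean "negative", you would need an extra step to detect them before stripping, which neither this sketch nor the pandas chain in the answer does.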
Q: Preventing default mouse behavior in C# - wpf

Is there an equivalent to Javascript's event.preventDefault() in C# (using wpf)? I want to prevent the default behavior when left clicking anywhere in the screen.

    private void Button_MouseDown(object sender, MouseButtonEventArgs e){
        // I want to use something like Javascript's e.preventDefault()
    }

A: You might want to try e.Handled = true like so:

    private void Button_MouseDown(object sender, MouseButtonEventArgs e){
        e.Handled = true;
    }
Q: What logic is behind t('flash.notice.order.creditcard.valid')?

    flash[:notice] = t('flash.notice.order.creditcard.valid')

I can sort of guess what this flash message probably outputs to the user but what is this 't' method and what kind of object is flash.notice.order.creditcard.valid? Is this application-specific logic or a Rails usage?

A: t is an alias for the translate method. flash.notice.order.creditcard.valid is the key of the message to translate. It should be defined in the various locale files found in config/locales/.
Q: How to define and use JSON data type in Eloquent?

How can I define the JSON data type that is provided in pgsql 9.4 in Laravel 5? I need to define the data type and store and fetch data, and so far I could not find a way to deal with it in Laravel 5.1.

A: In your migrations you can do something like:

    $table->json('field_name');

And in your model you add the field to the $casts property to instruct Eloquent to deserialize it from JSON into a PHP array:

    class SomeModel extends Model
    {
        protected $casts = [
            'field_name' => 'array'
        ];
    }

Source: https://laravel.com/docs/5.1/eloquent-mutators#attribute-casting

Note: This is also a relevant answer for Laravel 5.6

A: According to the Laravel documentation, 'json' is not one of the available casting types. https://laravel.com/docs/5.1/eloquent-mutators You should use 'array' instead:

    class SomeModel extends Model
    {
        protected $casts = [
            'field_name' => 'array'
        ];
    }
Q: handling getopt dynamically through different python script

I am using getopt to handle optional arguments in all my scripts. I have lots of scripts and i have to manage getopt in all the scripts separately because all scripts have different options. My Question is is there any dynamic way to handle these options through single class by defining all options and just update those by objects ? Currently i am using getopt like this:

    import getopt

    if __name__ == "__main__":
        if len(sys.argv) < 2:
            usage()
        else:
            try:
                options, remainder = getopt.getopt(sys.argv[2:], 's:n:i:a:c:t:p:',
                    ['s1=', 'n1=', 'iteration=', 'api_v=', 'c='.....,])
            except getopt.GetoptError:
                print "\nError: Provided options are not correct!\n"
                usage()
            for opt, arg in options:
                if opt in ('-s', '--s1'):
                    s1 = arg
                if opt in ('-n', '--n1'):
                    try:
                        n1 = int(arg)
                    except ValueError:
                        print("Error : n1 not an integer!")
                        exit()
                elif opt in ('-i', '--iteration'):
                    try:
                        timeout = int(arg)
                    except ValueError:
                        print("Error : iteration not an integer!")
                        exit()
                elif opt in ('-a', '--abc'):
                    abc = arg
                .....
                ....
                # and list goes on.

This i have in almost all my scripts with different options. Is there any cleaner way to achieve the same thing?

A: You may find the argparse module easier to use and read. You could create an ArgumentParser factory function in a separate module and just call that from all your scripts to return a parser with the correct options.
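A minimal sketch of the factory idea the answer suggests. The option names mirror the getopt example above; the factory function name and the extra_options shape are made up for illustration — each script would import the factory from a shared module and add its own options on top:

```python
import argparse

def build_parser(description, extra_options=()):
    # Shared factory: every script gets the common options; per-script
    # extras are passed as (flags, kwargs) pairs.
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument('-s', '--s1')
    parser.add_argument('-n', '--n1', type=int)         # argparse does the int
    parser.add_argument('-i', '--iteration', type=int)  # conversion and errors
    for flags, kwargs in extra_options:
        parser.add_argument(*flags, **kwargs)
    return parser

# One script adds its own option on top of the shared ones:
parser = build_parser('demo', extra_options=[(('-a', '--abc'), {})])
args = parser.parse_args(['-n', '5', '--abc', 'hello'])
print(args.n1, args.abc)  # -> 5 hello
```

The type=int arguments replace all of the manual try/except ValueError blocks: argparse prints a usage error and exits on a bad value by itself.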
Q: How to run CMakesetup?

I downloaded cmake and I wanted to build LSHKIT using the command cmakesetup <source-dir>, where <source-dir> is the LSHKIT root directory, but it gives me this error:

    'cmakesetup' is not recognized as an internal or external command, operable program or batch file.

could you tell me what is wrong? BTW, I am working in windows 7.

A: CMakeSetup is an older program that no longer builds with the most recent releases of CMake. Use "cmake-gui" instead of "CMakeSetup"...
Q: Exception "java.lang.NoSuchMethodError: org.objectweb.asm.ClassWriter" on Viewing Jasper Report

I am using Jasper Reports and am using following libraries in my class path

    jasperreports-4.5.1.jar
    common-digester3-3.2.jar
    common-digester2.1.jar
    castor-1.2.jar
    commons-beanutils-1.8..0.jar
    commons-collections-2.1.1.jar
    commons-logging-1.1.1.jar
    groovy-1.2.6.jar
    asm-2.2.3.jar
    asm-3.1.jar
    asm-all-3.1.jar
    antlr-3.3.1.1.jar
    jtds-1.2.5.jar

I got the following exception

    Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: org.objectweb.asm.ClassWriter.<init>(I)V
        at org.codehaus.groovy.control.CompilationUnit.createClassVisitor(CompilationUnit.java:791)
        at org.codehaus.groovy.control.CompilationUnit$14.call(CompilationUnit.java:755)
        at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:967)
        at org.codehaus.groovy.control.CompilationUnit.doPhaseOperation(CompilationUnit.java:546)
        at org.codehaus.groovy.control.CompilationUnit.processPhaseOperations(CompilationUnit.java:524)
        at org.codehaus.groovy.control.CompilationUnit.compile(CompilationUnit.java:501)
        at net.sf.jasperreports.compilers.JRGroovyCompiler.compileUnits(JRGroovyCompiler.java:96)
        at net.sf.jasperreports.engine.design.JRAbstractCompiler.compileReport(JRAbstractCompiler.java:188)
        at net.sf.jasperreports.engine.JasperCompileManager.compileReport(JasperCompileManager.java:212)
        at Utilities.ReportDriver.runReport(ReportDriver.java:81)
        at jewelleryerpapplication.GUI.MainReports.jbtnViewReportActionPerformed(MainReports.java:544)
        at jewelleryerpapplication.GUI.MainReports.access$100(MainReports.java:18)
        at jewelleryerpapplication.GUI.MainReports$2.actionPerformed(MainReports.java:210)
        at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2018)
        at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2341)
        at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)
        at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)
        at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:252)
        at java.awt.Component.processMouseEvent(Component.java:6505)

on executing the following code:

    JasperPrint jasperPrint = JasperFillManager.fillReport(jasperReport, param, jdbcConnection);
    JasperViewer.viewReport(jasperPrint);

What have I done wrong? How can I fix this or debug it further?

A: Note that you have two different versions of ASM listed. This is a very common problem with that library because so many other libraries use it under the hood and because Java's classpath mechanism doesn't allow for using different versions of the same library. (This happens all the time between Groovy and Hibernate.) So when you set up your classpath, one library wants version 2.2.3, and one wants 3.1. When looking up classes, though, the first one on the classpath wins. In your case, Groovy is trying to call a constructor on ClassWriter that doesn't exist in whichever version won (2.2.3, if you listed them in the actual classpath order). When you have this situation, where different versions are being demanded, all you can do is pick a version and cross your fingers. Try it out to see if it works everywhere you need it to. Most likely, everything will be fine if you use the newer ASM version (3.1). That's been my experience, anyway. If you can't find a version that works for everything, you might have a big problem on your hands.
Q: Variables as hash values in Python

How can I link a variable to a dictionary value in Python? Consider the following code:

    a_var = 10
    a_dict = {'varfield':a_var, 'first':25, 'second':57}
    # a_dict['varfield'] == 10 now
    a_var = 700
    # a_dict['varfield'] == 10 anyway

So is there a way to link the value of a variable to a field in a dictionary without looking up that field and updating its value manually?

A: You would need to set the value of the dictionary key to an object that you can change the value of. For example like this:

    class valueContainer(object):
        def __init__(self, value):
            self.value = value
        def __repr__(self):
            return self.value.__repr__()

    v1 = valueContainer(1)
    myDict = {'myvar': v1}
    print myDict #{'myvar': 1}
    v1.value = 2
    print myDict #{'myvar': 2}
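The underlying point — rebinding a name never updates values already stored in a dict — can be checked directly. A plain-Python sketch of the question's scenario, plus the mutable-container trick from the answer using a list instead of a custom class:

```python
# Rebinding a_var only changes what the *name* refers to; the dict still
# holds the object that was stored at creation time.
a_var = 10
a_dict = {'varfield': a_var, 'first': 25, 'second': 57}
a_var = 700
print(a_dict['varfield'])  # -> 10, unchanged

# A mutable container, as in the answer, is shared rather than copied:
holder = [10]
a_dict2 = {'varfield': holder}
holder[0] = 700
print(a_dict2['varfield'][0])  # -> 700
```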
Q: swift : get the closest date stored in the core data

I have two data in core data. Entity Name = "Contact", Attribute Name = "date" (selected datePicker.date), "content" (textFeild.text). Showing that information in a table view was a success. I want to show only one content in another view controller: the data of the closest time to the current time. I tried the following code.

    var stores: Contact?
    if ((stores?.date!.timeIntervalsince1970 ?? 0) > NSDate().timeIntervalSince1970) {
        labelName.text = stores?.content!
    }

But nothing shows up on the label. And no error... What's the problem?

A: As you mentioned using Core Data, the right direction is to fetch these objects (the contacts). Then you will have an array of NSManagedObjects. If the fetch is successful you iterate them in a `for in` loop. Inside this loop use `.valueForKey` to extract the dates and apply the rules you have created, returning the NSManagedObject or attribute you want.
Q: Checking user input for two key inputs

So I am asking the user to input '##' and a number (e.g. ##2) and once this is exactly like ##2 the program can proceed. But I do not know how to check a user input for two inputs. I was thinking about using an and but I am sure there must be a better way to do it. I have made it work for the '##' but for the number I don't know how I should approach this. But when I enter ##2 it crashes. I have used the split() method to try and fix it but it hasn't worked.

    userInput = input()

    def star_print(number):
        return number

    number = int(userInput)
    for i in range(number, 0, -1):
        print("*" * i)

    while (userInput == '##':
        star_printer(userInput)
        break

Any suggestions? Cheers

A: This isn't very elegant but I'm sure you can work with this and make it amazing.

    def star_print(number):
        return number

    number = input()
    if '##' not in number:
        print('You need to prepend two # marks before your number')
    else:
        print('Valid input, continuing')
        number = number.replace('#', '')
        number = int(number)
        for i in range(0, number + 1):
            print("*" * i)

which prints

    *
    **
    ***

and, counting back down,

    for i in range(number + 1, 0, -1):
        print("*" * i)

    ***
    **
    *
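A stricter version of the check is worth sketching: the answer's '##' in number accepts the marks anywhere in the string, and replace('#', '') would also strip a # in the middle of the number. This sketch (the helper name is made up) requires the two marks as a prefix and guards the int conversion:

```python
def parse_marked_number(text):
    # Accept only inputs of the form '##<integer>', e.g. '##2'.
    if not text.startswith('##'):
        return None
    try:
        return int(text[2:])
    except ValueError:
        return None

print(parse_marked_number('##2'))  # -> 2
print(parse_marked_number('2'))    # -> None (missing prefix)
print(parse_marked_number('##x'))  # -> None (not a number)
```

The caller can then loop on input() until the function returns something other than None.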
Q: How to convert UV position to texture atlas pos and reapply to UV in Unity

I'm trying to apply a texture from an atlas to a mesh by reiterating through the UV array on that mesh, but I'm a bit lost on converting the coordinates. For example, if the texture required is at x,y pixels on the material texture atlas, how can I convert and get those coordinates and apply to the mesh? I'm redrawing meshes dynamically with a chunk loader for a voxel game, I have it down for square voxels but now when adding more complex meshes, need to know how to apply the texture to these. I imagine the code would be something like, but I'm hoping someone can help me fill in the blanks:

    void UpdateMesh()
    {
        UVList.Clear();
        Mesh mesh = this.GetComponent<MeshFilter>().mesh;
        for (int i = 0; i < mesh.uv.Length; i++)
        {
            UVList.Add(GetUVTextureFromAtlas(mesh.uv[i].x, mesh.uv[i].y, voxelType));
        }
        mesh.uv = UVList.ToArray();
    }

    // converts UV position expected from mesh model to position on texture atlas
    Vector2 GetUVTextureFromAtlas(float x, float y, int voxelType)
    {
        Vector2 uv;
        Vector2 textureAtlasPos = GetTextureOffset(voxelType); // returns texture pos from atlas
        // not exactly sure what this would look like for code to convert
        // from the position on the atlas to the expected UV position based on
        // expected UV position (i.e. from 1,0 to wherever the texture is on the atlas)
        return uv;
    }

A: Your base UVs exist on a unit rectangle:

    (0, 1)            (1, 1)
      +-----------------+
      |                 |
      |                 |
      |                 |
      |                 |
      |                 |
      |                 |
      +-----------------+
    (0, 0)            (1, 0)

You're trying to map that space to a different rectangle on the atlas:

    (xmin, ymax)   (xmax, ymax)
      +-----------------+
      |                 |
      |                 |
      |                 |
      |                 |
      |                 |
      |                 |
      +-----------------+
    (xmin, ymin)   (xmax, ymin)

Since your base UVs range from 0 to 1, it's trivial to use Lerp:

    float xout = Mathf.Lerp(xmin, xmax, x);
    float yout = Mathf.Lerp(ymin, ymax, y);

You'll need appropriate values for xmin, ymin, xmax, and ymax, which means you need to know the location and size of the texture on the atlas. I've also assumed that those four values are in UV-space. If your atlas coordinates are specified as pixel coordinates, you can convert those to UV coordinates with InverseLerp. Suppose you have a texture at (x, y) with size (xlen, ylen), on an atlas with size (xsize, ysize):

    xmin = Mathf.InverseLerp(0, xsize, x);
    xmax = Mathf.InverseLerp(0, xsize, x + xlen);
    ymin = Mathf.InverseLerp(0, ysize, y);
    ymax = Mathf.InverseLerp(0, ysize, y + ylen);

It's absolutely possible to improve on this method, if you're comfortable with the math, but that's the basic gist of it.
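The same remap, as a plain-Python sketch for checking the arithmetic outside Unity. The lerp/inverse_lerp helpers stand in for Mathf.Lerp and Mathf.InverseLerp (minus Unity's clamping, which doesn't matter here since base UVs are already in [0, 1]), and the atlas numbers are made up:

```python
def lerp(a, b, t):
    # Mathf.Lerp equivalent: value at fraction t between a and b.
    return a + (b - a) * t

def inverse_lerp(a, b, v):
    # Mathf.InverseLerp equivalent: where v sits between a and b, as 0..1.
    return (v - a) / (b - a)

# Made-up example: a 64x64 tile at pixel (128, 0) on a 256x256 atlas.
xsize = ysize = 256
x, y, xlen, ylen = 128, 0, 64, 64

xmin = inverse_lerp(0, xsize, x)         # 0.5
xmax = inverse_lerp(0, xsize, x + xlen)  # 0.75
ymin = inverse_lerp(0, ysize, y)         # 0.0
ymax = inverse_lerp(0, ysize, y + ylen)  # 0.25

# Remap a base UV of (1, 1) -- the tile's top-right corner:
print(lerp(xmin, xmax, 1.0), lerp(ymin, ymax, 1.0))  # -> 0.75 0.25
```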
Q: How to get a Sharepoint object url by guid

I'm working with SPAudit and I have an object of unknown type. Here is answer for when object is Site. But object can be any type from this enum. I'm looking for a method that gets a GUID and returns the url of the specified object. Something like:

    static string GetUrlByGuid(Guid guid)
    {
        var item = SPFarm.Local.GetObject(guid);
        if (item == null)
            return null;
        return item.ToString(); //return item.Url or something like it
    }

A: Well my solution is not really very good, bcs for lists and listitems it requires a location string (DocLocation property from SPAudit). But at least, it works.

    private static string GetUrlByGuid(Guid guid, SPAuditItemType type, string location)
    {
        switch (type)
        {
            case SPAuditItemType.Site:
                return SPContext.Current.Site.Url;
            case SPAuditItemType.Web:
                try
                {
                    using (var site = new SPSite(SPContext.Current.Site.ID))
                    using (var web = site.OpenWeb(guid))
                    {
                        return web.Url;
                    }
                }
                catch (FileNotFoundException)
                {
                    return string.Empty;
                }
            case SPAuditItemType.List:
            {
                if (string.IsNullOrEmpty(location))
                    throw new ArgumentNullException("location");
                using (var site = new SPSite(SPContext.Current.Site.Url + "/" + location))
                {
                    using (var web = site.OpenWeb())
                    {
                        try
                        {
                            return web.Lists[guid].DefaultViewUrl;
                        }
                        catch (SPException)
                        {
                            return string.Empty;
                        }
                    }
                }
            }
            case SPAuditItemType.ListItem:
                var match = ListItemRegex.Match(location);
                string listUrl = match.Groups[1].Value.Trim('/');
                using (var site = new SPSite(SPContext.Current.Site.Url + "/" + location))
                using (var web = site.OpenWeb())
                {
                    foreach (SPList list in web.Lists)
                    {
                        if (list.RootFolder.ServerRelativeUrl.Trim('/') == listUrl)
                        {
                            return string.Format("{0}?ID={1}",
                                SPUtility.ConcatUrls(web.Url, list.Forms[PAGETYPE.PAGE_DISPLAYFORM].Url),
                                match.Groups[2].Value);
                        }
                    }
                }
                return string.Empty;
            case SPAuditItemType.Document:
                return SPContext.Current.Site.Url + "/" + location;
            default:
                return string.Empty;
        }
    }

    private static readonly Regex ListItemRegex = new Regex(@"(.+?)(\d+)_.000", RegexOptions.Compiled);
Q: Find if point inside polygon on google maps using python

I would like to know if I can use google maps api to check if a point (lat, long) is within a given polygon (list of vertices) via the back-end using python or i will have to compulsory write a python algo for that.

A: I don't think there is a straightforward way of doing this using Google Map API and Python. If you have the point and polygon coordinates available, it will certainly be easier doing this using the Python library Shapely. Here is an example code using Shapely:

    from shapely.geometry import Point, Polygon

    pt = Point(0.75, 0.25)
    poly = Polygon([(0, 0), (1, 1), (1, 0)])
    poly.contains(pt)
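If pulling in Shapely isn't an option, the "python algo" the question mentions is short to write by hand. A ray-casting sketch of the standard even-odd rule (points lying exactly on an edge may be classified either way; the triangle below is the same one as in the Shapely example):

```python
def point_in_polygon(pt, poly):
    # Even-odd ray casting: cast a horizontal ray from pt to the right and
    # count the polygon edges it crosses; an odd count means "inside".
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's height
            # x-coordinate where the edge crosses that height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

triangle = [(0, 0), (1, 1), (1, 0)]
print(point_in_polygon((0.75, 0.25), triangle))  # -> True
print(point_in_polygon((0.25, 0.75), triangle))  # -> False
```

For lat/long polygons this treats the coordinates as planar, which is fine for small areas but breaks near the poles or the antimeridian — cases where a geodesic-aware library is the safer choice.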
Q: Differences between types in Python and their visibility

I'm just wondering, what are the differences between the next two functions (Python 3.x)?

    def first_example():
        counter = 0
        def inc_counter():
            counter += 1
        for i in range(10):
            inc_counter()

    def second_example():
        counter = [0]
        def inc_counter():
            counter[0] += 1
        for i in range(10):
            inc_counter()

The first function throws an exception about a name being referenced before assignment, but the second function works well. Could somebody explain me, why python remembers arrays, but not integers?

A: You are assigning to the counter name in the first nested function, making it a local variable. In the second example, you never assign to counter directly, making it a free variable, which the compiler rightly connected to the counter in the parent function. You never rebind the name, you are only altering the list counter refers to. Use the nonlocal keyword to mark counter as a free variable:

    def first_example():
        counter = 0
        def inc_counter():
            nonlocal counter
            counter += 1
        for i in range(10):
            inc_counter()

A: Variables captured by inner functions (aka "closures") can't be assigned to, unless you explicitly mark them global or nonlocal. In the first example, since you're trying to assign, counter is taken to refer to an inner local variable, not the outer variable. This inner variable is referenced before assignment, which is an error. In the second example you are not assigning to the variable itself; counter[0] += 1 only performs an index lookup (__getitem__) and an item assignment (__setitem__) on the list. So that's OK, counter still refers to the outer variable.
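The nonlocal fix can be checked in isolation with a small counter factory — a sketch showing both the nonlocal form and the question's list workaround side by side:

```python
def make_counter():
    count = 0
    def inc():
        nonlocal count  # mark count as the enclosing function's variable
        count += 1
        return count
    return inc

inc = make_counter()
for _ in range(10):
    inc()
print(inc())  # -> 11

# Without nonlocal, mutating through a list works too, exactly as in the
# question's second example: count[0] += 1 never rebinds the name.
def make_counter_list():
    count = [0]
    def inc():
        count[0] += 1  # no rebinding, so count stays a free variable
        return count[0]
    return inc

print(make_counter_list()())  # -> 1
```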