VS Code not able to recognize SQLAlchemy I have installed SQLAlchemy through the command line by: ``` pip install SQLAlchemy ``` and have also done: ``` pip install Flask-SQLAlchemy ``` I am trying to run these lines and it tells me I have an unresolved import: ``` from sqlalchemy import create_engine from sqlalchemy.orm import scoped_session, sessionmaker ``` I have run ``` >>>import sqlalchemy >>>sqlalchemy.__version__ ``` in the Python interpreter and it says I have version 1.3.17 installed, so I believe that means SQLAlchemy is installed correctly but VS Code does not see it for some reason. Any help is appreciated.
Did you create an environment? If not: **1. Create Environment:** On your file system, create a project folder for this tutorial, such as hello\_flask. In that folder, use the following command (as appropriate to your computer) to create a virtual environment named env based on your current interpreter: ``` # macOS/Linux sudo apt-get install python3-venv # If needed python3 -m venv env # Windows python -m venv env ``` **2. Select** Open the project folder in VS Code by running `code .`, or by running VS Code and using the File > Open Folder command. [![enter image description here](https://i.stack.imgur.com/lPtuj.png)](https://i.stack.imgur.com/lPtuj.png) **3. Set** In VS Code, open the Command Palette (View > Command Palette or (Ctrl+Shift+P)). Then select the Python: Select Interpreter command: [![enter image description here](https://i.stack.imgur.com/1gXas.png)](https://i.stack.imgur.com/1gXas.png) **4. Activate env** ``` source env/bin/activate (Linux/macOS) ``` or ``` env\scripts\activate (Windows) ``` **5. Install Flask** ``` # macOS/Linux pip3 install flask # Windows pip install flask ``` **6. Ensure Pylint is installed within this virtual environment** ``` pip install pylint ``` [Pylint](https://donjayamanne.github.io/pythonVSCodeDocs/docs/troubleshooting_linting/) **7. Set your python path to your env path like this:** ``` { "python.pythonPath": "/path/to/your/venv/bin/python", } ``` [VSCode Workspace](https://code.visualstudio.com/docs/getstarted/settings#_settings-file-locations) [flask via VS Code](https://code.visualstudio.com/docs/python/tutorial-flask)
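As a quick sanity check (a sketch, not part of the official tutorial): with the environment activated and selected in VS Code, confirm that SQLAlchemy resolves in the interpreter VS Code is now using:

```
# with the env activated, confirm the package resolves in this interpreter
python -c "import sqlalchemy; print(sqlalchemy.__version__)"

# if the import fails, the packages were installed into a different
# interpreter - reinstall them into this environment
pip install SQLAlchemy Flask-SQLAlchemy
```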
Hilt-Dagger ViewModel calling from Fragment I'm using a `ViewModel` to update the title in the action bar. SharedViewModel: ``` class SharedViewModel @ViewModelInject constructor( @Assisted private val savedStateHandle: SavedStateHandle ) : ViewModel() { val title: MutableLiveData<String> by lazy { MutableLiveData<String>() } val backButton: MutableLiveData<Boolean> by lazy { MutableLiveData<Boolean>() } } ``` MainActivity observer: ``` @AndroidEntryPoint ... sharedViewModel.title.observe(this, Observer { supportActionBar?.title = it }) ``` Using the code below seems to create a new instance in the `Fragment` (checked in the debugger): ``` @AndroidEntryPoint ... private val viewModel: SharedViewModel by viewModels() ``` But it seems to work this way: ``` val viewModel = ViewModelProvider(requireActivity()).get(SharedViewModel::class.java) ``` [Article reference](https://developer.android.com/training/dependency-injection/hilt-jetpack) Is this supposed to be done this way or am I doing something wrong? Thanks!
If you want to use a shared view model in a fragment you have to use `by activityViewModels()` instead of `by viewModels()`. Why does the next line work but `by viewModels()` doesn't? ``` ViewModelProvider(requireActivity()).get(SharedViewModel::class.java) ``` Because by default `viewModels()` has its `ownerProducer` argument value as `{ this }`. To understand it better here is the source code: ``` @MainThread inline fun <reified VM : ViewModel> Fragment.viewModels( noinline ownerProducer: () -> ViewModelStoreOwner = { this }, noinline factoryProducer: (() -> Factory)? = null ) = createViewModelLazy(VM::class, { ownerProducer().viewModelStore }, factoryProducer) ``` As you can see it is an extension function for the `Fragment` class. It means that the `ViewModelStoreOwner` is the **fragment**. As soon as the fragment is removed from the back stack, all view models stored in its view model store are gone. But if you use `by activityViewModels()` you use the `Activity` as the view model store owner. Notice the use of `requireActivity().viewModelStore` instead of `ownerProducer().viewModelStore`, which is a `Fragment` by default. ``` @MainThread inline fun <reified VM : ViewModel> Fragment.activityViewModels( noinline factoryProducer: (() -> Factory)? = null ) = createViewModelLazy(VM::class, { requireActivity().viewModelStore }, factoryProducer ?: { requireActivity().defaultViewModelProviderFactory }) ``` Eventually, `createViewModelLazy` is calling ``` ViewModelProvider(store, factory).get(viewModelClass.java) ``` which is the equivalent of you creating the view model by hand using ``` ViewModelProvider(requireActivity()).get(SharedViewModel::class.java) ``` ### How to fix the issue? Use `by activityViewModels()`: ``` @AndroidEntryPoint ... private val viewModel: SharedViewModel by activityViewModels() ``` Note: `by activityViewModels()` and `by viewModels()` are just lazy-loading implementations of `ViewModelProvider(...).get(...)`.
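With that in place, a quick sketch of the interaction (the fragment class, its layout id, and the title string are hypothetical; `SharedViewModel` is the one from the question):

```kotlin
import android.os.Bundle
import android.view.View
import androidx.fragment.app.Fragment
import androidx.fragment.app.activityViewModels
import dagger.hilt.android.AndroidEntryPoint

@AndroidEntryPoint
class DetailFragment : Fragment(R.layout.fragment_detail) { // placeholder layout
    // Scoped to the activity, so this is the same instance MainActivity observes.
    private val viewModel: SharedViewModel by activityViewModels()

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        // This update reaches the title observer registered in MainActivity.
        viewModel.title.value = "Detail"
    }
}
```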
What are the purposes of files in META-INF folder of an APK file? What are the purposes of the **MANIFEST.MF**, **CERT.SF** and **CERT.RSA** files in the META-INF folder of an Android APK?
An Android APK file is actually a JAR file (Java archive); it is just a plain ZIP file, with or without compression. JAR files are used by all types of Java applications; they have a specific structure - the META-INF folder contains the manifest information and other metadata about the Java package carried by the JAR file. The purposes of these files are as follows: 1. MANIFEST.MF: It contains various information used by the Java run-time environment when loading the JAR file, such as which is the main class to be run from the JAR file, the version of the package, the build number, the creator of the package, security policies/permissions of Java applets and Java Web Start packages, the list of file names in the JAR along with their SHA1 digests, etc. 2. CERT.SF: This contains the list of all files along with a SHA-1 digest of each file's corresponding entry in the manifest. 3. CERT.RSA: This contains the signed contents of the CERT.SF file along with the certificate chain of the public key used for signing the contents. As an example, refer to a sample APK file [here](https://www.apkfiles.com/apk-499233/cm-battery "here"). If you download and expand this file using a file decompression program like 7zip to your desktop, you can see a sample of these files. In the extracted directory, go to the sub-directory META-INF and view the contents of the MANIFEST.MF and \*.SF files. Here are the first few lines of these files: File MANIFEST.MF: ``` Manifest-Version: 1.0 Created-By: 1.7.0_60 (Oracle Corporation) Name: res/drawable-xxhdpi-v4/common_plus_signin_btn_text_dark_pressed.9.png SHA1-Digest: Db3E0/I85K9Aik2yJ4X1dDP3Wq0= Name: res/drawable-xhdpi-v4/opt_more_item_close_press.9.png SHA1-Digest: Xxm9cr4gDbEEnnYvxRWfzcIXBEM= Name: res/anim/accessibility_guide_translate_out.xml SHA1-Digest: dp8PyrXMy2IBxgTz19x7DATpqz8= ``` The file MCTN.SF (this sample's counterpart of CERT.SF; the .SF/.RSA pair is named after the signer) contains the digests of the file listings in MANIFEST.MF along with an empty line: ``` Signature-Version: 1.0 SHA1-Digest-Manifest-Main-Attributes: Sen4TNWb3NQLczkzN1idKh81Rjc= Created-By: 1.7.0_60 (Oracle Corporation) SHA1-Digest-Manifest: NAWTDC05HK+hfNtQ91J4AoL9F7s= Name: res/drawable-xxhdpi-v4/common_plus_signin_btn_text_dark_pressed.9.png SHA1-Digest: pvIZkdVTEuilCdx8UkrlY6ufPlw= Name: res/anim/accessibility_guide_translate_out.xml SHA1-Digest: XeX9Q2w41PRm3KiZ5p07x3CY6hc= ``` The file MCTN.RSA contains the signature in base64 encoding generated over the file MCTN.SF. See this reference for details on how to verify the signatures of APK packages - <http://theether.net/kb/100207>
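To see these files at work, you can verify an APK's signature from the command line. A quick sketch (`apksigner` ships with the Android SDK build-tools, `jarsigner` with the JDK; `app.apk` is a placeholder name):

```
# verify the signature and print the signing certificate chain
apksigner verify --print-certs app.apk

# older JAR-style verification, which checks the META-INF digests directly
jarsigner -verify -verbose -certs app.apk
```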
Scala set function parameter by name with a string Is it possible to set arguments of a function by name with a string? For example, given: ``` def foo(a: String, b: String) ``` can I invoke this function dynamically with a Map like ``` Map(("a", "bla"), ("b", "blub")) ``` ? If so, how? Thank you!
Applying a map of args reflectively to a method: ``` apm@mara:~/tmp$ scalam Welcome to Scala version 2.11.0-M4 (OpenJDK 64-Bit Server VM, Java 1.7.0_25). Type in expressions to have them evaluated. Type :help for more information. scala> val m = Map(("a", "bla"), ("b", "blub")) m: scala.collection.immutable.Map[String,String] = Map(a -> bla, b -> blub) scala> import reflect.runtime._ import reflect.runtime._ scala> import universe._ import universe._ scala> class Foo { def foo(a: String, b: String) = a + b } defined class Foo scala> val f = new Foo f: Foo = Foo@2278e0e7 scala> typeOf[Foo].member(TermName("foo")) res0: reflect.runtime.universe.Symbol = method foo scala> .asMethod res1: reflect.runtime.universe.MethodSymbol = method foo scala> currentMirror reflect f res2: reflect.runtime.universe.InstanceMirror = instance mirror for Foo@2278e0e7 scala> res2 reflectMethod res1 res3: reflect.runtime.universe.MethodMirror = method mirror for Foo.foo(a: String, b: String): java.lang.String (bound to Foo@2278e0e7) scala> res1.paramss.flatten res5: List[reflect.runtime.universe.Symbol] = List(value a, value b) scala> .map(s => m(s.name.decoded)) res7: List[String] = List(bla, blub) scala> res3.apply(res7: _*) res8: Any = blablub ``` Notice that you'll want to flatten the parameter lists, `paramss.flatten`. That is an ordered list of params which you can map to your args trivially. The clever naming convention `paramss` is meant to convey the multiple nesting; personally, I pronounce it as though it were spelled `paramses` in English. But now that I see it spelled out, I may pronounce it to rhyme with the Egyptian pharaoh.
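Pulling the REPL steps together, a sketch of a reusable helper (same 2.11-era reflection API as in the transcript; `invokeByName` is a made-up name, not a library function):

```scala
import scala.reflect.runtime.{currentMirror, universe => ru}
import ru._

class Foo { def foo(a: String, b: String) = a + b }

// Looks up `methodName` on Foo, orders the arguments by the method's
// parameter names, and invokes it reflectively on `target`.
def invokeByName(target: Foo, methodName: String, args: Map[String, String]): Any = {
  val method  = typeOf[Foo].member(TermName(methodName)).asMethod
  val mirror  = currentMirror.reflect(target).reflectMethod(method)
  val ordered = method.paramss.flatten.map(p => args(p.name.decoded))
  mirror(ordered: _*)
}

invokeByName(new Foo, "foo", Map("a" -> "bla", "b" -> "blub")) // blablub
```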
Mocking Ajax.IsRequest to return False I am trying to mock the Request.IsAjaxRequest() method of ASP.NET MVC. I found out how to do it in order for it to return true: ``` Expect.Call(_myController.Request.Headers["X-Requested-With"]).Return("XMLHttpRequest").Repeat.Any(); ``` This works and returns true. Now I need to test the other branch of the code. How can I mock it to return false? I have tried removing the mock altogether; it fails with: > > System.NullReferenceException : Object > reference not set to an instance of an > object. > > > If I do: ``` Expect.Call(_templateReportController.Request["X-Requested-With"]).Return(null).Repeat.Any(); ``` It fails with the same error. Entire Test: ``` /// <summary> /// Tests the Edit Action when calling via Ajax /// </summary> [Test] public void Test_Edit_AjaxRequest() { Group group = new Group(); group.ID = 1; group.Name = "Admin"; IList<Group> groupList = new List<Group>() { group }; Definition def = new Definition(); def.ID = 1; def.Name = "Report"; def.LastModified = DateTime.UtcNow; def.Groups.Add(group); using (_mocks.Record()) { Expect.Call(_myController.Request["X-Requested-With"]).Return("XMLHttpRequest").Repeat.Any(); Expect.Call(_DefBiz.GetAll<Group>()).Return(groupList); Expect.Call(_DefBiz.Get<Definition>(1)).Return(def); } myController.DefAccess = _DefBiz; PartialViewResult actual; using (_mocks.Playback()) { actual = (PartialViewResult)myController.Edit(1); } } ``` Any advice? Cheers
The reason you are getting `NullReferenceException` is because you never stubbed the `controller.Request` object in your unit test, and when you invoke the controller action which uses `Request.IsAjaxRequest()` it throws. Here's how you could mock the context using `Rhino.Mocks`: ``` [TestMethod] public void Test_Ajax() { // arrange var sut = new HomeController(); var context = MockRepository.GenerateStub<HttpContextBase>(); var request = MockRepository.GenerateStub<HttpRequestBase>(); context.Stub(x => x.Request).Return(request); // indicate AJAX request request.Stub(x => x["X-Requested-With"]).Return("XMLHttpRequest"); sut.ControllerContext = new ControllerContext(context, new RouteData(), sut); // act var actual = sut.Index(); // assert // TODO: ... } [TestMethod] public void Test_Non_Ajax() { // arrange var sut = new HomeController(); var context = MockRepository.GenerateStub<HttpContextBase>(); var request = MockRepository.GenerateStub<HttpRequestBase>(); context.Stub(x => x.Request).Return(request); sut.ControllerContext = new ControllerContext(context, new RouteData(), sut); // act var actual = sut.Index(); // assert // TODO: ... } ``` --- And here's a better alternative (which I would personally recommend) in order to avoid all the plumbing code. Using [MVCContrib.TestHelper](http://mvccontrib.codeplex.com/wikipage?title=TestHelper) (which is based on `Rhino.Mocks`) your unit test might be simplified to this: ``` [TestClass] public class HomeControllerTests : TestControllerBuilder { private HomeController _sut; [TestInitialize()] public void MyTestInitialize() { _sut = new HomeController(); this.InitializeController(_sut); } [TestMethod] public void HomeController_Index_Ajax() { // arrange _sut.Request.Stub(x => x["X-Requested-With"]).Return("XMLHttpRequest"); // act var actual = _sut.Index(); // assert // TODO: ... } [TestMethod] public void HomeController_Index_Non_Ajax() { // act var actual = _sut.Index(); // assert // TODO: ... } } ``` Much prettier. It also allows you to write much more expressive asserts on the action results. Check out the docs or ask if more info is needed.
ASP .Net VNext and Owin I am exploring the features of ASP.NET vNext. In previous versions of .NET, OWIN is used for self-hosting of applications as well as hosting in any OWIN-compatible web server, including IIS. As per my understanding, ASP.NET vNext itself is host agnostic and has commands for self-hosting of the application. Now, what is the place of OWIN in ASP.NET vNext? I have seen many articles talking about OWIN and ASP.NET vNext separately, but I need to know how to put them together. I am a beginner. Correct me if I am wrong.
You are right about your confusion. Let me share what I have learned about OWIN and ASP.NET vNext from my experience with it (mainly with VS 2015 Preview and VS 2015 CTP5): 1. ASP.NET vNext integrates an OWIN-style pipeline into its implementation. In vNext the interface is called "IApplicationBuilder", while in OWIN it is "IAppBuilder". 2. As of now, not every component built on OWIN can be used in ASP.NET vNext, but as time goes on components will be ported or get separate versions with vNext support. Right now you can use an OWIN component in ASP.NET vNext through the IApplicationBuilder extension method UseOwin: <https://github.com/aspnet/HttpAbstractions/blob/dev/src/Microsoft.AspNet.Owin/OwinExtensions.cs> 3. If you look at the Startup.cs file of an ASP.NET vNext project in VS 2015 Preview or CTP5, you will recognize a concept very similar to OWIN: middleware registered into a pipeline. The implementations certainly differ, but ASP.NET vNext does allow OWIN components to run.
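As an illustration, a minimal sketch of running a raw OWIN middleware through that UseOwin extension (namespaces and signatures as of the vNext previews; treat them as version-dependent assumptions):

```csharp
using Microsoft.AspNet.Builder;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Adapts an OWIN middleware, i.e. a Func<AppFunc, AppFunc> where
        // AppFunc is Func<IDictionary<string, object>, Task>, into the pipeline.
        app.UseOwin(pipeline =>
        {
            pipeline(next => async env =>
            {
                // env is the standard OWIN environment dictionary
                await next(env);
            });
        });
    }
}
```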
How to enable http/2 in Asp.net core 2.2 kestrel (self hosted) on server 2016 I have just updated my project from asp.net core 2.1 to **2.2** (the main reason was Brotli and http/2 support). I was able to rebuild, deploy and start the app (self-hosted console app). The app runs on a Windows 2016 server and has https enabled (with a public certificate). The server supports TLS 1.2 (checked over the internet with a tool). Unfortunately http/2 doesn't work, whereas Brotli compression seems to work. My client also supports http/2 - if I look at my website with Google Chrome, I can see that some of my referenced files are served over http/2, but not my content. According to the information found on the web: - My configuration should meet the prerequisites for http/2 - Asp.net core 2.2 should use http/2 **automatically** (without any change in code or settings) by default, and fall back to http/1.1 automatically if a client doesn't support http/2 What am I missing here...?
Just found the solution myself... I use an **appsettings.json** to configure Kestrel on the server: ``` { "Logging": { "LogLevel": { "Default": "Warning" } }, "AllowedHosts": "*" , "Kestrel": { "EndPoints": { "Http": { "Url": "http://localhost:5000" }, "HttpsInlineCertFile": { "Url": "https://nnn.nnn.n.n:nnnnn", "Protocols": "Http1AndHttp2", "Certificate": { "Path": "./certificate.pfx", "Password": "Password", "AllowInvalid": "true" } } } } } ``` I had to add the entry: **"Protocols": "Http1AndHttp2",** Now it seems to work (Chrome now shows "h2" as the protocol). But the result is not as expected (it seems to be a bit slower now..). However, this was the solution for my initial posting. Hope this helps someone...
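The same can be done in code instead of configuration. A sketch for ASP.NET Core 2.2 (the endpoint, certificate path and password are placeholders):

```csharp
using System.Net;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

public class Program
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                options.Listen(IPAddress.Any, 443, listenOptions =>
                {
                    // Enable HTTP/2 alongside HTTP/1.1 on this endpoint
                    listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
                    listenOptions.UseHttps("./certificate.pfx", "Password");
                });
            })
            .UseStartup<Startup>() // the app's existing Startup class
            .Build()
            .Run();
}
```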
Which Ports need to be accessible on a Domain Controller for Clients to logon? We are currently segmenting our network. We will move the servers to a different subnet than the clients. Of course the clients still need access to the domain controller to authenticate against it. I found various articles about the ports that need to be accessible between domain controllers to allow replication, but none about the ports that are important for the clients. I'm pretty sure the clients won't directly access the LDAP database, for example, and I want to reduce the attack surface as much as possible. So which ports are needed for a client to be able to work with a domain controller?
``` tcp/53 DNS tcp/88 Kerberos tcp/135 RPC tcp/445 sysvol share tcp/389 LDAP tcp/464 Kerberos password (Mac/Unix clients) tcp/636 LDAP SSL (if the domain controllers have/need/use certificates) tcp/1688 KMS (if KMS is used. Not necessarily AD, but the SRV record is in AD and clients need to communicate with the KMS). tcp/3268 LDAP GC tcp/3269 LDAP GC SSL (if the domain controllers have/need/use certificates) tcp/49152 through 65535 (Windows Vista/2008 and higher) aka “high ports” udp/53 DNS udp/88 Kerberos udp/123 time udp/135 RPC udp/389 LDAP udp/445 sysvol share ``` You can minimize the high-port range by configuring a static RPC port for Active Directory. Restricting Active Directory RPC traffic to a specific port <https://support.microsoft.com/en-us/kb/224196> It's usually a good idea to force Kerberos to use only tcp/ip, particularly if you have a large, complex network, or accounts are members of a large number of groups/have a large token size. How to force Kerberos to use TCP instead of UDP in Windows <https://support.microsoft.com/en-us/kb/244474>
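Once the firewall rules are in place, you can spot-check them from a client in the new subnet. A quick sketch in PowerShell (Test-NetConnection is built in from Windows 8/Server 2012 on; the DC name is a placeholder):

```powershell
$dc = "dc01.example.com"  # hypothetical domain controller
foreach ($port in 53, 88, 135, 389, 445, 464, 636, 3268) {
    Test-NetConnection -ComputerName $dc -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```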
Error unmarshaling a simple xml in Golang I'm trying to write a very simple parser in Go for a large xml file (the [dblp.xml](https://dblp.uni-trier.de/)), an excerpt of which is below: ``` <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE dblp SYSTEM "dblp.dtd"> <dblp> <article key="journals/cacm/Gentry10" mdate="2010-04-26"> <author>Craig Gentry</author> <title>Computing arbitrary functions of encrypted data.</title> <pages>97-105</pages> <year>2010</year> <volume>53</volume> <journal>Commun. ACM</journal> <number>3</number> <ee>http://doi.acm.org/10.1145/1666420.1666444</ee> <url>db/journals/cacm/cacm53.html#Gentry10</url> </article> <article key="journals/cacm/Gentry10" mdate="2010-04-26"> <author>Craig Gentry Number2</author> <title>Computing arbitrary functions of encrypted data.</title> <pages>97-105</pages> <year>2010</year> <volume>53</volume> <journal>Commun. ACM</journal> <number>3</number> <ee>http://doi.acm.org/10.1145/1666420.1666444</ee> <url>db/journals/cacm/cacm53.html#Gentry10</url> </article> </dblp> ``` My code is as follows and it looks like there's something going on at `xml.Unmarshal(byteValue, &articles)`, as I cannot get any of the xml's values in the output. Can you help me with what is wrong with my code? ``` package main import ( "encoding/xml" "fmt" "io/ioutil" "os" ) // Contains the array of articles in the dblp xml type Dblp struct { XMLName xml.Name `xml:"dblp"` Dblp []Article `xml:"article"` } // Contains the article element tags and attributes type Article struct { XMLName xml.Name `xml:"article"` Key string `xml:"key,attr"` Year string `xml:"year"` } func main() { xmlFile, err := os.Open("TestDblp.xml") if err != nil { fmt.Println(err) } fmt.Println("Successfully Opened TestDblp.xml") // defer the closing of our xmlFile so that we can parse it later on defer xmlFile.Close() // read our opened xmlFile as a byte array. byteValue, _ := ioutil.ReadAll(xmlFile) var articles Dblp fmt.Println("Entered var") // we unmarshal our byteArray which contains our // xmlFiles content into 'users' which we defined above xml.Unmarshal(byteValue, &articles) for i := 0; i < len(articles.Dblp); i++ { fmt.Println("Entered loop") fmt.Println("get title: " + articles.Dblp[i].Key) fmt.Println("get year: " + articles.Dblp[i].Year) } } ```
You have a specific line in your code that returns an error ``` xml.Unmarshal(byteValue, &articles) ``` If you change that to ``` err = xml.Unmarshal(byteValue, &articles) if err != nil { fmt.Println(err.Error()) } ``` you'll see an error being reported: `xml: encoding "ISO-8859-1" declared but Decoder.CharsetReader is nil`. As a best practice, you should always check for errors being returned. To fix that, you can either remove the encoding attribute (`encoding="ISO-8859-1"`) from the XML or change the code of your unmarshalling a bit: ``` package main import ( "encoding/xml" "fmt" "io" "os" "golang.org/x/text/encoding/charmap" ) // Contains the array of articles in the dblp xml type Dblp struct { XMLName xml.Name `xml:"dblp"` Dblp []Article `xml:"article"` } // Contains the article element tags and attributes type Article struct { XMLName xml.Name `xml:"article"` Key string `xml:"key,attr"` Year string `xml:"year"` } func main() { xmlFile, err := os.Open("TestDblp.xml") if err != nil { fmt.Println(err) } fmt.Println("Successfully Opened TestDblp.xml") // defer the closing of our xmlFile so that we can parse it later on defer xmlFile.Close() var articles Dblp decoder := xml.NewDecoder(xmlFile) decoder.CharsetReader = makeCharsetReader err = decoder.Decode(&articles) if err != nil { fmt.Println(err) } for i := 0; i < len(articles.Dblp); i++ { fmt.Println("Entered loop") fmt.Println("get title: " + articles.Dblp[i].Key) fmt.Println("get year: " + articles.Dblp[i].Year) } } func makeCharsetReader(charset string, input io.Reader) (io.Reader, error) { if charset == "ISO-8859-1" { // Windows-1252 is a superset of ISO-8859-1, so should do here return charmap.Windows1252.NewDecoder().Reader(input), nil } return nil, fmt.Errorf("Unknown charset: %s", charset) } ``` Running the above program results in: ``` Successfully Opened TestDblp.xml Entered loop get title: journals/cacm/Gentry10 get year: 2010 Entered loop get title: journals/cacm/Gentry10 get year: 2010 ```
How to recursively simplify a mathematical expression with AST in python3? I have this mathematical expression: ``` tree = ast.parse('1 + 2 + 3 + x') ``` which corresponds to this abstract syntax tree: ``` Module(body=[Expr(value=BinOp(left=BinOp(left=BinOp(left=Num(n=1), op=Add(), right=Num(n=2)), op=Add(), right=Num(n=3)), op=Add(), right=Name(id='x', ctx=Load())))]) ``` and I would like to simplify it - that is, get this: ``` Module(body=[Expr(value=BinOp(left=Num(n=6), op=Add(), right=Name(id='x', ctx=Load())))]) ``` According to [the documentation](https://docs.python.org/3/library/ast.html), I should use the NodeTransformer class. A suggestion in the docs says the following: > > Keep in mind that if the node you’re operating on has child nodes you > must either transform the child nodes yourself or call the > generic\_visit() method for the node first. > > > I tried implementing my own transformer: ``` class Evaluator(ast.NodeTransformer): def visit_BinOp(self, node): print('Evaluating ', ast.dump(node)) for child in ast.iter_child_nodes(node): self.visit(child) if type(node.left) == ast.Num and type(node.right) == ast.Num: print(ast.literal_eval(node)) return ast.copy_location(ast.Subscript(value=ast.literal_eval(node)), node) else: return node ``` What it should do in this specific case is simplify 1+2 into 3, and then 3+3 into 6. It does simplify the binary operations I want to simplify, but it doesn't update the original syntax tree. I tried different approaches but I still don't get how I can recursively simplify all binary operations (in a depth-first manner). Could anyone point me in the right direction? Thank you.
There are three possible return values for the `visit_*` methods: 1. `None` which means the node will be deleted, 2. `node` (the node itself) which means no change will be applied, 3. A new node, which will replace the old one. So when you want to replace the `BinOp` with a `Num` you need to return a new `Num` node. The evaluation of the expression cannot be done via `ast.literal_eval` as this function only evaluates literals (not arbitrary expressions). Instead you can use `eval` for example. So you could use the following node transformer class: ``` import ast class Evaluator(ast.NodeTransformer): ops = { ast.Add: '+', ast.Sub: '-', ast.Mult: '*', ast.Div: '/', # define more here } def visit_BinOp(self, node): self.generic_visit(node) if isinstance(node.left, ast.Num) and isinstance(node.right, ast.Num): # On Python <= 3.6 you can use ast.literal_eval. # value = ast.literal_eval(node) value = eval(f'{node.left.n} {self.ops[type(node.op)]} {node.right.n}') return ast.Num(n=value) return node tree = ast.parse('1 + 2 + 3 + x') tree = ast.fix_missing_locations(Evaluator().visit(tree)) print(ast.dump(tree)) ```
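Side note: on Python 3.8+ the `ast.Num` node is deprecated in favor of `ast.Constant`. A sketch of the same transformer for newer versions, using the `operator` module instead of `eval`:

```python
import ast
import operator

class Evaluator(ast.NodeTransformer):
    # Map AST operator node types to the corresponding Python functions.
    ops = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold the children first (depth-first)
        if (isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant)
                and type(node.op) in self.ops):
            value = self.ops[type(node.op)](node.left.value, node.right.value)
            return ast.copy_location(ast.Constant(value=value), node)
        return node

tree = ast.parse('1 + 2 + 3 + x')
tree = ast.fix_missing_locations(Evaluator().visit(tree))
print(ast.dump(tree))
```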
How to add a UserControl to a Window in BlendDesigner? I am a beginner in using Blend and WPF in general. I created three user controls. I also created a Window which should house the user controls. Is there any designer way, like drag-dropping the user control into the Window, which will make it appear automatically? Or is typing in the XAML the only way to include it?
At the bottom of your Blend toolbar (the vertically-oriented list of tools and controls -- on the left, by default), there is a button that looks like right-facing chevrons/arrow. Clicking this will expand the list of controls at your disposal. You can browse, but by default your cursor will now be in a search box. Type the first few letters of your control name, and you should see it appear. Click it. Now, at the *very* bottom of the toolbar, should be your control. Hover your mouse over it to see the name and description, if any. In the Objects and Timeline pane, select the window or panel to which you want the control added, and then double-click that button at the bottom of the toolbar. Alternatively, you can drag-and-drop the control into your form or into the Objects and Timeline pane.
Cascades settings in JPA and foreign key violation in hibernate I have two classes: 1. Parent 2. Child In the database the Child table has a column ParentId -> a typical One (Parent) -> Many (Children) relation Now I create the two entities like this: ``` public class Parent { @OneToMany(mappedBy="Parent", fetch = FetchType.EAGER, cascade = CascadeType.ALL) public Set<Child> getChildern() { ... } } public class Child { @ManyToOne(fetch = FetchType.EAGER, cascade=CascadeType.ALL) @JoinColumn(name="ParentId") public Parent getParent() { ... } } ``` Now I have two scenarios: 1. Parent gets deleted -> what should happen? 2. Child gets deleted -> what should happen? Bonus questions: 1. Do I always need to map both sides of the relation (OneToMany and ManyToOne), or can I just have ManyToOne and not care in the parent where I have children? 2. What could cause Hibernate to give me a foreign key constraint violation message for a parent which has no children?
First of all I'm surprised this code works at all. IMO `mappedBy="Parent"` should actually be `mappedBy="parent"` (note the lower-case 'p') because the parent property of the `Child` class is called `parent` and not `Parent`. Second, I suggest you place the annotations on the properties rather than on the accessor methods. I find it makes the whole code - more readable - easier to maintain because [getters/setters can then be added behind the scenes by Lombok](http://projectlombok.org/features/GetterSetter.html) Answers to your questions depend on what exactly you mean by "get deleted". I assume you mean "deleted through persistence manager". BUT just in case you expect/want that a child is removed by the JPA provider if you do `parent.getChildren().remove(x)` then you need to set [`orphanRemoval = true`](http://docs.oracle.com/cd/E19798-01/821-1841/giqxy/index.html) on `OneToMany`. **Question 1** Parent and all children are deleted. That's the common case. **Question 2** Parent and all children are deleted. That's a rather odd use case. Usually cascade delete is only applied on the one-to-many relationship. **Bonus 1** > > All relationships in Java and JPA are unidirectional, in that if a > source object references a target object there is no guarantee that > the target object also has a relationship to the source object. > > > from the excellent [Java Persistence wiki book](http://en.wikibooks.org/wiki/Java_Persistence/OneToMany). **Bonus 2** Dunno. Is the `ConstraintViolationException` coming from the underlying database? Or put differently, what does the DDL for the two tables look like? Was it generated by Hibernate?
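For reference, a sketch of the corrected mapping under the points above (field-level annotations, lower-case `mappedBy`, cascade only on the parent side, `orphanRemoval` enabled; getters/setters omitted, and each class would live in its own file):

```java
import java.util.HashSet;
import java.util.Set;
import javax.persistence.*;

@Entity
public class Parent {
    @Id @GeneratedValue
    private Long id;

    // Cascade from parent to children; removing a child from the set
    // also deletes it, thanks to orphanRemoval.
    @OneToMany(mappedBy = "parent", fetch = FetchType.EAGER,
               cascade = CascadeType.ALL, orphanRemoval = true)
    private Set<Child> children = new HashSet<>();
}

@Entity
public class Child {
    @Id @GeneratedValue
    private Long id;

    // No cascade here: deleting a child should not delete its parent.
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "ParentId")
    private Parent parent;
}
```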
Redirect From Action Filter Attribute What is the best way to do a redirect in an `ActionFilterAttribute`? I have an `ActionFilterAttribute` called `IsAuthenticatedAttributeFilter` that checks the value of a session variable. If the variable is false, I want the application to redirect to the login page. I would prefer to redirect using the route name `SystemLogin`; however, any redirect method at this point would be fine.
**Set filterContext.Result** With the route name: ``` filterContext.Result = new RedirectToRouteResult("SystemLogin", routeValues); ``` You can also do something like: ``` filterContext.Result = new ViewResult { ViewName = SharedViews.SessionLost, ViewData = filterContext.Controller.ViewData }; ``` --- If you want to use `RedirectToAction`: You could make a public `RedirectToAction` method on your controller (*preferably on its base controller*) that simply calls the protected `RedirectToAction` from `System.Web.Mvc.Controller`. Adding this method allows for a public call to *your* `RedirectToAction` from the filter. ``` public new RedirectToRouteResult RedirectToAction(string action, string controller) { return base.RedirectToAction(action, controller); } ``` Then your filter would look something like: ``` public override void OnActionExecuting(ActionExecutingContext filterContext) { var controller = (SomeControllerBase) filterContext.Controller; filterContext.Result = controller.RedirectToAction("index", "home"); } ```
Pandas and Cassandra: numpy array format incompatibility I'm using the Python cassandra driver to connect and query our Cassandra cluster. I want to manipulate my data via Pandas; there is an area in the documentation for the cassandra driver that mentions this exactly: <https://datastax.github.io/python-driver/api/cassandra/protocol.html> > > NumpyProtocolHandler: deserializes results directly into NumPy arrays. > This facilitates efficient integration with analysis toolkits such as > Pandas. > > > Following the above instructions and doing a SELECT query in Cassandra, I can see the output (via the type() function) as a: ``` <class 'cassandra.cluster.ResultSet'> ``` Iterating through the results, this is what printing a row comes up as: ``` {u'reversals_rejected': array([0, 0]), u'revenue': array([ 0, 10]), u'reversals_revenue': array([0, 0]), u'rejected': array([3, 1]), u'impressions_positive': array([3, 3]), u'site_user_id': array([226226, 354608], dtype=int32), u'error': array([0, 0]), u'impressions_negative': array([0, 0]), u'accepted': array([0, 2])} ``` (I've limited the query results, I'm working with much larger amounts of data - hence wanting to use numpy and pandas). My knowledge of Pandas is limited; I attempted to run a very basic operation: ``` rslt = cassandraSession.execute("SELECT accepted FROM table") test = rslt[["accepted"]].head(1) ``` This outputs the following error: ``` Traceback (most recent call last): File "/UserStats.py", line 27, in <module> test = rslt[["accepted"]].head(1) File "cassandra/cluster.py", line 3380, in cassandra.cluster.ResultSet.__getitem__ (cassandra/cluster.c:63998) TypeError: list indices must be integers, not list ``` I understand the error, I just don't know how to "transition" from this supposed numpy array to being able to use Pandas.
The short answer is: ``` df = pd.DataFrame(rslt[0]) test = df.head(1) ``` rslt[0] gives you your data as a Python dict, that can be easily converted to a Pandas dataframe. For a complete solution: ``` import pandas as pd from cassandra.cluster import Cluster from cassandra.protocol import NumpyProtocolHandler from cassandra.query import tuple_factory cluster = Cluster( contact_points=['your_ip'], ) session = cluster.connect('your_keyspace') session.row_factory = tuple_factory session.client_protocol_handler = NumpyProtocolHandler prepared_stmt = session.prepare ( "SELECT * FROM ... WHERE ...;") bound_stmt = prepared_stmt.bind([...]) rslt = session.execute(bound_stmt) df = pd.DataFrame(rslt[0]) ``` **Note:** The above solution will only get you part of the data if the query is large. So you should do: ``` df = pd.DataFrame() for r in rslt: df = df.append(r) ```
Unable to set up DKIM TXT value as DNS record I have a domain name whose DNS is managed via Google Cloud DNS. And I have a Google Apps for Work account with that domain name. I wanted to set up DKIM authentication, but when I try to save the corresponding TXT record I get the error that the tag is invalid. I did the same before and it worked perfectly. I checked the old setup and I saw that the old DKIM record was about half the length. The new one seems to be too long for a TXT record in the Google Cloud Platform. Does anyone have a solution?
Yeah, you have to split the record as described in this article: <https://support.google.com/a/answer/173535> > > If your domain provider limits the size of the TXT record value to 255 characters, you can't enter the DKIM key as a single entry in the DNS records. In this case, split the key into multiple quoted text strings and enter them together in the TXT record value field. For example, split the DKIM key into two parts as follows: > > > > ``` > "v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAraC3pqvqTkAfXhUn7Kn3JUNMwDkZ65ftwXH58anno/bElnTDAd/idk8kWpslrQIMsvVKAe+mvmBEnpXzJL+0LgTNVTQctUujyilWvcONRd/z37I34y6WUIbFn4ytkzkdoVmeTt32f5LxegfYP4P/w7QGN1mOcnE2Qd5SKIZv3Ia1p9d6uCaVGI8brE/7zM5c/" "zMthVPE2WZKA28+QomQDH7ludLGhXGxpc7kZZCoB5lQiP0o07Ful33fcED73BS9Bt1SNhnrs5v7oq1pIab0LEtHsFHAZmGJDjybPA7OWWaV3L814r/JfU2NK1eNu9xYJwA8YW7WosL45CSkyp4QeQIDAQAB" > ``` > > The two quoted strings have to stay on the same line - in the same box in the Cloud DNS interface rather than in two separate boxes.
Run a Pre-Build Event only when Rebuilding I'm compiling a project which depends on a couple of other open-source projects (specifically, Zlib and LibTIFF), for which Windows makefiles (intended for use with nmake) are provided. I'd like to arrange things such that, whenever I perform a 'Rebuild' on my project, the open-source projects are also rebuilt with the appropriate configuration (Debug/Release, 32/64 bit etc.). My own project is fairly straightforward but not trivial. I squandered my first few days by trying to figure out how to generate nmake makefiles of my own, either directly or using CMake. Despite having a reasonable amount of experience with GNU make, this caused me to lose the will to live. After coming out of rehab, I decided to stick with a Visual Studio project file for building my own project, and to define a 'Pre-Build event' to execute nmake on the supplier-provided makefiles for my open source libraries. For example, in my Visual Studio 2010 project page for a 32-bit Release build, using the Intel C++ compiler, I write my Pre-Build Event command line as: ``` call "C:\Program Files (x86)\Intel\Composer XE 2013 SP1\bin\ipsxe-comp-vars.bat" ia32 vs2010 cd ..\..\..\zlib-1.2.8 nmake /f win32\Makefile.msc zlib.lib cd ..\tiff-4.0.3 nmake /f Makefile.vc lib ``` (I realise I could just as well put the above into a batch file, and call the batch file, but I don't want to give myself another file to keep track of). Anyway, this works great, except for one thing: if I decide to change my configuration, I'd like to execute an 'nmake clean' on the two libraries to ensure that they're rebuilt with my new configuration, something along the lines of: ``` call "C:\Program Files (x86)\Intel\Composer XE 2013 SP1\bin\ipsxe-comp-vars.bat" ia32 vs2010 cd ..\..\..\zlib-1.2.8 IF "%BUILDACTION%"=="REBUILD" nmake /f win32\Makefile.msc clean nmake /f win32\Makefile.msc zlib.lib cd ..\tiff-4.0.3 IF "%BUILDACTION%"=="REBUILD" nmake /f Makefile.vc clean nmake /f Makefile.vc lib ``` Unfortunately, I can't seem to identify any environment variable which satisfies the role of BUILDACTION here, nor can I see a straightforward way of defining one. So, is there any way of doing what I'm trying to do here, or am I on a hiding to nothing?
[Stijn](https://stackoverflow.com/users/128384/stijn) put me on the right track: the final solution was to manually add the following section right at the end of my .vcxproj file (immediately before the </Project> which formed the very last line): ``` <Target Name="OnCleanOnly" AfterTargets="Clean"> <Exec Command="call &quot;C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat&quot; x86 cd ..\..\..\zlib-1.2.8 nmake /f win32\Makefile.msc clean cd ..\tiff-4.0.3 nmake /f Makefile.vc clean" /> <Message Text="Cleaning libraries" Importance="High" /> </Target> ``` This ensures, as required, that the 'nmake clean' commands are run every time either 'Clean Solution' or 'Rebuild Solution' is selected from the 'Build' menu. This is *precisely* what I was hoping to achieve.
Admob on Android: banner space not reserved before loading We have been using **AdMob** in our Android app for more than 4 years. In the last few days, we encountered an issue with AdMob, without modifying any code. As you can see from the picture below: - PREVIOUSLY, the banner space was reserved before the banner was loaded - NOW, the banner space is not reserved before loading, creating a very annoying experience for the user, who sees content shifting down after the banner is loaded [![enter image description here](https://i.stack.imgur.com/QipQI.png)](https://i.stack.imgur.com/QipQI.png) === Here is a description of our implementation: we are placing our banner at about 20% from the top of the screen of a fragment, inside a LinearLayout "banner\_container" ``` <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" > .... <LinearLayout android:id="@+id/banner_container" android:visibility="gone" android:layout_width="match_parent" android:layout_height="wrap_content" android:orientation="vertical" /> .... </LinearLayout> ``` In the Fragment's "onCreateView" we add the banner to the container: ``` @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { ... adView = new AdView(getActivity()); adView.setAdSize(AdSize.SMART_BANNER); adView.setAdUnitId(AD_UNIT_ID); LinearLayout mBannerContainer = rootView.findViewById(R.id.banner_container); mBannerContainer.setVisibility(View.VISIBLE); mBannerContainer.addView(adView); AdRequest adRequest = new AdRequest.Builder().addTestDevice(AdRequest.DEVICE_ID_EMULATOR) .build(); adView.loadAd(adRequest); ... } ``` === How can we revert to the situation where the banner space is already reserved on loading?
Since Admob does not reserve the banner height before loading it, the solution seems to be to do it manually. According to the [Admob guide for Banner Ads:](https://developers.google.com/admob/android/banner) > > Smart Banners are ad units that will render screen-wide banner ads on any screen size across different devices in either orientation. Smart Banners help deal with increasing screen fragmentation across different devices by "smartly" detecting the width of the phone in its current orientation, and making the ad view that size. > > > Three ad heights (in dp, density-independent pixel) are available: > > > 32 - used when the screen height of a device is less than 400 > 50 - used when the screen height of a device is between 400 and 720 > 90 - used when the screen height of a device is greater than 720 > > > **Solution 1** You can compute the height using the methods below: ``` public static int getAdViewHeightInDP(Activity activity) { int adHeight = 0; int screenHeightInDP = getScreenHeightInDP(activity); if (screenHeightInDP < 400) adHeight = 32; else if (screenHeightInDP >= 400 && screenHeightInDP <= 720) adHeight = 50; else adHeight = 90; return adHeight; } public static int getScreenHeightInDP(Activity activity) { DisplayMetrics displayMetrics = ((Context) activity).getResources().getDisplayMetrics(); float screenHeightInDP = displayMetrics.heightPixels / displayMetrics.density; return Math.round(screenHeightInDP); } ``` In your "banner\_container" remove: ``` android:visibility="gone" ``` Change the Fragment's "onCreateView" (note that LayoutParams expects pixels, so the dp value is converted first): ``` @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { ... adView = new AdView(getActivity()); adView.setAdSize(AdSize.SMART_BANNER); adView.setAdUnitId(AD_UNIT_ID); LinearLayout mBannerContainer = rootView.findViewById(R.id.banner_container); int adHeightPx = Math.round(getAdViewHeightInDP(this.getActivity()) * getResources().getDisplayMetrics().density); mBannerContainer.setLayoutParams( new LinearLayout.LayoutParams( ViewGroup.LayoutParams.MATCH_PARENT, adHeightPx )); mBannerContainer.addView(adView); AdRequest adRequest = new AdRequest.Builder().addTestDevice(AdRequest.DEVICE_ID_EMULATOR) .build(); adView.loadAd(adRequest); ... } ``` **Solution 2** Instead of using the methods "getAdViewHeightInDP" and "getScreenHeightInDP", the method "AdSize.SMART\_BANNER.getHeightInPixels(this)" is used, which already returns the height in pixels. In your "banner\_container" remove: ``` android:visibility="gone" ``` Change the Fragment's "onCreateView": ``` @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { ... adView = new AdView(getActivity()); adView.setAdSize(AdSize.SMART_BANNER); adView.setAdUnitId(AD_UNIT_ID); LinearLayout mBannerContainer = rootView.findViewById(R.id.banner_container); mBannerContainer.setLayoutParams( new LinearLayout.LayoutParams( ViewGroup.LayoutParams.MATCH_PARENT, AdSize.SMART_BANNER.getHeightInPixels(this.getActivity()) )); mBannerContainer.addView(adView); AdRequest adRequest = new AdRequest.Builder().addTestDevice(AdRequest.DEVICE_ID_EMULATOR) .build(); adView.loadAd(adRequest); ... } ```
Linq Query needed to satisfy a WECO rule I'm creating an application for a manufacturing company. We're implementing WECO (Western Electric Co.) rules for statistical process control. One of the rules states that if any 2 out of 3 consecutive values exceed some target value, issue an alarm. So, to keep things simple, say I have the following list of values: ``` List<double> doubleList = new List<double>() { .89,.91,.93,.95,1.25,.76,.77,.78,.77,1.01,.96,.99, .88,.88,.96,.89,1.01 }; ``` From this list, I want to pull out all sequences where any 2 out of 3 consecutive values are greater than .94. The Linq query should return the following six sequences: ``` .93, .95, 1.25 (3rd, 4th and 5th values) .95, 1.25, .76 (4th, 5th and 6th values) .77, 1.01, .96 (9th, 10th and 11th values) 1.01, .96, .99 (10th, 11th and 12th values) .96, .99, .88 (11th, 12th and 13th values) .96, .89, 1.01 (15th, 16th and 17th values) ``` Notice the last sequence. The two values out of the three are not consecutive. That's ok, they don't need to be. Just any 2 out of 3 consecutive. I've thought about starting with the first value, taking three and checking for any two out of that three, moving to the second value, doing the same, moving to the third and doing the same, etc., in a loop. This would work of course, but would be slow. I'm assuming there must be a faster way to do this.
You could write an extension method: ``` public static IEnumerable<IEnumerable<double>> GetTroubleSequences(this IEnumerable<double> source, double threshold) { int elementCount = source.Count(); for (int idx = 0; idx < elementCount - 2; idx++) { var subsequence = source.Skip(idx).Take(3); if (subsequence.Aggregate(0, (count, num) => num > threshold ? count + 1 : count) >= 2) { yield return subsequence.ToList(); } } } ``` Now you can just use it on your input list: ``` var problemSequences = doubleList.GetTroubleSequences(0.94); ``` Note that the above extension method is inefficient; if your input list is very long you should consider just a regular for loop with a sliding window so you only iterate over the sequence once - or rewrite the extension method accordingly (i.e. limit to `IList` input so you can use the indexer instead of having to use `Skip` and `Take`). Update: Here's a version requiring `IList` so we can use the indexer: ``` public static IEnumerable<IEnumerable<double>> GetTroubleSequences(this IList<double> source, double threshold) { for (int idx = 0; idx < source.Count - 2; idx++) { int count = 0; for (int i = idx; i < idx + 3; i++) if (source[i] > threshold) count++; if (count >= 2) yield return new[] { source[idx], source[idx + 1], source[idx + 2] }; } } ``` This version traverses the list once and, for each item in the list, evaluates the next 3 items starting with the current item, so it is still O(n).
OpenGL - selective world rendering I'm building a miniature city with the basic minimum looks of a city (roads, buildings, trees etc) where you can move around. I know that rendering the whole model set in each frame doesn't work... So can anyone give me an insight on the standard (but easiest) procedure used in selectively rendering only the visible parts of the system? I mean, just displaying only the visible stuff (with respect to the camera position) and not rendering the unseen part. I'm using VC++ and the GLUT API.
Maybe [this Wikipedia article](http://en.wikipedia.org/wiki/Frustum_culling) provides a very basic introduction to the field of culling techniques. A good starting point and one of the easiest techniques is view frustum culling. With this method you check for each object in your scene if it is inside the viewing volume (viewing frustum). This basically amounts to checking for some simplified bounding volume of the geometry (like a box or a sphere that completely contains the geometry) whether it lies inside the viewing frustum, defined by six planes; a minimal sketch of that per-object test follows below. This can further be optimized by grouping objects by their position and creating a so-called bounding volume hierarchy; this way you e.g. first check if a whole city block is inside the viewing volume (by using a bounding volume that contains the whole block) and only if it is, you further check the individual houses. A more complicated technique is occlusion culling, which means checking if an object is completely hidden behind another object. Because these techniques can get substantially more complicated they should (if done) actually be done after the view frustum culling. OpenGL has hardware occlusion queries that can aid you in determining if an object is actually visible, but they require some additional work to work well. Especially for cities there may be special two-dimensional occlusion culling techniques (I heard about that a long time ago, I don't know the details). This is just a very broad overview, feel free to google for individual keywords. It is always a good idea to carefully weigh whether the additional CPU overhead is worth it (especially with complicated occlusion culling techniques), considering that nowadays the trend is to batch as much geometry as possible into a single draw call (by the way, I hope you don't use immediate mode `glBegin/glEnd`, otherwise changing this to vertex arrays or better VBOs is the first point on your agenda). But view frustum culling might be a nice and easy starting point, especially if the city gets rather large.
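Here is that sketch: a sphere-vs-frustum check in C++, assuming you have already extracted and normalized the six frustum planes (e.g. from the combined projection-modelview matrix), with normals pointing into the frustum:

```cpp
struct Plane { float a, b, c, d; };  // plane equation: a*x + b*y + c*z + d = 0

// Returns false only when the bounding sphere lies completely outside
// one of the six planes; otherwise the object may be visible.
bool sphereInFrustum(const Plane planes[6],
                     float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < 6; ++i) {
        // signed distance from the sphere centre to the plane
        float dist = planes[i].a * cx + planes[i].b * cy +
                     planes[i].c * cz + planes[i].d;
        if (dist < -radius)
            return false;  // entirely outside -> cull the object
    }
    return true;  // inside or intersecting the frustum
}
```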
Write multi-line variable to .txt (CMD, NO .bat) I'm successfully **splitting** `%PATH%` into multiple lines like this ``` set t=%PATH:;=^&echo.% ``` then this **displays** each path on a new line nicely, just as I want: ``` echo %t% ``` However, when I want to **write** the variable into a file ``` echo %t% >paths.txt ``` only the last line is written to the file. What am I doing wrong? # Update Turns out the `set t=%PATH:;=^&echo.%` command does not replace `;` characters with line breaks (as I was told) but instead replaces them with the `&echo.` command, which is later executed.
### What am I doing wrong? You need to surround the last `echo` with `(` and `)` ``` (echo %t%) > paths.txt ``` Corrected batch file (test.cmd): ``` @echo off setlocal set t=%PATH:;=^&echo.% echo %t% (echo %t%) > paths.txt :end endlocal ``` Example usage: ``` > test C:\Windows\system32 C:\Windows C:\Windows\System32\Wbem C:\Windows\System32\WindowsPowerShell\v1.0\ C:\apps\WSCC\Sysinternals Suite C:\apps\WSCC\NirSoft Utilities C:\apps\Calibre\ C:\apps\Git\cmd C:\apps\Git\mingw64\bin C:\apps\Git\usr\bin C:\apps\nodejs\ C:\Users\DavidPostill\AppData\Roaming\npm > type paths.txt C:\Windows\system32 C:\Windows C:\Windows\System32\Wbem C:\Windows\System32\WindowsPowerShell\v1.0\ C:\apps\WSCC\Sysinternals Suite C:\apps\WSCC\NirSoft Utilities C:\apps\Calibre\ C:\apps\Git\cmd C:\apps\Git\mingw64\bin C:\apps\Git\usr\bin C:\apps\nodejs\ C:\Users\DavidPostill\AppData\Roaming\npm ``` --- ### A simpler solution This solution does not require any brackets in the path to be escaped. test.cmd: ``` @echo off setlocal for %%i in ("%path:;=";"%") do ( echo %%~i >> paths.txt ) :end endlocal ``` And from the command line: ``` for %i in ("%path:;=";"%") do echo %~i >> paths.txt ``` ### Further Reading - [An A-Z Index of the Windows CMD command line](http://ss64.com/nt/) - An excellent reference for all things Windows cmd line related. - [syntax-brackets](http://ss64.com/nt/syntax-brackets.html) - Using parenthesis/brackets to group expressions
Detect if qt is running a debug build at runtime How can I detect from within a QObject at runtime whether or not the Qt it's linked against is a debug build or a release build? I know about the [QT\_NO\_DEBUG macro](http://www.qtforum.org/article/35913/detect-debug-or-release-mode.html), but that is resolved at build time for Qt. Example 1 of when this would be useful: I have a plugin that acts as a crash handler, providing a backtrace. I only want this plugin to attempt to do all this work if the Qt debug symbols are available. Example 2: I have a command line argument parser written in PyQt. It wants to print program help. How does it find out from within PyQt whether QApplication will support the -sync switch or not? If I had info on the build type, I could add or remove -sync easily from the list of arguments the program understands.
If you just want to test whether you are running a debug or release build of *your application*, use `QT_DEBUG`: ``` #ifdef QT_DEBUG qDebug() << "Running a debug build"; #else qDebug() << "Running a release build"; #endif ``` Though this obviously uses the pre-processor, rather than checking at runtime. I'm a bit confused as to why you've specified that you want to make this check at runtime, seeing as the decision to build with or without debug capability can only be made prior to compiling... I'm not aware of any easy way in Qt4 to check whether the Qt library you're linking against has been built with debugging enabled (though I did notice that this looks to be changing in Qt5 with the addition of [`QLibraryInfo::isDebugBuild()`](https://doc.qt.io/qt-5/qlibraryinfo.html#isDebugBuild)).
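For completeness, here is what that Qt 5 runtime check looks like (a minimal sketch using the documented `QLibraryInfo::isDebugBuild()`):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <QLibraryInfo>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // True when the Qt library the application is linked against
    // was built in debug mode.
    if (QLibraryInfo::isDebugBuild())
        qDebug() << "Linked against a debug build of Qt";
    else
        qDebug() << "Linked against a release build of Qt";

    return 0;
}
```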
How could I implement in C language using macros and function pointers something similar to lambda functions in C++? I want to implement in C something similar to lambda functions from C++ (using macros and function pointers). I think the biggest problem that I am facing is defining a function inside another function, which is not possible in C. I think a better idea is to treat the lambda function (passed through a macro) as a lexical block. I started some code: ``` #define func_name(line) func##line #define line __LINE__ #define lambda(body, ret_type, ...) ret_type func_name(line)(__VA_ARGS__) \ { \ body; \ } \ //#include <stdio.h> lambda( printf("Hello from Lambda\n") , void, int a, float b) int main(void) { //printf("Hello World\n"); return 0; } ``` I used the gcc compiler with the "-E" option to see the preprocessor output: ``` void funcline(int a, float b) { printf("Hello from Lambda\n"); } int main(void) { return 0; } ```
It's possible, but the lambdas will have to be stateless (no captures). You can use the preprocessor to move function definitions around, but there is no way to use local variables from one function in another, or to introduce some kind of state into a function. It also makes the code very hard to debug. Since all functions using those pseudo-lambdas have to be wrapped in a macro, all line breaks in them are removed during preprocessing. It becomes impossible to place a breakpoint inside of such a function, or to advance through it line by line. --- Here is an example of the usage. The implementation is at the end of the answer. The explanation of the syntax is right after the example.
``` #include <stdio.h> #include <stdlib.h> ENABLE_LAMBDAS( void example1() { int arr[] = {4,1,3,2,5}; FUNC(int)(compare)(const void *a, const void *b) ( return *(int*)a - *(int*)b; ) qsort(arr, 5, sizeof(int), compare); for (int i = 0; i < 5; i++ ) printf("%d ", arr[i]); putchar('\n'); } void example2() { int arr[] = {4,1,3,2,5}; qsort L_(arr, 5, sizeof(int), LAMBDA(int)(const void *a, const void *b) ( return *(int*)a - *(int*)b; )); for (int i = 0; i < 5; i++ ) printf("%d ", arr[i]); putchar('\n'); } int main() { example1(); example2(); } ) // ENABLE_LAMBDAS ``` Notice the `ENABLE_LAMBDAS` macro wrapping the whole snippet. This example uses two ways of defining functions/lambdas: - `FUNC(return_type)(name)(params)(body)` just defines a function. The function definition is moved to the beginning of `ENABLE_LAMBDAS`, so it can be used inside of other functions. - `LAMBDA(return_type)(params)(body)` defines a pseudo-lambda. A function definition for it is generated at the beginning of `ENABLE_LAMBDAS`, with an automatically chosen unique name. `LAMBDA...` expands to that name. If `FUNC` or `LAMBDA` are used inside of parentheses, the parentheses must be preceded by the `L_` macro. This is a limitation of the preprocessor, unfortunately. The generated functions are always `static`. --- Implementation: ``` // Creates a lambda. // Usage: // LAMBDA(return_type)(params)(body) // Example: // ptr = LAMBDA(int)(int x, int y)(return x + y;); #define LAMBDA LAM_LAMBDA // Defines a function. // Usage: // FUNC(return_type)(name)(params)(body) // Example: // FUNC(int)(foo)(int x, int y)(return x + y;) // some_func(foo); #define FUNC LAM_FUNC // Any time a `LAMBDA` or `FUNC` appears inside of parentheses, // those parentheses must be preceded by this macro. // For example, this is wrong: // foo(LAMBDA(int)()(return 42;)); // While this works: // foo L_(LAMBDA(int)()(return 42;)); #define L_ LAM_NEST // `LAMBDA` and `FUNC` only work inside `ENABLE_LAMBDAS(...)`. // `ENABLE_LAMBDAS(...)` expands to `...`, preceded by function definitions for all the lambdas.
#define ENABLE_LAMBDAS LAM_ENABLE_LAMBDAS

// Lambda names are composed of this prefix and a numeric ID.
#ifndef LAM_PREFIX
#define LAM_PREFIX LambdaFunc_
#endif

// Implementation details:

// Returns nothing.
#define LAM_NULL(...)

// Identity macro.
#define LAM_IDENTITY(...) __VA_ARGS__

// Concats two arguments.
#define LAM_CAT(x, y) LAM_CAT_(x, y)
#define LAM_CAT_(x, y) x##y

// Given `(x)y`, returns `x`.
#define LAM_PAR(...) LAM_PAR_ __VA_ARGS__ )
#define LAM_PAR_(...) __VA_ARGS__ LAM_NULL(

// Given `(x)y`, returns `y`.
#define LAM_NO_PAR(...) LAM_NULL __VA_ARGS__

// Expands `...` and concats it with `_END`.
#define LAM_END(...) LAM_END_(__VA_ARGS__)
#define LAM_END_(...) __VA_ARGS__##_END

// A generic macro to define functions and lambdas.
// Usage: `LAM_DEFINE(wrap, ret)(name)(params)(body)`.
// In the enclosing code, expands to `wrap(name)`.
#define LAM_DEFINE(wrap, ...) )(l,wrap,(__VA_ARGS__),LAM_DEFINE_0
#define LAM_DEFINE_0(name) name,LAM_DEFINE_1
#define LAM_DEFINE_1(...) (__VA_ARGS__),LAM_DEFINE_2
#define LAM_DEFINE_2(...) __VA_ARGS__)(c,

// Creates a lambda.
// Usage: `LAM_LAMBDA(ret)(params)(body)`.
#define LAM_LAMBDA(...) LAM_DEFINE(LAM_IDENTITY, __VA_ARGS__)(LAM_CAT(LAM_PREFIX, __COUNTER__))

// Defines a function.
// Usage: `LAM_FUNC(ret)(name)(params)(body)`.
#define LAM_FUNC(...) LAM_DEFINE(LAM_NULL, __VA_ARGS__)

// `LAM_LAMBDA` and `LAM_FUNC` only work inside of this macro.
#define LAM_ENABLE_LAMBDAS(...) \
    LAM_END( LAM_GEN_LAMBDAS_A (c,__VA_ARGS__) ) \
    LAM_END( LAM_GEN_CODE_A (c,__VA_ARGS__) )

// Processes lambdas and functions in the following parentheses.
#define LAM_NEST(...) )(open,)(c,__VA_ARGS__)(close,)(c,

// A loop. Returns the original code, with lambdas replaced with corresponding function names.
#define LAM_GEN_CODE_A(...) LAM_GEN_CODE_BODY(__VA_ARGS__) LAM_GEN_CODE_B
#define LAM_GEN_CODE_B(...) LAM_GEN_CODE_BODY(__VA_ARGS__) LAM_GEN_CODE_A
#define LAM_GEN_CODE_A_END
#define LAM_GEN_CODE_B_END
#define LAM_GEN_CODE_BODY(type, ...) LAM_CAT(LAM_GEN_CODE_BODY_, type)(__VA_ARGS__)
#define LAM_GEN_CODE_BODY_c(...) __VA_ARGS__
#define LAM_GEN_CODE_BODY_l(wrap, ret, name, ...) wrap(name)
#define LAM_GEN_CODE_BODY_open() (
#define LAM_GEN_CODE_BODY_close() )

// A loop. Generates lambda definitions, discarding all other code.
#define LAM_GEN_LAMBDAS_A(...) LAM_GEN_LAMBDAS_BODY(__VA_ARGS__) LAM_GEN_LAMBDAS_B
#define LAM_GEN_LAMBDAS_B(...) LAM_GEN_LAMBDAS_BODY(__VA_ARGS__) LAM_GEN_LAMBDAS_A
#define LAM_GEN_LAMBDAS_A_END
#define LAM_GEN_LAMBDAS_B_END
#define LAM_GEN_LAMBDAS_BODY(type, ...) LAM_CAT(LAM_GEN_LAMBDAS_BODY_, type)(__VA_ARGS__)
#define LAM_GEN_LAMBDAS_BODY_c(...)
#define LAM_GEN_LAMBDAS_BODY_l(wrap, ret, name, par, ...) static LAM_IDENTITY ret name par { __VA_ARGS__ }
#define LAM_GEN_LAMBDAS_BODY_open()
#define LAM_GEN_LAMBDAS_BODY_close()
```
Capitalize the first letter of each word - Ruby This is my code and I'm not sure why this doesn't return the expected result: `A Bunny Hops` ``` text= "a bunny hops" final = text.split.each{|i| i.capitalize}.join(' ') puts final ```
Do it as below, using [`Array#map`](http://www.ruby-doc.org/core-2.1.0/Array.html#method-i-map):

```
text.split.map { |i| i.capitalize }.join(' ')
```

Corrected, shorter code:

```
text = "a bunny hops"
final = text.split.map(&:capitalize).join(' ')
puts final
# >> A Bunny Hops
```

Why didn't yours work? Because the [`Array#each`](http://www.ruby-doc.org/core-2.1.0/Array.html#method-i-each) method returns the receiver itself, i.e. the array it was called on:

```
text = "a bunny hops"
text.split.each(&:capitalize) # => ["a", "bunny", "hops"]
```

But `Array#map` returns a new array:

```
text.split.map(&:capitalize) # => ["A", "Bunny", "Hops"]
```

I would do it as below, using [`String#gsub`](http://www.ruby-doc.org/core-2.1.0/String.html#method-i-gsub):

```
text = "a bunny hops"
text.gsub(/[A-Za-z']+/, &:capitalize) # => "A Bunny Hops"
```

Note: *The pattern I used here with `#gsub` is not the trivial one. I wrote it for the string given in the post itself; you will need to adjust it for the text samples you actually have. But the above is a way to do such things with short code, in a more Rubyish way.*
CRON: Run job on particular hours I have a spring batch application and i am using CRON to set how often this application runs. But the problem i am running into is that i want to run the job on specific hours ``` 3 am 7 am 11 am 3 pm 7 pm 11 pm ``` As you can see it is every 4 hours but starts at 3 am so i cannot use `*/4` in the hours section of the timing format as this would start the job at 4am I have also tried `'3,7,11,15,19,23'` in the hours section but this does not work either (guessing it only works in the minutes section). Does someone know how i can do this?
Use

```
@Scheduled(cron="0 0 3/4 * * ?")
```

*The pattern `x/y` means: every time where `<timepart> mod y = x`.*

or

```
@Scheduled(cron="0 0 3,7,11,15,19,23 * * ?")
```

---

According to the [Quartz Cron Trigger Tutorial](http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/tutorial-lesson-06):

> 
> The '/' character can be used to specify increments to values. For
> example, if you put '0/15' in the Minutes field, it means 'every 15th
> minute of the hour, starting at minute zero'. If you used '3/20' in
> the Minutes field, it would mean 'every 20th minute of the hour,
> starting at minute three' - or in other words it is the same as
> specifying '3,23,43' in the Minutes field. Note the subtlety that
> "/35" does *not* mean "every 35 minutes" - it means "every 35th minute
> of the hour, starting at minute zero" - or in other words the same as
> specifying '0,35'.
> 
> 
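If it helps to see where the expression lives, here is a minimal sketch for Spring (class and method names are made up, and it assumes scheduling is switched on elsewhere with `@EnableScheduling`):

```
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical component; Spring's six cron fields are
// second, minute, hour, day-of-month, month, day-of-week.
@Component
public class NightlyJobRunner {

    // Fires at 03:00, 07:00, 11:00, 15:00, 19:00 and 23:00.
    @Scheduled(cron = "0 0 3/4 * * *")
    public void launchJob() {
        // trigger the Spring Batch job here
    }
}
```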
Jackson Could not read document: Unrecognized token 'contactForm': was expecting ('true', 'false' or 'null')

I want to send a POST request with jQuery to a Spring Controller, but I keep getting this error from jQuery:

```
Could not read document: Unrecognized token 'contactForm': was expecting ('true', 'false' or 'null')
 at [Source: java.io.PushbackInputStream@38220bcd; line: 1, column: 13]; nested exception is com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'contactForm': was expecting ('true', 'false' or 'null')
 at [Source: java.io.PushbackInputStream@38220bcd; line: 1, column: 13]
```

This is the POST request

```
$('#contactForm').on('submit', function(e){
    e.preventDefault();
    var contactForm = new Object;
    var firstName = $('#firstName').val();
    var lastName = $('#lastName').val();
    var email = $('#email').val();
    var message = $('#message').val();
    contactForm.firstName = firstName;
    contactForm.lastName = lastName;
    contactForm.email = email;
    contactForm.message = message;
    contactForm.accepted = true;
    console.log(JSON.stringify(contactForm));
    $.ajax({
        type: 'POST',
        url: '/checkContact.json',
        contentType : 'application/json; charset=utf-8',
        dataType: 'json',
        data: {
            contactForm: JSON.stringify(contactForm)
        },
        success: function(response){
            console.log(response)
            $('#success').text(response.message);
        },
        error: function(data){
            console.log(data.responseJSON.message);
        }
    })
})
```

and this is the controller

```
@PostMapping("/checkContact.json")
public @ResponseBody String sendContactForm(@Valid @RequestBody ContactForm contactForm, BindingResult result, HttpServletRequest request) throws MalformedURLException, JsonProcessingException{
   //logic here
}
```

And ContactForm

```
public class ContactForm {

    @NotNull
    @NotEmpty
    @ValidEmail
    private String email;

    @NotNull
    @NotEmpty
    private String firstName;

    @NotNull
    @NotEmpty
    private String lastName;

    @NotNull
    @NotEmpty
    private String message;

    // @AssertTrue
    private boolean accepted;

    //getters and setters
}
```

I don't know exactly what is happening, because if I, for instance, send the controller a JSON with POSTMAN with this body, which is the same as the `JSON.stringify(contactForm)` output, everything goes well, so Jackson is doing something strange behind the scenes...

```
{
  "firstName": "John",
  "lastName": "Smith",
  "email": "[email protected]",
  "message": "Hello",
  "accepted": true
}
```
In your jQuery ajax call, adjust your data value:

```
$.ajax({
    type: 'POST',
    url: '/checkContact.json',
    contentType : 'application/json; charset=utf-8',
    dataType: 'json',
    data: JSON.stringify(contactForm),
    success: function(response){
        console.log(response)
        $('#success').text(response.message);
    },
    error: function(data){
        console.log(data.responseJSON.message);
    }
})
```

What is happening is that jQuery is converting your object into a query-param string and sending that. The request body then looks like:

```
contactForm=%7B%22firstName%22%3A%22John%22%2C%...
```

Jackson is trying to interpret the query params as your request body, which fails. This can be confirmed by looking at the network tab in your browser and inspecting the body of the request.
Make GCC call pthread\_exit when transforming OpenMP parallel code

I just found that GCC's OpenMP implementation (libgomp) doesn't call pthread\_exit(). I need that in order to use PerfSuite (for profiling).

Is there any way to tell GCC to include pthread\_exit() at the end of a parallel section of OpenMP while transforming the OpenMP code to pthreads code?

I am using GCC 4.7.0 and PerfSuite 1.1.1.
`libgomp` implements thread pools. Once created, a thread in the pool remains idle until it is signalled to become member of a thread team. After the team finishes its work, the thread goes into an idle loop until it is signalled again. The pool grows on demand but never shrinks. Threads are only signalled to exit at program finish. You can read the `libgomp` code that implements thread pools and teams in the 4.7.x branch [here](http://gcc.gnu.org/svn/gcc/branches/gcc-4_7-branch/libgomp/team.c). Pool threads are terminated like this: `libgomp` registers a destructor by the name of `team_destructor()`. It is called whenever the `main()` function returns, `exit(3)` gets called or the `libgomp` library is unloaded by a call to `dlclose(3)` (if previously loaded with `dlopen(3)`). The destructor deletes one `pthreads` key by the name of `gomp_thread_destructor`, which has an associated destructor function `gomp_free_thread()` triggered by the deletion. `gomp_free_thread()` makes all threads in the pool execute `gomp_free_pool_helper()` as their next task. `gomp_free_pool_helper()` calls `pthread_exit(3)` and thus all threads in the pool cease to exist. Here is the same process in a nice ASCII picture: ``` main() returns, exit() called or library unloaded | | team_destructor() deletes gomp_thread_destructor | | gomp_free_thread() called by pthreads on gomp_thread_destructor deletion | +-------------------------+---------------------------+ | | | gomp_free_pool_helper() gomp_free_pool_helper() ... gomp_free_pool_helper() installed as next task installed as next task installed as next task | | | | | | pthread_exit(NULL) pthread_exit(NULL) ... pthread_exit(NULL) ``` Note that this only happens once at the end of the program execution and not at the end of each `parallel` region.
Do we really need a master branch or can we release straight from a release branch?

Currently we have the following branches:

- Develop
- Release 1
- Release 2
- etc..
- Master

When we want to cut off new features for a release, we create a new release branch (say release 3). We then only apply bug fixes etc. to the release branch and no more major changes. When release 3 has been tested and everything is ready, we lock down Release 3 from further changes, then merge it into master and release master into production.

My question is, **do we really need a master branch?** Why can we not release straight from Release 3 into production? It seems merging into master first only adds more work and extra risk/testing requirements to ensure we did not do a "bad" merge from Release 3 to Master.

The other perceived benefit from my point of view is that releasing straight from the Release 3 branch means that if something goes terribly wrong and we decide to roll back, we can simply re-release the Release 2 branch into production, then continue making fixes on Release 3. This seems easier than having to roll back master.

I've tried looking this up but have only found information on what the difference between release branches and master branches are, but not much on if/why we need both of them.

**Edit:** The reason we normally keep multiple release branches is that we often end up with 2 or more "release-worthy" sets of development that are ready for the testing phase. E.g. The testing team might be working on Release 1, and development have just finished the Release 2 work. We do not want to merge the Release 2 and Release 1 work together because that will effectively restart the testing phase and prolong the release. So instead we create a new release branch for the release 2 work.
> 
> Do we really need a master branch?
> 
> 

I think this is a structural/naming thing. It seems that upon release your release and master branches are identical. This would imply the master branch is not necessary for releases.

It seems your project uses branches slightly differently from the canonical way. Nominally you have a master/release branch, and releases get tagged. Developers make branches to add features, fixes, etc., and then merge their branches into master so it can be used for a release. When a bug occurs whose fix needs to be applied to an old release, a new branch is made for that release based on the tagged version; the bug fix is made, merged, what have you, and then a new build of the old version (a point release) is made for those still using the old release.

If you don't intend to roll fixes back into old releases, or simply don't support them, then release branches aren't necessary at all. Hence the normal way creates them on demand, whereas it seems you create them preemptively.

So in your case, **No**, you don't really need a master branch; though it seems what you really don't need is the release branches, until you need to pull changes into them.
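To make the "branch on demand" model concrete, a hypothetical command sequence (tag and branch names invented) could look like this:

```
git tag -a v3.0 -m "Release 3"           # tag the release on master
# ...later, a fix is needed for that old release:
git checkout -b release-3-fixes v3.0     # branch created only when required
git cherry-pick <fix-commit>
git tag -a v3.0.1 -m "Release 3 point release"
```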
Windows 8 app roaming storage with custom class

I am learning programming in Windows 8 with C#. I have worked through many tutorials (such as <http://msdn.microsoft.com/en-us/library/windows/apps/hh986968.aspx>) in the process and I am attempting to create a simple app demonstrating data storage. All of the examples I have been able to find store only simple strings in roaming storage. Is there a way to store more complex data there? For example: a List of a basic class Person with a name and age.

I attempted to do it as follows.

Saving the data:

```
roamingSettings.Values["peopleList"] = people;
```

Loading the data:

```
people = (List)roamingSettings.Values["peopleList"];
```

When saving the data I get the error "Data of this type is not supported":

```
WinRT information: Error trying to serialize the value to be written to the application data store.
```

So, maybe all you can save is string values -- but I have not seen that specified anywhere either.
Yes, you can save your values to roaming data as a composite value. The solution for your problem is the `ApplicationDataCompositeValue` class.

See <http://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.applicationdatacompositevalue.aspx> for more information.

As you mentioned, you are developing in C#, so here is code for your problem. Imagine you have a `Person` class with two members:

```
class Person
{
    public int PersonID;
    public string PersonName;
}
```

Now, to read and write values for this class, here is the code.

First, in the constructor of your window class, under `InitializeComponent();`, create an object for the roaming settings:

```
Windows.Storage.ApplicationDataContainer roamingSettings = Windows.Storage.ApplicationData.Current.RoamingSettings;
```

To write to a composite value, use the following code:

```
void write(Person Peopleobj)
{
    Windows.Storage.ApplicationDataCompositeValue composite = new Windows.Storage.ApplicationDataCompositeValue();
    composite["PersonID"] = Peopleobj.PersonID;
    composite["PersonName"] = Peopleobj.PersonName;

    roamingSettings.Values["classperson"] = composite;
}
```

To read a `Person` object back, use the following code (note the casts, since the composite's indexer returns `object`):

```
void DisplayOutput()
{
    ApplicationDataCompositeValue composite = (ApplicationDataCompositeValue)roamingSettings.Values["classperson"];
    if (composite == null)
    {
        // "Composite Setting: <empty>"
    }
    else
    {
        Peopleobj.PersonID = (int)composite["PersonID"];
        Peopleobj.PersonName = (string)composite["PersonName"];
    }
}
```
Flask plus Schedule

Hi, I want to integrate [schedule](https://github.com/dbader/schedule) with my Flask app, since I need to run some routine tasks. I found [here](https://stackoverflow.com/questions/27149386/issue-when-running-schedule-with-flask) that threading can be used to run it in the background. However, when I tried it in my app, I cannot exit the app using Ctrl-C. I am using Windows, and I will soon deploy the app on Heroku. What's wrong?

Also, is there a better, more 'human-friendly' library like schedule for running routine tasks with Flask? Thanks.

Here is my code:

```
from flask import Flask
from datetime import datetime
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import mysql.connector
from mysql.connector import Error
import schedule
import time
from threading import Thread

app = Flask(__name__)

def job():
    print("I'm working...")

def run_schedule():
    while True:
        schedule.run_pending()
        time.sleep(1)

@app.route('/')
def homepage():
    return '<h1>Hello World!</h1>'

if __name__ == '__main__':
    schedule.every(5).seconds.do(job)
    sched_thread = Thread(target=run_schedule)
    sched_thread.start()
    app.run(debug=True, use_reloader=False)
```
Try [APScheduler](https://apscheduler.readthedocs.io/). It supports a background scheduler.

Here's sample code where I used Flask with APScheduler:

```
from flask import Flask
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor

app = Flask(__name__)

executors = {
    'default': ThreadPoolExecutor(16),
    'processpool': ProcessPoolExecutor(4)
}

sched = BackgroundScheduler(timezone='Asia/Seoul', executors=executors)

def job():
    print('hi')

sched.add_job(job, 'interval', seconds=5)

if __name__ == '__main__':
    sched.start()
    app.run(debug=True, use_reloader=False)
```
Hashing the arrangement of an array I am trying to write a function which takes an array of integers, and outputs a hash value to signify their arrangement order. The hash key should be as small as possible (we're dealing with embedded and space/execution optimization is critical) ``` template<size_t N> int CalculateHashOfSortOrder(int* itemsBegin) { // return ??? } ``` In other words, an array `[ 4, 2, 1, 3 ]` should produce an ID which reflects their order and would match any of the following arrays: ``` [ 4, 2, 1, 3 ] [ 4, 2, 0, 3 ] [ 400, 200, 100, 300] et al ``` I am at a loss for how to do this in the most efficient way possible. What is the algorithm to approach this? The methods I come up with seem extremely slow. For example, given the above example, there are 24 possibly arrangements (4 \* 3 \* 2 \* 1), which would fit into 5 bits. Those bits can be calculated like this: - 1st value (1). There are 4 positions possible, so 2 bits are needed to describe its position. This value is at position 2 (0-based). So push binary `10`. - 2nd value (2). There are 3 positions possible, so 2 bits are needed. It's at position 1, so push binary `01`. Hash key is now `1001b`. - 3rd value (3). There are 2 positions possible, so 1 bit is needed. It's at position 1 of the remaining values, so push `1`. The resulting key is `10011b`, but I don't see a clear way to make that anything but obnoxiously inefficient. I think there's another way to approach this problem I just haven't thought of. --- edit: A couple ideas come to mind: 1. Calculate the rank of each item in the array, and hash the rank array. Then by what method can you pack that rank array into a theoretically-smallest ID? This appears to be the most vexing element of my question. 2. Find a way to save item rank as they're inserted, optimizing #1 3. For sufficiently small # of items (<10), it may be possible to just generate a huge tree of `if()` statements via metaprogramming. Need to see if exe size would be a problem.
An equivalent hash goes something like this for `[ 40, 20, 10, 30 ]`:

1. 40 is greater than 3 of the subsequent values
2. 20 is greater than 1 of the subsequent values
3. 10 is greater than 0 of the subsequent values
4. 30 has nothing after it, so ignore it.

That is a pair of nested loops, taking time O(N^2). (Actually about 4\*4/2 comparisons, where there are 4 items.) Just a few lines of code.

Pack 3,1,0, either the way you did it or with anatolyg's slightly tighter:

```
3 * 3! + 1 * 2! + 0 * 1!
```

which equals 20. It needs to be stored in the number of bits needed for 4!, namely 5 bits. I'm pretty sure this is optimal for space.

I have not thought of a way to compute it faster than O(N^2).

How big is your N? For N=100, you need about 520 bits and 5K operations. 5K operations might take several microseconds (for C++), and probably less than a millisecond (even for an interpreted language).
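Here is a minimal sketch of that scheme in C++ (the counts form a Lehmer code). It mirrors the question's signature, but widens the return type to `uint64_t` and assumes N <= 20 so that N! fits:

```
#include <cstddef>
#include <cstdint>

template <size_t N>
uint64_t CalculateHashOfSortOrder(const int* itemsBegin)
{
    uint64_t hash = 0;
    for (size_t i = 0; i + 1 < N; ++i)
    {
        // How many of the subsequent values is itemsBegin[i] greater than?
        uint64_t greaterThan = 0;
        for (size_t j = i + 1; j < N; ++j)
            if (itemsBegin[i] > itemsBegin[j])
                ++greaterThan;

        // Weight the count by (N-1-i)!, the number of ways
        // the remaining elements can be arranged.
        uint64_t factorial = 1;
        for (size_t k = 2; k < N - i; ++k)
            factorial *= k;

        hash += greaterThan * factorial;
    }
    return hash; // [40,20,10,30] and [4,2,1,3] both yield 20
}
```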
How to use d:DesignInstance with types that don't have default constructor? I am binding a textbox to an object, like so: ``` <TextBlock d:DataContext="{d:DesignInstance ViewModel:TaskVM }" Text="{Binding Title}" MouseLeftButtonDown="TextBlock_MouseLeftButtonDown"> </TextBlock> ``` Now I am wondering how to make it display mock data during design. I've tried doing something like that: ``` <TextBlock Text="{Binding Path=Title}" MouseLeftButtonDown="TextBlock_MouseLeftButtonDown"> <d:DesignProperties.DataContext> <ViewModel:TaskVM Title="Mock"/> </d:DesignProperties.DataContext> </TextBlock> ``` However, since TaskVM has no default ctor, I am getting a "No default constructor" found. I know that when I use `d:DataContext="{d:DesignInstance ViewModel:TaskVM }"` it creates a mock data type. Is there a way for me to set the properties of this mock type? Thanks!
The default constructor is required for a type to be instantiated in XAML. As a workaround you can simply create a subclass of `TaskVM` that has a default constructor and use it as the design-time data context.

```
<TextBlock d:DataContext="{d:DesignInstance ViewModel:DesignTimeTaskVM }" 
       Text="{Binding Title}" 
       MouseLeftButtonDown="TextBlock_MouseLeftButtonDown">            
</TextBlock> 
```

Another alternative is to set `d:IsDesignTimeCreatable` to `False`, and a substitute type will be created for you at design time (using your `TaskVM` type as a "shape").

```
<TextBlock d:DataContext="{d:DesignInstance ViewModel:TaskVM, IsDesignTimeCreatable=False}" 
       Text="{Binding Title}" 
       MouseLeftButtonDown="TextBlock_MouseLeftButtonDown">            
</TextBlock> 
```
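For the first approach, the design-time subclass can be as small as this sketch (the name and mock value are invented, the `base(...)` call stands in for whatever arguments `TaskVM`'s real constructor takes, and it assumes `Title` has a public setter):

```
// Hypothetical design-time stand-in: its only job is to provide a
// parameterless constructor and mock data for the designer.
public class DesignTimeTaskVM : TaskVM
{
    public DesignTimeTaskVM()
        : base(/* pass whatever TaskVM's constructor requires */)
    {
        Title = "Mock";
    }
}
```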
Is it possible to translate a SAS infile statement into R code?

My knowledge of SAS is nonexistent, and I usually work in R and Stata. Recently I downloaded a dataset that is publicly available from the Brazilian government, and for some reason they made it available in raw format with a SAS script to read it in:

```
DATA DOM (COMPRESS = YES);
INFILE "...¥T_DOMICILIO_S.txt" LRECL = 164 MISSOVER;
INPUT
@001 TIPO_REG $2. /* TIPO DE REGISTRO */
@003 COD_UF $2. /* CÓDIGO DA UF */
@005 NUM_SEQ $3. /* NÚMERO SEQUENCIAL */
@008 NUM_DV $1. /* DV DO SEQUENCIAL */
…Etc etc…
RUN;
```

Is it possible to translate this statement into an equivalent for R? If so, which function should I be looking for?
There's an app for that! Well an R package, anyway, [`SAScii`](http://cran.r-project.org/web/packages/SAScii/index.html), brought to you by the indomitable Anthony Damico. It has two functions: `parse.SAScii` and `read.SAScii`. I've used it with great success on US gummint CDC files.

```
install.packages("SAScii")
library(SAScii)

> parse.SAScii("test.sas")
   varname width char divisor
1 TIPO_REG     2 TRUE       1
2   COD_UF     2 TRUE       1
3  NUM_SEQ     3 TRUE       1
4   NUM_DV     1 TRUE       1
Warning message:
In readLines(sas_ri) : incomplete final line found on 'test.sas'
```

-- Then you will need to use read.SAScii for the second step, but you did not offer an appropriate test file for that test.

The input file, 'test.sas' was:

```
DATA DOM (COMPRESS = YES);
INFILE "...¥T_DOMICILIO_S.txt" LRECL = 164 MISSOVER;
INPUT
@001 TIPO_REG $2. /* TIPO DE REGISTRO */
@003 COD_UF $2. /* CÓDIGO DA UF */
@005 NUM_SEQ $3. /* NÚMERO SEQUENCIAL */
@008 NUM_DV $1. /* DV DO SEQUENCIAL */

RUN;
```

If you view the ["twotorials" on Youtube by Anthony Damico](http://www.twotorials.com/) or go to [his website](http://www.asdfree.com/) you can see why I used the word "indomitable".
Python: How to copy a list of dictionaries

Python 3. I'm trying to copy a list of dictionaries without altering the original list. This doesn't seem to work the same way that copying a list does:

List of Dictionaries

```
list_of_dict = [{"A":"a", "B": "b"}]
table_copy = list(list_of_dict)

for x in table_copy:
    x['B'] = 1

print(list_of_dict)
print(table_copy)
```

Yields

```
[{'A': 'a', 'B': 1}]
[{'A': 'a', 'B': 1}]
```

For reference, this is how copying a plain list looks:

```
orig_list = [1,2,3]
copy_list = list(orig_list)

copy_list[1] = "a"

print(orig_list)
print(copy_list)
```

Yields what we expect

```
[1, 2, 3]
[1, 'a', 3]
```

How do you actually copy a list of dictionaries?
With the line of code `table_copy = list(list_of_dict)` you are creating a new *pointer (variable)*, but the underlying elements are not copied (this is a shallow copy):

```
list_of_dict = [{"A":"a", "B": "b"}]
table_copy = list(list_of_dict)

id(list_of_dict)
Out[8]: 2208287332232

id(table_copy)
Out[9]: 2208275740680

id(list_of_dict[0])
Out[10]: 2208275651624

id(table_copy[0])
Out[11]: 2208275651624 <== equal to id(list_of_dict[0])
```

You should use the [copy](https://docs.python.org/3/library/copy.html) module from the standard library, which comes with two useful functions:

> 
> **copy(x):**
> 
> 
> Return a shallow copy of x.
> 
> 
> **deepcopy(x):**
> 
> 
> Return a deep copy of x.
> 
> 

For your problem:

```
from copy import deepcopy

list_of_dict = [{"A":"a", "B": "b"}]
table_copy = deepcopy(list_of_dict)
```

The rule of thumb is to use **deepcopy** when you have a compound object, i.e. an object containing other objects.

From the **docs**:

> 
> The difference between shallow and deep copying is only relevant for compound
> objects (objects that contain other objects, like lists or class instances):
> 
> 
> A shallow copy constructs a new compound object and then (to the extent
> possible) inserts references into it to the objects found in the original.
> 
> 
> A deep copy constructs a new compound object and then, recursively, inserts
> copies into it of the objects found in the original.
> 
> 
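Applying that to the snippet from the question shows the original list staying untouched:

```
from copy import deepcopy

list_of_dict = [{"A": "a", "B": "b"}]
table_copy = deepcopy(list_of_dict)

for x in table_copy:
    x['B'] = 1

print(list_of_dict)  # [{'A': 'a', 'B': 'b'}] -- original unchanged
print(table_copy)    # [{'A': 'a', 'B': 1}]
```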
mousewheel, wheel and DOMMouseScroll in JavaScript

`DOMMouseScroll` only works for Firefox.

`wheel` seems to work for both Firefox and Chrome. What is this? I haven't found docs on this one.

`mousewheel` doesn't work for Firefox.

How should I use them in order to gain the best browser compatibility?

Example given:

```
document.addEventListener('ScrollEvent', function(e){
    DoSomething();
});
```
I would suggest that all three of them be used at the same time to cover all browsers. **Notes:** 1. In versions of Firefox where both the **`wheel`** and **`DOMMouseScroll`** events are supported, we need a way to instruct the browser to execute only **`wheel`** and not both. Something like the following:**`if ("onwheel" in window) ...`** 2. The above check though, in the case of **`IE9`** and **`IE10`** will fail, because even though these browsers support the **`wheel`** event, they don't have the **`onwheel`** attribute in DOM elements. To counter that we can use a flag as shown later on. 3. I believe the number returned by **`e.deltaY`**, **`e.wheelDelta`** and **`e.detail`** is not useful other than helping us determine the direction of the scroll, so in the solution below **`-1`** and **`1`** will be returned. **Snippet:** ``` /* The flag that determines whether the wheel event is supported. */ var supportsWheel = false; /* The function that will run when the events are triggered. */ function DoSomething (e) { /* Check whether the wheel event is supported. */ if (e.type == "wheel") supportsWheel = true; else if (supportsWheel) return; /* Determine the direction of the scroll (< 0 → up, > 0 → down). */ var delta = ((e.deltaY || -e.wheelDelta || e.detail) >> 10) || 1; /* ... */ console.log(delta); } /* Add the event listeners for each event. */ document.addEventListener('wheel', DoSomething); document.addEventListener('mousewheel', DoSomething); document.addEventListener('DOMMouseScroll', DoSomething); ``` --- *Although almost 3 years have passed since the posting of the question, I believe people who stumble upon it in the future will benefit from this answer, so feel free to suggest/make improvements to it.*
Custom Visualizer for DbCommand

Hi, I'm trying to create a custom Visualizer for the DbCommand object, to be used in Visual Studio 2013. I have the following code:

```
using VisualizerTest;
using Microsoft.VisualStudio.DebuggerVisualizers;
using System;
using System.Data.Common;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Windows.Forms;

[assembly: DebuggerVisualizer(typeof(TestVisualizer), typeof(CommandObjectSource), Target = typeof(DbCommand), Description = "Test")]

namespace VisualizerTest
{
    public class TestVisualizer : DialogDebuggerVisualizer
    {
        protected override void Show(IDialogVisualizerService windowService, IVisualizerObjectProvider objectProvider)
        {
            DbCommand command;
            try
            {
                using (Stream stream = objectProvider.GetData())
                {
                    BinaryFormatter formatter = new BinaryFormatter();
                    command = (DbCommand)formatter.Deserialize(stream);
                }

                MessageBox.Show(command.CommandText);
            }
            catch(Exception ex)
            {
                MessageBox.Show(ex.ToString());
            }
        }
    }
}

namespace VisualizerTest
{
    [Serializable]
    public class CommandObjectSource : VisualizerObjectSource
    {
        public override void GetData(object target, Stream outgoingData)
        {
            if (target != null && target is DbCommand)
            {
                DbCommand command = (DbCommand)target;

                BinaryFormatter formatter = new BinaryFormatter();
                formatter.Serialize(outgoingData, command);
            }
        }
    }
}
```

But the `CommandObjectSource` is never invoked, and instead I get an exception:

```
Microsoft.VisualStudio.DebuggerVisualizers.DebugViewerShim.RemoteObjectSourceException: Type 'System.Data.SqlClient.SqlCommand' in Assembly 'System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' is not marked as serializable.
```

My understanding was that by using a custom VisualizerObjectSource I would get around the serialization issue?

As a side note, I have tried to change `Target = typeof(DbCommand)` to `Target = typeof(SqlCommand)` and it made no difference.

Test code:

```
class Program
{
    static void Main(string[] args)
    {
        using (SqlCommand command = new SqlCommand("SELECT Field1 FROM table WHERE Field2 = @Value1"))
        {
            command.Parameters.AddWithValue("@Value1", 1338);
            TestValue(command);
        }

        Console.ReadKey();
    }

    static void TestValue(object value)
    {
        VisualizerDevelopmentHost visualizerHost = new VisualizerDevelopmentHost(value, typeof(TestVisualizer));
        visualizerHost.ShowVisualizer();
    }
}
```
Because you are explicitly creating the `VisualizerDevelopmentHost`, it won't use the `DebuggerVisualizerAttribute`, so you have to pass in your `CommandObjectSource` as the third parameter:

```
VisualizerDevelopmentHost visualizerHost = new VisualizerDevelopmentHost(value, typeof(TestVisualizer), typeof(CommandObjectSource));
```

With this change your `CommandObjectSource` will be called, but you still have the serialization problem, because the `BinaryFormatter` also needs the class to be marked as `Serializable`...

So you should probably only include the `CommandText` (or create a new DTO object and serialize that if you need multiple properties) with:

```
[Serializable]
public class CommandObjectSource : VisualizerObjectSource
{
    public override void GetData(object target, Stream outgoingData)
    {
        if (target != null && target is DbCommand)
        {
            DbCommand command = (DbCommand)target;

            var writer = new StreamWriter(outgoingData);
            writer.WriteLine(command.CommandText);
            writer.Flush();
        }
    }
}
```

And read it with:

```
public class TestVisualizer : DialogDebuggerVisualizer
{
    protected override void Show(IDialogVisualizerService windowService, IVisualizerObjectProvider objectProvider)
    {
        string command;
        try
        {
            command = new StreamReader(objectProvider.GetData()).ReadLine();

            MessageBox.Show(command);
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.ToString());
        }
    }
}
```
SIFT Extraction - OpenCV

I'm trying to get started working with SIFT feature extraction using (C++) OpenCV. I need to extract features using SIFT, match them between the original image (e.g. a book) and a scene, and after that calculate the camera pose.

So far I have found [this algorithm](http://opencv.itseez.com/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html) using SURF. Does anyone know of base code from which I can get started, or maybe a way to convert the algorithm in the link from SURF to SIFT?

Thanks in advance.

**EDIT:**

Ok, I worked out a solution for the SIFT problem. Now I'm trying to figure out the camera pose. I'm trying to use solvePnP; can anyone help me with an example?
Check out the [feature2d tutorial section](http://docs.opencv.org/doc/tutorials/features2d/table_of_content_features2d/table_of_content_features2d.html) of the new OpenCV docs website.

There are tutorials with code showing:

1. [Feature detection](http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html#feature-detection) with e.g. SURF
2. [Feature Description](http://docs.opencv.org/doc/tutorials/features2d/feature_description/feature_description.html)
3. [Feature Matching](http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-flann-matcher)
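The same pipeline works with SIFT by swapping in the SIFT detector. Here is a minimal sketch assuming a recent OpenCV (4.4+, where SIFT lives in the main `features2d` module; in older builds it is in `xfeatures2d`), with placeholder file names:

```
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat object = cv::imread("book.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat scene  = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);

    // 1. + 2. Detect keypoints and compute SIFT descriptors.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpObject, kpScene;
    cv::Mat descObject, descScene;
    sift->detectAndCompute(object, cv::noArray(), kpObject, descObject);
    sift->detectAndCompute(scene,  cv::noArray(), kpScene,  descScene);

    // 3. Match descriptors with FLANN, keeping only the good
    // matches via Lowe's ratio test.
    cv::FlannBasedMatcher matcher;
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descObject, descScene, knn, 2);

    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.7f * m[1].distance)
            good.push_back(m[0]);

    return 0;
}
```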
How to get the factors of an integer number using the command line?

I want to get the **prime factors of some integer numbers** by using the **command line**.

Example: the integer `786` has (prime) factors `2`, `3` and `131`.

Another example: `1234567890` has factors `2`, `3` (two times), `5`, `3607` and `3803`.

**How do I get such a result using a simple command?**
There is a **`factor`** command available:

```
NAME
       factor - factor numbers

SYNOPSIS
       factor [NUMBER]...

DESCRIPTION
       Print the prime factors of each specified integer NUMBER.  If none
       are specified on the command line, read them from standard input.
```

**Example:**

```
$ factor 786
786: 2 3 131

$ factor 1234567890
1234567890: 2 3 3 5 3607 3803
```

Visit: [`man factor`](http://manpages.ubuntu.com/manpages/trusty/en/man1/factor.1.html).
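Since `factor` reads standard input when no numbers are given, it also works in a pipeline:

```
$ echo "786 1234567890" | factor
786: 2 3 131
1234567890: 2 3 3 5 3607 3803
```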
How does this Ruby code work - if-stmt with ranges?

I'm currently learning Ruby and I can't seem to wrap my head around what `if /start/../end/` does... Help?

```
while gets
  print if /start/../end/
end
```
Since you mentioned that you're new to Ruby, it's first worth taking note that you're dealing with regular expressions (regex) in the example: anything delimited between two forward slashes is a regular expression literal.

```
/start/ # a regular expression literal
```

Regular expressions are a powerful way of matching a certain combination of characters within a larger string.

```
"To start means to begin." =~ /start/ #=> 3 (truthy), because 'start' is found at index 3; =~ returns nil when there is no match
```

The double-dot notation is the flip-flop operator, a controversial construct probably inherited from Perl and not usually recommended, because it can lead to confusion. It means the following:

The expression evaluates to false until the left-hand operand evaluates to true, at which point the whole expression starts evaluating to true. It remains true until the right-hand operand evaluates to true: the expression is still true for that evaluation, but it flips back to false afterwards.

Using your above example therefore:

```
while gets
  print if /start/../end/
end
```

(The bare regex literals work here because `gets` assigns each line it reads to the special variable `$_`, and a regex standing alone in a condition is matched against `$_`.)

1. Until 'start' is entered, the entire expression is false, and nothing is printed.
2. When 'start' is input, the expression becomes true, so that line, and everything input after it, is printed out (despite not being 'start').
3. When 'end' is input, that line is still printed (the flip-flop stays true for the evaluation that matches the right-hand operand), but the expression then flips back to false, so nothing after it is printed until another 'start' comes along.
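Here is a small self-contained illustration of the same behavior with no stdin involved (the array contents are made up):

```
lines = ["before", "start", "middle", "end", "after", "start", "tail"]

lines.each do |line|
  # The flip-flop turns on at the line matching /start/ and
  # turns off after the line matching /end/ (inclusively).
  puts line if (line =~ /start/) .. (line =~ /end/)
end
# Prints: start, middle, end, start, tail
```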
How to specify the data types for JSON parsing?

I have a JSON response which is an Array of Hash:

```
[{"project" => {"id" => 1, "name" => "Internal"}},
 {"project" => {"id" => 2, "name" => "External"}}]
```

My code looks like this:

```
client = HTTP::Client.new(url, ssl: true)
response = client.get("/projects", ssl: true)
projects = JSON.parse(response.body) as Array
```

This gives me an Array, but it seems I need to typecast the elements to actually use them; otherwise I get `undefined method '[]' for Nil (compile-time type is (Nil | String | Int64 | Float64 | Bool | Hash(String, JSON::Type) | Array(JSON::Type)))`.

I tried `as Array(Hash)` but this gives me `can't use Hash(K, V) as generic type argument yet, use a more specific type`.

How do I specify the type?
You have to cast these as you access the elements: ``` projects = JSON.parse(json).as(Array) project = projects.first.as(Hash)["project"].as(Hash) id = project["id"].as(Int64) ``` <http://carc.in/#/r/f3f> But for well structured data like this you're better off with using `JSON.mapping`: ``` class ProjectContainer JSON.mapping({ project: Project }) end class Project JSON.mapping({ id: Int64, name: String }) end projects = Array(ProjectContainer).from_json(json) project = projects.first.project pp id = project.id ``` <http://carc.in/#/r/f3g> You can see a slightly more detailed explanation of this problem at <https://github.com/manastech/crystal/issues/982#issuecomment-121156428>
Colored text output in PowerShell console using ANSI / VT100 codes I wrote a program which prints a string, which contains [ANSI escape sequences](https://blogs.msdn.microsoft.com/commandline/2017/10/11/whats-new-in-windows-console-in-windows-10-fall-creators-update/) to make the text colored. But it doesn't work as expected in the default Windows 10 console, as you can see in the screenshot. The program output appears with the escape sequences as printed characters. If I feed that string to PowerShell via a variable or piping, the output appears as intended (red text). How can I achieve that the program prints colored text without any workarounds? [![enter image description here](https://i.stack.imgur.com/VOUk0.png)](https://i.stack.imgur.com/VOUk0.png) This is my program source (Haskell) - but the language is not relevant, just so you can see how the escape sequences are written. ``` main = do let red = "\ESC[31m" let reset = "\ESC[39m" putStrLn $ red ++ "RED" ++ reset ```
Note: - The following **applies to *regular* console windows** on Windows (provided by `conhost.exe`), which are used *by default*, including when a console application is launched from a GUI application. - By contrast, the console windows (terminals) provided by **[Windows Terminal](https://aka.ms/windowsterminal)** as well as **Visual Studio Code's *integrated terminal*** provide **support for VT / ANSI escape sequences *by default***, for *all* console applications. --- While **console windows in Windows 10 do support VT (Virtual Terminal) / ANSI escape sequences** *in principle*, **support is turned *OFF by default***. You have three options: - (a) **Activate support *globally by default, persistently***, via the registry, as detailed in [this SU answer](https://superuser.com/a/1300251/139307). - In short: In registry key `[HKEY_CURRENT_USER\Console]`, create or set the `VirtualTerminalLevel` DWORD value to `1` - From PowerShell, you can do this *programmatically* as follows: `Set-ItemProperty HKCU:\Console VirtualTerminalLevel -Type DWORD 1` - From `cmd.exe` (also works from PowerShell): `reg add HKCU\Console /v VirtualTerminalLevel /t REG_DWORD /d 1` - Open a new console window for changes to take effect. - See caveats below. - (b) **Activate support *from inside your program, for that program (process) only***, with a call to the `SetConsoleMode()` Windows API function. - See details below. - (c) **Ad-hoc workaround, *from PowerShell***: - *PowerShell (Core) 7+*: Enclose external-program calls in `(...)` (invariably collects all output first before printing): - `(.\test.exe)` - Streaming *Windows PowerShell-only* alternative: Pipe output from external programs to `Write-Host` - `.\test.exe | Out-Host` - See details below. --- ### Re (a): The registry-based approach **invariably activates VT support *globally*, i.e., for *all* console windows**, irrespective of what shell / program runs in them: - Individual executables / shells can still deactivate support for themselves, if desired, using method (b). - Conversely, however, this means that the output of any program that doesn't explicitly control VT support will be subject to interpretation of VT sequences; while this is generally desirable, hypothetically this could lead to misinterpretation of output from programs that *accidentally* produce output with VT-like sequences. Note: - While there *is* a mechanism that allows console-window settings to be scoped by startup executable / window title, via *subkeys* of `[HKEY_CURRENT_USR\Console]`, the `VirtualTerminalLevel` value seems not to be supported there. - Even if it were, however, it wouldn't be a *robust* solution, because opening a console window via a *shortcut* file (`*.lnk`) (e.g. from the Start Menu or Task Bar) wouldn't respect these settings, because `*.lnk` files have settings *built into them*; while you can modify these built-in settings via the `Properties` GUI dialog, as of this writing the `VirtualTerminalLevel` setting is not surfaced in that GUI. --- ### Re (b): Calling the [`SetConsoleMode()`](https://learn.microsoft.com/en-us/windows/console/setconsolemode) Windows API function *from inside the program (process)*, as shown [here](https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#example-of-enabling-virtual-terminal-processing), is cumbersome even in C# (due to requiring P/Invoke declarations), and **may not be an option**: - for programs written in languages from which calling the Windows API is not supported. 
- if you have a preexisting executable that you cannot modify. In that event, option (c) (from PowerShell), discussed next, may work for you. --- ### Re (c): **PowerShell automatically activates VT (virtual terminal) support *for itself*** when it starts (in recent releases of Windows 10 this applies to both Windows PowerShell and PowerShell (Core) 7+) - but that **does *not* extend to external programs *called from* PowerShell**, in either edition, as of v7.3.2. - **Separately**, in v7.2+ there is the **[`$PSStyle.OutputRendering`](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_Preference_Variables#psstyle) *preference variable***, which **controls whether *PowerShell* commands produce colored output *via the [formatting system](https://docs.microsoft.com/en-us/powershell/scripting/developer/format/formatting-file-overview)***, such as the colored headers of `Get-ChildItem` output. However, this setting has no effect on (direct) output from *external programs*. `$PSStyle.OutputRendering` defaults to `Host`, meaning that only formatted output that prints to the terminal (console) is colored. `$PSStyle.OutputRendering = 'PlainText'` disables coloring, and `$PSStyle.OutputRendering = 'Ansi'` makes it unconditional; see [this answer](https://stackoverflow.com/a/69065907/45375) for more information. However, as a **workaround** you can ***relay* an external program's (stdout) output via *PowerShell's* (success) output stream**, in which case **VT sequences *are* recognized**: - As of PowerShell (Core) 7.3.2, this only works either by **enclosing the call in `(...)`** or by using [`Out-String`](https://learn.microsoft.com/powershell/module/microsoft.powershell.utility/out-string), but note that *all output is invariably collected first* before it is printed.[1] ``` (.\test.exe) ``` - In *Windows PowerShell*, in addition to the above, *streaming* the relayed output is possible too, by piping to `Write-Host` (`Out-Host`, `Write-Output` or `Out-String -Stream` would work too) ``` .\test.exe | Write-Host ``` - Note: You need these techniques only if you want to print to the *console*. If, by contrast, you want to *capture* the external program's output (including the escape sequences), use `$capturedOutput = .\test.exe` **Character-encoding caveat**: Windows PowerShell by default expects output from external programs to use the OEM code page, as defined by the legacy system locale (e.g., `437` on US-English systems) and as reflected in `[console]::OutputEncoding`. .NET console programs respect that setting automatically, but for non-.NET programs (e.g., Python scripts) that use a different encoding (and produce not just pure ASCII output (in the 7-bit range)), you must (at least temporarily) specify that encoding by assigning to `[console]::OutputEncoding`; e.g., for UTF-8: `[console]::OutputEncoding = [Text.Encoding]::Utf8`. Note that this is not only necessary for the VT-sequences workaround, but **generally necessary for PowerShell to interpret non-ASCII characters correctly**. PowerShell *Core* (v6+), unfortunately, as of v7.3.2, still defaults to the OEM code page too, but that should be considered a *bug* (see [GitHub issue #7233](https://github.com/PowerShell/PowerShell/issues/7233)), given that it otherwise defaults to *UTF-8 without BOM*. 
--- [1] Using `Out-String -Stream` or its built-in wrapper function, `oss`, is tempting in order to achieve streaming output, but this no longer works as of PowerShell 7.3.2, *possibly* due to the optimization implemented in [GitHub PR #16612](https://github.com/PowerShell/PowerShell/pull/16612).
Create a regex to replace the last occurrence of a character in a string

I need to create a regex which should look for the last '\*' in the string, irrespective of the whitespace. Then I need to replace that occurrence with some text. Currently, it is replacing the first occurrence of '\*' in the string. How do I fix it?

Here's my code:

```
const regex = /\*/m;
const str = 'Field Name* * ';
const replaceStr = ' mandatory';

const result = str.replace(regex, replaceStr);

console.log('Substitution result: ', result);
```

Here, the output should be 'Field Name\* mandatory'. But what I get is 'Field Name mandatory \*'.
Instead of RegEx, use [`String#substring`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/substring) and [`String.lastIndexOf`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/lastIndexOf) as below ``` const str = 'Field Name* * '; const replaceStr = 'mandatory'; const lastIndex = str.lastIndexOf('*'); const result = str.substring(0, lastIndex) + replaceStr + str.substring(lastIndex + 1); console.log('Substitution result: ', result); ``` [**Still want to use RegEx?**](https://stackoverflow.com/questions/58988070/create-a-regex-to-replace-the-last-occurrence-of-a-character-in-a-string#comment104227954_58988099) ``` const regex = /\*([^*]*)$/; const str = 'Field Name* * Hello World!'; const replaceStr = ' mandatory'; const result = str.replace(regex, (m, $1) => replaceStr + $1); console.log('Substitution result: ', result); ```
How to observe the Add action of DbSet?

I have two classes named `Contact` and `ContactField`, as follows. When a `ContactField` is added to a `Contact`, I want to assign `SortOrder` to the `ContactField` automatically.

Do I need to inherit from DbSet and customize the `Add` method? How can I achieve this?

```
public class Foo
{
    private MyDbContext _db = new MyDbContext();

    public void HelloWorld()
    {
        Contact contact = ....; //< A contact from database.

        ContactField field = ....; ///< A new field
        .... ///< assign other properties into this `field`
        field.FieldType = FieldType.Phone;

        // How to automatically update `SortOrder`
        // when adding field into `ContactFields`
        contact.ContactFields.Add(field);

        _db.SaveChanges();
    }
}

public class Contact
{
    public long ContactID { get; set; }

    public string DisplayName { get; set; }
    public string DisplayCompany { get; set; }

    public DateTime CreatedTime { get; set; }
    public DateTime ModifiedTime { get; set; }

    // Original codes
    //public virtual ICollection<ContactField> ContactFields { get; set; }
    public virtual MyList<ContactField> ContactFields { get; set; }
}

public class ContactField
{
    public long ContactFieldID { get; set; }

    public int SortOrder { get; set; }

    public int FieldType { get; set; }
    public string Value { get; set; }
    public string Label { get; set; }

    [Column("ContactID")]
    public int ContactID { get; set; }
    public virtual Contact Contact { get; set; }
}
```

**Edit:**

I found that what I need is to monitor the changes of `ICollection<ContactField> ContactFields`. And `List<T>` is an implementation of `ICollection<T>`. So, I created a custom `MyList` and made it notify about changes to the `MyList` container. I will test whether it works later.

```
public class MyList<TEntity> : List<TEntity>
{
    public delegate void OnAddHandler(object sender, TEntity entry);
    public event OnAddHandler OnAddEvent;

    public new void Add(TEntity entity)
    {
        OnAddEvent(this, entity);
        base.Add(entity);
    }
}
```
The DbSet has a [Local](http://msdn.microsoft.com/en-us/library/gg696248%28v=vs.103%29.aspx) property which is an `ObservableCollection`. You can subscribe to the `CollectionChanged` event and update the sort order there. ``` public class Foo { private MyDbContext _db = new MyDbContext(); public void HelloWorld() { _db.Contacts.Local.CollectionChanged += ContactsChanged; Contact contact = ....; //< A contact from database. ContactField field = ....; ///< A new field .... ///< assign other properties into this `field` field.FieldType = FieldType.Phone; // How to automatically update `SortOrder` // when adding field into `ContactFields` contact.ContactFields.Add(field); _db.SaveChanges(); } public void ContactsChanged(object sender, NotifyCollectionChangedEventArgs args) { if (args.Action == NotifyCollectionChangedAction.Add) { // sort } } } ```
Golang template and testing for Valid fields In Go's database/sql package, there are a bunch of Null[Type] structs that help map database values (and their possible nulls) into code. I'm trying to figure out how to test whether a struct *field* is null, or in other words, when its Valid property is false. The recommended way to print a SQL field is to use the .Value property, like this: ``` <div>{{ .MyStruct.MyField.Value }}</div> ``` This works great. But suppose I have something slightly more complicated, where I need to test the value against something else, for example: ``` <select name="y"> {{ range .SomeSlice }} <option value="{{ . }}" {{ if eq $.MyStruct.MyField.Value .}}selected="selected"{{ end }}>{{ . }}</option> {{ end }} </select> ``` As it happens, this works great, too, unless .MyField is not Valid, in which case I get the error, "error calling eq: invalid type for comparison". The error makes sense, because Go can't compare a nil Field against another value (or something like that). I would have thought the 'easy' solution would be to test first whether the Value is nil, and then compare it against what I need, like this: ``` <select name="y"> {{ range .SomeSlice }} <option value="{{ . }}" {{ if and ($.MyStruct.MyField) (eq $.MyStruct.MyField.Value .)}}selected="selected"{{ end }}>{{ . }}</option> {{ end }} </select> ``` In this case, I get the same "error calling eq: invalid type for comparison". I guess that means .MyField "exists" even though the value of .MyField is not Valid. So, then I tried a half dozen other versions, mostly with the same error, for example: ``` <select name="y"> {{ range .SomeSlice }} <option value="{{ . }}" {{ if and ($.MyStruct.MyField.Valid) (eq $.MyStruct.MyField.Value .)}}selected="selected"{{ end }}>{{ . }}</option> {{ end }} </select> ``` At this point, I'm realizing I really don't understand how to test for the existence of a valid field at all. I'd appreciate any help you might have. Thanks.
The `and` function in Go templates is not short-circuit evaluated (unlike the `&&` operator in Go), all its arguments are evaluated always. Quoting from [`text/template`](https://golang.org/pkg/text/template/) package doc: > > > ``` > and > Returns the boolean AND of its arguments by returning the > first empty argument or the last argument, that is, > "and x y" behaves as "if x then y else x". All the > arguments are evaluated. > > ``` > > This means that the `{{if}}` action of yours: ``` {{ if and ($.MyStruct.MyField) (eq $.MyStruct.MyField.Value .)}} ``` Even though the condition would be evaluated to `false` if `$.MyStruct.MyField` is `nil`, but `eq $.MyStruct.MyField.Value .` will also be evaluated and result in the error you get. Instead you may embed multiple `{{if}}` actions, like this: ``` {{if $.MyStruct.MyField}} {{if eq $.MyStruct.MyField.Value .}}selected="selected"{{end}} {{end}} ``` You may also use the `{{with}}` action, but that also sets the dot, so you have to be careful: ``` <select name="y"> {{range $idx, $e := .SomeSlice}} <option value="{{.}}" {{with $.MyStruct.MyField}} {{if eq .Value $e}}selected="selected"{{end}} {{end}}>{{.}}</option> {{end}} </select> ``` **Note:** You were talking about `nil` values in your question, but the `sql.NullXX` types are structs which cannot be `nil`. In which case you have to check its `Valid` field to tell if its `Value()` method will return you a non-`nil` value when called. It could look like this: ``` {{if $.MyStruct.MyField.Valid}} {{if eq $.MyStruct.MyField.Value .}}selected="selected"{{end}} {{end}} ```
Xml node value from an attribute in VB.net

I have an XML document like

```
<Categories>
 <category name="a">
    <SubCategory>1</SubCategory>
    <SubCategoryName>name1</SubCategoryName>
  </category>
  <category name="b">
    <SubCategory>2</SubCategory>
    <SubCategoryName>name2</SubCategoryName>
  </category>
</Categories>
```

How do I get the value of `<SubCategoryName>` from `<category name="a">`?
As Usman recommended, you can use LINQ, but another popular option is to use XPath. You can use XPath to select matching elements using either the `XDocument` class or the older `XmlDocument` class.

Here's how you would do it with XPath via the `XDocument` class (note that `XDocument.Load` is a shared method, and `XPathSelectElement` is an extension method that requires `Imports System.Xml.XPath`):

```
Dim doc As XDocument = XDocument.Load(filePath)
Dim name As String = doc.XPathSelectElement("/Categories/category[@name='a']/SubCategoryName").Value
```

And here's how you would do it with XPath via the `XmlDocument` class:

```
Dim doc As New XmlDocument()
doc.Load(filePath)
Dim name As String = doc.SelectSingleNode("/Categories/category[@name='a']/SubCategoryName").InnerText
```

Here's the meaning of the parts of the XPath:

- `/Categories` - The slash at the beginning instructs it to look in the root of the XML document. The slash is followed by the name of the sub-element we are looking for in the root.
- `/category` - The name of the element we are looking for within the `/Categories` element.
- `[@name='a']` - The brackets mean that it is a condition--like an `If` statement, of sorts. The @ symbol means that we are specifying an attribute name (as opposed to an element name).
- `/SubCategoryName` - The name of the sub-element that we are looking for inside of the `category` element that matched that condition.

XPath is very powerful and flexible. XPath is a standard query language that is used by many XML tools and technologies, such as XSLT, so it is very useful to learn. Besides, sometimes, even in documentation, it's handy to be able to specifically reference a particular XML node in a document via a simple string. LINQ is great, but it is a proprietary Microsoft technology and you can't store the LINQ path as a string in a database or configuration file, if necessary, so sometimes XPath is a preferable method.

Another variation of the XPath would be `//category[@name='a']/SubCategoryName`. The double-slash at the beginning instructs it to find the category element anywhere in the document, rather than under any particular parent element.
Declaring a list with an anonymous type

I am trying to declare a list which holds an anonymous type. I have tried assigning the list value to null and have also tried assigning the list to a new entity list, but both show errors. Any pointers?

```
var finalEntries = new List<MyDbTableEntity>();

var groupedItemList = _context.MyDbTableEntity
    .Select(k => new
    {
        name = k.name.ToString(),
        data = k.info.ToString()
    })
    .GroupBy(k => k.name)
    .ToList();

finalEntries.AddRange(groupedItemList);
```

Error

```
cannot convert from 'System.Collections.Generic.List<System.Linq.IGrouping<string, <anonymous type: string name, string data>>>' to 'System.Collections.Generic.IEnumerable<MyDbTableEntity>'
```
It does not work because you have a typed list of `MyDbTableEntity`; you cannot add a different type to this typed list.

Try to define the right type:

```
// GroupBy yields IGrouping<string, MyDbTableEntity>,
// so the target list must be typed accordingly.
var finalEntries = new List<IGrouping<string, MyDbTableEntity>>();

var groupedItemList = _context.MyDbTableEntity
    .Select(k => new MyDbTableEntity
    {
        Name = k.name.ToString(),
        Data = k.info.ToString()
    })
    .GroupBy(k => k.Name)
    .ToList();

finalEntries.AddRange(groupedItemList);
```

or change it to a list of `object`:

```
var finalEntries = new List<object>(); // not sure about your type

var groupedItemList = _context.MyDbTableEntity
    .Select(k => new
    {
        name = k.name.ToString(),
        data = k.info.ToString()
    })
    .GroupBy(k => k.name)
    .ToList();

finalEntries.AddRange(groupedItemList);
```

**Important**: The problem with this is that once you leave this scope you will no longer have a typed anonymous type; I mean, you will need some artifacts to read the properties, for example using `reflection` to convert the type into a `dictionary`.
Change mouse pointer speed in Windows using python

I'm using a Windows 10 system. I have a Tkinter canvas which has an image drawn on it. Is there any way to slow down the mouse pointer speed when it is hovering over the canvas? I've checked out [this link](https://stackoverflow.com/questions/24068725/how-to-control-mouse-cursor-pointer-speed-using-python) and [this link](http://pyautogui.readthedocs.io/en/latest/mouse.html) but the answers seem unstable. To be more specific, is it possible to slow down the mouse pointer speed in plain Python/Tkinter?
On Windows systems you can use the native [`SystemParametersInfo`](http://allapi.mentalis.org/apilist/SystemParametersInfo.shtml) to change the speed of the mouse pointer. It's possible to implement via [`ctypes`](https://docs.python.org/3/library/ctypes.html), which is part of Python's standard library (does that count as a "plain" solution?).

Take a look at this snippet:

```
import ctypes

try:
    import tkinter as tk
except ImportError:
    import Tkinter as tk


def change_speed(speed=10):
    #   1 - slow
    #  10 - standard
    #  20 - fast
    set_mouse_speed = 113   # 0x0071 for SPI_SETMOUSESPEED
    ctypes.windll.user32.SystemParametersInfoA(set_mouse_speed, 0, speed, 0)


def proper_close():
    change_speed()
    root.destroy()

root = tk.Tk()
root.protocol('WM_DELETE_WINDOW', proper_close)

tk.Button(root, text='Slow', command=lambda: change_speed(1)).pack(expand=True, fill='x')
tk.Button(root, text='Standard', command=change_speed).pack(expand=True, fill='x')
tk.Button(root, text='Fast', command=lambda: change_speed(20)).pack(expand=True, fill='x')

root.mainloop()
```

But what if our "standard" speed isn't equal to `10`? No problem! Take a look at this snippet:

```
import ctypes

try:
    import tkinter as tk
except ImportError:
    import Tkinter as tk


def change_speed(speed):
    #   1 - slow
    #  10 - standard
    #  20 - fast
    set_mouse_speed = 113   # 0x0071 for SPI_SETMOUSESPEED
    ctypes.windll.user32.SystemParametersInfoA(set_mouse_speed, 0, speed, 0)


def get_current_speed():
    get_mouse_speed = 112   # 0x0070 for SPI_GETMOUSESPEED
    speed = ctypes.c_int()
    ctypes.windll.user32.SystemParametersInfoA(get_mouse_speed, 0, ctypes.byref(speed), 0)
    return speed.value


def proper_close():
    change_speed(standard_speed)
    root.destroy()

root = tk.Tk()
root.protocol('WM_DELETE_WINDOW', proper_close)
root.minsize(width=640, height=480)

standard_speed = get_current_speed()

safe_zone = tk.LabelFrame(root, text='Safe Zone', bg='green')
slow_zone = tk.LabelFrame(root, text='Slow Zone', bg='red')

safe_zone.pack(side='left', expand=True, fill='both')
slow_zone.pack(side='left', expand=True, fill='both')

slow_zone.bind('<Enter>', lambda event: change_speed(1))
slow_zone.bind('<Leave>', lambda event: change_speed(standard_speed))

root.mainloop()
```

In other words - it's not a hard task at all. We are free to get/set the mouse speed without crawling through the registry and without rocket-science computations!

You can find more about `SystemParametersInfo` on [MSDN](https://msdn.microsoft.com/en-us/library/windows/desktop/ms724947(v=vs.85).aspx).
Navigator.sendBeacon() data size limits

I want to use the quite new Beacon API. I searched the web but I couldn't find any data on the size limit of the data that can be sent. In the reference it's written that it's meant for small amounts of data, but I need to know how small...
The maximum size, if set, will be up to the user agent (browser, or whatever). See <http://www.w3.org/TR/beacon/#sec-sendBeacon-method> You could easily create a test for the `data` size limit in your web page by creating strings of variable length `N` (starting with an arbitrarily high value for `N`, and using binary search), and check the return value of `sendBeacon` (per the spec, `sendBeacon` returns `false` when the user agent data limit is exceeded). For example, I used this method to confirm that in Chrome 40 on Windows 7 the limit is `65536` (`2^16`). Example code (without the binary search for `n`): ``` var url = 'http://jsfiddle.net?sendbeacon'; var n = 65536; // sendBeacon limit for Chrome v40 on Windows (2^16) // this method courtesy of http://stackoverflow.com/questions/14343844/create-a-string-of-variable-length-filled-with-a-repeated-character var data = new Array(n+1).join('X'); // generate string of length n if(!navigator.sendBeacon(url, data)) { alert('data limit reached'); } ```
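For completeness, here is a sketch of the binary search mentioned above (it assumes `navigator.sendBeacon` is available, and note that each successful probe queues a real request, so point `url` at a throwaway endpoint):

```
function findBeaconLimit(url) {
    var lo = 0, hi = 1 << 20; // assume the cap is somewhere below 1 MiB
    while (lo < hi) {
        var mid = Math.floor((lo + hi + 1) / 2);
        var payload = new Array(mid + 1).join('X'); // string of length mid
        if (navigator.sendBeacon(url, payload)) {
            lo = mid;     // mid characters were still accepted
        } else {
            hi = mid - 1; // over the user agent's limit
        }
    }
    return lo; // largest accepted payload length
}
```

Each iteration halves the search interval, so finding the exact limit takes about 20 probes instead of 65536 linear attempts.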
Pandas IndexSlice fails with pd.style

Given this dataframe:

```
In [1]: df = pd.DataFrame(np.random.rand(4,4), index=['A','B','C','All'], columns=[2011,2012,2013,'All']).round(2)
        print(df)

Out[1]:
      2011  2012  2013   All
A     0.94  0.17  0.06  0.64
B     0.49  0.16  0.43  0.64
C     0.16  0.20  0.22  0.37
All   0.94  0.04  0.72  0.18
```

I'm trying to use `pd.style` to format the output of a dataframe. One keyword is `subset` where you define where to apply your formatting rules (for example: highlight max). The documentation for [pd.style](http://pandas.pydata.org/pandas-docs/stable/style.html#Finer-Control:-Slicing) hints that it's better to use `pd.IndexSlice` for this:

> The value passed to `subset` behaves similar to slicing a DataFrame.
>
> - A scalar is treated as a column label
> - A list (or series or numpy array)
> - A tuple is treated as (row\_indexer, column\_indexer)
>
> Consider using `pd.IndexSlice` to construct the tuple for the last one.

I'm trying to understand why it's failing in some cases. Let's say I want to apply a bar to all rows but the first and last, and all columns but the last. This `IndexSlice` works:

```
In [2]: df.ix[pd.IndexSlice[1:-1,:-1]]

Out[2]:
   2011  2012  2013
B  0.49  0.16  0.43
C  0.16  0.20  0.22
```

But when passed to `style.bar`, it doesn't:

```
In [3]: df.style.bar(subset=pd.IndexSlice[1:-1,:-1], color='#d65f5f')

TypeError: cannot do slice indexing on <class 'pandas.indexes.base.Index'> with these indexers [1] of <class 'int'>
```

Whereas if I pass it slightly differently, it works:

```
In [4]: df.style.bar(subset=pd.IndexSlice[df.index[1:-1],df.columns[:-1]], color='#d65f5f')
```

[![df.style.bar works as expected](https://i.stack.imgur.com/51Iyo.png)](https://i.stack.imgur.com/51Iyo.png)

I'm confused why this doesn't work. There seems to be a bit of a lack of documentation regarding `pd.IndexSlice` so maybe I'm missing something. It could also be a bug in `pd.style` (which is fairly new, since `0.17.1` only). Can someone explain what is wrong?
It's too bad this compatibility issue exists. From what I can tell though, you answered your own question. From the docs you included the line:

> A tuple is treated as (row\_indexer, column\_indexer)

This is not what we get with the first slice:

```
In [1]: pd.IndexSlice[1:-1,:-1]
Out[2]: (slice(1, -1, None), slice(None, -1, None))
```

but we do get something of that form from the second slice method:

```
In [3]: pd.IndexSlice[df.index[1:-1],df.columns[:-1]]
Out[4]: (Index(['B', 'C'], dtype='object'), Index([2011, 2012, 2013], dtype='object'))
```

I don't think that `pd.IndexSlice` even does anything except wrap the contents in a tuple for this second case. You can just do this:

```
df.style.bar(subset=(df.index[1:-1],df.columns[:-1]), color='#d65f5f')
```
Grails & Spring - in resources.groovy how to setup a list

The question is straightforward: how do I create a list of beans in resources.groovy? Something like this doesn't work:

```
beans {
    listHolder(ListHolder){
        items = list(){
            item1(Item1),
            item2(Item2),
            ...
        }
    }
}
```

Thanks in advance for the help.
If you want a list of references to other *named* beans you can just use normal Groovy list notation and it will all be resolved properly: ``` beans { listHolder(ListHolder){ items = [item1, item2] } } ``` but this doesn't work when the "items" need to be anonymous inner beans, the equivalent of the XML ``` <bean id="listHolder" class="com.example.ListHolder"> <property name="items"> <list> <bean class="com.example.Item1" /> <bean class="com.example.Item2" /> </list> </property> </bean> ``` You'd have to do something like ``` beans { 'listHolder-item-1'(Item1) 'listHolder-item-2'(Item2) listHolder(ListHolder){ items = [ref('listHolder-item-1'), ref('listHolder-item-2')] } } ```
What is the main difference between @api.onchange and @api.depends in Odoo (OpenERP)?

In Odoo v8 many API decorators are used. But I don't understand the main difference between `@api.depends` and `@api.onchange`. Can anyone help me out with this one? Thank You.
**@api.depends**

This decorator is specifically used for computed fields ("fields.function") in Odoo. For a computed field, you calculate the value and store it in a field, where the calculation may depend on some other field(s) of the same table or of some other table; in that case you can use '@api.depends' to keep a 'watch' on a field of some table. So, this will trigger the call to the decorated function if any of the fields in the decorator is **'altered by the ORM or changed in the form'**.

**Let's say there is a table 'A' with fields "x, y & z" and a table 'B' with a field "p", where 'p' is a computed field depending upon the field 'x' from table 'A'. If any change is made to the field 'x', it will trigger the decorated function for calculating the field 'p' in table 'B'.**

Make sure tables "A" and "B" are related in some way.

**@api.onchange**

This decorator will trigger the call to the decorated function if any of the fields specified in the decorator is changed in the form. **Here the scope is limited to the same screen / model.**

Let's say on a form we have fields "DOB" and "Age", so we can have an @api.onchange decorator for "DOB", where as soon as you change the value of "DOB", you can calculate the "age" field.

You may find similarities between @api.depends and @api.onchange, but one difference is that the scope of onchange is limited to the same screen / model, while @api.depends also works across other related screens / models.

For more info, [Here](http://odoo-new-api-guide-line.readthedocs.org/en/latest/decorator.html) is the link that describes all the APIs of Odoo v8.
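To make the DOB/Age example concrete, here is a minimal sketch (hypothetical model and field names; Odoo v8 uses the `openerp` namespace):

```
from datetime import date

from openerp import models, fields, api  # Odoo v8 import path


class Person(models.Model):
    _name = 'demo.person'  # hypothetical model

    dob = fields.Date('Date of Birth')
    # Stored computed field: recomputed whenever 'dob' changes,
    # whether through the form or through an ORM write elsewhere.
    age = fields.Integer(compute='_compute_age', store=True)

    @api.depends('dob')
    def _compute_age(self):
        for rec in self:
            if rec.dob:
                born = fields.Date.from_string(rec.dob)
                rec.age = (date.today() - born).days // 365
            else:
                rec.age = 0

    # Fires only while the user edits 'dob' on this model's form;
    # programmatic writes do not trigger it.
    @api.onchange('dob')
    def _onchange_dob(self):
        if self.dob:
            born = fields.Date.from_string(self.dob)
            self.age = (date.today() - born).days // 365
```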
Why do I get black outlines/edges on a texture in libGDX?

Whenever I draw a texture that has alpha around the edges (it is anti-aliased by photoshop), these edges become dark. I have endlessly messed around with texture filters and blend modes but have had no success. Here is what I mean:

`minFilter: Linear magFilter: Linear`

[![Circle on yellow background](https://i.stack.imgur.com/UwBZO.png)](https://i.stack.imgur.com/UwBZO.png)
[![Red background](https://i.stack.imgur.com/lS8iF.png)](https://i.stack.imgur.com/lS8iF.png)

`minFilter: MipMapLinearNearest magFilter: Linear`

[![enter image description here](https://i.stack.imgur.com/x8yVs.png)](https://i.stack.imgur.com/x8yVs.png)
[![enter image description here](https://i.stack.imgur.com/tCbO4.png)](https://i.stack.imgur.com/tCbO4.png)

`minFilter: MipMapLinearLinear magFilter: Linear`

[![enter image description here](https://i.stack.imgur.com/Sliwk.png)](https://i.stack.imgur.com/Sliwk.png)
[![enter image description here](https://i.stack.imgur.com/3rOle.png)](https://i.stack.imgur.com/3rOle.png)

As you can see, changing the filter on the libGDX **Texture Packer** makes a big difference with how things look, but alphas are **still dark**. I have tried manually setting the texture filter in libgdx with:

```
texture.setFilter(minFilter, magFilter);
```

But that does not work. I have read that downscaling with a linear filter causes the alpha pixels to default to black? If this is the case, how can I avoid it?

I have also tried changing the blend mode: `glBlend(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` makes no difference. `glBlend(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` removes alpha altogether so that doesn't work.

I do **NOT** want to set my `minFilter` to `Nearest` because it makes things look terribly pixellated. I have tried every other combination of texture filters but everything results in the same black/dark edges/outline effect.
> > I have read that downscaling with a linear filter causes the alpha pixels to default to black? > > > This is not necessarily true; it depends on what colour Photoshop decided to put in the *fully transparent* pixels. Apparently this is black in your case. The problem occurs because the GPU is interpolating between two neighbouring pixels, one of which is fully transparent (with all colour channels set to zero as well). Let's say that the other pixel is bright red: ``` (255, 0, 0, 255) // Bright red, fully opaque ( 0, 0, 0, 0) // Black, fully transparent ``` Interpolating with a 50/50 ratio gives: ``` (128, 0, 0, 128) ``` This is a half-opaque *dark red* pixel, which explains the dark fringes you're seeing. There are two possible solutions. ### 1. Add bleeds to transparent regions Make sure that the fully transparent pixels have the right colour assigned to them; essentially "bleed" the colour from the nearest non-transparent pixel into adjacent fully transparent pixels. I'm not sure Photoshop can do this, but the libGDX TexturePacker can; see the [`bleed` and `bleedIterations` settings](https://github.com/libgdx/libgdx/wiki/Texture-packer#configuration). You need to be careful to set `bleedIterations` high enough, *and* add enough padding for the bleed to expand into, for your particular level of downscaling. Now the example comes out like this: ``` (255, 0, 0, 255) (255, 0, 0, 0) // Red bled into the transparent region ``` Interpolation now gives a bright red transparent pixel, as desired: ``` (255, 0, 0, 128) ``` ### 2. Use premultiplied alpha This is less finicky to get right, but it helps to know exactly what you're doing. Again TexturePacker has you covered, with the `premultipliedAlpha` setting. This is OpenGL blend mode `glBlend(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`. The numbers in the example don't change; this is still the pixel that comes out: ``` (128, 0, 0, 128) ``` However, this is no longer interpreted as "half-transparent dark red", but rather as "add some red, and remove some background". More generally, with premultiplied alpha, the colour channels are not "which colour to blend in" but "how much colour to add". Note that a pixel like `(255, 0, 0, 0)` can no longer exist in the source texture: because the alpha is premultiplied, an alpha of zero automatically means that all colour channels must be zero as well. (If you want to get really fancy, you can even use such pixels to apply *additive* blending in the same render pass and the same texture as regular blending!) Further reading on premultiplied alpha: - [Shawn Hargreaves' blog](https://blogs.msdn.microsoft.com/shawnhar/2009/11/06/premultiplied-alpha/) - [Alpha Blending: To Pre or Not To Pre by NVIDIA](https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre)
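If you go the premultiplied-alpha route, the render side in libGDX is a one-line blend-function switch; a minimal sketch, assuming the atlas was packed with `premultipliedAlpha: true` and that `batch`, `region`, `x` and `y` are your existing objects:

```
// Treat texture colours as already multiplied by their alpha.
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);
batch.begin();
batch.draw(region, x, y);
batch.end();
```

`GL20` here is `com.badlogic.gdx.graphics.GL20`.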
Please explain break and continue on Objective-C First excuse my newbie question. I am learning Objective C and very new to programming itself. Can you guys explain me what causes a break or continue inside a loop? For example, what is the difference between the two codes? ``` for (int i=0; i<100; i++) { if ([self okToProceed]) { [self doSomething]; } else { break; } } ``` and ``` for (int i=0; i<100; i++) { if ([self okToProceed]) { [self doSomething]; } else { continue; } } ``` Will the first code stop the loop the first time okToProceed returns false and the second loop simply continue to run, but doing nothing when okToProceed is false?
A `break` statement exits the loop. You can think of it as a means to create loop exit conditions.

For example, in your code: `for(int i=0; i<100; i++)`, `i<100` is a loop condition, i.e. the loop will exit if this condition is not met.

Similarly, inside the loop you could have something like `if(i == 34) { break; }`. This will exit the loop when the value of `i` reaches 34, even though the loop exit condition specified was `i<100`.

---

A `continue` statement is used to skip to the next loop cycle. This statement is used to basically avoid running the rest of the code within the loop. Example:

```
for(i=0; i<5; i++) {
    if(i == 3) {
        continue;
    }
    print(i);
}
```

This loop will print `0 1 2 4`. When `i` is 3, `continue` will skip to the next loop iteration and the statements after `continue` (*i.e. `print(i);` will not execute*).

Of course, the loop condition is checked before each loop iteration runs.
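Since the question is about Objective-C, here is the same `continue` example written out in that language (just a small sketch; `NSLog` prints to the console):

```
for (int i = 0; i < 5; i++) {
    if (i == 3) {
        continue; // skip the rest of this iteration when i is 3
    }
    NSLog(@"%d", i); // prints 0 1 2 4
}
```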
addEventListener memory leak due to frames I have a GreaseMonkey script that works on a site that uses frames as an integral part of its interface. This script leaks memory like a sieve, and I believe it is due to my use of addEventListener within one of the frames. Quite simply, I attach a variety of event listeners, then the frame reloads and I attach the event listeners, and then the frame reloads, around and around for hundreds or possibly thousands of iterations as you interact with various elements in this frame or others. By the end of it, Firefox has gone from ~300M of memory to as much as 2G (or crashes before it gets there). I read somewhere that doing a full page reload would allow FireFox's garbage collection routines to kick in and recover all the memory from orphaned event handlers, and sure enough when I hit F5 after my script has run for a while, within about 10 seconds the memory is back down to 300M. Unfortunately this breaks a different frame in the site (a very popular chat window), so while it does seem to confirm my suspicion that addEventListener is to blame, it's not really an option as a solution. Is there something else I can do to free up the memory properly without forcing a full page refresh? (Currently using GM 1.5 and FF 17, but the issue has existed since GM 0.8/FF 4 or thereabouts.)
Without seeing your complete script, or a [***Short, Self Contained, Compilable Example***](http://sscce.org/), we can't be sure of what is going on. It may be that `addEventListener` is not the problem. Here are some strategies for better code, with fewer memory leaks:

1. Inline/anonymous functions *are* often a culprit, especially with event handlers.

**Poor / Leaky:**

```
elem.onclick = function () {/*do something*/};

elem.addEventListener ("click", function() {/*do something*/}, false);

$("elem").click ( function () {/*do something*/} );
```

**Not leaky and also easier to maintain:**

```
elem.onclick = clickHandler;

elem.addEventListener ("click", clickHandler, false);

$("elem").click (clickHandler);

function clickHandler (evt) { /*do something*/ }
```

Note that for userscripts you should [avoid `onclick`, etc. anyway.](http://commons.oreilly.com/wiki/index.php/Greasemonkey_Hacks/Getting_Started#Pitfall_.232:_Event_Handlers)
2. Likewise do not use JS in HTML attributes. E.g. don't use `<span onclick="callSomeFunction()">`, etc.
3. Minimize the code that runs in iframes to only that code you explicitly want.
   1. Use the `@include`, `@exclude`, and `@match` directives to block as many unwanted iframes as possible.
   2. [Wrap all code that doesn't need to run in iframes in a block](http://wiki.greasespot.net/Prevent_Execution_in_Frames) like so:

```
if (window.top === window.self) {
    // Not in a frame
}
```
4. Do not use `innerHTML`.
5. For lots of elements, or elements that come and go with AJAX, do not use `addEventListener()` or jQuery's `.bind()`, `.click()`, etc. This replicates the listener across, potentially, thousands of nodes. Use [jQuery's `.on()`](http://www.jqapi.com/#p=on). That way the listener is attached only once and triggers appropriately via bubbling. (Note that in some rare-ish cases `.on()` can be blocked by the page's javascript.) In your case, you probably want something like:

```
$(document).on ("click", "YOUR ELEM SELECTOR", clickHandler);

function clickHandler (evt) { /*do something*/ }
```
6. To avoid surprise circular references or orphaned items, use jQuery to add or remove elements, rather than direct DOM methods like `createElement()`, `appendChild()`, etc. jQuery is designed/tested to minimize such things.
7. Beware of overusing [`GM_setValue()`](http://wiki.greasespot.net/GM_setValue). It easily can use lots of global resources or cause a script instance to crash.
   1. For same-domain values, use `localStorage`.
   2. Do not use `GM_setValue()` to store anything but strings. For anything else, use a serializer such as [`GM_SuperValue`](http://userscripts.org/scripts/show/107941). Even innocent looking integers can cause the default `GM_setValue()` to crash.
   3. Rather than store lots of small variables, it may be better to wrap them in an object and store *that* with one of the serializers.
8. Always check return values and assume that elements can be missing:

This is **poor** (and, alas, typical):

```
$("selector").text($("selector").text().match(/foo=([bar]+)/)[1]);
```

**Better:**

```
var salesItemDiv = $("selector");
var fooMatch = salesItemDiv.text ().match (/\bfoo\s*=\s*([bar]+)\b/i);
if (fooMatch && fooMatch.length > 1) {
    salesItemDiv.text ( fooMatch[1] );
}
```

possibly followed by:

```
salesItemDiv = fooMatch = null;
```

see below.
9. Beware of recursive / inline `setTimeout()` calls. Use `setInterval()` for repeated timing. Just like with event handlers, do not use inline/anonymous functions.
10. Run your code through [JSLint](http://www.jslint.com/).
11. Avoid `eval()` and [auto/hidden `eval()` invocations](http://commons.oreilly.com/wiki/index.php/Greasemonkey_Hacks/Getting_Started#Pitfall_.231:_Auto-eval_Strings).
12. Set variables to `null` when you are done with them. [See this, for example.](https://stackoverflow.com/a/768872/331508)
13. Reference: ["Do you know what may cause memory leaks in JavaScript?"](https://stackoverflow.com/q/170415/331508)
14. [Additional reading on JS memory leaks](https://stackoverflow.com/search?tab=votes&q=%5Bjavascript%5D%20%2btitle:memory%20%2btitle:leaks)
15. [Mozilla *Performance: Leak Tools*](https://wiki.mozilla.org/Performance:Leak_Tools)
Bootstrap responsive table with horizontal scroll

I am using a bootstrap 3 `responsive-table` with nearly a hundred columns. Since the number of columns is too large, the table does not fit on my computer screen. According to the bootstrap [documentation](http://getbootstrap.com/css/#tables) on responsive tables, a device/screen **768px** wide or less will have a horizontal scroll bar for these tables and the rest will not. Is there a way to make the scroll work even on laptop/desktop screens?
You can just override the way Bootstrap styles `table-responsive` class by imitating its own CSS. For resolutions lower than 768px, Bootstrap has the following style applied: ``` /* Bootstrap.css */ @media screen and (max-width: 767px) { ... .table-responsive { width: 100%; margin-bottom: 15px; overflow-x: auto; overflow-y: hidden; -webkit-overflow-scrolling: touch; -ms-overflow-style: -ms-autohiding-scrollbar; border: 1px solid #DDD; } } ``` Therefore, if you want the same behaviour to apply on resolutions larger than 768px, you can simply duplicate this in your own styling: ``` /* Your CSS */ .table-responsive { width: 100%; margin-bottom: 15px; overflow-x: auto; overflow-y: hidden; -webkit-overflow-scrolling: touch; -ms-overflow-style: -ms-autohiding-scrollbar; border: 1px solid #DDD; } ```
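For reference, the markup this pairs with is the standard Bootstrap wrapper; with the override above, the wrapper scrolls horizontally at any viewport width:

```
<div class="table-responsive">
  <table class="table">
    <!-- your hundred-column table -->
  </table>
</div>
```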
Inline SVG in Firefox I'm a bit stumped with this one. I'm rendering SVG visualizations using Protovis, a JS library, and it works perfectly well in Chrome as well as Firefox. I save the rendered SVG on my server and try to re-render it in a "gallery" view using a PHP function, and this fails in Firefox. All I see is the text in the SVG, but not the SVG. I save the full svg content, like so: ``` <svg height="220" width="880" stroke-width="1.5" stroke="none" fill="none" font-family="sans-serif" font-size="10px"><g transform="translate(30, 10)"><line stroke-width="1" ``` I've tried using `<object>` but all that does is prompt Firefox to download a plugin it can't find. It works in FF4 beta, but I can't see why it won't work even in Firefox 3.6. Is this something I ought to give up on? You can see a demo here: <http://www.rioleo.org/protoviewer> (click on "gallery") Thanks once again!
Inline SVG only works in Firefox in two situations: - Firefox has the experimental [HTML5 parser](http://hacks.mozilla.org/2010/05/firefox-4-the-html5-parser-inline-svg-speed-and-more/) enabled (ie. you're using a 4.0 nightly) - The document being parsed is not HTML but [XHTML](http://www.greytower.net/archive/articles/xhtmlcontent.html) (`Content-type: application/xhtml+xml`) The `object` approach suggested by Rob should work, as long as the separate SVG file is coming from your server with `Content-type: image/svg+xml` and you use the correct syntax: ``` <object data="foo.svg" type="image/svg+xml" width="400" height="300"> ``` See [Damian Cugley's article 'SVG: object or embed?'](http://www.alleged.org.uk/pdc/2002/svg-object.html) for details of some other options, or use [SVGWeb](http://code.google.com/p/svgweb/).
Screen scraping web page containing button with AJAX

I am trying to automate some of our processes; one includes logging in to an external web page, clicking a link to expand details, then grabbing all the details displayed. I have got the process logging in, and can grab all of the details once they are expanded. The problem is with clicking the link. The link is defined like below (I have removed what the `Submit` method actually does as the code is long and probably irrelevant. Obviously the `img` is a placeholder just as an example):

```
<a id="form:SummarySubView:closedToggleControl" onclick="A4J.AJAX.Submit(...); return false;" href="#">
    <img ... />
</a>
```

I use this data as below:

```
void browser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    WebBrowser browser = (WebBrowser)sender;
    HtmlElement expandDetails = browser.Document.GetElementById("form:SummarySubView:closedToggleControl"); //When open ID for element is "form:SummarySubView:openToggleControl"
    if(expandDetails == null) //If already expanded
    {
        //Stuff
    }
    else
    {
        expandDetails.InvokeMember("click"); //Click on element to run AJAX
    }
}
```

Upon running `expandDetails.InvokeMember("click");` `browser_DocumentCompleted` gets called again as expected, but the document is the same as before and `expandDetails` is found again with the "closed" id. This means that the details I am looking for are never shown.

How do I get access to the document AFTER the AJAX script runs correctly? Adding a `Timer` to delay checking the document doesn't seem to have worked.
So a really simple solution seems to have worked. My code now looks like:

```
void browser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    WebBrowser browser = (WebBrowser)sender;
    HtmlElement expandDetails = browser.Document.GetElementById("form:SummarySubView:closedToggleControl"); //When open ID for element is "form:SummarySubView:openToggleControl"
    if(expandDetails == null) //If already expanded
    {
        //Stuff
    }
    else
    {
        expandDetails.InvokeMember("click"); //Click on element to run AJAX
        while (expandDetails != null) //Poll until the AJAX update replaces the "closed" toggle
        {
            expandDetails = browser.Document.GetElementById("form:SummarySubView:closedToggleControl");
            Application.DoEvents();
            System.Threading.Thread.Sleep(200);
        }
        //Stuff
    }
}
```

So running the `while` loop works fine for my case.
why does this code compile with eclipse compiler but not with javac (maven)

There are a bunch of questions like this. I went through most of them, but none actually helped and I couldn't come up with any answer: I have a weird problem in one of my GWT/GWTP classes. The class compiles fine with the Eclipse compiler but fails with the javac compiler (Maven).

```
//additional imports

import com.gwtplatform.mvp.client.PresenterWidget;
import com.gwtplatform.mvp.client.View;

public class MyPresenter extends PresenterWidget<MyPresenter.MyView> {

    public interface MyView extends View {
    }

    some code
}
```

When I try to compile with maven I get the following error:

> cannot find symbol symbol: class **View**

**View** refers to the `View` interface in the `com.gwtplatform.mvp.client` package. I have other classes which look the same and work fine. The weird thing is that if I change the order of the imports or specify the exact package of the `View` interface, it compiles without any problems in maven. To be specific I moved the import for `com.gwtplatform.mvp.client.View`:

```
import com.gwtplatform.mvp.client.View;
//additional imports

import com.gwtplatform.mvp.client.PresenterWidget;
```

I had a similar problem some time ago with a cyclic inheritance problem between classes which refer to inner classes (it worked in eclipse but didn't in javac). However I am not sure if that is the same problem.
Eclipse's compiler is actually a different compiler than the javac compiler. Sometimes they drift apart in behavior, usually they reconcile quickly. This was very noticable when Java's generics came out. There were cases where eclipse either found fault with a generics directive that javac would permit or javac found fault with generics that eclipse would permit (can't remember which way it drifted apart, too long ago). In either case, javac is more likely to be the correct implementation. In your case, you pollute the namespace with your generics reference to an inner class. Odds are that eclipse is reaching for the "View" in a different priority order than javac. Odds are excellent that either Javac implements the order as it is specified in the Java language guidelines, or the Java guidelines have not yet pronounced the "one true order" of resolving conflicting like-named classes. Normally this is not a problem because it is not permissible to use the same non-fully-qualified name in Java twice; however, with internal classes the specifications can be sort of "worked around". I would make the ``` public interface MyView extends View { } ``` bind to just one view (don't know if `com.gwtplatform.mvp.client.View` or `MyPresenter.View` is the right one) by making the name explicit. ``` public interface MyView extends MyPresenter.View { } ``` or ``` public interface MyView extends com.gwtplatform.mvp.client.View { } ``` That way you don't fall victim to the interface "binding" to the wrong type in a compiler-dependent manner.
Do not list parent directory as part of subdirectories when using find Suppose I have this going: ``` $ find ./src -name '*.txt' ./src/file1.txt ./src/subdir1/file2.txt ./src/subdir1/subsubdir1/file3.txt ./src/subdir2/file4.txt ``` I want to exclude the directory being searched, so something like: `./file1.txt` instead of `./src/file1.txt` `./subdir1/file2.txt` instead of `./src/subdir1/file2.txt` and so on. Adding `-mindepth 1` didn't do anything. Is it possible to do what I'm after purely in `find`?
Expanding on what [@steeldriver said](https://unix.stackexchange.com/questions/734848/do-not-list-parent-directory-as-part-of-subdirectories-when-using-find#comment1395015_734848): ``` $ cd "$(mktemp --directory)" # create temporary directory direnv: unloading $ mkdir foo bar $ touch foo/1 bar/2 $ find foo bar -type f -name '*' -printf '%P\n' 1 2 ``` The `%P` formatting string is documented as follows by the GNU `find` manual: > > `%P` > > File's name with the name of the starting-point under > which it was found removed. > > > Here, "file's name" means the pathname of the found file, not just the filename.
How can I selectively copy files from one directory to another directory? On Linux, how do I selectively copy most – but not all – files from a directory (`dir1`) to another directory (`dir2`)? I do not want to copy `*.c` and `*.txt` files to `dir2`. The `cp` man page online cannot help me.
In addition to eboix's `find` command (which as it stands breaks on whitespace, I'll put a safer way or two at the end), you can use `bash`'s `extglob` feature: ``` # turn extglob on shopt -s extglob # move everything but the files matching the pattern mv dir1/!(*.c) -t dir2 # If you want to exclude more patterns, add a pipe between them: mv dir1/!(*.c|*.txt) -t dir2 ``` See the `bash` man page for more you can do with extglob. Note that this is not recursive and so will only move files in `dir1` directly, not subdirectories. The `find` method is recursive. --- Safer `find` commands: ``` find dir1 ! -name '*.c' -print0 | xargs -0 mv -t dir2 find dir1 ! -name '*.c' -exec mv -t dir2 {} + ``` For more patterns, just add more `! -name` statements: ``` find dir1 ! -name '*.c' ! -name '*.txt' -print0 | xargs -0 mv -t dir2 find dir1 ! -name '*.c' ! -name '*.txt' -exec mv -t dir2 {} + ```
Printing Empty Json as a result

I am trying to retrieve some data from my postgres db and print it to `localhost/db` as json. I am succeeding in printing it without json, but I need it in json.

main.go:

```
package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "log"
    "net/http"

    _ "github.com/lib/pq"
)

type Book struct {
    isbn   string
    title  string
    author string
    price  float32
}

var b []Book

func main() {
    db, err := sql.Open("postgres", "postgres://****:****@localhost/postgres?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }

    rows, err := db.Query("SELECT * FROM books")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    var bks []Book

    for rows.Next() {
        bk := new(Book)
        err := rows.Scan(&bk.isbn, &bk.title, &bk.author, &bk.price)
        if err != nil {
            log.Fatal(err)
        }

        bks = append(bks, *bk)
    }
    if err = rows.Err(); err != nil {
        log.Fatal(err)
    }
    b = bks
    http.HandleFunc("/db", getBooksFromDB)
    http.ListenAndServe("localhost:1337", nil)
}

func getBooksFromDB(w http.ResponseWriter, r *http.Request) {
    fmt.Println(b)
    response, err := json.Marshal(b)
    if err != nil {
        panic(err)
    }
    fmt.Fprintf(w, string(response))
}
```

[This is what I get when I access localhost:1337/db](https://i.stack.imgur.com/8kefD.png)

And this is the output on the terminal:

```
[{978-1503261969 Emma Jayne Austen 9.44} {978-1505255607 The Time Machine H. G. Wells 5.99} {978-1503379640 The Prince Niccolò Machiavelli 6.99}]
```

Does anyone know what the problem is?
The [`encoding/json`](https://golang.org/pkg/encoding/json/) package uses reflection ([`reflect`](https://golang.org/pkg/reflect/) package) to access fields of structs. You need to export the fields of your struct to make it work (start them with an uppercase letter): ``` type Book struct { Isbn string Title string Author string Price float32 } ``` And when scanning: ``` err := rows.Scan(&bk.Isbn, &bk.Title, &bk.Author, &bk.Price) ``` Quoting from [`json.Marshal()`](https://golang.org/pkg/encoding/json/#Marshal): > > Struct values encode as JSON objects. Each **exported struct field** becomes a member of the object... > > >
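If you also want the keys in the JSON output to stay lowercase after exporting the fields, the standard struct tags of `encoding/json` handle that:

```
type Book struct {
    Isbn   string  `json:"isbn"`
    Title  string  `json:"title"`
    Author string  `json:"author"`
    Price  float32 `json:"price"`
}
```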
Does PyTorch allow applying given transformations to bounding box coordinates of the image?

In PyTorch, I know that certain image processing transformations can be composed as such:

`import torchvision.transforms as transforms`

`transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])`

In my case, each image has a corresponding annotation of bounding box coordinates in YOLO format. Does PyTorch allow applying these transformations to the bounding box coordinates of the image as well, and later saving them as new annotations?

Thanks.
The transformations that you used as examples do not change the bounding box coordinates. `ToTensor()` converts a PIL image to a torch tensor and `Normalize()` is used to normalize the channels of the image.

Transformations such as `RandomCrop()` and `RandomRotation()` will cause a mismatch between the location of the bounding box and the (modified) image.

However, PyTorch makes it very flexible for you to create your own transformations and have control over what happens with the bounding box coordinates. Docs for more details: <https://pytorch.org/docs/stable/torchvision/transforms.html#functional-transforms>

As an example (modified from the documentation):

```
import torchvision.transforms.functional as TF
import random

def my_rotation(image, bounding_box):
    if random.random() > 0.5:
        angle = random.randint(-30, 30)
        image = TF.rotate(image, angle)
        # TF.rotate works on PIL images, so this line applies directly only if
        # the boxes are represented as a mask image; for raw coordinates,
        # rotate the corner points yourself (see the sketch below).
        bounding_box = TF.rotate(bounding_box, angle)
    # more transforms ...
    return image, bounding_box
```

Hope that helps =)
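If the boxes are raw coordinates rather than a mask, here is a sketch of rotating the corner points directly (sign conventions differ between libraries and image coordinates have y pointing down, so verify the direction against a known case):

```
import math
import torch

def rotate_points(points, angle_deg, center):
    # points: (N, 2) tensor of xy corners; center: (2,) tensor
    theta = math.radians(angle_deg)
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s], [s, c]], dtype=points.dtype)
    # translate to the origin, rotate, translate back
    return (points - center) @ rot.T + center
```

Afterwards, take the min/max over the rotated corners of each box to recover an axis-aligned bounding box.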
Bouncing Animation in Android

I would like to create an animation like the one below:

[![Animation Bounce](https://i.stack.imgur.com/I0Xtj.gif)](https://i.stack.imgur.com/I0Xtj.gif)

I have used the following code:

```
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:interpolator="@android:anim/bounce_interpolator" >

    <scale
        android:duration="10000"
        android:fromXScale="1"
        android:fromYScale="0.5" />

</set>
```

But I was not able to make it work! Any help would be appreciated. Thanks
Try the code below. **No XML file is needed** for the bounce animation you want.

**- Up Bouncing Animation**

```
doBounceAnimation(yourView);

private void doBounceAnimation(View targetView) {
    Interpolator interpolator = new Interpolator() {
        @Override
        public float getInterpolation(float v) {
            return getPowOut(v, 2); // use getPowOut(v, 3) for a stronger upward bounce
        }
    };

    ObjectAnimator animator = ObjectAnimator.ofFloat(targetView, "translationY", 0, 25, 0);
    animator.setInterpolator(interpolator);
    animator.setStartDelay(200);
    animator.setDuration(800);
    animator.setRepeatCount(5);
    animator.start();
}

private float getPowOut(float elapsedTimeRate, double pow) {
    return (float) ((float) 1 - Math.pow(1 - elapsedTimeRate, pow));
}
```

**- Down Bouncing Animation**

```
doBounceAnimation(yourView);

private void doBounceAnimation(View targetView) {
    Interpolator interpolator = new Interpolator() {
        @Override
        public float getInterpolation(float v) {
            return getPowIn(v, 2); // use getPowIn(v, 3) for a stronger downward bounce
        }
    };

    ObjectAnimator animator = ObjectAnimator.ofFloat(targetView, "translationY", 0, 25, 0);
    animator.setInterpolator(interpolator);
    animator.setStartDelay(200);
    animator.setDuration(800);
    animator.setRepeatCount(5);
    animator.start();
}

private float getPowIn(float elapsedTimeRate, double pow) {
    return (float) Math.pow(elapsedTimeRate, pow);
}
```

I hope this can help you! Thank You.
Django rest framework auto-populate field with user.id

I can't find a way to auto-populate the `owner` field of my model. I am using DRF. If I use a ForeignKey the user can choose the owner from a drop-down box, but there is no point in that. Please help; I can't make it work. The views.py is not included because I think it has nothing to do with the problem.

models.py

```
class Note(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    cr_date = models.DateTimeField(auto_now_add=True)
    owner = models.CharField(max_length=100)
    # also tried:
    # owner = models.ForeignKey(User, related_name='entries')

    class Meta:
        ordering = ('-cr_date',)

    def __unicode__(self):
        return self.title
```

serializers.py

```
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', "username", 'first_name', 'last_name', )

class NoteSerializer(serializers.ModelSerializer):
    owner = request.user.id  # <--- wrong, but is what I need
    # also tried:
    # owner = UserSerializer(required=True)

    class Meta:
        model = Note
        fields = ('title', 'body' )
```
Django Rest Framework provides a pre\_save() method (in generic views & mixins) which you can override. ``` class NoteSerializer(serializers.ModelSerializer): owner = serializers.Field(source='owner.username') # Make sure owner is associated with the User model in your models.py ``` Then something like this in your view class: ``` def pre_save(self, obj): obj.owner = self.request.user ``` ### REFERENCES <http://www.django-rest-framework.org/tutorial/4-authentication-and-permissions#associating-snippets-with-users> <https://github.com/tomchristie/django-rest-framework/issues/409#issuecomment-10428031>
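Note that newer DRF releases (3.x) removed `pre_save()` from the generic views in favour of `perform_create()`; the equivalent hook on a generic view looks like this:

```
def perform_create(self, serializer):
    serializer.save(owner=self.request.user)
```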
Django 1.6 filter by hour and time zone issue

In my application I use time zones (`USE_TZ=True`) and all the dates I create in my code are aware UTC `datetime` objects (I use `django.utils.timezone.now` for the current date and the following helper function to ensure all the dates in my instances are what I expect):

```
@classmethod
def asUTCDate(cls, date):
    if not timezone.is_aware(date):
        return timezone.make_aware(date, timezone.utc)
    return date.replace(tzinfo=timezone.utc)
```

I also enforced the check of naive/aware dates using this snippet (as suggested in the docs):

```
import warnings
warnings.filterwarnings(
    'error', r"DateTimeField .* received a naive datetime",
    RuntimeWarning, r'django\.db\.models\.fields')
```

As I understood so far, this is the right way to proceed (this is a quote from the django documentation: *"**The solution to this problem is to use UTC in the code and use local time only when interacting with end users.**"*), and it seems that my app is handling dates very well… but I have just implemented a filter against a model that makes use of the Django 1.6 `__hour` and it forces the extraction based on the user timezone; the result is something like:

```
django_datetime_extract('hour', "object"."date", Europe/Rome) = 15
```

but this breaks my query, since some results I was expecting are not included in the set. When I use a `__range` to search between dates it seems to work as expected (objects with a date in the range are returned)… so it seems to me that Django takes time zones into account in queries only for the `__hour` filter… but I don't understand why… I was supposing that UTC is used everywhere except in templates where the displayed dates are formatted according to user tz, but maybe that's not true. So my questions are: is the way I'm working with time zones right? Is the `__hour` filter wrong or what?
It seems as though you're doing the right thing with dates. However, all the documentation for any date related functionality, such as filtering by hour, includes this note:

> When USE\_TZ is True, datetime fields are converted to the current time zone before filtering.

For the `range` filter, this note doesn't exist because `range` can be used to not only filter on dates, but other types as well, such as integers and characters, i.e. it is not necessarily datetime-aware.

In essence the problem comes down to this: where do you draw the line between 'interacting with users' where times are in a local timezone, and what is internal where times are in UTC?

In your case, you could imagine a user entering in a search box to search for hour==3. Does that mean for example that your form code should do the conversion between hour==3 and the UTC equivalent? This would then require a special forms.HourField. Or perhaps the value (3) should be fed directly to the query where we know that we're searching on an hour field and so a conversion is required.

We really have to follow the documentation on this one.

- Values which are going to be filtered against date/time fields using any of the specialised date/time filtering functions will be treated as being in the user's local time zone.
- If using the `range` filter for dates no time conversions occur, so you are expected to convert the user's entered local time value to UTC.
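For example, to search the user's local 15:00 hour via `range` yourself, you can build aware datetimes in the user's time zone and hand them over (a sketch; `Entry` is a hypothetical model with a `date` field):

```
import datetime
from django.utils import timezone

tz = timezone.get_current_timezone()  # the user's (activated) time zone
day = datetime.date(2014, 1, 1)       # whichever day is being searched
start = timezone.make_aware(datetime.datetime.combine(day, datetime.time(15)), tz)
end = timezone.make_aware(datetime.datetime.combine(day, datetime.time(16)), tz)

# Aware datetimes compare absolutely, so Django stores/queries them in UTC
Entry.objects.filter(date__range=(start, end))
```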
Changing project port number in Visual Studio 2013 How can I change the project port number in Visual Studio 2013 ? I'm using ASP.Net and I need to change the port number while debugging in Visual Studio 2013.
There are two project types in VS for ASP.NET projects: **Web Application Projects** (which notably have a .csproj or .vbproj file to store these settings) have a Properties node under the project. On the Web tab, you can configure the Project URL (assuming IIS Express or IIS) to use whatever port you want, and just click the Create Virtual Directory button. These settings are saved to the project file: ``` <ProjectExtensions> <VisualStudio> <FlavorProperties GUID="{349c5851-65df-11da-9384-00065b846f21}"> <WebProjectProperties> <DevelopmentServerPort>10531</DevelopmentServerPort> ... </WebProjectProperties> </FlavorProperties> </VisualStudio> </ProjectExtensions> ``` **Web Site Projects** are different. They don't have a .\*proj file to store settings in; instead, the settings are set in the *solution* file. In VS2013, the settings look something like this: ``` Project("{E24C65DC-7377-472B-9ABA-BC803B73C61A}") = "WebSite1(1)", "http://localhost:10528", "{401397AC-86F6-4661-A71B-67B4F8A3A92F}" ProjectSection(WebsiteProperties) = preProject UseIISExpress = "true" TargetFrameworkMoniker = ".NETFramework,Version%3Dv4.5" ... SlnRelativePath = "..\..\WebSites\WebSite1\" DefaultWebSiteLanguage = "Visual Basic" EndProjectSection EndProject ``` Because the project is identified by the URL (including port), there isn't a way in the VS UI to change this. You should be able to modify the solution file though, and it should work.
Extract email address from string - php

I want to extract an email address from a string, for example:

```
<?php
 // code
 $string = 'Ruchika <[email protected]>';
?>
```

From the above string I only want to get the email address `[email protected]`. Kindly recommend how to achieve this.
Try this ``` <?php $string = 'Ruchika < [email protected] >'; $pattern = '/[a-z0-9_\-\+\.]+@[a-z0-9\-]+\.([a-z]{2,4})(?:\.[a-z]{2})?/i'; preg_match_all($pattern, $string, $matches); var_dump($matches[0]); ?> ``` see [demo here](https://eval.in/474164) Second method ``` <?php $text = 'Ruchika < [email protected] >'; preg_match_all("/[\._a-zA-Z0-9-]+@[\._a-zA-Z0-9-]+/i", $text, $matches); print_r($matches[0]); ?> ``` See [demo here](https://eval.in/474180)
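An alternative sketch, since the address sits between angle brackets in your string: grab that part with a simpler pattern and let PHP's built-in validator confirm it is an email:

```
<?php
$string = 'Ruchika <[email protected]>';
if (preg_match('/<\s*([^<>]+?)\s*>/', $string, $m) && filter_var($m[1], FILTER_VALIDATE_EMAIL)) {
    echo $m[1]; // the validated address
}
?>
```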
Comments in .ghci file Is it possible to add comments to a `.ghci` file? E.g. ``` :set +r # https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/ghci.html#faq-and-things-to-watch-out-for ``` This would be useful both for documenting and for toggling behaviour.
Sure, they use the same `--` as normal Haskell comments. In fact, this is what my `.ghci` file looks like, with lots of stuff commented-out:

```
-- :def hoogle \x -> return $ ":!hoogle \"" ++ x ++ "\""
-- :def doc \x -> return $ ":!hoogle --info \"" ++ x ++ "\""
:set -XTypeOperators
:set -XTupleSections
:set -XFlexibleContexts
:set -XGADTs

-- Pretty printing of it
-- :set -package ghci-pretty
-- import IPPrint.Colored
-- :set -interactive-print=IPPrint.Colored.cpprint
...
```

(Not sure why I did that BTW, I normally just delete stuff I don't use and restore it through version control if necessary.)

---

To be precise (as the comments remind me to be), `.ghci` comments are lines starting with `--` . *Unlike* Haskell comments, they cannot be appended to a line containing code, nor can the space after the `--` be omitted.
After migrating to tomcat 8 aliases don't work any more

After trying to migrate our app from Tomcat 7 to Tomcat 8 we have found that aliases do not work as before. Here is the content of the context.xml file:

```
<Context reloadable="true" aliases="/d1=C://dir1,/d2=C://temp//dir2//,/d3=C://temp//dir3//" >
    <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow=".*" />
</Context>
```

On Tomcat 7 I can open these urls:

```
http://localhost:8080/myapp/d2/data.xml
http://localhost:8080/myapp/d3/data.png
```

On Tomcat 8 I get a 404 error. Any idea? Thanks.
I have found a solution. The problem was in context.xml. To make aliases work on Tomcat 8, here is the required change in context.xml:

```
<Context reloadable="true" >
    <Resources>
        <PreResources base="C://dir1"
                      className="org.apache.catalina.webresources.DirResourceSet"
                      webAppMount="/d1" />
        <PreResources base="C://temp//dir2//"
                      className="org.apache.catalina.webresources.DirResourceSet"
                      webAppMount="/d2" />
        <PreResources base="C://temp//dir3//"
                      className="org.apache.catalina.webresources.DirResourceSet"
                      webAppMount="/d3" />
    </Resources>
    <Valve className="org.apache.catalina.valves.RemoteAddrValve" allow=".*" />
</Context>
```
Going "Stateless" and loading scripts dynamically What I want to know is if I am approaching this from the right angle. I have an asp.net app I am building. I am using a Masterpage for the overall look of the app (below you can see the code). I'd like to have the menu system use a dynamic load like jQuery's `.load()` function to load the content. That is fine and I have that down. The `.load()` function uses `innerHTML` to pump that content into the page. This is a problem if on that page you want to load module specific scripts and styles. My question is, in an environment such as this, how do you guys load your scripts for these modules? Should I load every script on the initial load of the app? This app will not ever be "that big" however I want to make sure I do it right just in case. **MasterSheet** ``` <div id="primaryNavigation"> <ul> <li class="current"><a href="../Default.aspx">Main</a></li> <li><a href="../Modules/Page1.aspx">Some Overview</a></li> <li><a href="../Modules/Page2.aspx">Reporting</a></li> <li><a href="../Modules/Page3.aspx">More Reporting</a></li> <li><a href="../Modules/Page4.aspx">About</a></li> </ul> </div> <div id="mainContentContainer"> <asp:ContentPlaceHolder ID="cphBody" runat="server" /> </div> ``` **Example Module inside of the Content tag** ``` <div id="container"> Inside a page <script id="scriptToLoad" type="text/javascript"> alert('Something'); head.ready(function () { console.log('please print'); }); </script> </div> <div id="includeScripts"> ../Files/Javascript/SomeModuleSpecificJs.js ../Files/Javascript/SomeModuleSpecificJs1.js </div> ``` My idea was to set up a `div` in each module that would have the id of "includeScripts" and load those from a method within the mastersheet like this. This method works (needs some tweeking obviously) however if the user keeps clicking on modules eventually every file will be loaded. If thats the case I might as well load them all on the mastersheet. **JS to be ran when the MasterPage is loaded** ``` $navigation = $("#primaryNavigation").delegate('ul li a', 'click', function () { $('#primaryNavigation').find('li').removeClass('current'); $(this).parent().addClass('current'); $('#mainContentContainer').load($(this).attr('href') + ' #container'); // Obviously this would overwrite the content from the container, this is merely proof of concept $('#mainContentContainer').load($(this).attr('href') + ' #includeScripts'); var jsArray = $('#includeScripts').text().trim().split("\n"); $.each(jsArray, function (index, value) { $.getScript(value); }); return false; }); ```
I don't know about `.load()`, but jQuery's `.html()`, `.append()`, and a few other related functions will automatically run any script tags that they find in the given HTML. If `load()` doesn't do that for you, it should be easy enough to use `$.get(..., function(data){$('#myElement').html(data);});` instead. You could even write your own extension specifically for this purpose.

Style sheets may be a different story. I've typically just used a single style sheet per page.

### Edit

I just spent some more time reading your question, and I realized that I didn't answer it fully.

> Should I load every script on the initial load of the app?

It really depends on the size of your scripts and the way you expect users to interact with your system. In [this seminar](http://www.google.com/events/io/2010/sessions/architecting-performance-gwt.html), the people who made Google Wave talk about how they addressed this issue. At one point the speaker says, "Perceived latency is the most important thing to optimize for."

The problem was, in an early version, their javascript file (optimized and compiled by GWT) was a few megabytes in size. People with a slow connection (a cell phone browser, e.g.) would have to wait a long time for all this code to download before they could see what was in their Inbox. Their solution was to create "split points" in their code so that it could be loaded in chunks. The code necessary for displaying the Inbox could be loaded first, whereas the Contacts panel could wait until the user clicks "Contacts."

But you can take this too far. The other speaker in this video says the time spent in loading falls largely under one of two categories:

- Fetching data you don't need, and
- Too many HTTP requests

Each HTTP round-trip involves a certain amount of overhead, so it can be worthwhile to load *some* code you don't need yet in order to avoid having to make another round-trip in a few milliseconds when you realize you need it.

Since you say:

> This app will not ever be "that big"

... I'm guessing that you'll probably fall mostly under the latter category (too many HTTP requests). The best thing to do in that case is:

1. Use a tool like [Chirpy](http://chirpy.codeplex.com/) to consolidate all your javascript files into a single file (which can be automatically minified when not in Debug mode).
2. If your application has a login page that doesn't use all this javascript functionality, add a `script` tag for this javascript file at the bottom of the login page so that the user's browser will download the file behind the scenes while the user is busy entering their username and password. The master page for the rest of the site should simply include the script file once in a standard `script` tag.
3. Make sure your site's caching rules are set up properly so that the user's browser will only request this file once.
4. Make sure your site is set to compress this javascript file since javascript (especially minified javascript) lends itself to gzip compression very nicely.

Once you've done this, you should find that there is no "perceived latency" from loading your javascript file.

If your application does eventually become "that big," you'll want to break your program down into modules like the Google Wave team did. But choose your modules based on how you expect the system to be used. If only a small handful of users is likely to use your admin interface, for example, you'll want to put all of your admin UI code into a separate module that "normal" users will never have to download.
When deciding where to draw the line, UI experts basically say one-fifth of a second is the point where the typical human's brain starts wondering, "Did that work?" If a user clicks a button and has to wait longer than that before they see something happen, you've reached the point of "perceived latency." Anything beyond that will become increasingly annoying to the user.
Android Material Design Button Styles

I'm confused on button styles for material design. I'd like to get colorful raised buttons like in the attached link, like the "force stop" and "uninstall" buttons seen under the usage section. Are there available styles or do I need to define them?

<http://www.google.com/design/spec/components/buttons.html#buttons-usage>

I couldn't find the default button styles.

Example:

```
<Button style="@style/PrimaryButton"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Calculate"
    android:id="@+id/button3"
    android:layout_below="@+id/editText5"
    android:layout_alignEnd="@+id/editText5"
    android:enabled="true" />
```

If I try to change the background color of the button by adding

```
android:background="@color/primary"
```

all of the styles go away, such as the touch animation, shadow, rounded corners, etc.
You can use the **[Material Component library](https://github.com/material-components/material-components-android)**. Add [the dependency](https://github.com/material-components/material-components-android/blob/master/docs/getting-started.md) to your `build.gradle`:

```
dependencies {
    implementation 'com.google.android.material:material:1.3.0'
}
```

Then add the [**`MaterialButton`**](https://github.com/material-components/material-components-android/blob/master/docs/components/MaterialButton.md) to your layout:

```
<com.google.android.material.button.MaterialButton
    style="@style/Widget.MaterialComponents.Button.OutlinedButton"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/app_name"
    app:strokeColor="@color/colorAccent"
    app:strokeWidth="6dp"
    app:layout_constraintStart_toStartOf="parent"
    app:shapeAppearance="@style/MyShapeAppearance" />
```

You can check the [full documentation](https://material.io/develop/android/components/material-button/) here and the [API here](https://developer.android.com/reference/com/google/android/material/button/MaterialButton).

To change the **background color** you have 2 options.

1. Using the **`backgroundTint`** attribute. Something like:

```
<style name="MyButtonStyle" parent="Widget.MaterialComponents.Button">
    <item name="backgroundTint">@color/button_selector</item>
    <!-- .. -->
</style>
```

2. It will be the best option in my opinion. If you want to override some theme attributes from a default style then you can use the new **`materialThemeOverlay`** attribute. Something like:

```
<style name="MyButtonStyle" parent="Widget.MaterialComponents.Button">
    <item name="materialThemeOverlay">@style/GreenButtonThemeOverlay</item>
</style>

<style name="GreenButtonThemeOverlay">
    <!-- For filled buttons, your theme's colorPrimary provides the default background color of the component -->
    <item name="colorPrimary">@color/green</item>
</style>
```

Option #2 requires at least version `1.1.0`.

[![enter image description here](https://i.stack.imgur.com/w0zEu.png)](https://i.stack.imgur.com/w0zEu.png)[![enter image description here](https://i.stack.imgur.com/hbFcR.png)](https://i.stack.imgur.com/hbFcR.png)

You can use one of these styles:

- **Filled Button (default)**: `style="@style/Widget.MaterialComponents.Button"`
- **Text Button**: `style="@style/Widget.MaterialComponents.Button.TextButton"`
- **OutlinedButton**: `style="@style/Widget.MaterialComponents.Button.OutlinedButton"`

---

**OLD Support Library:**

With the new [Support Library 28.0.0](https://developer.android.com/topic/libraries/support-library/revisions.html#28-0-0-alpha1), the Design Library now contains the `MaterialButton`. You can add this button to your layout file with:

```
<android.support.design.button.MaterialButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="YOUR TEXT"
    android:textSize="18sp"
    app:icon="@drawable/ic_android_white_24dp" />
```

By default this class will use the **accent colour** of your theme for the button's filled background colour, along with white for the button's text colour. You can customize the button with these attributes:

- `app:rippleColor`: The colour to be used for the button ripple effect
- `app:backgroundTint`: Used to apply a tint to the background of the button. If you wish to change the background color of the button, use this attribute instead of background.
- `app:strokeColor`: The color to be used for the button stroke
- `app:strokeWidth`: The width to be used for the button stroke
- `app:cornerRadius`: Used to define the radius used for the corners of the button
font-awesome animation not working with display: none; If I hide an element with `display: none;`, and then show it at a later time with `$(".fa-spin").show()` the `fa-spin` animation doesn't work. Note that everything works properly if the element is not hidden in the beginning but is hidden later with: ``` $(".fa-spin").hide() ``` This is the `.fa-spin` implementation: ``` .fa-spin { -webkit-animation: fa-spin 2s infinite linear; animation: fa-spin 2s infinite linear; } @-webkit-keyframes fa-spin { 0% { -webkit-transform: rotate(0deg); transform: rotate(0deg); } 100% { -webkit-transform: rotate(359deg); transform: rotate(359deg); } } ``` Can you explain this behavior? **I am asking the reason of this behavior, not workarounds.** <https://jsfiddle.net/md0ej7pt/>
`jQuery.show()` sets the `display` property to `inline` when called on the `i` element. According to the W3C documentation, `inline` elements cannot be animated: <https://drafts.csswg.org/css-transforms-1/>

> **Transformable element**
>
> A transformable element is an element in one of these categories: an
> element whose layout is governed by the CSS box model which is either
> *a block-level or atomic inline-level element, or whose display property computes to table-row, table-row-group, table-header-group,
> table-footer-group, table-cell, or table-caption* [CSS21]

To correct this, use Vallius's suggestion of setting `display: inline-block` instead of using `show()`, or wrap the element and hide the parent instead. <https://jsfiddle.net/359zLsdf/2/>

```
<span class="coggy" style="display:none"><i class="fa fa-cog fa-spin" aria-hidden="true"></i></span>
<i class="fa fa-cog fa-spin myCog" style="display:none" aria-hidden="true"></i>
```

```
$(".coggy").show();
$(".myCog").css("display","inline-block");
```
Query for VHDL synthesis for IC Design (Not FPGA), specifically in case of variable assignment If for a given process, I declare a variable (let's say a 1 bit variable, `variable temp : std_logic;`) then can I assign a value to the variable if a given condition returns true, i.e. ``` if (xyz=1) then --Assuming that this condition returns TRUE temp:= '1'; ``` ?? Will this logic be synthesizable for ASICs?
Yes. Variables are synthesisable for both FPGA and IC. A process is a little bit of software that models a little bit of hardware. That little bit of software can use variables, but as variables are only in scope within a process, ultimately you do have to drive a signal - the output of the little bit of hardware. For example, here is some combinational logic: ``` process (A, B, C, D) variable TMP : std_logic; begin if A = '1' then TMP := B and C; TMP := TMP and D; else TMP := '0'; end if; F <= TMP; end process; ``` Here is an example of using a variable that will synthesise to combinational logic on the D input of a flip-flop (because it is in a clocked process): ``` process (CLOCK) variable TMP : std_logic; begin if rising_edge(CLOCK) then TMP := A and B; Q <= TMP; end if; end process; ``` And here is an example of using a variable in a clocked process that will synthesise to a flip-flop (with an AND gate on its D input): ``` process (CLOCK) variable TMP : std_logic; begin if rising_edge(CLOCK) then Q <= TMP; TMP := A and B; end if; end process; ``` The only difference between the two clocked processes is the order. In the first, the variable is assigned to before being accessed; in the second, it is accessed before it is assigned to. - If you assign to a variable before accessing it in a clocked process combinational logic will be inferred; - if you access a variable before assigning to it in a clocked process, a flip-flop will be inferred. - Do not ever access a variable before assigning to it in a combinational process: latches will be inferred. Variables retain their value between executions of a process. Therefore, if a variable is accessed before being assigned to in a clocked process, the value read must have been written on a previous execution of the process. In a clocked process, that previous execution will have been on a previous clock edge: hence, a flip-flop is inferred.
Fastest RDP client on linux I currently use KRDC on CentOS, but the mouse pointer is a little slow and I'm sure that this is not caused by my internet line speed, because the Windows RDP client works and everything is fine. Which RDP client is the fastest on CentOS (or any other Linux OS)?
KRDC and, I suspect, all other Linux RDP clients use [`rdesktop`](http://www.rdesktop.org/) under the hood. I'd encourage you to play with the settings of `rdesktop`; I myself have used it without any speed issues. It's very possible that your Windows RDP connection uses different options, especially the `-x` parameter to `rdesktop`:

```
Changes default bandwidth performance behaviour for RDP5. By default
only theming is enabled, and all other options are disabled
(corresponding to modem (56 Kbps)). Setting experience to b[roadband]
enables menu animations and full window dragging. Setting experience
to l[an] will also enable the desktop wallpaper. Setting experience
to m[odem] disables all (including themes). Experience can also be a
hexidecimal number containing the flags.
```
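For example, a hedged starting point (the host name is a placeholder; check `man rdesktop` on your system for the exact flags your version supports):

```
# LAN experience profile, with compression and persistent bitmap caching
rdesktop -x lan -z -P myserver.example.com
```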
Mercurial - merging same changeset to a repository twice? We have these Mercurial repositories: ``` Trunk | | |---------myapp_1_0_23 (created off release 1.0.23) | |---------myapp-newstuff (created off rel 2.0.4) ``` Release schedule (nothing yet released): - v1.0 from `myapp_1.0.23`, any add'l changes in this repo will get merged to the trunk - v2.0 from the trunk - v3.0 or v4.0 released based on a merge of `myapp-newstuff` and the trunk. At the time of the merge the trunk may have v2.0 code or some new features that we'll release from the trunk as v3.0 After making changes in `myapp_1.0.23`, we merge them to the trunk, but let's say we also need them in `myapp-newstuff` so we also merge them there. What then happens when we eventually merge `myapp-newstuff` code to the trunk? The trunk already has changes made in `myapp_1.0.23` so what happens when we merge those same changesets from `myapp-newstuff` back to the trunk? Will Mercurial be smart enough to know those changesets are already in the trunk?
Mercurial will handle this situation just great -- because you're using 'merge'. When you're using export/import (or transplant), *cherry picking* as it's called, and you have the same changesets in there multiple times with different node ids (due to different parents) then Mercurial can't know "Oh, this one's already here". However, so long as you're merging Mercurial will do a great job of saying "oh, this repo already has that changeset so I don't need to re-apply it". The general rule of thumb is: "Make every change with as early a parent as you can and then merge down". If I have a bug that's in version one, two, and three, I fix it in one and then merge into two and then merge into three. If instead you fix it first in three, then you have to try to get it into two without bringing all the other changes in version three with it -- which is hard and often requires the very cherry picking we're trying to avoid.
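To make "fix early, merge down" concrete with the clone layout from the question, here is a hedged sketch of the commands involved (paths and commit messages are placeholders):

```
# fix the bug in the earliest clone that needs it (myapp_1_0_23),
# commit it there, then merge down:
cd trunk
hg pull ../myapp_1_0_23
hg merge
hg commit -m "merge fix from 1.0.23"

cd ../myapp-newstuff
hg pull ../myapp_1_0_23
hg merge
hg commit -m "merge fix from 1.0.23"
```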
Using GPG with C? I am writing a communication program in C and I am looking for the best way to use GnuPG encryption. I am already using symmetric encryption algorithms via the mcrypt library but wish to incorporate some public-key capabilities, preferably using GnuPG if possible. Is there a good library available to accomplish this? Would it be better to try to interact with GPG itself directly via the program to accomplish this? Any insight would be appreciated as I would like to keep this implementation as clean as possible. Thanks.
Unfortunately, [GnuPG](http://www.gnupg.org/) is designed to be used interactively, and not as an API. You mention that you are looking to incorporate some public-key capabilities. SSL and TLS are alternatives to GPG which see much more frequent use. If public-key capabilities in general are what you seek, [GnuTLS](http://www.gnu.org/software/gnutls/) is an API for use in network based programs that provides exactly what you want. It enjoys a great deal of support, and provides SSL and TLS public-key encryption capabilities. However, if you are dead set on using GPG, [GPGME](http://www.gnupg.org/related_software/gpgme/index.en.html) is a project that exists to wrap an API around GPG. I have not used it and cannot advise on its use, but suspect that it may be somewhat forced.
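If you do go the GPGME route, initialisation looks roughly like this minimal, hedged sketch (the actual encrypt/decrypt operations are left out; consult the GPGME manual for those):

```c
#include <stdio.h>
#include <gpgme.h>   /* compile/link with: gcc prog.c -lgpgme */

int main(void)
{
    /* gpgme_check_version() must be the first GPGME call;
       it initialises the library and returns its version string. */
    const char *version = gpgme_check_version(NULL);
    printf("GPGME %s\n", version);

    gpgme_ctx_t ctx;
    gpgme_error_t err = gpgme_new(&ctx);
    if (err) {
        fprintf(stderr, "gpgme_new: %s\n", gpgme_strerror(err));
        return 1;
    }

    /* ... key lookup and gpgme_op_encrypt()/gpgme_op_decrypt() here ... */

    gpgme_release(ctx);
    return 0;
}
```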
C# Programmatically Unminimize form How do I take a form that is currently minimized and restore it to its previous state. I can't find any way to determine if its previous `WindowState` was `Normal` or `Maximized`; but I know the information has to be stored somewhere because windows doesn't have a problem doing it with apps on the taskbar.
There is no managed API for this. The way to do it is to PInvoke [GetWindowPlacement](http://www.pinvoke.net/default.aspx/user32.getwindowplacement) and check for `WPF_RESTORETOMAXIMIZED`. For details, see this [Microsoft How To](http://support.microsoft.com/kb/110620) (which is demonstrating the technique in VB). In C#, this would be:

```
[DllImport("user32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool GetWindowPlacement(IntPtr hWnd, ref WINDOWPLACEMENT lpwndpl);

// Managed mirror of the native WINDOWPLACEMENT struct
private struct WINDOWPLACEMENT
{
    public int length;
    public int flags;
    public int showCmd;
    public System.Drawing.Point ptMinPosition;
    public System.Drawing.Point ptMaxPosition;
    public System.Drawing.Rectangle rcNormalPosition;
}

public void RestoreFromMinimized(Form form)
{
    const int WPF_RESTORETOMAXIMIZED = 0x2;
    WINDOWPLACEMENT placement = new WINDOWPLACEMENT();
    placement.length = Marshal.SizeOf(placement);
    GetWindowPlacement(form.Handle, ref placement);

    if ((placement.flags & WPF_RESTORETOMAXIMIZED) == WPF_RESTORETOMAXIMIZED)
        form.WindowState = FormWindowState.Maximized;
    else
        form.WindowState = FormWindowState.Normal;
}
```
Elegant solution to parse date I want to parse some text into a date. However, there is no guarantee that the text has the desired format. It may be `2012-12-12` or `2012` or even an empty string. Currently, I am down the path of nested try-catch blocks, but that's not a good direction (I suppose).

```
LocalDate parse;
try {
    parse = LocalDate.parse(record, DateTimeFormatter.ofPattern("uuuu/MM/dd"));
} catch (DateTimeParseException e) {
    try {
        Year year = Year.parse(record);
        parse = LocalDate.from(year.atDay(1));
    } catch (DateTimeParseException e2) {
        try {
            // and so on
        } catch (DateTimeParseException e3) {}
    }
}
```

What's an elegant solution to this problem? Is it possible to use `Optional`s which are absent in case an exception happened during evaluation? If yes, how?
The [`DateTimeFormatterBuilder`](http://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatterBuilder.html) class contains the building blocks to make this work: ``` LocalDate now = LocalDate.now(); DateTimeFormatter fmt = new DateTimeFormatterBuilder() .appendPattern("[uuuu[-MM[-dd]]]") .parseDefaulting(ChronoField.YEAR, now.getYear()) .parseDefaulting(ChronoField.MONTH_OF_YEAR, now.getMonthValue()) .parseDefaulting(ChronoField.DAY_OF_MONTH, now.getDayOfMonth()) .toFormatter(); System.out.println(LocalDate.parse("2015-06-30", fmt)); System.out.println(LocalDate.parse("2015-06", fmt)); System.out.println(LocalDate.parse("2015", fmt)); System.out.println(LocalDate.parse("", fmt)); ``` The [`parseDefaulting()`](http://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatterBuilder.html#parseDefaulting-java.time.temporal.TemporalField-long-) method allows a default value to be set for a specific field. In all cases, a `LocalDate` can be parsed from the result, because enough information is available. Note also the use of "[...]" sections in the pattern to define what is optional.
How do I make the script run automatically on tun0 interface up/down events? I use a VPN client to connect to my corporate servers. It creates a tun0 interface after starting the client. I've written a script which installs specific routes pointing to the tun0 interface and leaves the rest to use the normal wifi connection. That way, only my office-related traffic goes via the VPN and the rest goes via the home internet connection. How do I make the script run automatically on tun0 interface up/down events?
I am not sure about `tun0`, but I think the scripts in `/etc/network/if-up.d/` and `/etc/network/if-down.d/` are invoked when an interface goes up or down, respectively. Inside the script you can determine which interface is involved from the content of the variable `IFACE`. To be sure, add a simple script to `/etc/network/if-up.d/` whose content is

```
#!/bin/sh
# filename: tun-up
if [ "$IFACE" = tun0 ]; then
    echo "tun0 up" >> /var/log/tun-up.log
fi
```

make it executable

```
sudo chmod +x /etc/network/if-up.d/tun-up
```

then see if the up events are recorded in `/var/log/tun-up.log`
Why is Angular $timeout blocking End-To-End tests? I've made a directive which is used to show notification messages to a user. To show the notification I wrote this:

```
$scope.$watch($messaging.isUpdated, function() {
    $scope.messages = $messaging.getMessages();
    if ($scope.messages.length > 0) {
        $timeout(function() {
            for (var i = 0; i < $scope.messages.length; i++) {
                if (i + 1 < $scope.messages.length) {
                    $messaging.removeMessage($scope.messages[i]);
                } else {
                    $messaging.removeMessage($scope.messages[i]);
                }
            }
        }, 5000);
    }
});
```

I'm using $timeout to make sure the messages stay on the screen for 5 seconds. Now I want to write an End-To-End test on it so I can be sure the notification is shown. The problem is that when the notification is shown, the End-To-End test also times out, just like the notification message. This makes it impossible to check whether the right notification is shown. This is my test code:

```
it('submit update Center', function() {
    input('center.Name').enter('New Name');
    input('center.Department').enter('New Department');
    input('center.Contact').enter('New contact');
    input('center.Street').enter('New street');
    input('center.City').enter('New city');
    input('center.Country').enter('New Country');

    element('button#center_button').click();

    expect(element('.feedback').count()).toBe(1);
    expect(element('.feedback:first').attr('class')).toMatch(/success/);
    expect(element('.error.tooltip').count()).toBe(0);
});
```

I'd like to avoid using the javascript setTimeout() and hope there's another (Angular) solution to this problem.
Bad news, pal. It's a known issue in AngularJs. There is discussion [here](https://groups.google.com/forum/?fromgroups=#!topic/angular/e2CfWnHjvfU) and a "somehow related" [issue](https://github.com/angular/angular.js/issues/1472) here. Fortunately, you can work around this by rolling your own `$timeout` service, calling `setTimeout` and calling `$apply` by hand (this is a suggestion from the discussion I've referred to). It's really simple, although it's really ugly. A simple example:

```
app.service('myTimeout', function($rootScope) {
    return function(fn, delay) {
        return setTimeout(function() {
            fn();
            $rootScope.$apply();
        }, delay);
    };
});
```

Note this one is not compatible with Angular `$timeout`, but you can extend its functionality if you need.
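And a quick sketch of wiring it into the notification code from the question; it has the same shape as `$timeout`, and `$messaging` is taken from the question's own code:

```
app.controller('MessagesCtrl', function($scope, $messaging, myTimeout) {
    $scope.$watch($messaging.isUpdated, function() {
        $scope.messages = $messaging.getMessages();
        if ($scope.messages.length > 0) {
            myTimeout(function() {
                // remove the displayed messages, as before
            }, 5000);
        }
    });
});
```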
Cuda programming on Android Phone? I'm looking for the most compact device that features a Cuda compatible device (optimal would be the size of a smartphone / tablet). In the end I would like to install the Caffe Neural Network library on the device, so it should be capable of running Cuda 6.5 (and have at least 150 MB of GPU memory). Has anyone had the same problem yet? I've found the [Nvidia Shield Tablet](https://shield.nvidia.de/store/tablet/k1). It features a Tegra K1 GPU, which should be Cuda capable. However I'm not sure since it seems to be some kind of integrated graphics chip. Also 8GB of memory as proclaimed [here](http://www.nvidia.com/object/tegra-k1-processor.html) sounds suspicious. Maybe they rather mean some sort of shared memory where they share the RAM with the GPU...
As OP stated, Tegra K1 indeed does have CUDA (CUDA 6.5) support, as stated in the CodeWorks for Android 1R4 release notes. However K1 isn't the only CUDA capable Android device. According to the release notes:

- TK1 Reference Device (Ardbeg)
- Google Project Tango Tablet
- MiPad
- SHIELD Tablet 8
- SHIELD Android TV

support CUDA as well. There seem to be version differences between devices, so supported features may differ too. You should follow the up-to-date documentation from the GameWorks site for detailed info and examples.

1. [NVIDIA CodeWorks for Android](https://developer.nvidia.com/codeworks-android)
2. [CodeWorks for Android 1R4 Release Notes](http://docs.nvidia.com/gameworks/index.html#developertools/mobile/codeworks_android/codeworks_android_release_notes.htm)
iOS 11 NSPredicate search on Swift array crashing - NSUnknownKeyException I am using NSPredicate to filter an array in Swift. The problem is after updating to iOS 11 (Xcode 9 /w Swift 4), I keep getting a crash on the filter line. Here is the crash log:

> Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[ valueForUndefinedKey:]: this class is not key value coding-compliant for the key name.'

Here is an example of the class that I have an array of:

```
final class Model: NSObject {
    let name: String
    init(name: String) {
        self.name = name
    }
}
```

Here is the code that is crashing:

```
let myArray = [Model(name: "Jason"), Model(name: "Brian")]
let predicate = NSPredicate(format: "name == 'Jason'")
let filteredArray = myArray.filter { predicate.evaluate(with: $0)}
```

Question is why is this crashing now that I updated to iOS 11?
After fighting with this for a while, I finally came across the answer! A subtlety of updating to Swift 4 is that classes that are subclasses of NSObject are no longer implicitly exposed to objective-c like they were before. Because of this, you need to explicitly annotate classes/functions with @objc. The compiler will notify you of places where the annotation is needed, but not in this case. Ultimately because of this, the key-value lookup was no longer implicitly exposed to objective-c, which is needed to filter with NSPredicate. The code below fixes the crash! **Solution 1** ``` extension Model { @objc override func value(forKey key: String) -> Any? { switch key { case "name": return name default: return nil } } } ``` **Solution 2** Alternative thanks to Uros19: Instead of implementing the above function, you could annotate the property directly with @objc (e.g., `@objc let name: String`). You lose a little bit of clarity as to why you are annotating the property with @objc, but this is only a small consideration. I hope this saves some people time and frustration :)
How to update React state once useMutation is done? I add a user to an API with react-query's useMutation hook. It works. Now, I need to add the new user to my array of users, which is in my state. I know I'm supposed to query all the users with useQuery and then use onSuccess inside useMutation to modify the cache. But in certain cases, I don't fetch the users with useQuery, so I need to update a local state as I would do with a normal promise. For the moment, I simply check if the prop "success" is true and if so, I update the array. But it only works on the second click. Why and how to fix this? It seems the success condition inside onAddUser() is only reached on a second click.

```
export default function App() {
  const { updateUser } = userService();
  const { update, loading, error, data, success } = updateUser();

  const [users, setUsers] = useState(allUsers);

  const onAddUser = async () => {
    await update(newUser);
    if (success) {
      return setUsers((users) => [...users, data]);
    }
  };

  return (
    <>
      <div>
        {users.map((user) => (
          <div key={user.id}>
            {user.name} - {user.job}
          </div>
        ))}
      </div>
      {loading && <div>sending...</div>}
      {error && <div>error</div>}
      <button onClick={() => onAddUser()}>add user</button>
    </>
  );
}
```

Here is also a sandbox: <https://codesandbox.io/s/usemutation-test-co7e9?file=/src/App.tsx:283-995>
The `success` prop returned from `updateUser` (I assume this is somehow the result returned by useMutation) will only update on the next render cycle. Functions will still keep the reference to whatever they closured over, even if you have an async await in there. This is nothing react-query specific, this is just how react works. I would suggest using the `onSuccess` callback of the mutation function, or use `mutateAsync` and check the result, though you have to keep in mind to catch errors manually if you use `mutateAsync`. You can read about `mutateAsync` [here](https://react-query.tanstack.com/guides/mutations#promises), I'm gonna show you a `mutate` example:

```
const { mutate, isLoading } = useMutation(() => ...);

const onAddUser = () => mutate(newUser, {
    onSuccess: (newData) => setUsers((users) => [...users, newData]),
});
```

also, please don't violate the rules of hooks. You can only call hooks from functional components or other hooks (=functions that start with `use`), but in your codesandbox, you call `useMutation` from the `updateUser` function ...
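For completeness, a hedged sketch of the `mutateAsync` variant mentioned above; `api.updateUser` is a placeholder for your actual mutation function, and note that `mutateAsync` rejects on failure, so errors must be caught manually:

```
const { mutateAsync } = useMutation((user) => api.updateUser(user));

const onAddUser = async () => {
  try {
    const newData = await mutateAsync(newUser);
    setUsers((users) => [...users, newData]);
  } catch (err) {
    // handle the error yourself when using mutateAsync
  }
};
```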
Mongo Schema-less Collections & C# I'm exploring Mongo as an alternative to relational databases but I'm running into a problem with the concept of schemaless collections. In theory it sounds great, but as soon as you tie a model to a collection, the model becomes your defacto schema. You can no longer just add or remove fields from your model and expect it to continue to work. I see the same problems here managing changes as you have with a relational database in that you need some sort of script to migrate from one version of the database schema to the other. Am I approaching this from the wrong angle? What approaches do members here take to ensure that their collection items stay in sync with their domain model when making updates to their domain model? Edit: It's worth noting that these problems obviously exist in relational databases as well, but I'm asking specifically for strategies in mitigating the problem using schemaless databases and more specifically Mongo. Thanks!
Schema migration with MongoDB is actually a lot less painful than with, say, SQL server. Adding a new field is easy, old records will come in with it set to null or you can use attributes to control the default value `[BsonDefaultValue("abc", SerializeDefaultValue = false)]` The `[BsonIgnoreIfNull]` attribute is also handy for omitting objects that are null from the document when it is serialized. Removing a field is fairly easy too, you can use `[BsonExtraElements]` (see [docs](http://www.mongodb.org/display/DOCS/CSharp+Driver+Serialization+Tutorial#CSharpDriverSerializationTutorial-Supportingextraelements)) to collect them up and preserve them or you can use `[BsonIgnoreExtraElements]` to simply throw them away. With these in place there really is no need to go convert every record to the new schema, you can do it lazily as needed when records are updated, or slowly in the background.

---

PS, since you are also interested in using dynamic with Mongo, here's an [experiment](http://blog.abodit.com/2011/05/class-free-persistence-multiple-inheritance-in-c-sharp-mongodb/) I tried along those lines. And here's an updated post with a complete [serializer and deserializer for dynamic objects](http://blog.abodit.com/2011/09/dynamic-persistence-with-mongodb-look-no-classes-polymorphism-in-c/).
Microservices architecture with Django I have some questions about creating microservices with Django. Let's say we have an online shop or a larger system with many database requests and users. I want to practice and simulate a simple microservice to learn something new. We want to create a microservices-based system with the following components: 1. A Django-based microservice with its admin panel and full functionality (excluding DRF). 2. One or more microservices with a React/Angular frontend. 3. Several additional microservices to separate functionalities. I'm unsure about the architecture. Let's assume we want to manage data using the Django admin panel. The simplest solution would be to add DRF to the first microservice and extend its functionality (REST app) - instead of creating different services (3.). 1. But what if we want to separate functionality into different microservices? 2. Should the microservices in point 3 be connected to the same database and treated as different Django projects (with DRF)? 3. Can we use GoLang, FastAPI, or Java Spring for the third microservice? If yes, should all models be duplicated and registered in the first microservice? 4. Alternatively, is there a better way to approach this? It would be great to hear your perspective and methods on how to proceed with this. Have a wonderful day!
First a quick summary of Microservices vs Monolithic apps pros and cons (this is important).

Microservices:

[ PROS ]

- scalability (they scale independently)
- flexibility (each microservice can use its own stack & hardware setup)
- isolation (the failure of one microservice does not affect another; only its service fails)

[ CONS ]

- Complexity (so much infrastructure to set up and maintain at every layer)
- Data consistency (each db is independent, so making sure consistency is maintained is added complexity)
- Distributed system challenges (latency, fault tolerance, and testing are much harder)

Now for your questions:

1. Separating functionality into different microservices. That is what apps in a Django project are for, and separation of concerns is a core principle of software engineering that can still be applied in a monolithic application. When discussing microservices, the question should be about what benefit it would bring at the cost of complexity. For example, a service that does pure GPU computation would perhaps benefit from being a microservice running on an optimized language and system with access to GPUs. I would even argue you should only transition to using microservices when you have explored all other solutions, and have composed an irrefutable argument to do so with the team.
2. Should microservices be connected to the same DB? Microservices should have their own db (see isolation above); otherwise it's the same as just using a monolithic app, with extra complexity and no benefit.
3. Can you use a different stack, and should duplicated models be registered? This again comes from a misunderstanding of what a microservice is. Your microservice should encapsulate the minimum amount of data it needs to function independently.
4. Alternative: Design your monolithic application very well; separate out your concerns and business logic in a de-coupled design, even if it's a monolith. You have to have the mindset of: "if I want to swap this functionality for a microservice, how easy will it be to rip it out, what is the coupling, etc." A good design leads to scalability and maintainability, which is most important. It also allows other people to contribute their expertise on a subset of the project, without needing to know the whole.
Objective-C - CTFont change font style? I have a CTFont that contains a font style and symbolic traits. I want to create a new font with a new style that inherits the symbolic traits of the first font. How can I achieve this?

```
CTFontRef newFontWithoutTraits = CTFontCreateWithName((CFString)newFontName, CTFontGetSize(font), NULL);
CTFontRef newFont = CTFontCreateCopyWithSymbolicTraits(newFontWithoutTraits, CTFontGetSize(font), NULL, 0, CTFontGetSymbolicTraits(font));
```

the new font is null here. I don't know what I should pass as the `4th` parameter in `CTFontCreateCopyWithSymbolicTraits`.
I do this line of code to generate a bold font from a non-bold font:

```
CTFontRef newFont = CTFontCreateCopyWithSymbolicTraits(currentFont, 0.0, NULL, (wantBold?kCTFontBoldTrait:0), kCTFontBoldTrait);
```

- `currentFont` is the `CTFontRef` I want to add symbolic traits to
- `wantBold` is a boolean to tell if I want to add or remove the bold trait to the font
- `kCTFontBoldTrait` is the symbolic trait I want to modify on the font.

The 4th parameter is the value you want to apply. The 5th is the mask to select the symbolic trait.

---

You may think of it as a bitmask to apply to the symbolic trait, where the 4th parameter of `CTFontCreateCopyWithSymbolicTraits` is the value and the 5th parameter is the mask:

- If you want to *set* the symtrait and add it to the font, iOS will probably apply something like `newTrait = oldTrait | (value&mask)`, setting the bit corresponding to `mask` to the value of `value`.
- If you want to *unset* the symtrait and remove it from the font, you use the value of 0 as the 4th parameter and iOS will probably apply something like `newTrait = oldTrait & ~mask` to unset the bit.
- But if you need to, you can also set and unset multiple bits (thus multiple symbolic traits) at once, using the right `value` that has 1 on bits to set and 0 on bits to unset (or to ignore), and using the right `mask` that has 1 on bits that need to be modified and 0 on bits that don't need to be changed.

---

***[EDIT2]*** **I finally managed to find the solution for your specific case:** you need to get the symtraits mask of your `font` as you already do… and bitwise-or it with the symtraits of your `newFontWithoutTraits` font. This is because `newFontWithoutTraits` actually **does** have default symtraits (contrary to what I thought, it has a non-zero `CTFontSymbolicTraits` value) as the symtraits value also contains info for the font class and such things (so even a non-bold, non-italic font can have a non-zero symtraits value; log the hex value of the symtraits of your font to understand better). So **this is the code you need**:

```
CTFontRef font = CTFontCreateWithName((CFStringRef)@"Courier Bold", 12, NULL);
CGFloat fontSize = CTFontGetSize(font);
CTFontSymbolicTraits fontTraits = CTFontGetSymbolicTraits(font);

CTFontRef newFontWithoutTraits = CTFontCreateWithName((CFStringRef)@"Arial", fontSize, NULL);
fontTraits |= CTFontGetSymbolicTraits(newFontWithoutTraits);

CTFontRef newFont = CTFontCreateCopyWithSymbolicTraits(newFontWithoutTraits, fontSize, NULL, fontTraits, fontTraits);

// Check the results (yes, this NSLog create leaks as I don't release the CFStrings, but this is just for debugging)
NSLog(@"font:%@, newFontWithoutTraits:%@, newFont:%@", CTFontCopyFullName(font), CTFontCopyFullName(newFontWithoutTraits), CTFontCopyFullName(newFont));

// Clear memory (CoreFoundation "Create Rule", objects needs to be CFRelease'd)
CFRelease(newFont);
CFRelease(newFontWithoutTraits);
CFRelease(font);
```
How many states of radio button are predefined in browser? I've been given the task of developing a custom radio button. To avoid styling issues I want to know all possible predefined states (like checked, active, and so on) in order to write CSS rules for them, being sure that I am not going to get some unexpected styling. What are these states and their combinations?
In the W3C [Selectors Level 3 document](https://www.w3.org/TR/css3-selectors/) there are the following state pseudo-classes which can be applied to a radio element:

- :checked
- :enabled
- :disabled

Keeping in mind that a radio element can be not checked, it is necessary to mention that there is the unchecked state. These states come in pairs and one state in each pair is always present. This way we get 4 possible states for now (they can be found on the mindmap below). There also exist dynamic pseudo-classes which can be applied to an HTML radio element:

- :hover
- :active
- :focus

They can be used in any combination or not used at all. There are 8 possible combinations of them. Combining this with the 4 enabled-disabled, checked-unchecked pairs we get 32 possible states, which can be found on the following mindmap (zoom to see it full size):

[![Radio button possible states](https://i.stack.imgur.com/hOM7Z.png)](https://i.stack.imgur.com/hOM7Z.png)

Every path that includes at least two nodes counting from the checked-unchecked node is a state, for example:

- checked-disabled,
- checked-enabled-focus,
- unchecked-enabled-hover-active.

Near every enabled path, there is a picture of the radio button state as it is implemented in Chrome Browser on MacOS Sierra now. There is only one picture for each disabled path — they are all the same for other disabled states. There are 10 pictures in total representing each state. Numbers are used for convenience.
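In CSS these pseudo-classes simply chain, so covering a particular combination looks like this brief illustrative sketch:

```
input[type="radio"]:enabled:not(:checked):hover { /* unchecked, hovered */ }
input[type="radio"]:enabled:checked { /* checked, idle */ }
input[type="radio"]:enabled:checked:hover:active { /* checked, being pressed */ }
input[type="radio"]:disabled,
input[type="radio"]:disabled:checked { /* disabled states often share one look */ }
```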
Print the validation loss in each epoch in PyTorch I want to print the model's validation loss in each epoch, what is the right way to get and print the validation loss? Is it like this: ``` criterion = nn.CrossEntropyLoss(reduction='mean') for x, y in validation_loader: optimizer.zero_grad() out = model(x) loss = criterion(out, y) loss.backward() optimizer.step() losses += loss display_loss = losses/len(validation_loader) print(display_loss) ``` or like this ``` criterion = nn.CrossEntropyLoss(reduction='mean') for x, y in validation_loader: optimizer.zero_grad() out = model(x) loss = criterion(out, y) loss.backward() optimizer.step() losses += loss display_loss = losses/len(validation_loader.dataset) print(display_loss) ``` or something else? Thank you.
**NO!!!!** [![enter image description here](https://i.stack.imgur.com/pncMq.gif)](https://i.stack.imgur.com/pncMq.gif) Under no circumstances should you train your model (i.e., call `loss.backward()` + `optimizer.step()`) using validation / test data!!! If you want to validate your model: ``` model.eval() # handle drop-out/batch norm layers loss = 0 with torch.no_grad(): for x,y in validation_loader: out = model(x) # only forward pass - NO gradients!! loss += criterion(out, y) # total loss - divide by number of batches val_loss = loss / len(validation_loader) ``` Note how `optimizer` has nothing to do with evaluating the model on the validation set. You do not change the model according to the validation data - only validate it.
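To get the per-epoch printout the question asks for, wrap that evaluation in the epoch loop and switch back to training mode before the next pass. A minimal sketch (`num_epochs` and `train_loader` are assumed to exist):

```
for epoch in range(num_epochs):
    model.train()
    for x, y in train_loader:               # train on *training* data only
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()                            # validation: forward pass only
    val_loss = 0.0
    with torch.no_grad():
        for x, y in validation_loader:
            val_loss += criterion(model(x), y).item()
    print(f"epoch {epoch}: validation loss = {val_loss / len(validation_loader):.4f}")
```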
How to detect new element creation in jQuery? Let's say I have the following code that returns the number of anchor elements on a page:

```
function getLinkCount() {
    alert("Links:" + $("a").length);
}
```

If I call it on document ready it would work as expected. But what if a new link now gets inserted into the page dynamically through Javascript; how can I get notified to run the link counter function again? (I have no control over the script that could create a new link). Basically I am looking for something similar to `live()`, only one that would be watching the element creation event, something like:

```
$("a").live("create", getLinkCount);
```

that would trigger when a new element is created.
You can use the [`.livequery()` plugin](https://github.com/hazzik/livequery/blob/master/README.md) for this, it runs for each element, including new ones, like this: ``` $("a").livequery(getLinkCount); ``` However, this plugin is out-of-date and is not recommended for current versions of jQuery. It's usually easier to do this when you create the elements though, for example if you're doing it after AJAX requests, the [`.ajaxComplete()` handler](http://api.jquery.com/ajaxComplete/) may be a good place, for example: ``` $(document).ajaxComplete(getLinkCount); ``` This would run after each request, and since you normally create elements in your `success` handler, they would already be present when this complete handler runs.
jquery trigger - 'change' function I have a radio button set called "pick\_up\_point" and I have a change handler to detect the radio button that is checked. In the change handler I call a function "clearFields()" which basically clears out the input fields. ``` function clearFields() { $("#Enquiry_start_point").val(""); $("#Enquiry_start_town").val(""); $("#Enquiry_start_postcode").val(""); } $("input[name='pick_up_point']").change(function() { if($("input[name='pick_up_point']:checked").val() == "pick_up_airport") { $("#pick_up_airport_div").slideDown(); $("#start_point_div").hide(); clearFields(); } }); ``` I also have a trigger which will retain the view if the form is redisplayed due to a validation error. ``` $('input[name=\'pick_up_point\']').trigger('change'); ``` Now when I post the form the trigger is run and it calls the change handler, which of course runs the clearFields() function. So how can I get around this? I don't want the fields being cleared when the form is re-displayed.
Try using a custom event handler, like so: ``` $("input[name='pick_up_point']").change(function() { $(this).trigger("displayForm"); clearForm(); }); $("input[name='pick_up_point']").bind("displayForm", function() { if($("input[name='pick_up_point']:checked").val() == "pick_up_airport") { $("#pick_up_airport_div").slideDown(); $("#start_point_div").hide(); } }); ``` So instead of triggering the change event, trigger the custom `displayForm` handler, like this: ``` $('input[name=\'pick_up_point\']').trigger('displayForm'); ``` And now, when `change` is triggered, it works as expected, but for the special case of displaying the form without clearing the input fields, you can simply trigger your new custom event handler.
How can I pass a socket from parent to child processes I'm stuck on a problem in a C program on Linux. I know that when a process is forked the child process inherits some things from the parent, including **open** file descriptors. The problem is that I'm writing a multi-process server application with a master process that accepts new connections and puts the descriptors into shared memory. When the child process attempts to read from one of these descriptors from the shared memory, on `select()` I get an `EBADF` error! How can the child process read and use a socket (or any file descriptor in general) created by a parent process *after* it has been forked?
When you call fork, the child process inherits copies of all open file descriptors. The typical way of doing this is for a parent process to open a listening socket, call accept which blocks until a connection arrives, and then call fork after receiving the connection. The parent then closes its copy of the file descriptor, while the new child process can keep using the file descriptor and do any processing which is needed. Once the child is done it *also* closes the socket. It's important to remember two things:

1. The file descriptor / socket is a resource in the operating system and after the fork the parent and child each have a handle to that resource, which is kind of like a reference counted smart pointer. I explain this [in more detail here](https://stackoverflow.com/a/598759/71074).

The second thing is that only file descriptors which are opened *before* calling fork are shared, because after forking parent and child are completely separate processes, even though they may share some resources like file descriptors which existed prior to the fork.

If you're using a model where you want to have a parent handing out work to worker processes, it may be better for you to consider using threads, and [a thread pool](https://stackoverflow.com/a/14316163/71074).

By the way, you can download a lot of nice examples of servers and clients from the [Unix Network Programming](http://www.unpbook.com/) website.
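A minimal sketch of that accept-then-fork pattern in C (error handling and child reaping trimmed; the listening-socket setup is assumed to have been done already):

```c
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

void serve(int listen_fd)
{
    for (;;) {
        int conn_fd = accept(listen_fd, NULL, NULL);
        if (conn_fd < 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {              /* child: inherited both descriptors */
            close(listen_fd);        /* the child never accepts */
            /* ... read/write on conn_fd, handle the client ... */
            close(conn_fd);
            _exit(0);
        }
        close(conn_fd);              /* parent drops its copy of the connection */
    }
}
```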
Variable declaration in the switch head? While browsing the code of my friend I came to notice this: ``` switch(State &state = getState()) { case Begin: state = Search; break; // other stuff similar } ``` What is with the variable in the switch header? He is using GCC so I think this might be a GCC extension. Any idea?
It's not a secret or a GCC extension. Variables are allowed to be declared in the conditions of things like `if`s, `while`s, and `switch`es. For example:

```
while (char c = cin.get()) {
    ...
}
```

or

```
if (int* something = (int*)malloc(4)) { // but don't use malloc in C++
    // ...
}
```

After they are declared and initialised, they are converted to a `bool` value and if they evaluate to `true` the block is executed, and the block is skipped otherwise. Their scope is that of the construct whose condition they are declared in (and in the case of `if`, the scope is also over all the `else if` and `else` blocks too). In §6.4.1 of the C++03 standard, it says

> Selection statements choose one of several flows of control.
>
> ```
> selection-statement:
>
>     if ( condition ) statement
>     if ( condition ) statement else statement
>     switch ( condition ) statement
>
> condition:
>
>     expression
>     type-specifier-seq declarator = assignment-expression
>
> ```

So as you can see, it allows `type-specifier-seq declarator = assignment-expression` in the condition of an `if` or `switch`. And you'd find something similar in the section for the "repetition constructs". Also, `switch`es work on integral or `enum` types *or* instances of classes that can be implicitly converted to an integral or `enum` type (§6.4.4):

> The value of a condition that is an initialized declaration in a switch statement is the value of the declared variable if it has integral or enumeration type, or of that variable implicitly converted to integral or enumeration type otherwise.

I actually learned of this [**FROM AN ANSWER YOU POSTED**](https://stackoverflow.com/a/312449/726361) on the "Hidden Features of C++" question. So I'm glad I could remind you of it :)
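Since the question is specifically about `switch`, here is a small sketch in the same spirit; the declared variable must be of (or convert to) an integral or enum type, and it is in scope across all the case labels:

```
#include <iostream>

switch (int c = std::cin.get()) {
case 'a':
    std::cout << "first char was 'a', c = " << c << '\n'; // c is in scope in every case
    break;
default:
    break;
}
```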
How to get filename when using PHPickerViewController for photo How to get filename when using PHPickerViewController for photo this is my function code ``` func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) { dismiss(animated: true, completion: nil) for item in results { item.itemProvider.loadObject(ofClass: UIImage.self) {(image, error) in if let image = image as? UIImage{ } } } } ``` Please help, thank you
Hope you can find the file name by using this:

```
item.itemProvider.loadFileRepresentation(forTypeIdentifier: "public.item") { (url, error) in
    if error != nil {
        print("error \(error!)");
    } else {
        if let url = url {
            let filename = url.lastPathComponent;
            print(filename)
        }
    }
}
```

You can use this to get the file name from `UIImagePickerController`:

```
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    if let imageURL = info[UIImagePickerControllerReferenceURL] as? URL {
        let result = PHAsset.fetchAssets(withALAssetURLs: [imageURL], options: nil)
        let asset = result.firstObject
        print(asset?.value(forKey: "filename"))
    }
    dismiss(animated: true, completion: nil)
}
```
searching in belongsToMany relationship in laravel 5 Actually, I want to search for those questions the user wants after selecting any subject or course. If I remove either `whereHas` (subject or course) it works, but with both it's not working. Please give a better solution for searching in a belongsToMany relationship. I have a questions table with a `Question` model class

```
class Question extends Model{

     public function courses(){
        return $this->belongsToMany('App\Models\Course','course_questions');
    }
    public function subjects(){
        return $this->belongsToMany('App\Models\Subject','subject_questions');
    }
}
```

and in my `searchController`

```
public function index(Request $request){
        $questions  = Question::with(['user','courses','branches','subjects','years','universities','question_type'])
                        ->where("status","=",1)
                        ->where(function($query) use($request){
                            $q = $request->q;
                            if(isset($q) && !is_null($q)){
                                $query->where("question","LIKE","%$q%");
                            }
                        })
                        ->whereHas('subjects',function($query) use($request){
                            $subjects = $request->subject;
                            if(isset($subjects)){
                                $_subjects = explode(" ",$subjects);
                                $query->whereIn("slug",$_subjects)
                                        ->orWhereIn("subject_name",$_subjects);
                            }
                        })
                        ->whereHas('courses',function($query) use($request){
                            $course = $request->course;
                            if(isset($course)){
                                $_course = explode(" ",$course);
                                $query->whereIn("slug",$_course)
                                        ->orWhereIn("course",$_course);
                            }
                        })
                        ->paginate();

        if($request->ajax()){
            $returnHTML = view('questions.question_list')->with('questions', $questions)->render();
            return response()->json(array('success' => true, 'pageContent'=>$returnHTML));
        }
```
You should build your query probably this way - you should verify conditions before adding any constraints to your query: ``` $query = Question::with(['user','courses','branches','subjects','years','universities','question_type']) ->where("status","=",1); $q = $request->q; if(isset($q) && !is_null($q)) { $query = $query->where("question","LIKE","%$q%"); } $subjects = $request->subject; if (isset($subjects)) { $query = $query->whereHas('subjects',function($query) use($subjects){ $_subjects = explode(" ",$subjects); $query->whereIn("slug",$_subjects) ->orWhereIn("subject_name",$_subjects); }); } $course = $request->course; if (isset($course)) { $query = $query->whereHas('courses',function($query) use($course ){ $_course = explode(" ",$course); $query->whereIn("slug",$_course) ->orWhereIn("course",$_course); }); } $questions = $query->paginate(); ```
Correct way to set a tag to all cells in TableView I'm using a button inside a `tableView` in which I get the `indexPath.row` when it is pressed. But it only works fine when the cells can be displayed on the screen without `scroll`. Once the tableView is scrollable and I scroll through it, the `indexPath.row` returned is a wrong value. I noticed that after initially setting 20 objects, for example, `Check` is printed just 9 times, not 20.

```
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];

        lBtnWithAction = [[UIButton alloc] initWithFrame:CGRectMake(liLight1Xcord + 23, 10, liLight1Width + 5, liLight1Height + 25)];
        lBtnWithAction.tag = ROW_BUTTON_ACTION;
        lBtnWithAction.titleLabel.font = luiFontCheckmark;
        lBtnWithAction.tintColor = [UIColor blackColor];
        lBtnWithAction.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin;
        [cell.contentView addSubview:lBtnWithAction];
    } else {
        lBtnWithAction = (UIButton *)[cell.contentView viewWithTag:ROW_BUTTON_ACTION];
    }

    //Set the tag
    lBtnWithAction.tag = indexPath.row;

    //Add the click event to the button inside a row
    [lBtnWithAction addTarget:self action:@selector(rowButtonClicked:) forControlEvents:UIControlEventTouchUpInside];

    //This is printed just 9 times (the the number of cells that are initially displayed in the screen with no scroll), when scrolling the other ones are printed
    NSLog(@"Check: %li", (long)indexPath.row);

    return cell;
}
```

To do something with the clicked index:

```
-(void)rowButtonClicked:(UIButton*)sender {
    NSLog(@"Pressed: %li", (long)sender.tag);
}
```

Constants.h

```
#define ROW_BUTTON_ACTION 9
```

What is the correct way to get the `indexPath.row` inside `rowButtonClicked` or to set a tag when I have a lot of cells in my `tableView`?
My solution to this kind of problem is not to use a tag in this way at all. It's a complete misuse of tags (in my opinion), and is likely to cause trouble down the road (as you've discovered), because cells are reused. Typically, the problem being solved is this: A piece of interface in a cell is interacted with by the user (e.g. a button is tapped), and now we want to know *what row* that cell currently corresponds to so that we can respond with respect to the corresponding data model. The way I solve this in my apps is, when the button is tapped or whatever and I receive a control event or delegate event from it, to walk up the view hierarchy from that piece of the interface (the button or whatever) until I come to the cell, and then call the table view's `indexPath(for:)`, which takes a cell and returns the corresponding index path. The control event or delegate event always includes the interface object as a parameter, so it is easy to get from that to the cell and from there to the row. Thus, for example: ``` UIView* v = // sender, the interface object do { v = v.superview; } while (![v isKindOfClass: [UITableViewCell class]]); UITableViewCell* cell = (UITableViewCell*)v; NSIndexPath* ip = [self.tableView indexPathForCell:cell]; // and now we know the row (ip.row) ``` --- [**NOTE** A possible alternative would be to use a custom cell subclass in which you have a special property where you store the row in `cellForRowAt`. But this seems to me completely unnecessary, seeing as `indexPath(for:)` gives you exactly that same information! On the other hand, there is no `indexPath(for:)` for a *header/footer*, so in that case I *do* use a custom subclass that stores the section number, as in [this example](https://github.com/mattneub/Programming-iOS-Book-Examples/blob/master/bk2ch08p451dynamicTableContent/ch21p718sections/RootViewController.swift) (see the implementation of `viewForHeaderInSection`).]
What're the differences between these 2 ways of of launching WSL? I installed Ubuntu on WSL by doing `wsl --install Ubuntu`. When I went to set my profile in Windows Terminal, I see two different Ubuntu's. [![two separate ubuntu profiles](https://i.stack.imgur.com/AfqcH.png)](https://i.stack.imgur.com/AfqcH.png) The first one launches by doing `wsl -d Ubuntu` and the other just runs `ubuntu.exe` The difference I noticed is that the first one uses my local filesystem whereas the one that runs `ubuntu.exe` uses a virtual filesystem. Which of these is the proper one to be using and how can I safely remove the other one?
> > The difference I noticed is that the first one uses my local filesystem whereas the one that runs `ubuntu.exe` uses a virtual filesystem. Which of these is the proper one to be using and how can I safely remove the other one? > > > See [this answer](https://superuser.com/a/1737981/1210833) for some background on why you may have two different profiles pointing to the same WSL distribution, which one to remove, and how. Also see [this one](https://superuser.com/a/1739388/1210833) if you have any trouble removing it. As for the "difference" that you are seeing, it's likely just the *default starting directory* that is different between the two. The one with the actual Ubuntu icon is created by the installer, and it should take you directly to your home directory in Ubuntu. However, there may be some confusion here, because `~` (`/home/<user>`) is actually the virtual drive. The one with the generic Penguin icon should be the one auto-created by Windows Terminal (which *should* have been removed/hidden by the installer-version, but sometimes isn't). That one it will default to the *Windows* directory where it was executed from. The confusion may be because this is going to appear as something like `/mnt/C/Windows/System32` (or another directory on `/mnt/c`). That's actually your real, Windows drive, which is mounted into Ubuntu using a network share.
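If you prefer to keep the auto-generated entry around but out of the dropdown, Windows Terminal profiles support a `hidden` flag in `settings.json`. A hedged sketch (the GUID is a placeholder; copy the real one from your generated profile):

```
{
    "profiles": {
        "list": [
            {
                "guid": "{00000000-0000-0000-0000-000000000000}",
                "name": "Ubuntu",
                "source": "Windows.Terminal.Wsl",
                "hidden": true
            }
        ]
    }
}
```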
core 2.1 signalR successfully connecting but client is null I am attempting to use signalR in the new dotnet core2.1. I have my C# server set up and an ionic 3 app ready as the client. I got as far as making a successful websocket connection between them. When I try to use a SendAsync from the server to the client though, the client shows as null even though a successful connection was made. I've done some testing and even tried using the OnConnectedAsync override to see what was happening. It does trigger but the context.user.identity.name is empty and the client still comes up null. In the browser log: ``` Information: WebSocket connected to ws://localhost:xxxx/myHub ``` in my Hub ``` public async Task Send(string message) { await Clients.All.SendAsync("SendMessage", message); } ``` in the controller ``` public async Task<IActionResult> NewBuild() { await myHub.Send("New Build Completed."); return Ok(); } ``` in my startup ``` services.AddCors(options => options.AddPolicy("CorsPolicy", builder => { builder.AllowAnyMethod().AllowAnyHeader().AllowAnyOrigin().AllowCredentials(); })); services.AddSignalR(); app.UseCors("CorsPolicy"); app.UseSignalR(routes => { routes.MapHub<myHub>("myHub"); }); ``` and in ionic ``` hubConnection: HubConnection; this.hubConnection = new HubConnection('http://localhost:xxxx/myHub', { transport: TransportType.WebSockets }); this.hubConnection.on("SendMessage", () => { this.InitTodayBuilds(); }); this.hubConnection.start().catch(() => console.error); ``` As I said, it seems to be connecting just fine. But when the Send method gets called it gets a null exception. Am I missing something stupid here? Any help is appreciated!
I was facing the same problem. I injected the `IHubContext` into my hub class to call the client-side methods from the server side.

```
public class ChangeRequest: Hub, ITransientDependency
{
    public IAbpSession AbpSession { get; set; }

    protected IHubContext<ChangeRequest> _context;

    public ILogger Logger { get; set; }

    public ChangeRequest(IHubContext<ChangeRequest> context)
    {
        AbpSession = NullAbpSession.Instance;
        Logger = NullLogger.Instance;
        _context = context;
    }

    public async Task SendMessage(string message)
    {
        await _context.Clients.All.SendAsync("getMessage", string.Format("User {0}: {1}", AbpSession.UserId, message));
    }
}
```
Is \_\_TIME\_\_ preprocessor macro guaranteed to be constant within a file? Just out of curiosity I am wondering whether the value given by the standard `__TIME__` preprocessor macro can change within a single translation unit? In other words, is `__TIME__` determined once during preprocessing and then fixed, or is it re-evaluated each time it is encountered? If this is not specified by the C standard, is there a de facto standard behavior among the major implementations (gnu, clang, intel, msvc)?
C does not rigorously specify that the "time of translation" is constant throughout preprocessing/compilation. My experience has been that it is constant. But since it is only to the second, a given compilation would need to cross a second boundary for two expansions to differ.

> `__TIME__` The time of translation of the preprocessing translation unit: a character string literal of the form `"hh:mm:ss"` as in the time generated by the `asctime` function. If the time of translation is not available, an implementation-defined valid time shall be supplied. C11 §6.10.8.1 1

> The definitions for `__DATE__` and `__TIME__` when respectively, the date and time of translation are not available (6.10.8.1). (Informative) C11 §J.3.11 1
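A tiny sketch to observe this yourself; both expansions come from the same translation unit, so in practice they print the same string (a compilation crossing a second boundary could in principle differ):

```
#include <stdio.h>

static const char *first = __TIME__;   /* expanded during preprocessing */

int main(void)
{
    printf("first : %s\n", first);
    printf("second: %s\n", __TIME__);  /* a later occurrence in the same TU */
    return 0;
}
```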
Are newly created cookies not available until a subsequent page load? When I first create a cookie I don't seem to be able to grab that same cookie until a subsequent page load. It's as if the cookie doesn't exist to the browser until the page is requested a second time. I'm using the Kohana PHP framework: ``` Cookie::set('new_cookie', 'I am a cookie'); $cookie = Cookie::get('new_cookie'); //$cookie is NULL the first time this code is run. If I hit the page again and then call Cookie:get('new_cookie'), the cookie's value is read just fine. ``` So, I'm led to believe that this is normal behavior and that I probably don't understand how cookies work. Can anyone clarify this for me?
Cookies are set in HTTP headers, so when the server returns the page. When you reload the page, your browser will send them back to the server. So, it is perfectly normal they are "visible" just after a new request. Here is an example response from the server: ``` HTTP/1.1 200 OK Content-type: text/html Set-Cookie: name=value Set-Cookie: name2=value2; Expires=Wed, 09-Jun-2021 10:18:14 GMT (content of page) ``` When you reload the page, your browser sends this: ``` GET / HTTP/1.1 Host: www.example.org Cookie: name=value; name2=value2 Accept: */* ``` This is why the server can see them only after a new request by the browser.
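In raw PHP terms (which is what Kohana's `Cookie` helper wraps), the same round trip looks like this sketch:

```
<?php
// First request: this only queues a Set-Cookie header on the *response*.
setcookie('new_cookie', 'I am a cookie');

// $_COOKIE is populated from the *request* headers, which were sent
// before the cookie existed, so it is not visible yet.
var_dump(isset($_COOKIE['new_cookie'])); // bool(false) on the first load

// Second request: the browser now sends "Cookie: new_cookie=..." back,
// and $_COOKIE['new_cookie'] === 'I am a cookie'.
```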
Jetpack Compose onClick ripple is not propagating with a circular motion? As can be seen in gif [![enter image description here](https://i.stack.imgur.com/kRXER.gif)](https://i.stack.imgur.com/kRXER.gif) when `Column` that contains of Text, Spacer, and `LazyRowForIndexed` is touched ripple is not propagating with circular motion. And it gets touched effect even when horizontal list is touched. ``` @Composable fun Chip(modifier: Modifier = Modifier, text: String) { Card( modifier = modifier, border = BorderStroke(color = Color.Black, width = Dp.Hairline), shape = RoundedCornerShape(8.dp) ) { Row( modifier = Modifier.padding(start = 8.dp, top = 4.dp, end = 8.dp, bottom = 4.dp), verticalAlignment = Alignment.CenterVertically ) { Box( modifier = Modifier.preferredSize(16.dp, 16.dp) .background(color = MaterialTheme.colors.secondary) ) Spacer(Modifier.preferredWidth(4.dp)) Text(text = text) } } } @Composable fun TutorialSectionCard(model: TutorialSectionModel) { Column( modifier = Modifier .padding(top = 8.dp) .clickable(onClick = { /* Ignoring onClick */ }) .padding(16.dp) ) { Text(text = model.title, fontWeight = FontWeight.Bold, style = MaterialTheme.typography.h6) Spacer(Modifier.preferredHeight(8.dp)) Providers(AmbientContentAlpha provides ContentAlpha.medium) { Text(model.description, style = MaterialTheme.typography.body2) } Spacer(Modifier.preferredHeight(16.dp)) LazyRowForIndexed(items = model.tags) { _: Int, item: String -> Chip(text = item) Spacer(Modifier.preferredWidth(4.dp)) } } } @Preview @Composable fun TutorialSectionCardPreview() { val model = TutorialSectionModel( clazz = MainActivity::class.java, title = "1-1 Column/Row Basics", description = "Create Rows and Columns that adds elements in vertical order", tags = listOf("Jetpack", "Compose", "Rows", "Columns", "Layouts", "Text", "Modifier") ) Column { TutorialSectionCard(model) TutorialSectionCard(model) TutorialSectionCard(model) } } ``` What should be done to have circular effect, but not when list itself or an item from list is touched, or scrolled?
You have to apply a theme to your composable, which in turn provides a default ripple factory, or you have to set the ripple indication explicitly:

```
@Preview
@Composable
fun TutorialSectionCardPreview() {
    MaterialTheme() {
        Column {
            TutorialSectionCard
            ...
        }
    }
}
```

or

```
Column(
    modifier = Modifier
        .padding(top = 8.dp)
        .clickable(
            onClick = { /* Ignoring onClick */ },
            indication = rememberRipple(bounded = true)
        )
        .padding(16.dp)
) {
    // content
}
```

(As of Compose version 1.0.0-alpha09 there seems to be no way to prevent the ripple from showing when the content is scrolled.)
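For the last part of the question, a possible workaround (a sketch of my own using the same alpha-era APIs as the question, not guaranteed to be the idiomatic fix): scope the `clickable` to the header content only, so the ripple is bounded to the header and touching or scrolling the `LazyRowForIndexed` no longer triggers it:

```
@Composable
fun TutorialSectionCard(model: TutorialSectionModel) {
    Column(modifier = Modifier.padding(top = 8.dp)) {
        // Only this header Column is clickable; its ripple is bounded
        // to the header and never drawn over the list below.
        Column(
            modifier = Modifier
                .fillMaxWidth()
                .clickable(onClick = { /* Ignoring onClick */ })
                .padding(16.dp)
        ) {
            Text(text = model.title, fontWeight = FontWeight.Bold, style = MaterialTheme.typography.h6)
            Spacer(Modifier.preferredHeight(8.dp))
            Providers(AmbientContentAlpha provides ContentAlpha.medium) {
                Text(model.description, style = MaterialTheme.typography.body2)
            }
        }
        // The list sits outside the clickable area entirely.
        LazyRowForIndexed(
            items = model.tags,
            modifier = Modifier.padding(start = 16.dp, end = 16.dp, bottom = 16.dp)
        ) { _: Int, item: String ->
            Chip(text = item)
            Spacer(Modifier.preferredWidth(4.dp))
        }
    }
}
```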
Natural sort, array of objects, multiple columns, reverse, etc I desperately need to implement client-side sorting that emulates sorting through our tastypie api, which can take multiple fields and return sorted data. So if for example I have data like:

```
arr = [
  {
    name: 'Foo LLC',
    budget: 3500,
    number_of_reqs: 1040
  },
  {
    name: '22nd Amendment',
    budget: 1500,
    number_of_reqs: 2000
  },
  {
    name: 'STS 10',
    budget: 50000,
    number_of_reqs: 500
  },
  ... etc.
]
```

and given columns to sort, e.g. `['name', '-number_of_reqs']`, it should sort by `name` (ascending) and `number_of_reqs` (descending). I can't get my head around this. First of all it has to be "natural sort"; that's supposed to be fairly easy if we're talking about sorting a single column, but I need to be able to sort on multiple. Also I'm not sure why I'm getting different results (from the way the api does it) when using lodash's `_.sortBy`? Is `_.sortBy` not "natural" or is our api broken? Also I was looking for an elegant solution. Just recently started using [Ramdajs](http://ramdajs.com), it's so freaking awesome. I bet it would be easier to build the sorting I need using that? I've tried, still can't get it right. Little help?

upd: I found [this](http://devintorr.es/blog/2010/06/25/a-natural-sorting-comparator-for-javascripts-arrayprototypesort/) and using it with Ramda like this:

```
fn = R.compose(R.sort(naturalSort), R.pluck("name"))
fn(arr)
```

seems to work for a flat array, yet I still need to find a way to apply it to multiple fields in my array
> ```
> fn = R.compose(R.sort(naturalSort), R.pluck("name"))
> ```
> seems to be working

Really? I would expect that to return a sorted array of names, not sort an array of objects by their name property.

Using `sortBy` unfortunately doesn't let us supply a custom comparison function (required for natural sort), and combining multiple columns in a single value that compares consistently might be possible but is cumbersome.

> I still don't know how to do it for multiple fields

Functional programming can do a lot here; unfortunately Ramda isn't really equipped with useful functions for comparators (except [`R.comparator`](http://ramda.github.io/ramdocs/docs/R.html#comparator)). We need three additional helpers:

- `on` (like the [one from Haskell](http://hackage.haskell.org/package/base-4.7.0.1/docs/Data-Function.html#v:on)), which takes an `a -> b` transformation and a `b -> b -> Number` comparator function to yield a comparator on two `a`s. We can create it with Ramda like this:

```
var on = R.curry(function(map, cmp) {
    return R.useWith(cmp, [map, map]); // Ramda >0.18
    // return R.useWith(cmp, map, map); // older Ramda versions
});
```

- `or` - just like `||`, but on numbers, not limited to booleans like [`R.or`](http://ramda.github.io/ramdocs/docs/R.html#or). This can be used to chain two comparators together, with the second only being invoked if the first yields `0` (equality). Alternatively, a library like [`thenBy`](https://github.com/bergus/thenBy.js) could be used for this. But let's define it ourselves:

```
var or = R.curry(function(fst, snd, a, b) {
    return fst(a, b) || snd(a, b);
});
```

- `negate` - a function that inverses a comparison:

```
function negate(cmp) {
    return R.compose(R.multiply(-1), cmp);
}
```

Now, equipped with these we only need our comparison functions (that natural sort is an adapted version of the one you found, see also [Sort Array Elements (string with numbers), natural sort](https://stackoverflow.com/q/15478954/1048572) for more):

```
var NUMBER_GROUPS = /(-?\d*\.?\d+)/g;
function naturalCompare(a, b) {
    // split into alternating text and number chunks, then compare pairwise:
    // text chunks lexicographically, number chunks numerically
    var aa = String(a).split(NUMBER_GROUPS),
        bb = String(b).split(NUMBER_GROUPS),
        min = Math.min(aa.length, bb.length);
    for (var i = 0; i < min; i++) {
        var x = aa[i].toLowerCase(),
            y = bb[i].toLowerCase();
        if (x < y) return -1;
        if (x > y) return 1;
        i++;
        if (i >= min) break;
        var z = parseFloat(aa[i]) - parseFloat(bb[i]);
        if (z != 0) return z;
    }
    return aa.length - bb.length;
}
function stringCompare(a, b) {
    a = String(a); b = String(b);
    return +(a>b)||-(a<b);
}
function numberCompare(a, b) {
    return a-b;
}
```

And now we can compose exactly the comparison on objects that you want:

```
fn = R.sort(or(on(R.prop("name"), naturalCompare),
               on(R.prop("number_of_reqs"), negate(numberCompare))));
fn(arr)
```
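To emulate the original `['name', '-number_of_reqs']` spec end to end, here is a small sketch (the `fieldComparator` and `buildComparator` helpers are my own, built on the helpers above; they are not part of Ramda) that maps each field name to a comparator, negating it when the name is prefixed with `-`:

```
// Turn a spec like 'name' or '-number_of_reqs' into a comparator on objects.
function fieldComparator(spec) {
    var desc = spec.charAt(0) === '-',
        field = desc ? spec.slice(1) : spec,
        cmp = on(R.prop(field), naturalCompare);
    return desc ? negate(cmp) : cmp;
}

// Chain the field comparators left to right with `or`, so later fields
// only break ties left by earlier ones.
function buildComparator(specs) {
    return specs.map(fieldComparator)
                .reduce(function(fst, snd) { return or(fst, snd); });
}

var sortLikeApi = R.sort(buildComparator(['name', '-number_of_reqs']));
sortLikeApi(arr);
// -> sorted by name ascending, ties broken by number_of_reqs descending
```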