Q: How to return the index of an Element in an Elements list using jsoup? I have two lists I need to iterate at the same time, getting the same n-th element from each. This is how I solved it: import org.jsoup.nodes.Element; import org.jsoup.select.Elements; [...] int idx = 0; for(Element A : ListA) { String B = ListB.eq(idx).text(); System.out.println(A.text()+ " " + B); ++idx; } In order to return the following output: A1 B1 A2 B2 ... An Bn It would be cleaner if I could extract the current n-th element's index directly from ListA. But how? I did not find any suitable method. Any clue? Thanks in advance. A: I don't know if it works, but you can try ListA.indexOf(A) to get the current index (Elements implements List, so indexOf is available, though note that each call is a linear search).
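As an aside, the pattern in the question, walking two lists in lockstep, maps onto built-ins in many languages. A minimal sketch in Python for comparison (the list contents are made up for illustration):

```python
# Two parallel lists, standing in for ListA and ListB from the question
list_a = ["A1", "A2", "A3"]
list_b = ["B1", "B2", "B3"]

# zip pairs up the n-th elements directly, with no manual idx counter
pairs = [f"{a} {b}" for a, b in zip(list_a, list_b)]

# enumerate yields the index alongside the element when you need it
indexed = [(i, a) for i, a in enumerate(list_a)]
```

The Java loop in the question is the hand-rolled equivalent of zip; indexOf, by contrast, re-searches the list on every iteration.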
{ "pile_set_name": "StackExchange" }
Q: C#: Adding 3 numbers using the data type int and print their sum Here is my code in C#: When this program runs it terminates immediately and I can't see its output; can someone tell me what's wrong with this code? using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Program { static int sum(int num1, int num2, int num3) { int total; total = num1 + num2 + num3; return total; } static void Main(string[] args) { Console.Write("\n\nFunction to calculate the sum of three numbers :\n"); Console.Write("--------------------------------------------------\n"); Console.Write("Enter a number1: "); int n1 = Convert.ToInt32(Console.ReadLine()); Console.Write("Enter a number2: "); int n2 = Convert.ToInt32(Console.ReadLine()); Console.Write("Enter a number3: "); int n3 = Convert.ToInt32(Console.ReadLine()); Console.WriteLine("\nThe sum of three numbers is : {0} \n", sum(n1, n2, n3)); } } } A: You need something that prevents the console window from closing, like Console.ReadKey() at the end of your program.
Q: Transitive dependencies with different linkage to third-party libraries I have a little bit of a convoluted question. I have a 3rd party dependency, which comes in static (libthird.a) and shared pic form (libthird.so). I have a library, util, that depends on libthird. And I have applications that depend on util that want to link libthird statically, and I have some shared libraries I need to produce that depend on util and need to link libthird dynamically. My current (working) approach is something like the following: add_library(third INTERFACE) target_link_libraries(third INTERFACE /path/to/libthird.a) add_library(third_shared INTERFACE) target_link_libraries(third_shared INTERFACE /path/to/libthird.so) add_library(util ${UTIL_SOURCES}) add_library(util_shared ${UTIL_SOURCES}) # same sources again!! target_link_libraries(util PUBLIC third) target_link_libraries(util_shared PUBLIC third_shared) add_executable(some_app ...) target_link_libraries(some_app PRIVATE util) add_library(some_shared_object ...) target_link_libraries(some_shared_object PUBLIC util_shared) This works. But I'm building util (and, in reality, another half dozen libraries or so) twice... just to get different linker dependencies. Is there a saner way of doing this in cmake? If I just target_link_libraries() on the top-level some_app and some_shared_object, I get the linker flags emitted in the wrong order, since util does depend on third. A: The approach that you're taking is definitely one I've seen and also used myself before. It is fine if it is used for a small archive and/or one-off usage. For the following examples, I assume a project structure like the following: foo/ CMakeLists.txt include/ foo.h src/ foo.c So, I've also (naively?) used the following approach based on the knowledge that you can create (on Unix-based systems at least) a shared library from a static archive. 
project(foo C) set(SOURCES "src/foo.c") set(LIBNAME "foo") add_library(${LIBNAME} STATIC ${SOURCES}) target_include_directories(${LIBNAME} PUBLIC "include") target_compile_options(${LIBNAME} PUBLIC "-fPIC") # get_property(CUR_PREFIX TARGET ${LIBNAME} PROPERTY PREFIX) get_property(CUR_SUFFIX TARGET ${LIBNAME} PROPERTY SUFFIX) get_property(CUR_NAME TARGET ${LIBNAME} PROPERTY NAME) get_property(CUR_OUTPUT_NAME TARGET ${LIBNAME} PROPERTY OUTPUT_NAME) get_property(CUR_ARCHIVE_OUTPUT_NAME TARGET ${LIBNAME} PROPERTY ARCHIVE_OUTPUT_NAME) message(STATUS "prefix: ${CUR_PREFIX}") message(STATUS "suffix: ${CUR_SUFFIX}") message(STATUS "name: ${CUR_NAME}") message(STATUS "output name: ${CUR_OUTPUT_NAME}") message(STATUS "archive name: ${CUR_ARCHIVE_OUTPUT_NAME}") add_custom_command(TARGET ${LIBNAME} POST_BUILD COMMAND ${CMAKE_C_COMPILER} -shared -o libfoo.so -Wl,--whole-archive libfoo.a -Wl,--no-whole-archive WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}) My unresolved problem with this approach is that I haven't found a reliable and portable way to get the name of the static archive (libfoo.a). Hence, all those message commands for the target properties; they might behave differently on your platforms. The most efficient way, that is also well-supported by cmake, is to use an object library: # cmake file cmake_minimum_required(VERSION 3.2) project(foo C) set(SOURCES "src/foo.c") set(LIBNAME "foo") set(LIBNAME_OBJ "${LIBNAME}_obj") add_library(${LIBNAME_OBJ} OBJECT ${SOURCES}) target_include_directories(${LIBNAME_OBJ} PUBLIC "include") target_compile_options(${LIBNAME_OBJ} PUBLIC "-fPIC") # add_library(${LIBNAME} SHARED $<TARGET_OBJECTS:${LIBNAME_OBJ}>) add_library(${LIBNAME}_static STATIC $<TARGET_OBJECTS:${LIBNAME_OBJ}>) All examples were tested with cmake version 3.6.1. Hope this helps. Let us know, if you've found a better way.
Q: How to assign javascript string to java string? This is a dropdownlist in which multiple values can be selected -- http://paste.ubuntu.com/7845559/ A loop has been used to create the options in the list, accessing values from the database. This is the javascript function I am trying to use to read the multiple values selected in the list -- http://paste.ubuntu.com/7845571/ I am not sure if the variable str in the javascript function is storing the values from the dropdownlist. My questions are-- How can I assign the javascript variable str to a Java string variable? After doing 1, how can I send the Java variable to a servlet? I need to send this information to a servlet to update info in the database. If this approach is wrong, which is a better way to access the data from the list and send it to a servlet? A simple code snippet will be very helpful. A: You are mixing up client-side and server-side code. A JSP can be thought of as two intermingled programs: the program which runs on the server (anything that is inside scriptlets <% or output by custom tags) and the program which is executed by the user's browser when it receives the response. Things the Server-side code can do: Access the database Create Java variables (which are stored as request or session attributes) Interact with other Java objects Output HTML and in-line JavaScript Things the Client-side code can do: Create and manipulate JavaScript variables Dynamically edit the HTML DOM Submit Forms Things the Client-side code can't do: Access the database Create Java variables Interact with other Java objects Therefore, your Server-side code should output the HTML code for a Form, and this Form can be filled out by the user, submitted, and the data will be sent back to a Servlet on the server which can access the database, make the changes and generate more HTML to send back to the user (eg validation error or thank-you page).
Please see this question (although it is PHP, the ideas are the same): What is the difference between client-side and server-side programming?
Q: Expanding PHP: Hypertext Preprocessor I am learning recursion now in my programming class, and while I understand how to use recursion for things like factorials and backtracking algorithms, I've been trying to wrap my head around how a recursive acronym, such as PHP, would be expanded for quite some time now. How could one write a program to expand PHP n times? EDIT: I need to clarify my question. PHP is a recursive acronym; it stands for "PHP: Hypertext Preprocessor" (originally "Personal Home Page"). So if you were to expand PHP: Hypertext Preprocessor an infinite number of times, would it look like PHP: Hypertext Preprocessor Hypertext Preprocessor Hypertext Preprocessor... or something different? A: I am completely guessing as to what you're asking, but if I'm right it would be something like this: function recursePHP($n) { if ($n <= 0) return 'PHP'; return recursePHP($n-1) . ' Hypertext Preprocessor'; } (note the leading space in the string literal, so the words don't run together). This is a recursive-function approach to your question that, given n=2, would spit out: "PHP Hypertext Preprocessor Hypertext Preprocessor". I see this as the correct way to recurse through the acronym: after the first expansion you end up with "PHP Hypertext Preprocessor"; the PHP in this resolves to "PHP Hypertext Preprocessor", so that becomes "PHP Hypertext Preprocessor Hypertext Preprocessor", with the bold section being the PHP from the first iteration, and so on. Once you've iterated through the specified number of recursions you don't resolve PHP any further, and you end up with PHP at the beginning followed by n "Hypertext Preprocessor"s.
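The same expansion is straightforward in other languages as well; here is a sketch in Python (the function name is just illustrative):

```python
def expand_php(n: int) -> str:
    """Expand the recursive acronym PHP n times."""
    if n <= 0:
        return "PHP"
    # Each level of recursion appends one more copy of the expansion,
    # mirroring the PHP version above (note the leading space).
    return expand_php(n - 1) + " Hypertext Preprocessor"
```

expand_php(2) yields "PHP Hypertext Preprocessor Hypertext Preprocessor", matching the hand trace above.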
Q: IBM Worklight 6.0 - application-descriptor.xml I recently migrated to Worklight Studio 6, and the application-descriptor.xml of my app is being flagged due to the width, height, and worklightServerRootURL elements. I've looked in the documentation but haven't found a mention yet of how to migrate these elements for Worklight 6. Are there replacements? A: The mentioned elements have been removed (worklightServerRootURL) and moved (height, width - into the Adobe AIR element, if the environment exists in the project). Your application-descriptor.xml should not flag you about these. If you do get flagged (do you mean you get an error or a warning?), it means something in the upgrade process of your project has gone bad. In any case, to overcome this simply remove the offending elements. That said, because this could indicate a bad upgrade, other things may have gone wrong as well. If you can add any more information it will help. Can you share your project? (export it as a .zip file from Eclipse)
Q: How to check if machine has a Touch Bar Is there any API to check whether the running machine has a Touch Bar (or Xcode's Touch Bar simulator)? I have some methods to be invoked only if a Touch Bar exists. If I just check the nullability of the touchBar property of a responder, it automatically creates a touchBar instance even if the machine doesn't support it. But I don't want to create one when it doesn't make sense. A: From Apple's NSTouchBar reference: There is no need, and no API, for your app to know whether or not there is a Touch Bar available. Whether your app is running on a machine that supports the Touch Bar or not, your app's onscreen user interface (UI) appears and behaves the same way. So clearly Apple's view is that the Touch Bar is additional UI which replicates functionality available elsewhere, and as such your app doesn't need to know whether it is present or not. So the answer to your question is that there is no public API intended for this purpose. (I suspect you can figure it out - consider the delegates called, events generated, etc. - without calling any private API or relying on machine IDs, but I don't know that you can.) HTH
Q: arm64 flag like arc flag (-fno-objc-arc) Is there a way to set a flag for specific files, like the ARC flag (-fno-objc-arc) in the compiler settings, to use the 32-bit lib instead of the 64-bit lib? The thing is that I use a class with some functions that don't work in 64-bit. A: No, there is no source-file-specific flag for specifying the bit architecture like you find with -fno-objc-arc. This is because you cannot have a single program compiled partially for 32-bit and partially for 64-bit architectures, so you can't enable or disable this on a per-source-file basis like you could with -fno-objc-arc.
Q: How to create a link tag cloud I need to generate a text link cloud, something like the attached image. As some words are vertical, I am thinking of doing it via CSS3, but it is consuming a lot of time. Do you know any website, or a better idea of how I can do it fast? I am using the transform property. A: A list of websites: http://www.edudemic.com/9-word-cloud-generators-that-arent-wordle/ http://www.wordle.net/ http://www.tagxedo.com/app.html http://www.tagcloud-generator.com/ http://tagcrowd.com/ http://www.tagcloudgenerator.com/ Hope this helps! :)
Q: Finding at least one element from a list in a string Suppose there is a list of logins users = ["igor", "mera", "miracle", "serg", "gena", "nol", "vasya",] And there is a variable username = input("Your login: ") When a login is entered it should be checked against the list, and if it is not there, a welcome message should be printed, something like "You're new here, welcome!". If it is there, print a different message. I just can't work this out :( I've spent a couple of days on it, tried the str.find method and list indexing, and got nowhere. Can you help me reach a solution? A: Use the in operator (and its negation, not in). Example: if username not in users: print("You're new here, welcome!") else: print('a different message')
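A runnable version of the answer's suggestion, wrapped in a function so it can be exercised without input() (the greeting texts are illustrative):

```python
users = ["igor", "mera", "miracle", "serg", "gena", "nol", "vasya"]

def greet(username: str) -> str:
    # `in` / `not in` perform a linear membership test on the list
    if username not in users:
        return "You're new here, welcome!"
    return "Welcome back, " + username + "!"
```

For a handful of logins a list is fine; for large collections, storing the names in a set makes the membership test O(1) on average.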
Q: Inserting string variable into S-Proc parameter I'm receiving an error for the following query: EXEC dbo.sp_Sproc_Name @Param1=@ParamValue1 ,@Param2='lorem ipsum "' + @ParamValue2 + '" dolor' I get the error: Incorrect syntax near '+'. Therefore, how can I pass a variable as part of my parameter value like I'm trying to do above? Many Thanks. A: Unfortunately, T-SQL does not allow you to build a string inline as a parameter (there are certain exceptions for literals), so you will need to do this: DECLARE @ParamValue2mod AS varchar(whatever) SET @ParamValue2mod = 'lorem ipsum "' + @ParamValue2 + '" dolor' EXEC dbo.sp_Sproc_Name @Param1=@ParamValue1 ,@Param2=@ParamValue2mod
Q: properly accessing images within a I currently have: <div id="thumbImages"> <ul> <li><img src="thumbimages/test1.jpg" alt="thumb1" width="125" height="100" /></li> <li><img src="thumbimages/test2.jpg" alt="thumb2" width="125" height="100" /></li> <li><img src="thumbimages/test3.jpg" alt="thumb3" width="125" height="100" /></li> <li><img src="thumbimages/test4.jpg" alt="thumb4" width="125" height="100" /></li> </ul> </div> in my HTML and I am attempting to add button like functionality to the thumbnails with this javascript var isMousedOver = [ false, false, false, false ]; function init() { DoStuffWithThumbs(); } this.onload = init(); function DoStuffWithThumbs() { var thumbs = document.getElementById("thumbImages"); var itemsUL = thumbs.getElementsByTagName("ul"); var itemsLI = itemsUL.item(0).getElementsByTagName("li"); for (var i = 0; i < itemsLI.length; ++i) { var curThumb = itemsLI[i]; curThumb.onclick = DoStuff(i); curThumb.onmouseover = MouseOver(i); curThumb.onmouseout = MouseOut(i); } } function MouseOver(val) { isMousedOver[val] = true; } function MouseOut(val) { isMousedOver[val] = false; } function DoStuff(val) { if(isMousedOver[val] == true) { //stuff is done here ( I know the stuff in question is working) } } However currently I am getting no visible response from this at all on the page when I have separately tested the result itself ( simply flipping an image and changing some text on the page based on another array). Which leads me to believe I am accessing the elements incorrectly. I am new to using Javascript alongside html so forgive me if I have made some grave error. Am I accessing my elements properly? or is this entirely the wrong way to go about accessing them/using onmouseover/onmouseout? A: You are invoking functions instead of assigning them as handlers. So you need to fix that first. To fix it according to your current approach, each function you invoke would need to return a function that references the current i value. 
But instead of tracking the mouseover state in an Array, it would be simpler to track it on the DOM element itself by adding a property. function DoStuffWithThumbs() { var thumbs = document.getElementById("thumbImages"), itemsUL = thumbs.getElementsByTagName("ul"), itemsLI = itemsUL.item(0).getElementsByTagName("li"); for (var i = 0; i < itemsLI.length; ++i) { itemsLI[i].onclick = DoStuff; itemsLI[i].onmouseover = MouseOver; itemsLI[i].onmouseout = MouseOut; } } function MouseOver(val) { this._over = true; } function MouseOut(val) { this._over = false; } function DoStuff(val) { if(this._over === true) { // do your stuff } }
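The trap the answer names first, storing the result of a call instead of a function object, and its cousin, closures all seeing the loop variable's final value, both exist in Python too. A small sketch (the handler strings are made up):

```python
handlers = []
for i in range(4):
    # The default argument freezes the current value of i per handler;
    # a bare `lambda: i` here would report i == 3 for every handler.
    handlers.append(lambda i=i: f"clicked thumb {i}")

# The broken variant for contrast: every closure shares the same i,
# which is 3 by the time any of them is called.
late = [lambda: i for i in range(4)]
```

handlers[2]() reports index 2, while every function in late reports 3.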
Q: Views from HashMap not being named correctly I am attempting to create a dynamic U.I. from a JSON response. I have the following code. class LoadAllQuestions extends AsyncTask<String, String, String> { private ProgressDialog pDialog; JSONParser jParser = new JSONParser(); JSONArray questions = null; protected void onPreExecute() { super.onPreExecute(); pDialog = new ProgressDialog(getActivity()); pDialog.setMessage("Loading questions. Please wait..."); pDialog.setIndeterminate(false); pDialog.setCancelable(true); pDialog.show(); } protected String doInBackground(String... args) { // getting JSON string from URL companyName = cn.getText().toString(); projectName = pn.getText().toString(); String componentName = (String) ab.getSelectedTab().getText(); List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(3); nameValuePairs.add(new BasicNameValuePair("company", companyName)); nameValuePairs.add(new BasicNameValuePair("project", projectName)); nameValuePairs.add(new BasicNameValuePair("component", componentName)); JSONObject json = jParser.makeHttpRequest(url, "POST", nameValuePairs); // Check your log cat for JSON response Log.d("All Questions: ", json.toString()); try { // Checking for SUCCESS TAG int success = json.getInt(TAG_SUCCESS); if (success == 1) { Log.v("RESPONSE", "Success!"); // products found: getting Array of Questions questions = json.getJSONArray(TAG_QUESTIONS); // looping through All Questions for (int i = 0; i < questions.length(); i++) { JSONObject c = questions.getJSONObject(i); // Storing each JSON item in variable String name = c.getString(TAG_NAME); String field = c.getString(TAG_FIELD); String value = c.getString(TAG_VALUE); // creating new HashMap HashMap<String, String> map = new HashMap<String, String>(); // adding each child node to HashMap key => value map.put(TAG_NAME, name); map.put(TAG_FIELD, field); map.put(TAG_VALUE, value); infoList.add(map); } } else { // no products found Log.v("ERROR", "No JSON for you!"); } } catch 
(JSONException e) { e.printStackTrace(); } return null; } protected void onPostExecute(String string) { // dismiss the dialog pDialog.dismiss(); // loop through infoList for (int i = 0; i < infoList.size(); i++) { // get HashMap HashMap<String, String> map = infoList.get(i); // if the answer should be a radio button, inflate it if (map.get(TAG_FIELD).equals("Radio")) { Log.v("RESPONSE", "About to create a radio button"); // find LinearLayout content = (LinearLayout) view .findViewById(R.id.genA_layout); // create TextView tv = new TextView(getActivity()); RadioGroup rg = new RadioGroup(getActivity()); rg.setOrientation(RadioGroup.HORIZONTAL); RadioButton rb = new RadioButton(getActivity()); RadioButton rb2 = new RadioButton(getActivity()); LinearLayout ll = new LinearLayout(getActivity()); // set rb.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); rb2.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); ll.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); rb.setText(TAG_VALUE); rb2.setText(TAG_VALUE); tv.setText(map.get(TAG_NAME)); ll.setOrientation(LinearLayout.HORIZONTAL); // add rg.addView(rb); rg.addView(rb2); ll.addView(tv); ll.addView(rg); content.addView(ll); } // else inflate the view as an EditText field else if (map.get(TAG_FIELD).equals("Text Field")) { Log.v("RESPONSE", "About to create an EditText"); // find LinearLayout content = (LinearLayout) view .findViewById(R.id.genA_layout); // create TextView tv = new TextView(getActivity()); EditText et = new EditText(getActivity()); LinearLayout ll1 = new LinearLayout(getActivity()); // set tv.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); et.setLayoutParams(new LinearLayout.LayoutParams( 
LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); ll1.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); tv.setText(map.get(TAG_NAME)); ll1.setOrientation(LinearLayout.HORIZONTAL); // add ll1.addView(tv); ll1.addView(et); content.addView(ll1); } else if (map.get(TAG_FIELD).equals("Check Box")) { Log.v("RESPONSE", "About to create a CheckBox"); // find LinearLayout content = (LinearLayout) view .findViewById(R.id.genA_layout); // create TextView tv = new TextView(getActivity()); CheckBox cb = new CheckBox(getActivity()); LinearLayout ll = new LinearLayout(getActivity()); // set cb.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); ll.setLayoutParams(new LinearLayout.LayoutParams( LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.MATCH_PARENT)); tv.setText(map.get(TAG_NAME)); ll.setOrientation(LinearLayout.HORIZONTAL); // add ll.addView(tv); ll.addView(cb); content.addView(ll); } else if (map.get(TAG_FIELD).equals("Drop Down Menu")) { Log.v("RESPONSE", "About to create a Drop Down Menu"); } } // find LinearLayout loader = (LinearLayout) view .findViewById(R.id.loader_layout); Button save = (Button) view .findViewById(R.id.generalAssets_save_button_ID); // set loader.setVisibility(View.GONE); save.setVisibility(View.VISIBLE); }; } and the JSON { "questions": [ { "display_name": "Store #", "field_type": "Text Field", "option_value": "" }, { "display_name": "Address", "field_type": "Text Field", "option_value": "" }, { "display_name": "Type of Business", "field_type": "Drop Down Menu", "option_value": "Education\r\nHealth\r\nComputers\r\nFood\r\nRetail\r\nOther" }, { "display_name": "Is this business good?", "field_type": "Radio", "option_value": "Yes\r\nNo" }, { "display_name": "Are they nice people?", "field_type": "Check Box", "option_value": "Yes\r\nNo" } ], 
"success": 1 } Now I want it to look like this Store # ------------------ <EditText> Address ------------------ <EditText> Is this business good? --- <RadioButton> (Yes) <RadioButton> (No) Type of business? -------- <Spinner> (Education\r\nHealth\r\nComputers\r\nFood\r\nRetail\r\nOther) etc, However right now it's doing this, literally, these are the values I see. Store# ------------------ nothing Address ----------------- nothing Is this business good?--- <RadioButton> (option_value) <RadioButton> (option_value) Are they nice people?---- <CheckBox> So this may of been more code than you needed to see but I wanted to make sure you fully understood what was going on. *edited to show changes After the edit it's working, kind of... I will post another question if I need more help with this. Original Problem solved. Special Thanks @sarwar A: It looks like you're testing the wrong thing when you go to inflate your layout. This is where you get into trouble: for (String key : map.keySet()) { You go and test all the keys in each map against the different types of fields. But there's no reason to have that inner loop at all. Your field_type is always contained in: map.get(TAG_FIELD) Likewise if you want the display_name, you can get it with map.get(TAG_NAME)Cut out the inner loop and compare your types to the correct values and you should be on the right track.
Q: Disabling Server 2003 Wonderware App Server Page File on VMware There are so many threads on whether you should mess with the page file or not. This scenario describes a unique circumstance that is real world in my production environment. The conclusion I've come to in order to fix my problem is to disable the page file. I'm running a series of guest VMs, all of which run Server 2003 Enterprise Edition (inorite?). For my physical hosts, I'm running HP DL380 G7's loaded with VMware's ESXi 5.0 (managed via vCenter). For storage I have an HP P2000 G3 SAS array loaded with sixteen 300 GB 10k SAS drives in RAID 6; call it LUN01. These virtual servers make up our Wonderware environment, with a single SQL server and Historian, two application servers, and two terminal servers. The work that this stack performs is mission critical and determines whether the facility can serve its function or not (i.e., when the server goes down, the business goes down). Recently, several disk failures in the P2000 array caused me to rethink the architecture from the ground up. Reconstructing disks in the array severely hurt performance, to the point where the Wonderware app became completely unresponsive. Since these VMs all run I/O-intensive applications and RAID reconstruction places such a high demand on the array, I've determined that the bottleneck during disk reconstruction occurs because of application server disk writes, seemingly because the app engine is using the system page file instead of RAM. The amount of network I/O thus becomes directly linked to disk I/O; consequently a severe performance impact on the disks during reconstruction directly impacts app server I/O. It makes very little sense why it's designed this way, but it perfectly explains why a server that stores nothing locally (an app server) would sustain a 10 Mbps disk write rate (per VMware performance statistics for the app server VM). So...
what I'm thinking is, given the circumstances, I want to disable the page file in the guest OS (Server 2003 EE) to prevent the deployed Wonderware app engine from creating such high disk I/O demands... and as a result lessen the impact of future disk reconstructions in the RAID. What do you think? Does this justify disabling the page file? Am I overlooking another solution to minimize the performance impact of RAID reconstruction? A: I was able to figure this out with a lot of phone time with Wonderware. Basically, inside each App Engine deployed to the Galaxy there is a configurable parameter called the "Checkpoint Period." The Checkpoint Period is the interval at which ArchestrA writes the current state (values, variables, etc.) of the application to disk. It does this so that in the event of a server reboot or system crash, the application can resume from its most recent state without data loss. If your application is designed to store values in Galaxy objects themselves, you have to weigh how much data loss you can tolerate. If your application is designed to merely process data, and offloads the job of storing information to a SQL server or leaves the values in a Tag Database, then you don't risk losing any data by increasing this value. ArchestrA currently has about 9000 tags. What this means is that between any two seconds, 9000 values could have changed, resulting in 9000 values to write to disk... every second. Most of these values overwrite values that were stored the previous second. Systems that are designed to monitor analogue inputs will always have a massive number of changes every second. As an admin you have to decide how much of that is noise and how much of that data needs to be captured for trending/tracking etc. Increasing the default value of 0 ms (which the system interprets as "no default specified, use 1 second") to 5000 ms dropped my disk activity from over 300 IOPS to less than 25 IOPS.
We actually staggered each App Engine with a prime number near 5000 ms so that each engine's Checkpoint Period would make independent requests to the disks for I/O activity. This is particularly important for virtualization of controls systems. Performance and scalability become an issue when you have many servers running on the same array.
Q: difference in determinants of Positive Definite Matrices Let $A$ and $B$ be positive definite matrices of the same size, such that $A>B$ (i.e. $A-B$ is positive semidefinite). I wonder if $\det(A)\geq\det(B)$? I have tried to find a counterexample, but couldn't find one. A: Since $A$ and $B$ are symmetric and real, the Min-Max Theorem applies: $$ \lambda_k(A)=\min_{\dim M=k}\max\{\langle Ax,x\rangle:\ x\in M, \|x\|=1\}, $$ where $\lambda_k(A)$ denotes the $k^{\rm th}$ eigenvalue of $A$ in nondecreasing order. As $\langle Ax,x\rangle\geq\langle Bx,x\rangle$ for all $x$, it follows that $$\lambda_k(A)\geq\lambda_k(B),\ \ \ k=1,\ldots,n.$$ Then $$ \det A=\prod_k\lambda_k(A)\geq\prod_k\lambda_k(B)=\det B. $$
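A quick numerical sanity check of the conclusion on a hand-picked 2x2 example (the matrices and the det2 helper are purely illustrative):

```python
def det2(m):
    # Determinant of a 2x2 matrix given as [[a, b], [c, d]]
    (a, b), (c, d) = m
    return a * d - b * c

A = [[3.0, 1.0], [1.0, 2.0]]  # symmetric, positive definite
B = [[1.0, 0.0], [0.0, 1.0]]  # the identity, also positive definite
# A - B = [[2, 1], [1, 1]] has positive trace and determinant,
# so A - B is positive definite and the hypothesis A > B holds.
```

Here det2(A) is 5 and det2(B) is 1, consistent with $\det A\geq\det B$.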
Q: Is Java Servlet Container also a class? Is the container an environment, or just a Java class that can call methods in servlets? What are the components of a servlet container? A: The servlet container is a Java application (multiple Java classes, with a main class); this application implements the Java Servlet Specification: https://jcp.org/aboutJava/communityprocess/final/jsr369/index.html
Q: StackMob or Server (MySql) to handle user account I am developing an application for Android/iOS that has a login and registration form. Currently I use JSON to store an email, an encrypted password, and other user information in a MySQL server. But I ran into a problem (on Android): how to send the user an email link to recover his password. I don't want to send his password directly by email; I'd like to send a unique link through which he can open the app again and submit a new password. I can't find a way to do it with MySQL, and StackMob shows me an easier way. Problems: What is the safest way? Should I move my whole database from MySQL to the StackMob cloud server, or only the user email and password? Compatibility: In StackMob do I need to have two databases, one for the Android platform and another for iOS? StackMob or MySQL, what does your experience say? I am thinking about implementing Facebook integration; it looks easier in StackMob than doing it myself for Android and iOS. A: I'm the Platform Evangelist for StackMob. I'll do my best to answer your questions. StackMob does provide a Password Reset Feature. http://developer.stackmob.com/tutorials/android/Forgot-Password Depends. If you want to add access controls to your data based on the currently logged in user, StackMob helps you do this. You can control create, read, update and delete permissions based on the user through relationships, roles and ownership (who created it). No, you only have one StackMob app (and set of data) for all platforms iOS, Android, HTML5, etc. You use the same API keys for all versions of the app. It's very easy to integrate Facebook login with StackMob so your users can authenticate using Facebook. Once logged in you can access other data based on the user's permissions (see #2)
Q: Multiplication of n functions I want to write a function in Python that returns the multiplication of n functions (f1(x) * f2(x) * f3(x) * ... * fn(x)). I was thinking of something like: def mult_func(*args): return lambda x: args[0](x) * args[1](x) ... but I don't know exactly how to loop through the n functions in args. Thank you. A: It's very simple - just use reduce (note that in Python 3 it must be imported from functools): from functools import reduce from operator import mul def mult_func(*args): return lambda x: reduce(mul, (n(x) for n in args), 1) That's just a generator expression looping through the functions, reducing by multiplication. A: args is just a tuple, but it will be difficult to iterate over them the way you need to in a lambda expression (unless you use reduce). Define a nested function instead. def mult_func(*args): def _(x): rv = 1 for func in args: rv *= func(x) return rv return _
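A self-contained version of the reduce approach with a quick check (the sample functions are arbitrary):

```python
from functools import reduce  # a builtin in Python 2; lives in functools since Python 3
from operator import mul

def mult_func(*funcs):
    # Return a function whose value at x is the product of all funcs at x;
    # the initializer 1 makes the empty product well defined.
    return lambda x: reduce(mul, (f(x) for f in funcs), 1)

square = lambda x: x * x
double = lambda x: 2 * x
product = mult_func(square, double)  # product(x) == x*x * 2*x
```

product(3) evaluates to 9 * 6 = 54, and mult_func() with no arguments returns the constant function 1.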
Q: Swift - Protocol default implementation in extension with generic superclass constraint I am trying to constrain a protocol extension to a generic class. My goal is to provide a default implementation of a protocol where Self is any instance that is a subclass of some generic class. Consider the below example:
protocol Printable {
    var value: String { get }
}

class Printer<P: Printable> {
    let printable: P
    init(printable: P) {
        self.printable = printable
    }
}

protocol Press {
    func print()
}

// error: reference to generic type 'Printer' requires arguments in <...>
extension Press where Self: Printer {
    func print() {
        // Do Something with self.printable.value
    }
}
The compiler gives the error error: reference to generic type 'Printer' requires arguments in <...>. I don't understand why this should not be allowed. As long as Press is some kind of Printer which always works with some kind of Printable, things should work, right? Or am I missing something? Can you point out what could be the right way of achieving something like this?
A: This is because Printer<A> and Printer<B> are different types, even if A and B are both Printable, so due to the possible ambiguity the compiler generates an error. You need the following (tested with Xcode 11.4)
extension Press {
    func print<P>() where Self: Printer<P>, P: Printable {
        // Do Something with self.printable.value
    }
}
Q: nodejs - parsing and storing the return of a function into an array How would I go about parsing the output returned from a function call, line by line, into an array where one line of output would be one array element? I can naively do it by storing to a file and then reading the file back in, but this seems to be unnecessary overhead and not a very elegant and tidy solution.
A: You probably want to split the data on the newline character, e.g.
var myArray = data.split('\n');
converts a file
Lorem ipsum dolor sit amet, consectetur adipiscing elit,
sed do eiusmod tempor incididunt ut labore et dolore magna
aliqua. Ut enim ad minim veniam, quis nostrud exercitation
to an array:
['Lorem ipsum dolor sit amet, consectetur adipiscing elit, ',
'sed do eiusmod tempor incididunt ut labore et dolore magna ',
'aliqua. Ut enim ad minim veniam, quis nostrud exercitation']
So, to parse the data on the fly you could do:
var myProcessedArray = myFunction() // produces some multi-line data
    .split('\n') // now we have array of strings
    .map(function(line) { // process lines
        line = line.replace('ipsum', ''); // removes the first occurrence of 'ipsum'
        return line;
    })
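As a small sketch of the same idea, here is a hypothetical helper (the name `toLines` is mine) that also tolerates Windows `\r\n` endings and a trailing newline, which a plain `split('\n')` would turn into a dangling empty element:

```javascript
// Split multi-line string output into an array, one line per element.
// Handles Unix (\n) and Windows (\r\n) endings, and drops a single
// trailing newline so the last element is not an empty string.
function toLines(output) {
  return output.replace(/\r?\n$/, '').split(/\r?\n/);
}

console.log(toLines('Lorem ipsum\ndolor sit\namet\n'));
// [ 'Lorem ipsum', 'dolor sit', 'amet' ]
```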
Q: Nested for loop through different sets of data in R How can I create a nested for loop that uses different data sets (each consisting of several data files) as input and then save the results in a variable-specific way? I have written a for loop that subsets different climate data files for one country and then sums up the values for Temperature. The data looks like this and is given for every day in every region of both countries (one file = one region)
Date |Prec |Temperature
----------|-----|-----------
13-01-1992| 1 | 1
14-01-1992| 0 | 1.5
15-01-1993| 0.8 | -0.4
16-01-1993| 0 | -2.2
17-01-1994| 0 | -2.35
13-01-1994| 0.3 | -2.95
14-01-1995| 1 | -8.95
15-01-1995| 2 | -7.25
16-01-1996| 1.5 | -6
17-01-1996| 0 | -8.3
13-02-1997| 1 | -0.3
14-02-1997| 0.1 | -0.15
15-02-1998| 0 | -2.5
16-02-1998| 0.2 | -3.4
17-02-1999| 0.9 | -0.4
16-03-1999| 2.6 | 8.4
17-03-2000| 1.7 | 11
18-03-2000| 4.7 | 4.65
19-03-2001| 1 | 2.95
20-03-2001| 0.6 | 4.7
13-08-2002| 2 | 22.35
14-08-2002| 1 | 20
15-08-2003| 1.7 | 21.4
16-08-2003| 0.5 | 21.55
17-08-2004| 0.4 | 21.5
17-02-2004| 0.3 | -0.6
18-02-2005| 0.8 | -3.4
19-02-2005| 1.2 | -3
20-02-2006| 0.8 | 2
21-02-2006| 6 | 1.2
Now I want this to run over data sets of two different countries. A different number of data files belongs to each country. I tried this:
Temperature<-matrix(1995:2006,12,1)
Country_A<-c("1.csv","2.csv","3.csv")
Country_B<-c("4.csv","5.csv")
country<-c(Country_A, Country_B)
country_names<-c("Country_A "," Country_B ")
for(j in 1:2)
{for(i in country[j])
{
name <- country_names[j]
Data<-read.csv(i, header=TRUE, sep = ",")
Data$Dates<-as.Date(Data$Date, "%d-%m-%Y")
Data95<-subset(Data, Dates>="1995-01-01")
Data$Years<- as.numeric(format(Data$Dates, "%Y"))
Temperature<-cbind(Temperature, aggregate(Data95$Column1, by= list(Data95$Years),FUN=sum))}}
Instead of looping through one country after the other, this way only files 1 and 2 are addressed.
I think the problem is about country<-c(Country_A, Country_B) I assume that an array could be the solution to address the countries separately and maybe also to save the Temperature results per country. Unfortunately I am quite new to R and therefore I don't know how to set this up. I would be very happy about any help!
A:
library(plyr) # for ddply
Temperature<-matrix(1995:2006,12,1)
# Below are just for understanding. Country_A represents just the names of files in the A directory
# Country_A<-c("1.csv","2.csv","3.csv")
# Country_B<-c("4.csv","5.csv")
lA=list.files(path = "countryA_pathname", pattern= ".csv")
lB=list.files(path = "countryB_pathname", pattern= ".csv")
l1A = paste0("countryA_pathname/", lA)
# l1A = c("countryA_pathname/1.csv", "countryA_pathname/2.csv", "countryA_pathname/3.csv")
l1B = paste0("countryB_pathname/", lB)
# l1B = c("countryB_pathname/4.csv", "countryB_pathname/5.csv")
abc <- function(path) {
  Data = read.csv(path)
  Data$Date<-as.Date(Data$Date, "%d-%m-%Y")
  Data$Years<- as.numeric(format(Data$Date, "%Y"))
  Data95 = subset(Data , Date >="1995-01-01")
  Temperature <- ddply(Data95, "Years", function(x) sum(x$Temperature))[-1] #JUST EXTRACTS THE SUM COLUMN
  Temperature
}
LA = lapply(l1A, abc)
LB = lapply(l1B, abc)
dA = cbind(Temperature, as.data.frame(LA))
colnames(dA) <- c("Temperature", lA)
dB = cbind(Temperature, as.data.frame(LB))
colnames(dB) <- c("Temperature", lB)
Hope this works
Q: Where is the Error in my C++ code? Here is the error screenshot: http://prntscr.com/9n6ybt Here is the code:
#include <iostream>
using namespace std;

int main() {
    int a, b;
    cin>>a>>b;
    for(int i=a;i<=b;i++)
    {
        if (b%i==0)
        {
            cout << i << " ";
        }
    }
    return 0;
}
A:
for(int i=a;i<=b;i++)
{
    if (b%i==0)
    {
        cout << i << " ";
    }
}
will give a division by zero if i == 0. You'll have to check the input, or the value of i, for example:
for(int i=a; i<=b; i++)
{
    if (i > 0 && b%i==0)
    {
        cout << i << " ";
    }
}
If i == 0, b%i==0 will not be evaluated.
A: You are not handling the case where i might be 0 (division by 0), so b % i is undefined. You can solve it this way:
if (i==0) continue;
Q: Does Olympus OM-D E-M1 Mark II High Res Shot mode work with manual lenses? I've read that Olympus OM-D E-M1 Mark II High Res Shot mode has some aperture settings limitations. Does it work with fully manual legacy lenses that don't report aperture to the camera at all? A: I have the OM-D E-M1 II, and as far as I can tell, High Res Shot (HRS) can be used with any lens, provided you use some common sense. I have used HRS with a Samyang 7.5 fisheye lens, an OM 500 f8 mirror lens and an adapted (EF) Sigma 10-20 lens, among others. Because of the way HRS works, I try to keep shutter speeds as high as possible. The longer the exposure, the greater the chance of movement in the scene or camera vibration. I would limit apertures to f8/f11 even with a legacy lens, as diffraction will rob you of image sharpness above this, and sharpness is one of the reasons to use HRS after all. A: Below is the info I found so far regarding this mode. So far, no definitive conclusion can be made regarding your problem... but I would venture to say that legacy lenses seem to have no aperture limitation.
From the manual, no aperture limitation
The manual of the Olympus OM-D E-M1 Mark II doesn't provide any hint about aperture limitation when using High Res Shot. The dedicated section can be found on pages 48 and 99. The only parts related to High Res Shot's limitations can be found on the following pages:
Page 91: Bracketing can not be combined with HDR, interval-timer photography, digital shift, multiple-exposure photography, or high res shots.
Page 97: The following is not available while keystone compensation is in effect: [...] High Res Shot
From Internet: no aperture limitation?!
This website gets more in depth about how the High Res Shot mode works.
Aperture is mentioned 13 times, mostly about focus shifting issues when stopping down a lens. BUT, in the commentary you can read: With native lenses you are limited to f8 or wider, legacy lenses can be used at whatever aperture you like of course. So it seems that legacy lenses can be used! This website, apparently from an Olympus employee, indicates: Camera settings limitations: shutter speed not longer than 8 sec, aperture not narrower than F8, ISO not higher than 1600 and flash sync not faster than 1/50sec (previously in E-M5 Mark II or PEN-F, flash sync limit was 1/20sec) OK, no precision regarding the type of lens... This page says, when using High Res Shot mode (lens not mentioned): No aperture we tried (from f/2.8 to f/8) was able to begin resolving any of dots with the E-M5 II, so diffraction limiting is not to blame. It only confirms that High Res Shot mode is working at least from f/2.8 to f/8.
Conclusion
No clear indication regarding fully manual legacy lenses that don't report aperture to the camera... but I would venture to say that legacy lenses seem to have no aperture limitation.
Q: For loop in batch file stopping unwantedly I have a batch file doing a for loop that looks something like this:
for /f %%F in ("%DIR%\*.xml") do something.exe --Option1=Value1 --Option2=Value2 ... --File="%%~fF"
where "something" is a black box I don't own (I don't know how it works, and it can't be modified) that performs a certain operation. This runs correctly and returns a "completed successfully" message, however for some reason it forces my for loop to stop and run the next line of my batch file without performing the operation for all the rest of the files. Is there any way I can force it to continue and not exit the for loop?
A: It seems that you want to loop through file names, so remove the /f flag: for /f treats a quoted string as a literal to parse rather than as a wildcard pattern, so your loop body runs only once.
for %%G in ("%DIR%\*.xml") do ...
Since your something.exe program terminates after finishing its work, you can also start it in a separate process. Simply change
something.exe ... --File="%%~fF"
to
start "" something.exe ... --File="%%~fF"
Q: Efficient string similarity grouping Setting: I have data on people, and their parents' names, and I want to find siblings (people with identical parent names).
pdata<-data.frame(parents_name=c("peter pan + marta steward",
"pieter pan + marta steward",
"armin dolgner + jane johanna dough",
"jack jackson + sombody else"))
The expected output here would be a column indicating that the first two observations belong to family X, while the third and fourth observations are each in a separate family. E.g:
person_id parents_name family_id
1 "peter pan + marta steward", 1
2 "pieter pan + marta steward", 1
3 "armin dolgner + jane johanna dough", 2
4 "jack jackson + sombody else" 3
Current approach: I am flexible regarding the distance metric. Currently, I use Levenshtein edit-distance to match obs, allowing for two-character differences. But other variants such as "largest common sub string" would be fine if they run faster. For smaller subsamples I use stringdist::stringdist in a loop or stringdist::stringdistmatrix, but this is getting increasingly inefficient as sample size increases. The matrix version explodes once a certain sample size is used. My terribly inefficient attempt at looping is here:
#create data of the same complexity using random last-names
#(4mio obs and ~1-3 kids per parents)
pdata<-data.frame(parents_name=paste0(rep(c("peter pan + marta ",
"pieter pan + marta ",
"armin dolgner + jane johanna ",
"jack jackson + sombody "),1e6),stringi::stri_rand_strings(4e6, 5)))

for (i in 1:nrow(pdata)) {
  similar_fatersname0<-stringdist::stringdist(pdata$parents_name[i],pdata$parents_name[i:nrow(pdata)],nthread=4)<2
  #[create grouping indicator]
}
My question: There should be substantial efficiency gains, e.g. because I could stop comparing strings once I found them to be sufficiently different in something that is easier to assess, e.g. string length, or first word. The string length variant already works and reduces complexity by a factor ~3. But that's by far too little.
Any suggestions to reduce computation time are appreciated. Remarks: The strings are actually in unicode and not in the Latin alphabet (Devanagari). Pre-processing to drop unused characters etc. is done.
A: There are two challenges:
A. The parallel execution of Levenshtein distance - instead of a sequential loop
B. The number of comparisons: if our source list has 4 million entries, theoretically we should run 16 trillion Levenshtein distance measures, which is unrealistic, even if we resolve the first challenge.
To make my use of language clear, here are our definitions:
we want to measure the Levenshtein distance between expressions.
every expression has two sections, the parent A full name and the parent B full name, which are separated by a plus sign
the order of the sections matters (i.e. two expressions (1, 2) are identical if Parent A of expression 1 = Parent A of expression 2 and Parent B of expression 1 = Parent B of expression 2. Expressions will not be considered identical if Parent A of expression 1 = Parent B of expression 2 and Parent B of expression 1 = Parent A of expression 2)
a section (or a full name) is a series of words, which are separated by spaces or dashes and correspond to the first name and last name of a person
we assume the maximum number of words in a section is 6 (your example has sections of 2 or 3 words, I assume we can have up to 6)
the sequence of words in a section matters (the section is always a first name followed by a last name and never the last name first, e.g. Jack John and John Jack are two different persons).
there are 4 million expressions
expressions are assumed to contain only English characters. Numbers, spaces, punctuation, dashes, and any non-English character can be ignored
we assume the easy matches are already done (like the exact expression matches) and we do not have to search for exact matches
Technically the goal is to find series of matching expressions in the 4-million expressions list.
Two expressions are considered matching if their Levenshtein distance is less than 2. Practically we create two lists, which are exact copies of the initial 4-million expressions list. We call them the Left list and the Right list. Each expression is assigned an expression id before duplicating the list. Our goal is to find entries in the Right list which have a Levenshtein distance of less than 2 to entries of the Left list, excluding the same entry (same expression id). I suggest a two-step approach to resolve the two challenges separately. The first step will reduce the list of the possible matching expressions, the second will simplify the Levenshtein distance measurement since we only look at very close expressions. The technology used is any traditional database server because we need to index the data sets for performance.
CHALLENGE A
Challenge A consists of reducing the number of distance measurements. We start from a maximum of approx. 16 trillion (4 million to the power of two) and we should not exceed a few tens or hundreds of millions. The technique to use here consists of searching for at least one similar word in the complete expression. Depending on how the data is distributed, this will dramatically reduce the number of possible matching pairs. Alternatively, depending on the required accuracy of the result, we can also search for pairs with at least two similar words, or with at least half of similar words. Technically I suggest to put the expression list in a table. Add an identity column to create a unique id per expression, and create 12 character columns. Then parse the expressions and put each word of each section in a separate column.
This will look like (I have not represented all the 12 columns, but the idea is below):
|id | expression | sect_a_w_1 | sect_a_w_2 | sect_b_w_1 |sect_b_w_2 |
|1 | peter pan + marta steward | peter | pan | marta |steward |
There are empty columns (since there are very few expressions with 12 words) but it does not matter. Then we replicate the table and create an index on every sect... column. We run 12 joins which try to find similar words, something like
SELECT L.id, R.id
FROM left_table L
JOIN right_table R ON L.sect_a_w_1 = R.sect_a_w_1 AND L.id <> R.id
We collect the output in 12 temp tables and run a union query of the 12 tables to get a short list of all expressions which have a potential matching expression with at least one identical word. This is the solution to our challenge A. We now have a short list of the most likely matching pairs. This list will contain millions of records (pairs of Left and Right entries), but not billions.
CHALLENGE B
The goal of challenge B is to process a simplified Levenshtein distance in batch (instead of running it in a loop). First we should agree on what a simplified Levenshtein distance is. First we agree that the Levenshtein distance of two expressions is the sum of the Levenshtein distances of all the words of the two expressions which have the same index. I mean the Levenshtein distance of two expressions is the distance of their two first words, plus the distance of their two second words, etc. Secondly, we need to invent a simplified Levenshtein distance. I suggest to use the n-gram approach with only grams of 2 characters which have an index absolute difference of less than 2. E.g. the distance between peter and pieter is calculated as below:
Peter
1 = pe
2 = et
3 = te
4 = er
5 = r_
Pieter
1 = pi
2 = ie
3 = et
4 = te
5 = er
6 = r_
Peter and Pieter have 4 common 2-grams with an index absolute difference of less than 2: 'et','te','er','r_'.
There are 6 possible 2-grams in the largest of the two words, so the distance is 6 - 4 = 2. The Levenshtein distance would also be 2 because there's one move of 'eter' and one letter insertion 'i'. This is an approximation which will not work in all cases, but I think in our situation it will work very well. If we're not satisfied with the quality of the results we can try with 3-grams or 4-grams or allow a larger than 2 gram sequence difference. But the idea is to execute much fewer calculations per pair than in the traditional Levenshtein algorithm. Then we need to convert this into a technical solution. What I have done before is the following:
First isolate the words: since we need only to measure the distance between words, and then sum these distances per expression, we can further reduce the number of calculations by running a distinct select on the list of words (we have already prepared the list of words in the previous section).
This approach requires a mapping table which keeps track of the expression id, the section id, the word id and the word sequence number for each word, so that the original expression distance can be calculated at the end of the process.
We then have a new list which is much shorter, and contains a cross join of all words for which the 2-gram distance measure is relevant. Then we want to batch process this 2-gram distance measurement, and I suggest to do it in a SQL join.
This requires a pre-processing step which consists of creating a new temporary table which stores every 2-gram in a separate row – and keeps track of the word id, the word sequence and the section type. Technically this is done by slicing the list of words using a series (or a loop) of substring selects, like this (assuming the word list tables - there are two copies, one Left and one Right - contain 2 columns word_id and word):
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 1 AS gram_seq, SUBSTRING(word,1,2) AS gram
FROM left_word_table
And then
INSERT INTO left_gram_table (word_id, gram_seq, gram)
SELECT word_id, 2 AS gram_seq, SUBSTRING(word,2,2) AS gram
FROM left_word_table
Etc.
Something which will make "steward" look like this (assume the word id is 152)
| pk | word_id | gram_seq | gram |
| 1 | 152 | 1 | st |
| 2 | 152 | 2 | te |
| 3 | 152 | 3 | ew |
| 4 | 152 | 4 | wa |
| 5 | 152 | 5 | ar |
| 6 | 152 | 6 | rd |
| 7 | 152 | 7 | d_ |
Don't forget to create an index on the word_id, the gram and the gram_seq columns, and the distance can be calculated with a join of the left and the right gram list, where the ON looks like
ON L.gram = R.gram AND ABS(L.gram_seq - R.gram_seq) < 2 AND L.word_id <> R.word_id
The distance is the length of the longest of the two words minus the number of the matching grams. SQL is extremely fast to make such a query, and I think a simple computer with 8 gigs of RAM would easily do several hundred million rows in a reasonable time frame. And then it's only a matter of joining the mapping table to calculate the sum of word to word distance in every expression, to get the total expression to expression distance.
A: You are using the stringdist package anyway, does stringdist::phonetic() suit your needs?
It computes the soundex code for each string, eg:
phonetic(pdata$parents_name)
[1] "P361" "P361" "A655" "J225"
Soundex is a tried-and-true method (almost 100 years old) for hashing names, and that means you don't need to compare every single pair of observations. You might want to go further and do soundex on first name and last name separately for father and mother.
A: My suggestion is to use a data science approach to identify only similar (same cluster) names to compare using stringdist. I have modified a little bit the code generating "parents_name", adding more variability in first and second names in a scenario close to reality.
num<-4e6
#Random length
random_l<-round(runif(num,min = 5, max=15),0)
#Random strings in the first and second name
parent_rand_first<-stringi::stri_rand_strings(num, random_l)
order<-sample(1:num, num, replace=F)
parent_rand_second<-parent_rand_first[order]
#Paste first and second name
parents_name<-paste(parent_rand_first," + ",parent_rand_second)
parents_name[1:10]
Here starts the real analysis: first extract features from the names such as global length, length of the first name, length of the second name, number of vowels and consonants in both first and second name (and any other of interest). After that, bind all these features and cluster the data.frame in a high number of clusters (e.g. 1000)
features<-cbind(nchars,nchars_first,nchars_second,nvowels_first,nvowels_second,nconsonants_first,nconsonants_second)
n_clusters<-1000
clusters<-kmeans(features,centers = n_clusters)
Apply stringdistmatrix only inside each cluster (containing similar couples of names)
dist_matrix<-NULL
for(i in 1:n_clusters)
{
  cluster_i<-clusters$cluster==i
  cluster_names<-as.character(parents_name[cluster_i])
  dist_matrix[[i]]<-stringdistmatrix(cluster_names,cluster_names,"lv")
}
In dist_matrix you have the distance between each pair of elements in the cluster, and you are able to assign the family_id using this distance.
To compute the distance in each cluster (in this example) the code takes approximately 1 sec (depending on the size of the cluster); in 15 minutes all the distances are computed. WARNING: dist_matrix grows very fast; in your code it is better to analyze it inside the for loop, extracting the family_id, and then discarding it.
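To make the first answer's simplified measure concrete, here is a small Python sketch of the 2-gram approximation it describes (2-character grams counted as matching when their positions differ by less than 2); the function name is mine and this is only an illustration, not part of stringdist:

```python
def gram2_distance(left, right):
    """Approximate Levenshtein distance between two words: the length of the
    longer word's 2-gram list minus the number of matching 2-grams whose
    positions differ by less than 2."""
    def grams(word):
        padded = word + "_"  # trailing pad, as in the 'r_' example above
        return [padded[i:i + 2] for i in range(len(word))]

    l, r = grams(left), grams(right)
    matches = 0
    used = set()  # each right-hand gram may be matched at most once
    for i, g in enumerate(l):
        for j, h in enumerate(r):
            if j not in used and g == h and abs(i - j) < 2:
                matches += 1
                used.add(j)
                break
    return max(len(l), len(r)) - matches

print(gram2_distance("peter", "pieter"))  # 2, matching the worked example
```

On "peter" vs "pieter" the grams 'et', 'te', 'er', 'r_' match within the position tolerance, giving 6 - 4 = 2, the same value the answer derives by hand.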
Q: Converting JavaScript function to Java I want to change this code from JavaScript to Java servlet. Can anyone guide me in finding the solution? var dob1 = document.getElementById(id).value; var today = new Date(), dob = new Date(dob1), age = new Date(today - dob).getFullYear() - 1970; A: Use the Calendar API. String dobString = "1978-03-26"; Date dobDate = new SimpleDateFormat("yyyy-MM-dd").parse(dobString); Calendar dobCalendar = Calendar.getInstance(); dobCalendar.setTime(dobDate); Calendar today = Calendar.getInstance(); int age = -1; while (today.after(dobCalendar)) { age++; today.add(Calendar.YEAR, -1); } System.out.println(age); // 32 Since the Calendar API is horrible, I'd suggest JodaTime instead. String dobString = "1978-03-26"; DateTime dobDate = DateTimeFormat.forPattern("yyyy-MM-dd").parseDateTime(dobString); DateTime today = new DateTime(); int age = Years.yearsBetween(dobDate, today).getYears(); System.out.println(age); // 32
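If you can target Java 8 or newer, the built-in java.time API (the successor to Joda-Time) handles this without any loop; a sketch with a fixed reference date so the result is reproducible:

```java
import java.time.LocalDate;
import java.time.Period;

public class AgeCalculator {
    // Whole years between an ISO yyyy-MM-dd date of birth and a reference date.
    static int age(String dobString, LocalDate today) {
        LocalDate dob = LocalDate.parse(dobString); // ISO-8601 format by default
        return Period.between(dob, today).getYears();
    }

    public static void main(String[] args) {
        System.out.println(age("1978-03-26", LocalDate.of(2010, 9, 1))); // 32
    }
}
```

In production code you would pass `LocalDate.now()` as the reference date instead of a fixed one.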
Q: Trouble with CSS Hover and Active I'm trying to change the image of the button when it's hovered or active. Currently it will show the button, but when you hover over it, it doesn't change. I've tried giving the buttons their own id as well as just replacing the current image with another, but it doesn't work. html:
<div id="navcontainer" class="column five">
<ul id="navmain">
<li><a href="index.html" id="home">Home</a></li>
<li><a href="philosophy.html" id="btnphil">Philosophy</a></li>
<li><a href="econews.html" id="btnnews">Eco News</a></li>
<li><a href="diy.html" id="btndiy">DIY</a></li>
<li><a href="takeaction.html" id="btntake">Take Action </a></li>
</ul>
</div><!-- .sidebar#sideLeft -->
CSS:
#navcontainer{
padding:10px 30px;
width:220px;
float: left;
margin-top:480px;
}
#navmain li{
list-style:none;
}
#navmain li, #navmain a{
text-decoration:none;
height:38px;
width:153px;
background-image: url('../images/button.png') ;
background-position: center;
text-align: center;
color:#000;
margin-left:-10px;
margin-top:20px;
vertical-align: -22%;
#navmain, #home a:hover {
text-decoration:none;
height:38px;
width:153px;
background-image: url('../images/buttonhover.png') ;
background-position: center;
text-align: center;
color:#000;
margin-left:-10px;
margin-top:20px;;}
}
#navmain a:active {
border-top-color: #297ab0;
background: #297ab0;
}
A: You have to clean up your CSS selectors. You're not being consistent:
// This is applying the image
#navmain li, #navmain a{...}
// This is swapping but it starts with "#home" instead of "#navmain"
#navmain, #home a:hover {...}
So try:
#navmain a:hover{...}
Note also that your :hover rule is nested inside the #navmain li, #navmain a block; plain CSS does not support nested rules, so close that block before declaring the hover rule.
Q: Is it possible to retrieve data entered by the user in a dynamic table I have created a dynamic table using html+php with inputs like a form (this is a matrix in reality), and I want to know if it is possible to retrieve the data the user entered in the dynamic table. This is my code:
<?php
$rows = 3; // define number of rows
echo ' <form action="f.php" method="post">';
echo "<table border='1'>";
for($tr=1;$tr<=$rows;$tr++){
echo "<tr>";
echo "<th> E".$tr." </th>";
for($td=1;$td<=$rows;$td++){
echo '<td><input type="number" name="etat" placeholder="nb d etat" /></td>';
}
echo "</tr>";
}
echo "</table>";
echo '<input type="submit" value="Create Table">';
echo '</form>'
?>
A: Yes, it is possible, but you have to build the form giving each input its row and column number, because you want a matrix:
$rows = 3; // define number of rows
echo ' <form action="f.php" method="post">';
echo "<table border='1'>";
for($tr=1;$tr<=$rows;$tr++){
echo "<tr>";
echo "<th> E".$tr." </th>";
for($td=1;$td<=$rows;$td++){
echo '<td><input type="number" name="etat_'.$tr.'_'.$td.'" placeholder="nb d etat" /></td>';
}
echo "</tr>";
}
echo "</table>";
echo '<input type="submit" name="submit" value="Create Table">';
echo '</form>';
in f.php fetch the data:
if(isset($_POST['submit']))
{
print_r($_POST);
}
It gives you this output:
Array
(
[etat_1_1] => 1 //means 1st row 1st column
[etat_1_2] => 2 //means 1st row 2nd column
[etat_1_3] => 3 //means 1st row 3rd column
[etat_2_1] => 4 //means 2nd row 1st column and so on...
[etat_2_2] => 5
[etat_2_3] => 6
[etat_3_1] => 7
[etat_3_2] => 8
[etat_3_3] => 9
[submit] => Create Table
)
Q: Allow users access only to their own data in Firebase database? I'm trying to have a data structure like this and to ensure that a user can only pull in their own data, since all the processing is done client side. What database security rules would I have to use so that User1 can access their own posts, but cannot access User2's posts? (I'm using Firebase web) Sample database structure:
{
  "posts" : {
    "001" : {
      "text" : "note 1",
      "userID" : "User1"
    },
    "002" : {
      "text" : "note 2",
      "userID" : "User1"
    },
    "003" : {
      "text" : "note 3",
      "userID" : "User2"
    }
  }
}
Sample database query:
firebase.database().ref('/posts/').once('value').then(function(snapshot) {
  console.log(snapshot.val()); // Returns all 3 posts
});
A: In your current structure it is very easy to secure data access to each post's creator:
{
  "rules": {
    "posts": {
      "$postid": {
        ".read": "data.child('userID').val() === auth.uid"
      }
    }
  }
}
This is all that is needed: now each user can only read their own posts. But there is one big problem with this approach: no-one can now read from /posts, so no-one can get a list of all posts. And to grant someone the ability to list posts, you must give them read access to /posts. And since you cannot revoke a permission on a lower level, that means that at that point they can read all posts, not just the ones they created. This is known within Firebase as rules are not filters: you cannot use rules to filter data. We've covered it quite a bit here on Stack Overflow, so I recommend you also check out some other questions on the topic. There are quite a few solutions to the problem.
Secondary index
A common solution is to create a list of the post IDs that each user has access to. This is often called a (secondary) index and adds this additional data to your model:
This is often called a (secondary) index and adds this additional data to your model: { "userPosts" : { "User1": { "001" : true, "002" : true }, "User2": { "003" : true } } } Now you secure access to the original posts as before, but then secure access to the secondary index with: { "rules": { "userPosts": { "$userid": { ".read": "$userid === auth.uid" } } } } So each user can read the list of postIDs they have access to, and can then read each individual post under /posts/postID. Store each user's posts under a separate node In your case, there is a simpler solution. I'd model that data slightly more hierarchical, with each user's posts under their own UID: { "posts" : { "User1": { "001" : { "text" : "note 1", "userID" : "User1" }, "002" : { "text" : "note 2", "userID" : "User1" }, }, "User2": { "003" : { "text" : "note 3", "userID" : "User2" } } } } Now you can secure access with: { "rules": { "posts": { "$userid": { ".read": "$userid === auth.uid" } } } } And each user can read and list their own posts. But do keep the secondary index in mind, since you're likely to need it sooner or later.
Q: WP7 Itemtemplate click event fires on collapsed ExpanderView I am using the ExpanderView control available in the Silverlight Toolkit with some custom templates. It all works well, but when the ExpanderView is collapsed and I click on the area below the Header where an item resides when the ExpanderView is expanded, the click event of that item fires. How can I fix this? Should I somehow remove the tap commands or remove the ItemsPanel when the ExpanderView is collapsed and add it again when it's being expanded?
<DataTemplate x:Key="CustomItemTemplate">
  <Image delay:LowProfileImageLoader.UriSource="{Binding}" Width="156" Height="95" >
    <i:Interaction.Triggers>
      <i:EventTrigger EventName="Tap">
        <cmd:EventToCommand Command="{Binding Storage.ImageTapCommand, Source={StaticResource Locator}}" CommandParameter="{Binding}" />
      </i:EventTrigger>
    </i:Interaction.Triggers>
  </Image>
</DataTemplate>
<toolkit:ExpanderView Grid.Column="1" Header="{Binding}" Expander="{Binding}" IsExpanded="{Binding IsExpanded, Mode=TwoWay}" ItemsSource="{Binding Files}" HeaderTemplate="{StaticResource CustomHeaderTemplate}" ExpanderTemplate="{StaticResource CustomExpanderTemplate}" ItemTemplate="{StaticResource CustomItemTemplate}" >
  <toolkit:ExpanderView.ItemsPanel>
    <ItemsPanelTemplate>
      <toolkit:WrapPanel />
    </ItemsPanelTemplate>
  </toolkit:ExpanderView.ItemsPanel>
</toolkit:ExpanderView>
Here's an example that fixed the issue for me: private void FixExpanderItemsInteractivity(ExpanderView expanderView) { foreach (var item in expanderView.Items) { ContentPresenter contentPresenter = expanderView.ItemContainerGenerator.ContainerFromItem(item) as ContentPresenter; if (contentPresenter != null) { UIElement expanderItemRootElement = VisualTreeHelper.GetChild(contentPresenter, 0) as UIElement; if(expanderItemRootElement != null) { expanderItemRootElement.IsHitTestVisible = expanderView.IsExpanded; } } } } private void Expander_Expanded(object sender, RoutedEventArgs e) { FixExpanderItemsInteractivity(sender as ExpanderView); } private void Expander_Collapsed(object sender, RoutedEventArgs e) { FixExpanderItemsInteractivity(sender as ExpanderView); } private void Expander_LayoutUpdated(object sender, EventArgs e) { FixExpanderItemsInteractivity(sender as ExpanderView); }
Q: Remove node from simpleXML I'm trying to unset a node from a web.config file but it doesn't seem to be working. Anyone know what I'm doing wrong? If there's a better approach, please let me know. $web_config = simplexml_load_file('web.config'); $nodes = $web_config->children(); $att_name = 'myMap'; $value = '1'; $map_node = $nodes[0]->xpath( sprintf('rewrite/rewriteMaps/rewriteMap[@name="%s"]/add[@value="%d"]', $att_name, $value) ); print_r($map_node); // this outputs the correct node if (!empty($map_node)) { unset($map_node) } else { printf('No maps with value: "%d" found', $value); } $web_config->asXML(); A: $web_config = new SimpleXMLElement('web.config',null,true); $map_node = $web_config->xpath( sprintf('//rewrite/rewriteMaps/rewriteMap[@name="%s"]/add[@value="%d"]', 'myMap', 1) ); if (!empty($map_node)) { unset($map_node[0][0]); } $web_config->asXml();
Q: How to use PHP to read a ppt on the web I have a simple question: how can I upload a ppt file and display it on the web? I have tried to Google the question, and I read a page that suggests using Google Docs and pointing it at my ppt file's URL. So, is this achievable? Thanks! A: One Google search of "PHP ppt" yields this. Edit: But it only supports .pptx.
Q: MySQL query ordered by two integer columns Sorry for my basic question; I have spent two hours searching on Stack Overflow. I have a MySQL table where I need to select, ordering by two integer columns.
partners
+----+--------+---------+
| id | status | name    |
+----+--------+---------+
| 1  | 0      | Adam    |
| 2  | 1      | Charles |
| 3  | 1      | Bob     |
| 4  | 0      | Raven   |
+----+--------+---------+
When I use: mysql_query("SELECT name FROM partners ORDER BY id DESC, status DESC"); The result is: Raven Bob Charles Adam But I need this result, always the status=1 rows on top: Bob Charles Raven Adam Where am I going wrong in the query? A: mysql_query("SELECT name FROM partners ORDER BY status DESC, id DESC"); Put the things you want to sort by in the order you want them sorted.
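The accepted ordering can be sanity-checked in a few lines of Python, here using the built-in sqlite3 module instead of MySQL (the ORDER BY semantics are the same for this query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE partners (id INTEGER, status INTEGER, name TEXT)")
conn.executemany("INSERT INTO partners VALUES (?, ?, ?)",
                 [(1, 0, "Adam"), (2, 1, "Charles"), (3, 1, "Bob"), (4, 0, "Raven")])

# status DESC first puts the status=1 rows on top; id DESC then breaks ties.
rows = conn.execute(
    "SELECT name FROM partners ORDER BY status DESC, id DESC").fetchall()
print([name for (name,) in rows])  # ['Bob', 'Charles', 'Raven', 'Adam']
```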
Q: When is impedance matching necessary? Isn't it much less efficient in all other aspects? I understand that discontinuities of the physical properties of conductors create reflections much like dioptres reflect a bit of light, but: in which cases does impedance matching become necessary to avoid that? Is there a simple-ish method to decide quantitatively? (1) My second concern (which probably stems from my limited knowledge of the topic) is about the efficiency of impedance matching: if all impedances including the load must be the same, doesn't that mean that the overall impedance is extremely small and hence that a lot of current is necessary? By "same", is it solely the amplitude or the phase shift as well? (2) Which implies a higher voltage source because of the voltage drops, and lots of power lost as heat? Is it a tradeoff? Let's take the following diagram as an application example. A: It took me a while, but I think I finally understand the nature of your question. You seem not to understand that you cannot measure a transmission line's Z0 with an Ohm meter. A transmission line's impedance is determined by the ratio of its electric field to its magnetic field. This is determined by the line's physical dimensions, not by the materials used to build the line. A coaxial cable's characteristic impedance, for example, is determined by the ratio of its conductor diameters, and we ignore the resistance of the conductors used to build the cable (within reason). A short piece of 50 Ohm cable will typically have conductor resistance values in the micro Ohm range. We use impedance matching in circuits when we need to improve the power transfer between 2 points in the circuit. You asked "when does impedance matching become necessary", and the answer to that depends entirely on the situation. 
It may be the case that a high power circuit will burn out if the magnitude of the reflection coefficient is greater than 0.2, but this amount of reflection can usually be tolerated in low power circuits. In response to the questions below: To research transmission line impedance, search on phrases such as microstrip, stripline, or microstrip or stripline calculator. Here is a Wikipedia article. http://en.wikipedia.org/wiki/Microstrip A simple example would be if you were to drive a 2 Ohm load with a 50 Ohm source. Without impedance matching, only 15% of the power would be delivered to the load. You can match this load to the source with a 1/4 wave, 10 Ohm transmission line. This match will be perfect at the frequency where the transmission line is 1/4 wavelength, so 100% of the power will be delivered to the load at this particular frequency. At other frequencies, the match will be degraded. Response to 2nd question: You made 2 mistakes. First, in the calculation you made, the voltage is only 4%, but the power is proportional to V^2. But this is irrelevant because you can't calculate the power transfer this way. Think of it this way. The impedance of free space is 377 Ohms. If we connect an antenna to this 377 Ohm source, we don't treat the 377 Ohms as a dissipative loss point, but rather as an impedance dictating the ratio of the E and H fields, nothing more. The correct way to calculate power transfer is to calculate Rho, the reflection coefficient. Rho = (ZL - Z0)/(ZL + Z0). For my example Rho = (2 - 50)/(2 + 50) = -0.923 Power transfer is 1 - Rho^2 = 14.8% A: It seems the confusion is coming from the fact that you think each load (ZL in your picture) must also be matched to the transmission line impedance. This is not true. Ideally, each end of the transmission line is terminated with its characteristic impedance (Z0 in your diagram). At any point along the transmission line, you see one Z0 load in each direction, for a total impedance of Z0/2. 
There won't be reflections when signals get to the end of the transmission line because the terminating resistor looks electrically just like more of the same transmission line. If the transmission line is multi-drop, then you have to be careful that these connections in the middle of the line don't disturb the impedance. Each tap therefore ideally has infinite impedance. Since the connection from the transmission line to whatever is receiving the signal at that tap is itself a transmission line, and that line will be terminated with infinite impedance, some of the signal can bounce back via this stub connection. This is why such taps on impedance-controlled transmission lines are physically small. They present a high impedance to not disturb the transmission line's overall impedance, and are small so that the short connection between the transmission line and whatever is receiving the signal acts more like a lumped system as opposed to a transmission line.
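The power-transfer arithmetic in the first answer can be checked with a short Python sketch (using the usual sign convention Rho = (ZL - Z0)/(ZL + Z0), which reproduces the -0.923 quoted above):

```python
import math

def reflection_coefficient(zl, z0):
    """Rho = (ZL - Z0) / (ZL + Z0) for real source and load impedances."""
    return (zl - z0) / (zl + z0)

def power_transfer(zl, z0):
    """Fraction of available power delivered to the load: 1 - Rho^2."""
    return 1 - reflection_coefficient(zl, z0) ** 2

z_source, z_load = 50.0, 2.0
rho = reflection_coefficient(z_load, z_source)
print(f"Rho = {rho:.3f}")                                          # -0.923
print(f"power delivered = {power_transfer(z_load, z_source):.1%}")  # 14.8%

# Geometric-mean impedance of a quarter-wave matching section, which agrees
# with the 10 Ohm figure in the answer:
print(f"Z_t = {math.sqrt(z_source * z_load):.0f} Ohms")            # 10 Ohms
```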
Q: Django on_delete=models.CASCADE has no effect at SQL level My models.py file contains: class User(models.Model): email = models.CharField(max_length=100, unique=True) password = models.CharField(max_length=100) create_time = models.DateTimeField(auto_now_add=True) class Session(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) token = models.CharField(max_length=100, unique=True) When I run python manage.py makemigrations and then python manage.py sqlmigrate <app_name> <migration_name>, I don't see anything that says "ON DELETE=CASCADE". However, the migrations work without failure when I type python manage.py migrate. Now, if I go to the mysql table (using SequelPro) and try to delete a row for a user who currently has a session entry, I get the following error: "One row was not removed. Reload the table to be sure that the contents have not changed in the meantime. Check the Console for possible errors inside the primary key(s) of this table!". Now, when I go to the session table and delete the sessions of this user and then try to delete the user's row from the user table, it deletes properly. This indicates ON DELETE = CASCADE is not actually working at the MySQL level. How can I correct it? A: From the docs (emphasis mine): ForeignKey.on_delete When an object referenced by a ForeignKey is deleted, Django will emulate the behavior of the SQL constraint specified by the on_delete argument. Django does not actually set an ON DELETE clause in the database. If you need one, you can add one manually using a RunSQL operation. Be sure to use the same index name, or keep the original index, or you might run into errors later on.
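To see concretely what the missing clause changes at the database level, here is a self-contained sketch. It uses Python's built-in sqlite3 instead of MySQL (with foreign-key enforcement switched on), and the table and column names are illustrative rather than Django's generated ones:

```python
import sqlite3

def make_db(on_delete_clause):
    """Build a tiny user/session schema; names are illustrative, not Django's."""
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs per-connection
    conn.execute("CREATE TABLE user (id INTEGER PRIMARY KEY)")
    conn.execute(f"""CREATE TABLE session (
                         id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES user(id) {on_delete_clause})""")
    conn.execute("INSERT INTO user VALUES (1)")
    conn.execute("INSERT INTO session VALUES (1, 1)")
    return conn

# Without ON DELETE CASCADE the database refuses the delete -- the same kind
# of constraint error SequelPro was surfacing:
plain = make_db("")
try:
    plain.execute("DELETE FROM user WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("refused:", e)

# With the clause, deleting the user removes the session row as well:
cascade = make_db("ON DELETE CASCADE")
cascade.execute("DELETE FROM user WHERE id = 1")
print(cascade.execute("SELECT COUNT(*) FROM session").fetchone()[0])  # 0
```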
Q: How to automatically alternate or cycle linestyles in seaborn regplot? I want my 21 lines of data on the same graph to be more interpretable with the Legend. For instance, perhaps I could make every other legend entry / line be displayed with dashes instead of a continuous line. My mixed use of Seaborn and Matplotlib is confusing me - I'm not sure how to get the dashes in there in an alternating way. products = list(data_cleaned.columns) print('products: \n',products) for i, product in enumerate(products): subset = data_cleaned[data_cleaned[product]>0][product] sns.distplot(subset,hist=False,kde=True,kde_kws={'linewidth':3},label=product) if i%2 == 0: plt.plot(subset,'-', dashes = [8, 4, 2, 4, 2, 4]) sns.set(rc = {'figure.figsize':(25,10)}) #sns.palplot() palette_to_use = sns.color_palette("hls", 21) sns.set_palette(palette_to_use) #cmap = ListedColormap(sns.color_palette().as_hex()) plt.legend(prop={'size': 16}, title = 'Product') plt.title('Density Plot with Multiple Products') plt.xlabel('log10 of monthly spend') plt.ylabel('Density') Here's my current output: A: The correct way to do this is to use a cycler: # added this: from itertools import cycle ls = ['-','--',':','-.','-','--',':','-.','-','--',':','-.','-','--',':','-.','-','--',':','-.','-','--',':','-.'] linecycler = cycle(ls) products = list(data_cleaned.columns) print('products: \n',products) for i, product in enumerate(products): subset = data_cleaned[data_cleaned[product]>0][product] ax = sns.distplot(subset,hist=False,kde=True,kde_kws={'linewidth':3,'linestyle':next(linecycler)},label=product) # loop through next(linecycler) sns.set(rc = {'figure.figsize':(25,10)}) #sns.palplot() palette_to_use = sns.color_palette("hls", 21) sns.set_palette(palette_to_use) #cmap = ListedColormap(sns.color_palette().as_hex()) plt.legend(prop={'size': 16}, title = 'Product') plt.title('Density Plot with Multiple Products') plt.xlabel('log10 of monthly spend') plt.ylabel('Density')
Q: How to filter by property being an empty array? I have this on ng-repeat <tr ng-repeat="website in websites"> <td>{{website.url}}</td> </tr> Each website object from websites array looks like this: { url: "example.com", groups: [] } Question: How to apply filter to above loop so that it only shows elements where groups property is an empty array? Things I've tried: data-ng-repeat="website in websites | filter: {groups: []}" data-ng-repeat="website in websites | filter: !groups.length" data-ng-repeat="website in websites | filter: groups.length === 0" (no errors in console but filters out everything) data-ng-repeat="website in websites | filter: {groups: ''}" (does the opposite of what I want, and shows only items where groups is not an empty array) data-ng-repeat="website in websites | filter: {groups: null}" (if instead of [] I use null to signify there's no values, this works, but it seems really messy...because I'd need to constantly look out for groups property becoming empty, and setting it to null manually) A: I added a filter function in the controller: JS: angular.module('myApp', []) .controller('myController', ['$scope', function($scope) { $scope.friends = [{ name: 'John', phone: '555-1276', a: [1, 2, 3] }, { name: 'Mary', phone: '800-BIG-MARY', a: [1, 2, 3] }, { name: 'Mike', phone: '555-4321', a: null }, { name: 'Adam', phone: '555-5678', a: [] }, { name: 'Julie', phone: '555-8765', a: [] }, { name: 'Juliette', phone: '555-5678', a: [] }]; $scope.filterFn = function(item) { // must have array, and array must be empty return item.a && item.a.length === 0; }; } ]); In your template: <table> <tr><th>Name</th><th>Phone</th><th>array len</th></tr> <tr ng-repeat="friend in friends | filter: filterFn"> <td>{{friend.name}}</td> <td>{{friend.phone}}</td> <td>{{friend.a.length}}</td> </tr> </table> Modified Angular Filter doc plnkr A: You could use comparator parameter like this. 
<tr ng-repeat="website in websites | filter:{groups: []}:true"> <td>{{website.url}}</td> </tr> The official AngularJS documentation describes the meaning of true. true: A shorthand for function(actual, expected) { return angular.equals(actual, expected)}. This is essentially strict comparison of expected and actual. jsfiddle is here.
Q: Why does writing a number in scientific notation make a difference in this code? I am trying to write a code to determine when the number of milliseconds since the beginning of 1970 will exceed the capacity of a long. The following code appears to do the job: public class Y2K { public static void main(String[] args) { int year = 1970; long cumSeconds = 0; while (cumSeconds < Long.MAX_VALUE) { // 31557600000 is the number of milliseconds in a year cumSeconds += 3.15576E+10; year++; } System.out.println(year); } } This code executes within seconds and prints 292272992. If instead of using scientific notation I write cumSeconds as 31558000000L, the program seems to take “forever” to run (I just hit pause after 10 mins or so). Also notice that writing cumSeconds in scientific notation does not require specifying that the number is a long with L or l at the end. A: The reason it makes a difference is because the scientific notation number 3.1558E+10 is a double literal, whereas the literal 31558000000L is of course a long literal. This makes all the difference in the += operator. A compound assignment expression of the form E1 op= E2 is equivalent to E1 = (T) ((E1) op (E2)), where T is the type of E1, except that E1 is evaluated only once. Basically, long += long yields a long, but long += double also yields a long. When adding a double, the initial value of cumSeconds is widened to a double and then the addition occurs. The result undergoes a narrowing primitive conversion back to long. A narrowing conversion of a floating-point number to an integral type T takes two steps: In the first step, the floating-point number is converted either to a long, if T is long (snip) Otherwise, one of the following two cases must be true: The value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long. 
The value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long. (bold emphasis mine) The result eventually is too big to be represented in a long, so the result is narrowed to Long.MAX_VALUE, and the while loop ends. However, when you use a long literal, you are continuously adding an even value to an even value, which will eventually overflow. This does not set the value to Long.MAX_VALUE, which is odd, so the loop is infinite. But instead of relying on an addition eventually yielding Long.MAX_VALUE, with Java 1.8+ you can explicitly test for overflow with Math.addExact. Returns the sum of its arguments, throwing an exception if the result overflows a long. Throws: ArithmeticException - if the result overflows a long A: The key observation is that cumSeconds < Long.MAX_VALUE where cumSeconds is a long can only be false if cumSeconds is exactly Long.MAX_VALUE. If you do the calculation with long numbers it takes quite some time to reach this value exactly (if it is ever reached) because long arithmetic wraps around when you leave the number range. Doing the arithmetic with double numbers will yield the max value when the double value is large enough. A: @rgettman has already gone into detail about the round-off gymnastics that take place when you're using a double instead of a long. But there's more. When you repeatedly add a large number to a long, you'll eventually end up with a negative result. For example, Long.MAX_VALUE + 1L = Long.MIN_VALUE. When that happens, you'll just repeat the process indefinitely. So if you changed your code to: while (cumSeconds >= 0L) { // 31557600000 is the number of milliseconds in a year cumSeconds += 31557600000L; you'll catch where things go negative because cumSeconds rolled over.
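The parity argument is easy to check outside Java; here is a minimal Python sketch that mimics Java's 64-bit two's-complement wraparound:

```python
# Java longs wrap modulo 2**64 (two's complement); mimic that in Python:
MASK, SIGN = (1 << 64) - 1, 1 << 63

def java_long_add(a, b):
    s = (a + b) & MASK                        # keep only the low 64 bits
    return s - (1 << 64) if s >= SIGN else s  # reinterpret as signed

LONG_MAX = (1 << 63) - 1                      # 9223372036854775807, odd

print(java_long_add(LONG_MAX, 1))             # -9223372036854775808, i.e. MIN
# The increment 31557600000 is even, so an even running total stays even and
# can wrap past LONG_MAX without ever equaling it -- the long-literal loop
# therefore never terminates.
print(LONG_MAX % 2, 31557600000 % 2)          # 1 0
```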
Q: Distribution of the nth order statistics If you draw n realizations from U(0,1), will consecutive values in the sorted sample (on average) be separated by $\frac{1}{n+1}$? E.g., if n=5, then the lowest one is realized at 1/6, the second lowest one at 2/6, etc. Furthermore, could I take the above values (1/6, 2/6, ..., 5/6), set them equal to the c.d.f. of any other distribution, then solve for x (five values in total), and will it yield the order statistics for n=5 values for the new distribution? I'm only using intuition here, so my gut feeling might be wrong. The stuff I've read online is too technical for me. A: To my mind, the easiest way to think about this is to imagine the interval closed to a circle and to consider the point where it's broken up as an $(n+1)$-th random variable. That reveals the symmetry of the situation; it's now apparent that the intervals between any two consecutive numbers, including the case where one of them is the boundary at $0$ or $1$, are on the same footing and thus by symmetry must have mean $1/(n+1)$. And yes, you can transfer this to another distribution by equating these values to its cumulative distribution function. I'm not sure why you keep mentioning $1/6$, $2/6$, though; you wouldn't be equating those means to the cumulative distribution function but the actual realizations from $U(0,1)$.
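A quick Monte Carlo check of the $k/(n+1)$ mean claim (the seed and trial count below are arbitrary choices):

```python
import random

random.seed(0)
n, trials = 5, 20000

# Accumulate each order statistic of n=5 uniform draws over many trials.
sums = [0.0] * n
for _ in range(trials):
    draws = sorted(random.random() for _ in range(n))
    for k, value in enumerate(draws):
        sums[k] += value

means = [s / trials for s in sums]
print([round(m, 3) for m in means])              # close to 1/6, 2/6, ..., 5/6
print([round(k / (n + 1), 3) for k in range(1, n + 1)])
```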
Q: Converting text to uppercase in PL/SQL Developer I have many create table, sequence, etc. scripts, but everything is in lowercase, as written by a colleague. Editing line by line to change the case is unproductive, since there are more than 1000 scripts to change. In SQL Server I select the text and press Ctrl+Shift+U to convert the selection to uppercase. How do I do that in PL/SQL Developer? I have searched the net, but all I find is how to convert data (strings) to uppercase, like Upper(meu_campo), and that is not what I want; I want to change the script itself. How do I do it? A: Yes, there is a way. In PL/SQL Developer, go to the menu Tools > Preferences; in the User Interface group there is the PL/SQL Beautifier option. To configure the Rules File, click Edit..: With this, the modification you want is applied automatically through the option on the toolbar: A keyboard shortcut can also be assigned: go to Tools > Preferences, then in the User Interface group there is the Key Configuration option; assign a shortcut to the command. As another option, you can load a Rules File, for example: Copy the code below and save it with the .br extension Version=1 RightMargin=90 Indent=3 UseTabCharacter=FALSE TabCharacterSize=3 AlignDeclarationGroups=TRUE AlignAssignmentGroups=TRUE KeywordCase=1 IdentifierCase=0 UseSpecialCase=TRUE ItemList.Format=2 ItemList.Align=TRUE ItemList.CommaAfter=TRUE ItemList.AtLeftMargin=FALSE EmptyLines=2 ThenOnNewLine=FALSE LoopOnNewLine=FALSE DML.LeftAlignKeywords=FALSE DML.LeftAlignItems=FALSE DML.OnOneLineIfPossible=TRUE DML.WhereSplitAndOr=TRUE DML.WhereAndOrAfterExpression=FALSE DML.WhereAndOrUnderWhere=TRUE DML.InsertItemList.Format=2 DML.InsertItemList.Align=FALSE DML.InsertItemList.CommaAfter=TRUE DML.InsertItemList.AtLeftMargin=FALSE DML.SelectItemList.Format=2 DML.SelectItemList.Align=TRUE DML.SelectItemList.CommaAfter=TRUE DML.SelectItemList.AtLeftMargin=FALSE DML.UpdateItemList.Format=2 DML.UpdateItemList.Align=TRUE DML.UpdateItemList.CommaAfter=TRUE DML.UpdateItemList.AtLeftMargin=FALSE ParameterDeclarationList.Format=2
ParameterDeclarationList.Align=TRUE ParameterDeclarationList.CommaAfter=TRUE ParameterDeclarationList.AtLeftMargin=FALSE RecordFieldList.Format=1 RecordFieldList.Align=TRUE RecordFieldList.CommaAfter=TRUE RecordFieldList.AtLeftMargin=FALSE SplitAndOr=FALSE AndOrAfterExpression=FALSE Source of the example Rules File: https://community.oracle.com/thread/899336?tstart=0 Source of the configuration: https://lalitkumarb.com/tag/oracle-plsql-developer-settings/
Q: What do I need to start playing violin amplified? I play acoustic violin, but would like to start playing amplified sometimes. But when I start looking into equipment, I get lost in options. Piezos? Microphones? DI boxes? Pre-amps? Speaker systems? Amplifiers? I'm struggling to get a handle on what different pieces of equipment actually do, and what's needed to start with. My question boils down to: What is the basic equipment a string musician needs to play amplified, and what does each piece do? What are the options at each point? A: Every acoustic instrumentalist needs a means to transfer the vibrations of his/her instrument to electric signal. This can be done with a microphone or a pickup. If you wish to use a microphone, you will need to "close mike." To ensure that no other instruments are "heard" by the microphone, close-miking involves having a microphone be placed in close proximity to the instrument. Sometimes these are called "violin clips." These mics are usually unidirectional so they only pick up sound in a certain area or direction. Typically, the microphone signal can be connected to an XLR and run to either a DI box, mixing board or amp/PA. Another method would be getting a pickup. A pickup transfers the acoustic vibrations of your instrument and converts them to electric signal. Piezo pickups are used for acoustic instruments. Typically a piezo for a violin would clip to the bridge and the signal would be carried to a 1/4 output jack. You could run this the same way you do a microphone, directly into a DI, mixing board or amp/PA. The next decision in your signal chain would be between a DI box or an amplifier/PA system. If you wish to be able to modify how the violin sounds directly on stage (volume, EQ, effects, etc) then you will need an amplifier/PA system. An amp/PA consists of a number of input channels with their own EQ/volume/effects settings. Additionally, there is a master EQ/volume/effects setting. 
This would give your sound man the option of miking your amp speaker/PA speaker or using its line out to run to his/her board for mixing. Since most amplifiers out there are for guitars, seek an "acoustic" amplifier or a run of the mill PA system. Not my recommendation. My suggestion would be a DI box. A DI box gives you a way to plug directly into the house mixing board without an amplifier. Additionally, a DI box can allow you to send one signal to the sound person for the audience mixing, and another signal to your own mixer on stage for your own monitor/amp. You lose the option of setting your own EQ, but also lose the hassle of hauling a heavy amplifier/speaker. Again, all of this depends on your set up. If the gig has a house PA system, then I'd go with a close mike or pickup with a DI box. If they don't and you are playing with a band, then an amplifier/PA would be wise so you can hear yourself in a live setting, but also be heard by others. A: You don't say if you're playing with a band using a house PA or are playing solo so my answer is going to be a little broad, but maybe that's good for the other folks. I'm not going to put any links into this answer because you can google all of these phrases to find what you need. Teh Internetz luvs to sell stuff to musicians. A DI unit, DI box, direct box, or simply DI (variously claimed to stand for direct input, direct injection or direct interface), is a device typically used in recording studios to connect a high-impedance, line level, unbalanced output signal to a low-impedance microphone level balanced input, usually via XLR connector. DIs are frequently used to connect an electric guitar or electric bass to a mixing console's microphone input. The DI performs level matching, balancing, and either active buffering or passive impedance matching/impedance bridging to minimize noise, distortion, and ground loops. 
Source: http://en.wikipedia.org/wiki/DI_unit DIs are mainly used when you're plugging into a venue's ("house") sound system ("PA" - Public Address). You don't need to worry too much about what high impedance or low impedance means. Just remember that if you're going to plug a quarter-inch electric guitar cord into something with an XLR socket, you're going to need a DI. XLR (mic) connectors: A mixer is what a sound engineer uses for mixing and tweaking the sound sources or channels. A mini-mixer usually refers to a mixer that has 4 or fewer channels. You can always play with just a mic on a boom stand that plugs into the house PA. You don't need a DI for this, because the mic is already XLR. There are a huge variety of mics so I'm not even going to go into that. Any mic you use will have an XLR plug so you won't need a DI. A combo amp is a single unit that has a mini mixer, power amplifier and speaker ("driver") in the same box. Combo amps are typically used by electric keyboard and guitar players and typically take an electric guitar cord as input, although some also have XLR inputs. An alternative to a combo amp is a powered speaker. A powered speaker has a power amplifier and a driver in the same unit. They typically have one or more XLR inputs. Powered speakers have better frequency response than guitar combo amps but are more expensive and typically require a mixer as well. If you're playing solo you will need at least a combo amp but you'll probably also want a mini mixer so that you can tweak your EQ (many bands of frequencies that break down to bass, mids and treble). The big range comes when you start talking about violin pickups. There are, as you've noted, a large variety of choices here. If you decide to go with a transducer, then it will be installed in, under or on the bridge. Have a tech install this for you. If you use a piezo transducer you will need a preamp and if you go electric, you will need a DI unless you plug into a combo amp. 
An electric pickup will only work if you have steel strings on your violin. This will also require a tech to install it for you unless you're comfortable, uh, fiddling with your violin. If you want to minimize changing your violin--which may affect the tone--then you may want to consider a clip-on horn mic. You attach the clip on to your chin rest or to a little piece of plastic that you glue to your chin rest. This is the solution that the fiddler in our band uses. The last choice is whether to go with a cord that plugs into the pickup or go wireless. A cord is cheaper but it definitely affects the balance of your violin and, for you, may affect your ergonomics. Most of the professional fiddlers I know who don't use a mic use a wireless setup. The fiddler in our band has a wireless clip on horn mic. Wireless mics transmit radio signals to a radio receiver that you plug an XLR cord into which then goes into the PA. Hope this helps. Please comment for any additional questions and I'll edit my answer here.
Q: How to read csv file using tcsh script and get current value of environ variable I have this csv file (sample.csv) which looks like: variable,value var1,/value/of/var1/which/is/path I need to parse/read this csv using a tcsh script. I am trying to compare the current value of the environment variable var1 with the value given in this csv file, something like: if( current value of $var1 == /value/of/var1/which/is/path) then echo "Value matches" else echo "value does not match with current value" A: You can loop over a file like so: foreach line ( `cat a.csv` ) set field1 = `echo "$line" | cut -d, -f1` set field2 = `echo "$line" | cut -d, -f2` if ( "$field1" == "var1" ) then echo "Match -> $field1 $field2" else echo "No match -> $field1 $field2" endif end The foreach loop will loop over the file line-by-line. Inside the loop, you use the cut command to split the line by the , delimiter. You can check these variables with an if statement. NOTE: If this is a new script, you probably don't want to use tcsh. tcsh is an old shell, and has many dangerous and ugly corners. You probably want to use a Bourne shell (/bin/sh or bash), or perhaps better yet, a "real" programming language like Python or Ruby. Parsing CSV files is not always easy (quoting styles and such differ), and both Python and Ruby have excellent csv parsing modules which handle all of this for you.
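Following the answer's own recommendation, here is a minimal Python version using the standard csv module (the header row and file layout come from the sample.csv in the question; the demo writes that sample to a temp file):

```python
import csv
import os
import tempfile

def compare_env(path):
    """Return {variable: True/False}: does each CSV row match os.environ?"""
    results = {}
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):          # header row: variable,value
            results[row["variable"]] = os.environ.get(row["variable"]) == row["value"]
    return results

# Demo, recreating the sample.csv from the question in a temp file:
os.environ["var1"] = "/value/of/var1/which/is/path"
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("variable,value\nvar1,/value/of/var1/which/is/path\n")
    sample = f.name

for name, ok in compare_env(sample).items():
    print(name, "Value matches" if ok else "value does not match with current value")
# var1 Value matches
```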
Q: How do I get Boethiah's Proving? How do you find the book Boethiah's Proving? I am past level 30, but when I killed the priest of Boethiah, he didn't have Boethiah's Proving on him. A: There are several ways to find Boethiah's Proving. Although, you must have completed Dragon Rising and be higher than level 20 (For reference) At your level, it is possible to be attacked by a follower of Boethiah, who is a random follower. You'll know him when you hear someone say "Ahh! A worthy opponent!" He's an easy kill, then loot his body. You can also find it at Septimus Signus' Outpost north of Windhelm It's also possible to find it at Hob's Fall cave and the Abandoned House of Markarth The first option, however, is your best bet. A follower of Boethiah is likely to attack you. A: The Skyrim Wiki gives us a great deal of information on this sort of thing. To provide context for users that may seek this answer later: you must be at least level 30 before Boethiah's Proving will spawn in the game. As a Drop from Boethiahs Cultist First and foremost, the book is supposed to drop from a Boethiahs Cultist. While I can not confirm the item has a 100% drop rate, I would speculate that given the nature of the cultist showing up, and the given context, you are simply experiencing a bug. Skyrim is full of them - it's a trade-off for having such an immersive game. Found at various locations The abandoned house, in Markarth Castle Volkihar College of Winterhold (appears you have to purchase it from the librarian) Hillgrund's Tomb Hob's Fall Cave Septimus Signus' Outpost Apocrypha Through the command line If you are playing on computer, you can add any item to your inventory with the right command line. Since you start the quest by reading the book, "cheating" it into your inventory should not disrupt this mechanic. I can not confirm what the correct command line is, so I will have to leave it up to another user to fill this in. 
Wait a minute, I don't really need Boethiah's Proving after all! The main reason you would want Boethiah's Proving (disregarding cosmetic reasons or lore) is that it initiates the quest Boethiah's Calling. Keep in mind that the book acts as a pointer to the Shrine of Boethiah. If you happen to find the shrine by yourself, the quest will just start at the next step.
{ "pile_set_name": "StackExchange" }
Q: Why can't gravity force an object past the speed of light? I hope this question is not a duplicate (it doesn't seem to be) and that it is appropriate for this site. Suppose we had a universe with only two bodies: one is ultra massive, and the other is very small. Why could a sufficient distance not mean that gravity could accelerate the smaller object past the speed of light? If gravity affects all objects from all objects, gravity certainly should be acting on the smaller body (and also the smaller body on the larger). I would think, in fact, that the relative sizes don't particularly matter. Why doesn't gravity make things accelerate past the speed of light if they are far enough away? A: I would think, in fact, that the relative sizes don't particularly matter. Why doesn't gravity make things accelerate past the speed of light if they are far enough away? To be sure (just in case), please keep in mind that the (Newtonian) gravitational force is not constant with distance, even though we usually approximate it as constant for elementary problems involving relatively small objects falling near the surface of Earth. Consider a gravitating, spherical body with some non-zero radius, and consider a much smaller object that falls towards that body from rest a great distance away. We can calculate the speed with which the object impacts the surface. This speed is equal to the escape velocity at the surface of the body. So, your question can be recast as something like this: Why are there no bodies with escape velocity $v_e$ greater than the speed of light? Note that such a body with $v_e \gt c$ would necessarily trap light. In the Newtonian context, where the speed $c$ isn't a speed limit, an object falling from a great distance would impact the surface of the body with speed greater than the speed of light. See, for example, Can a black hole be explained by Newtonian gravity?  But in the relativistic context, no massive object can have relative speed $v \ge c$.
So, if a massive body contracts to the radius at which the escape velocity at the surface is $c$, the body must continue to collapse, leaving an event horizon from which the 'escape' velocity is precisely $c$. However, now we're in a highly curved spacetime where it's often very difficult, if not impossible, to properly define concepts that were straightforward in the Newtonian context. For example, you might from the above conclude that an object falling from a great distance towards a black hole would have speed $c$ at the event horizon. However, in a curved spacetime, where the clocks and rods of different observers outside of and at rest relative to the black hole are not the same, it isn't at all clear what such a statement would mean. In fact, it turns out that no object actually reaches the event horizon according to these observers!
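The escape-velocity argument above can be written out explicitly. For a small object of mass $m$ falling from rest at infinity toward a body of mass $M$ and radius $R$, Newtonian energy conservation gives the impact speed, which equals the escape velocity:

```latex
\frac{1}{2} m v_e^2 = \frac{G M m}{R}
\qquad\Longrightarrow\qquad
v_e = \sqrt{\frac{2 G M}{R}}
```

Setting $v_e = c$ and solving for the radius gives $R = 2GM/c^2$, which happens to coincide with the Schwarzschild radius of general relativity; as the answer notes, though, the Newtonian derivation cannot be taken literally once speeds approach $c$.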
{ "pile_set_name": "StackExchange" }
Q: How to make horizontally-switchable activities I have an application about debts to the user and the user's debts. The main activity is a TabActivity for switching between two activities with custom lists. It looks like this (screenshot): http://i.stack.imgur.com/qts1f.png The code is: public class MainActivity extends TabActivity { public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); Resources res = getResources(); // Resource object to get Drawables TabHost tabHost = getTabHost(); // The activity TabHost TabHost.TabSpec spec; // Reusable TabSpec for each tab Intent intent; // Reusable Intent for each tab TextView tv = (TextView)findViewById(R.id.newDebtHeader); tv.setBackgroundResource(R.drawable.grad); tv.setTextColor(Color.BLACK); tv.setFadingEdgeLength(3); intent = new Intent().setClass(this, DebtsToMeList.class); spec = tabHost.newTabSpec("debts_to_me").setIndicator(null, res.getDrawable(R.drawable.ic_tab_debts_to_me)).setContent(intent); tabHost.addTab(spec); intent = new Intent().setClass(this, MyDebtsList.class); spec = tabHost.newTabSpec("my_debts").setIndicator(null, res.getDrawable(R.drawable.ic_tab_my_debts)).setContent(intent); tabHost.addTab(spec); tabHost.setCurrentTab(2); } } main.xml is: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="1dp"> <TextView android:id="@+id/newDebtHeader" android:layout_height="24dip" android:layout_width="fill_parent" android:gravity="center_vertical|center_horizontal" android:textStyle="bold" android:textSize="16dip" android:text="хДолги"> </TextView> <TabHost android:id="@android:id/tabhost" android:layout_width="fill_parent" android:layout_height="fill_parent"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp"> 
<TabWidget android:id="@android:id/tabs" android:layout_width="fill_parent" android:layout_height="wrap_content" /> <FrameLayout android:id="@android:id/tabcontent" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="5dp" /> </LinearLayout> </TabHost> </LinearLayout> But I don't like TabActivity; I want to switch between these two lists with a horizontal finger drag. Could you please help me change my code? A: If you look at this example, it uses the fling gesture with a view switcher and a little animation to get the effect I think that you are looking for... all that you would have to do is load your lists and then you should be able to swipe between them. Hope this helps.
{ "pile_set_name": "StackExchange" }
Q: How to plot the 95th percentile and 5th percentile on a ggplot2 plot with already calculated values? I have this dataset and use this R code: library(reshape2) library(ggplot2) library(RGraphics) library(gridExtra) long <- read.csv("long.csv") ix <- 1:14 ggp2 <- ggplot(long, aes(x = id, y = value, fill = type)) + geom_bar(stat = "identity", position = "dodge") + geom_text(aes(label = numbers), vjust=-0.5, position = position_dodge(0.9), size = 3, angle = 0) + scale_x_continuous("Nodes", breaks = ix) + scale_y_continuous("Throughput (Mbps)", limits = c(0,1060)) + scale_fill_discrete(name="Legend", labels=c("Inside Firewall (Dest)", "Inside Firewall (Source)", "Outside Firewall (Dest)", "Outside Firewall (Source)")) + theme_bw() + theme(legend.position="right") + theme(legend.title = element_text(colour="black", size=14, face="bold")) + theme(legend.text = element_text(colour="black", size=12, face="bold")) + facet_grid(type ~ .) + plot(ggp2) to get the following result: Now I need to add the 95th percentile and the 5th percentile to the plot. The numbers are already calculated in this dataset (the NFPnumbers (95th percentile) and FPnumbers (5th percentile) columns). It seems boxplot() may work here, but I am not sure how to use it with ggplot. stat_quantile(quantiles = c(0.05,0.95)) could work as well, but the function calculates the numbers itself. Can I use my own numbers here? I also tried: geom_line(aes(x = id, y = long$FPnumbers)) + geom_line(aes(x = id, y = long$NFPnumbers)) but the result did not look good enough. geom_boxplot() did not work either: geom_boxplot(aes(x = id, y = long$FPnumbers)) + geom_boxplot(aes(x = id, y = long$NFPnumbers)) A: There are several suitable geoms for that; geom_errorbar is one of them: ggp2 + geom_errorbar(aes(ymax = NFPnumbers, ymin = FPnumbers), alpha = 0.5, width = 0.5) I don't know if there's a way to get rid of the central line though.
{ "pile_set_name": "StackExchange" }
Q: In Hebrews 3:2 why isn't τῷ ποιήσαντι αὐτὸν translated as "to him who made him"? Westcott and Hort / [NA27 variants] πιστὸν ὄντα τῷ ποιήσαντι αὐτὸν ὡς καὶ Μωυσῆς ἐν ὅλῳ τῷ οἴκῳ αὐτοῦ. Here is the context: 3 Therefore, holy brothers, you who share in a heavenly calling, consider Jesus, the apostle and high priest of our confession, 2 who was faithful to him who appointed him, just as Moses also was faithful in all God’s house. 3 For Jesus has been counted worthy of more glory than Moses—as much more glory as the builder of a house has more honor than the house itself. 4 (For every house is built by someone, but the builder of all things is God.) 5 Now Moses was faithful in all God’s house as a servant, to testify to the things that were to be spoken later, 6 but Christ is faithful over God’s house as a son. And we are his house, if indeed we hold fast our confidence and our boasting in our hope. The Holy Bible: English Standard Version. (2016). (Heb 3:1–6). Wheaton: Standard Bible Society. BDAG suggests "to him who appointed him". Is it compelling? ...ⓑ of divine activity, specifically of God’s creative activity create (Hes., Op. 109; Heraclitus, Fgm. 30 κόσμον οὔτε τις θεῶν οὔτε ἀνθρώπων ἐποίησεν, ἀλλʼ ἦν ἀεὶ καὶ ἔστιν καὶ ἔσται; Pla., Tim. 76c ὁ ποιῶν ‘the Creator’; Epict. 1, 6, 5; 1, 14, 10; 2, 8, 19 σε ὁ Ζεὺς πεποίηκε; 4, 1, 102; 107; 4, 7, 6 ὁ θεὸς πάντα πεποίηκεν; Ael. Aristid. 43, 7 K.=1 p. 2 D.: Ζεὺς τὰ πάντα ἐποίησεν; Herm. Wr. 4, 1. In LXX oft. for בָּרָא also Wsd 1:13; 9:9; Sir 7:30; 32:13; Tob 8:6; Jdth 8:14; Bar 3:35; 4:7; 2 Macc 7:28; Aristobulus in Eus., PE13, 12, 12 [pp. 182 and 184 Holladay]; JosAs 9:5; Philo, Sacr. Abel. 65 and oft.; SibOr 3, 28 and Fgm. 3, 3; 16; Just., A II, 5, 2 al.) w. acc. ἡ χείρ μου ἐποίησεν ταῦτα πάντα Ac 7:50 (Is 66:2). τοὺς αἰῶνας Hb 1:2 (s. αἰών 3). τὸν κόσμον (Epict. 4, 7, 6 ὁ θεὸς πάντα πεποίηκεν τὰ ἐν τῷ κόσμῳ καὶ αὐτὸν τὸν κόσμον ὅλον; Sallust. 5 p. 10, 29; Wsd 9:9; TestAbr A 10 p. 88, 21 [Stone p. 24]) Ac 17:24. 
τὸν οὐρανὸν καὶ τὴν γῆν (cp. Ael. Aristid. above; Gen 1:1; Ex 20:11; Ps 120:2; 145:6; Is 37:16; Jer 39:17 et al.; TestJob 2:4; Jos., C. Ap. 2, 121; Aristobulus above) Ac 4:24; 14:15b; cp. Rv 14:7. τὰ πάντα PtK 2 p. 13, 26 (JosAs 12, 2; Just., D. 55, 2; also s. Ael. Aristid. above). Lk 11:40 is classed here by many. Of the relation of Jesus to God Ἰησοῦν, πιστὸν ὄντα τῷ ποιήσαντι αὐτόν=appointed him Hb 3:2 (cp. Is 17:7).—W. a second acc., that of the predicate (PSI 435, 19 [258 B.C.] ὅπως ἂν ὁ Σάραπις πολλῷ σὲ μείζω ποιήσῃ) ἄρσεν καὶ θῆλυ ἐποίησεν αὐτούς (God) created them male and female Mt 19:4b; Mk 10:6 (both Gen 1:27c).—Pass. Hb 12:27.—ὁ ποιήσας the Creator Mt 19:4a v.l.... Arndt, W., Danker, F. W., Bauer, W., & Gingrich, F. W. (2000). A Greek-English lexicon of the New Testament and other early Christian literature (3rd ed., p. 839). Chicago: University of Chicago Press. A: Both are possible, for the verb can be used in both senses (in the sense of "appoint", "make somebody something", we have this verb already in Classical Greek, for example in Homer's "Odyssey" I:387: μὴ σέ γ᾽ ἐν ἀμφιάλῳ Ἰθάκῃ βασιλῆα Κρονίων ποιήσειεν - "to make/appoint someone a king"; or in Thucydides, Αθεναιον ποιειν τινα, "to make/appoint somebody an Athenian citizen" (Liddell & Scott), etc.; in patristic literature the same verb is used with the meaning of "appoint" by Athanasius (Ar. 2.8.), or by Chrysostom, who explains: "εποιεσεν, τουτεστι κατεστησεν" ("made, that is to say, appointed") (Lampe), etc.). However, here in Hebrews 3:2 it is not entirely unequivocal (unlike, e.g., Mark 3:14, where it is unequivocally "appointed" and not "created", or Gen. 1, where it is unequivocally "created" and not "appointed"). The Vulgate preserves this equivocation with "fecit", which can also mean both actions. 
Probably the "appointed" is more plausible, for Paul refers to Jesus not as a creature, saying elsewhere that the Father brought into existence the entirety of creation through Him (Hebrews 1:2), thus excluding Him from this entirety. And here in the immediate sequence Jesus' glory is counterposed to the glory of Moses as that of Maker to the made, a house builder to a house, the verb κατασκευαζω applying both to Jesus and God, and the "house" is referred to Moses, as a part of this house and acting in it as a servant, and also all humans, including Paul himself (ου οικος εσμεν εμεις), but Jesus together with God is outwith the "house", a.k.a. creation, expressed by the prefix ἐπὶ which with accusative οικον means "onto", i.e. "upper surface of", as in "got up onto the horse", thus not a part of the surface of the horse, in this instance, not part of the creation. However, "created" is also plausible, for Jesus is created in His human nature and in His human nature remained faithful to God, always performing things pleasing to Him (cf. John 8:29), and having lived a life of utter obedience, up to the death on cross (Phil. 2:8). Some ancient translations, thus, e.g. a Georgian canonical translation of 11th century make this option, putting "created" unequivocally. A: Hebrews 3:2 alludes to 1 Samuel 12:6 (1 Kingdoms 12:6 in the LXX): Brenton LXX 1 Samuel 12:6And Samuel spoke to the people, saying, The Lord who appointed Moses and Aaron is witness, who brought our fathers up out of Egypt. 1 Kingdoms 12:6 καὶ εἶπεν Σαμουὴλ πρὸς τὸν λαὸν λέγων Μάρτυς Κύριος ὁ ποιήσας τὸν Μωυσῆν καὶ τὸν Ἀαρών, ὁ ἀναγαγὼν τοὺς πατέρας ἡμῶν ἐξ Αἰγύπτου. Swete, H. B. (1909). The Old Testament in Greek: According to the Septuagint (1 Kgdms 12:6). Cambridge, UK: Cambridge University Press. This in turn is a translation of the Hebrew: Douay-Rheims Bible 1 Samuel 12:6 And Samuel said to the people: It is the Lord, who made Moses and Aaron, and brought our fathers out of the land of Egypt. 
12:6 Hebrew OT: Westminster Leningrad Codex וַיֹּ֥אמֶר שְׁמוּאֵ֖ל אֶל־הָעָ֑ם יְהוָ֗ה אֲשֶׁ֤ר עָשָׂה֙ אֶת־מֹשֶׁ֣ה וְאֶֽת־אַהֲרֹ֔ן וַאֲשֶׁ֧ר הֶעֱלָ֛ה אֶת־אֲבֹתֵיכֶ֖ם מֵאֶ֥רֶץ מִצְרָֽיִם׃ So the interpretation rests largely on the use of the word עָשָׂה֙. The Complete Jewish Bible with Rashi's commentary has this: CJB 1 Samuel 12:6 And Samuel said to the people, "(It is) the Lord Who made Moses and Aaron, and Who brought your forefathers up from the land of Egypt. And Rashi makes this comment: Who made Moses and Aaron: to be prepared for His mission to take your forefathers out of Egypt. http://www.chabad.org/library/bible_cdo/aid/15841#lt=primary&showrashi=true The word is taken to be in the Hiphil stem, suggesting causation. For example, the Hebrew word for "remember", when in the Hiphil, would be "cause to remember" or "remind": http://www.becomingjewish.org/pdf/hiphil_stem-hebrew.pdf Based on this, I think I would render this as "tasked" or "charged". I think one is "appointed" to an "office" but "tasked" or "charged" with a duty. The comparison is being drawn between Moses and Aaron, who were tasked with bringing "your forefathers up from the land of Egypt", and Jesus, who is charged with bringing "many sons to glory": NIV Hebrews 2:10 In bringing many sons and daughters to glory, it was fitting that God, for whom and through whom everything exists, should make the pioneer of their salvation perfect through what he suffered.
{ "pile_set_name": "StackExchange" }
Q: PNG shadow shows as white border on android device I am creating a few icons with the Drop Shadow blending option in Photoshop. Then I save each icon as a PNG with Transparency checked and Matte set to 'None'. The shadow looks fine on the desktop, but when my developer places those icons in his app and runs it on an Android device, all the shadows show as a solid white border instead of a transparent shadow. What changes should I make in my Photoshop settings to produce correct images? A: In the HTML, where you have the dynamic image, set the border attribute to 0, for example border="0".
{ "pile_set_name": "StackExchange" }
Q: How can I write UTF-8 files using JavaScript for Mac Automation? In short - what is the JavaScript for Mac Automation equivalent of AppleScript's as «class utf8» ? I have a unicode string that I'm trying to write to a text file using JavaScript for Mac Automation. When writing the string to the file, any unicode characters present become question marks in the file (ASCII char 3F). If this were an AppleScript script instead of a JavaScript one, I could have solved this by adding the as «class utf8» raw statement as explained on Takashi Yoshida's blog (https://takashiyoshida.org/blog/applescript-write-text-as-utf8-string-to-file/). The script, however, is already written in JavaScript, so I'm looking for the JavaScript equivalent of this AppleScript statement. Apple's page about raw statements addresses only AppleScript (https://developer.apple.com/library/content/documentation/AppleScript/Conceptual/AppleScriptLangGuide/conceptual/ASLR_raw_data.html). To write the file, I am using Apple's own writeTextToFile JavaScript function example (https://developer.apple.com/library/content/documentation/LanguagesUtilities/Conceptual/MacAutomationScriptingGuide/ReadandWriteFiles.html#//apple_ref/doc/uid/TP40016239-CH58-SW1). I added an as argument to the following call, according to the StandardAdditions dictionary: // Write the new content to the file app.write(text, { to: openedFile, startingAt: app.getEof(openedFile), as: "utf8" }) And tried all of the following strings (as written and also in lowercase form): Unicode text Unicode Class utf8 «class utf8» utf8 text utf8 text Apart from "text" (which resulted in the same question-marks situation), using all of the above strings yielded a zero-byte file. 
I understand I might be wading into uncharted waters here, but if anyone reading this has dealt with this before and is willing to provide some pointers, I will be quite grateful. A: If you want to ensure your file gets written with UTF8 encoding, use NSString's writeToFile:atomically:encoding:error function, like so: fileStr = $.NSString.alloc.initWithUTF8String( 'your string here' ) fileStr.writeToFileAtomicallyEncodingError( filePath, true, $.NSUTF8StringEncoding, $() ) You would think that writing an NSString object initialized from a UTF8 string would get written out as UTF8, but I've found from experience that writeToFile:atomically does not honor the encoding of the string being written out. writeToFile:atomically:encoding:error explicitly specifies which encoding to use. On top of that, writeToFile:atomically has been deprecated by Apple since OS X 10.4.
{ "pile_set_name": "StackExchange" }
Q: AWS Cognito Token Expiring After 1 Hour I'm using the AWS Cognito JavaScript SDK to authorize and authenticate users in my React Native app. I've managed to provide and store an IdentityId for users. Users who do not log in have access to part of my app, as long as we authorize them with a confirmation, because of Federated Identities / IAM. This all works well. My question is: after an hour the token expires and their access is limited because of it. What should the process be here? Do I retrieve new tokens, or do some sort of token refresh? What does that look like? There is so much AWS Cognito documentation out there, but I haven't really been able to find exactly what I need; and on top of that, I'm finding it really confusing to tell what I need for a successful Federated Identities / IAM authorization flow vs. what I need for a successful User Pool / log in flow. A: You have the credentials, and you called credentials.get() that first time. Now, on a timer, call credentials.refresh() after 55 minutes, so the credentials are updated before they expire. Do this every time you get a new credential: refresh again 55 minutes later.
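The timer pattern described in the answer can be sketched as follows. This is a minimal illustration, not AWS-specific code: the credentials object with get()/refresh() callbacks mirrors the general shape of the AWS SDK's credentials objects, but treat the exact API surface as an assumption to verify against the SDK documentation.

```javascript
// 55 minutes: refresh comfortably before the 1-hour token expiry.
const REFRESH_MS = 55 * 60 * 1000;

// True once enough time has passed that a refresh is due.
function needsRefresh(lastRefreshMs, nowMs) {
  return nowMs - lastRefreshMs >= REFRESH_MS;
}

// Fetch credentials once, then refresh them on a timer.
// `credentials` is assumed to expose get(cb) and refresh(cb); the timer
// function is injectable so the schedule itself can be tested.
function startCredentialRefresh(credentials, timerFn = setInterval) {
  credentials.get(() => {});
  return timerFn(() => credentials.refresh(() => {}), REFRESH_MS);
}
```

Calling startCredentialRefresh(credentials) once after login keeps the session alive; clear the returned interval on logout.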
{ "pile_set_name": "StackExchange" }
Q: C++ giving me basic_string::_S_construct null not valid error after I run my program I am a newbie to C++ and I've tried to write a simple string reverse program. When I compile it, everything is OK, but when I run it, I get the following error: terminate called after throwing an instance of 'std::logic_error' what(): basic_string::_S_construct null not valid Aborted (core dumped) What am I doing wrong? Here is my code. #include <iostream> #include <string> using namespace std; string reverse_string(char* argv[], int i); int main(int argc, char* argv[]) { for (int i = 0; i < argc; i++) { cout << reverse_string(argv, i) << endl; } return 0; } string reverse_string(char* argv[], int i) { string arg = argv[i + 1]; string output; int length = arg.length(); for (int index = 1; index <= length; index++) { output += arg[length-index]; } return output; } A: This: argv[i + 1] in the construction of your reversed string should be argv[i], and the main loop should be for (i=1; i<argc; ++i) As written, when i reaches argc - 1, the expression argv[i + 1] is argv[argc], which is a null pointer, and constructing a std::string from a null pointer throws exactly this logic_error. And there are simpler ways to reverse a string: std::string reverse_string(char* argv[], int i) { std::string arg = argv[i]; return std::string(arg.rbegin(), arg.rend()); }
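The reverse-iterator approach from the answer can be packaged as a small self-contained helper. The wrapper below takes a std::string directly rather than argv, which sidesteps the off-by-one indexing bug entirely; std::reverse from <algorithm> is shown as an equivalent alternative. The function names are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Build the reversed string via reverse iterators, as the answer suggests.
std::string reversed(const std::string& input) {
    return std::string(input.rbegin(), input.rend());
}

// Equivalent alternative: reverse a copy in place with std::reverse.
std::string reversed_inplace(std::string s) {
    std::reverse(s.begin(), s.end());
    return s;
}
```

Either form avoids manual index arithmetic, which is where the original loop went wrong.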
{ "pile_set_name": "StackExchange" }
Q: Find merges in master, but not in tag I'm trying to automate the creation of a changelog for each release. During our release process, we create tags for every release. Individual commits do not happen on master; features are merged using --no-ff, so all features have a merge commit. How do I get a list of all merge commits in master that are not in a tag (i.e., the previous release)? I tried this based on some other SO answers, but it doesn't quite give me what I want: git log --pretty=oneline --all <tag>..master --merges A: Adding --ancestry-path should work: git log --pretty=oneline --all <tag>..master --merges --ancestry-path However, since you are only concerned with the flow of a single branch, you will only care about the first parent's history. Therefore, this command will also work: git log --pretty=oneline --all <tag>..master --first-parent
{ "pile_set_name": "StackExchange" }
Q: Command Pattern In C++ So I'm trying to learn the Command Pattern for C++ and I am unsure on how to bind my commands. My current code has my Input Handler and Commands, but I don't know how to bind them. I keep getting an "error: 'Command' is an inaccessible base of 'UpCommand'". InputHandler.h #ifndef INPUTHANDLER_H_INCLUDED #define INPUTHANDLER_H_INCLUDED #include "Command.h" class InputHandler { public: void handleInput(); //Bind Buttons Here private: Command* buttonW; Command* buttonA; Command* buttonS; Command* buttonD; }; #endif // INPUTHANDLER_H_INCLUDED And here is my Command.h Command.h #ifndef COMMAND_H_INCLUDED #define COMMAND_H_INCLUDED #include <iostream> class Command { public: virtual ~Command() {} virtual void execute() = 0; }; class UpCommand : Command { virtual void execute() {std::cout << "UP";} }; class DownCommand : Command { virtual void execute() {std::cout << "DOWN";} }; class LeftCommand : Command { virtual void execute() {std::cout << "LEFT";} }; class RightCommand : Command { virtual void execute() {std::cout << "RIGHT";} }; #endif // COMMAND_H_INCLUDED I can't figure out how to bind my pointers in InputHandler to the subCommands for direction. Can anyone explain to me how it's done? A: You need to use public inheritance instead of private. Either change class to struct everywhere or say class UpCommand : public Command. This is what the error, "base class inaccessible," means.
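With the inheritance made public, binding comes down to storing concrete commands behind base-class pointers. The following is a minimal sketch; the key-to-command map and the log-recording execute() are illustrative choices, not taken from the question's code (which prints to stdout instead).

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Commands record what they did so the effect is observable.
class Command {
public:
    virtual ~Command() {}
    virtual void execute(std::vector<std::string>& log) = 0;
};

// Note the *public* inheritance: the fix from the answer.
class UpCommand : public Command {
public:
    void execute(std::vector<std::string>& log) override { log.push_back("UP"); }
};

class DownCommand : public Command {
public:
    void execute(std::vector<std::string>& log) override { log.push_back("DOWN"); }
};

class InputHandler {
public:
    InputHandler() {
        // Binding: each key owns a concrete Command via a base-class pointer.
        bindings_['w'] = std::make_unique<UpCommand>();
        bindings_['s'] = std::make_unique<DownCommand>();
    }
    void handleInput(char key, std::vector<std::string>& log) {
        auto it = bindings_.find(key);
        if (it != bindings_.end()) it->second->execute(log);
    }
private:
    std::map<char, std::unique_ptr<Command>> bindings_;
};
```

handleInput then dispatches purely through the Command interface, so rebinding a key is a single map assignment.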
{ "pile_set_name": "StackExchange" }
Q: TypeScript Build Error : Property does not exist on type 'IStateParamsService' I am using TypeScript for almost the entire client side of my project. Currently I am facing a technical problem. In my HTML, I have an anchor tag like below: <a class="btn btn-default mrm" ui-sref="course-detail({courseId: '{{c.Id}}'})">Details</a> In my course detail controller, I am getting the values in the stateparam variable. But if I want to access 'courseId' from that variable, it gives me a build error. Please check the image below. But if I remove the IF block, then I get the log in the developer console like below. I must get the course Id property and value to proceed. Otherwise I need to code this controller in pure AngularJS, which I don't want. Thanks. A: TypeScript expects map-like objects to be accessed explicitly through bracket notation []. If you want to get a field out of a map-like object, you should use square brackets instead of dot notation: if (this.param['courseId'] === "00000....0000") { //.. rest of the code That should solve the immediate issue you're facing.
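The pattern the answer recommends can be shown with a self-contained stand-in. ParamsMap below is a hypothetical substitute for IStateParamsService, and the GUID sentinel is illustrative; real typings vary, so whether dot access compiles depends on the declaration and compiler flags, but bracket access against an index signature is accepted either way.

```typescript
// A map-like type: no declared properties, only an index signature.
interface ParamsMap {
  [key: string]: string;
}

// All-zero GUID used as a "no course selected" sentinel (illustrative).
const EMPTY_GUID = "00000000-0000-0000-0000-000000000000";

function isEmptyCourse(params: ParamsMap): boolean {
  // Bracket notation type-checks against the index signature,
  // where dot notation may be rejected for undeclared properties.
  return params["courseId"] === EMPTY_GUID;
}
```
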
{ "pile_set_name": "StackExchange" }
Q: iOS Twitter+OAuth check if user is logged in I'm developing an app in which I want to give the user the option to be logged in with Twitter. My problem is that I want to check if the user is logged in before giving access to certain functions. Say I have a view and I want to show different content depending on whether the user is logged in or not. I know I can log the user in when opening the app, but I don't want to show the login screen every time if the user chose not to log in. (I'm using the Twitter+OAuth/MGTwitterEngine framework). How can I set up a control like that? Any tips are appreciated! A: Seems like you would want to save the authorization token in NSUserDefaults. Then, on launch, you would check for that token: if (![[NSUserDefaults standardUserDefaults] objectForKey:@"whatever_you_call_your_auth_token"]) { //send the user to twitter login } else { //set isLoggedInViaTwitter:YES } So if you have a boolean value like isLoggedInViaTwitter, and you set that to YES or NO based on whether the auth token is present in NSUserDefaults, you can use the value of that to determine what content to present in your views. I'm new but I hope this helps to some extent. If I've misunderstood your question, please let me know.
{ "pile_set_name": "StackExchange" }
Q: IBAction disabling not working If the user keeps clicking on button one or two, progress2.progress keeps increasing/decreasing on each click and progress1.progress keeps the same value until the user stops clicking. And in a case where he will surely lose, if he keeps clicking, nothing happens until he stops clicking. I don't want it to be that way, so I want to hide/disable the buttons as soon as it's confirmed that he's losing, to fix this issue. Any way to do that? Here is my .m : #import "ViewController.h" @interface ViewController () @end @implementation ViewController - (BOOL)prefersStatusBarHidden { return YES; } - (void)viewDidLoad { progress1.progress=arc4random() % 11 * 0.1; count1=0; count2=0; label1.hidden = NO; gameOver.hidden = YES; score=0; [super viewDidLoad]; } // Do any additional setup after loading the view, typically from a nib. - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } -(void)regulator{ if(timer1) { [timer1 invalidate]; timer1 = nil; } if(timer4) { [timer4 invalidate]; timer4 = nil; } timer4 =[NSTimer scheduledTimerWithTimeInterval:1.5 target:self selector:@selector(conditioner) userInfo:nil repeats:YES]; } -(void)conditioner { if (fabs(progress2.progress-progress1.progress)<=0.25 ) { score=score+1; scorenumber.text= [NSString stringWithFormat:@"%i",score]; [self newGame]; } else{ stop1=YES; stop2=YES; gameOver.hidden=NO; stick.hidden=YES; bg.hidden=YES; progress1.hidden=YES; progress2.hidden=YES; supply.hidden=YES; demand.hidden=YES; }} -(void)newGame{ progress1.progress=arc4random() % 11 * 0.1;} - (IBAction)start:(UIButton *)sender { progress2.progress=arc4random() % 11 * 0.1; if(timer4) { [timer4 invalidate]; timer4 = nil; timer1 = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(regulator) userInfo:nil repeats:YES]; [self regulator]; stop1=NO; stop2=NO; label1.hidden=YES; UIButton *button1 = (UIButton *)sender; button1.enabled = NO; UIButton *button2 = (UIButton *)sender; button2.enabled = NO; } - (IBAction)button1:(UIButton *)sender { if(stop1==YES){button12.hidden = TRUE;} progress2.progress=progress2.progress-0.05; [self regulator]; count2=0; count1 = count1 +1; } - (IBAction)button2:(UIButton *)sender { [self regulator]; progress2.progress=progress2.progress+0.05; if(stop2==YES){button22.hidden = TRUE;} count1 =0; count2 = count2+1; } @end and my .h: #import <UIKit/UIKit.h> int count1; int count2; int score; void *regulator; void *newGame; void *conditioner; BOOL stop1; BOOL stop2; void *firstLaunch; @interface ViewController : UIViewController{ IBOutlet UILabel *scorenumber; IBOutlet UIImageView *stick; IBOutlet UILabel *label1; IBOutlet UIImageView *bg; IBOutlet UILabel *supply; IBOutlet UILabel *demand; IBOutlet UILabel *gameOver; IBOutlet UIProgressView *progress1; IBOutlet UIProgressView *progress2; IBOutlet UIButton *button12; IBOutlet UIButton *button22; NSTimer *timer1; NSTimer *timer2; NSTimer *timer3; NSTimer *timer4; } - (IBAction)button1:(UIButton *)sender; - (IBAction)button2:(UIButton *)sender; @end Thanks a lot for any help or information. I edited my question with the full code to give further explanation about the issue I'm facing. Regards. A: This is actually a coding issue; MVC basics. I believe you're missing some understanding of things, so I'll explain: IBAction - an action sent from the view to the controller. IBOutlet - meant for the controller to control the view. In your code you are getting the sender (which should be treated as read-only) and trying to modify it. I assume you need to define a new IBOutlet to represent the button, connect it in your storyboard, and then enable/disable the button inside this function. Also, a good practice would be to use "TRUE" and "FALSE" and not "YES/NO". Hope this helps.
{ "pile_set_name": "StackExchange" }
Q: gdal2tiles not referencing properly I'm trying to overlay satellite images on a web page; the images are properly georeferenced. When I use gdal2tiles with the defaults, it throws an error: gdal2tiles.py --zoom 0-5 --s_srs EPSG:4326 20173481715B02G16.tif dist/ ERROR 6: EPSG PCS/GCS code 900913 not found in EPSG support files. Is this a valid EPSG coordinate system? ERROR 6: No translation for an empty SRS to PROJ.4 format is known. Traceback (most recent call last): File "/usr/bin/gdal2tiles.py", line 2278, in <module> gdal2tiles.process() File "/usr/bin/gdal2tiles.py", line 482, in process self.open_input() File "/usr/bin/gdal2tiles.py", line 856, in open_input self.out_ds.SetMetadataItem('NODATA_VALUES','%i %i %i' % (self.in_nodata[0],self.in_nodata[1],self.in_nodata[2])) IndexError: list index out of range However, if I use a profile such as raster or geodetic, it works fine, but when I overlay that layer on an OpenStreetMap layer using Leaflet or OpenLayers, the result is a mess: the layer ends up in Antarctica. gdal2tiles.py --zoom 0-5 --profile geodetic --no-kml 20173481715B02G16.tif dist/ gdal2tiles.py --zoom 0-5 --profile raster --no-kml 20173481715B02G16.tif dist/ This example uses the raster profile; with geodetic the result is pretty similar. I did try to force the projection with the --s_srs EPSG:4326 parameter and to change the projection in OpenLayers, but nothing seems to work. What am I doing wrong, and how can I fix this? A: It turns out that I was using an old version of the script. With the new version all the problems are gone. Thanks @user30184.
{ "pile_set_name": "StackExchange" }
Q: How do I load only part of an image into a structure? I have an image (PNG) containing sprites that I want to split up and load into separate textures. What is the right way to do this? For example, I have a 600x300 PNG that I want to use as two 300x300 textures. At the moment I simply load it in full: glActiveTexture(GL_TEXTURE7); glGenTextures(1, &glTexture_); glBindTexture(GL_TEXTURE_2D, glTexture_); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pictureSize.width(), pictureSize.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, fromRGBA32.pixels()); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); glBindTexture(GL_TEXTURE_2D, 0); and split it up in the shader, but I want to use the ES 3 feature that lets several textures be combined so that the shader can then decide which one to sample (after all, it is better to pass one variable than two 2D coordinates defining the quad we need). A: You could copy regions of the image into separate 2D arrays and load them through glTexImage2D as usual, but that is uninteresting and slow. You can do this instead: GLint old_row_length; glGetIntegerv(GL_UNPACK_ROW_LENGTH, &old_row_length); glPixelStorei(GL_UNPACK_ROW_LENGTH, pictureSize.width()); // Call `glTexImage2D(...)` here as many times as needed. // Everything as usual, except that the size you pass is the size of the sub-region, // and the data pointer is the address of the first pixel of that region. glPixelStorei(GL_UNPACK_ROW_LENGTH, old_row_length); Normally OpenGL assumes that the rows of pixels in the array you pass are tightly packed, but with glPixelStorei(GL_UNPACK_ROW_LENGTH, ...) you can make it skip the extra bytes between rows.
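The addressing this answer relies on can be checked on the CPU side: with GL_UNPACK_ROW_LENGTH set to the full image width, GL reads each row of the sub-region at a stride of the full row, starting from the pointer to the region's first pixel. A small sketch of that same arithmetic in plain C++ (no GL calls; RGBA assumed, helper name illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Copy a (w x h) sub-region starting at (x0, y0) out of a tightly packed
// (imgW x imgH) RGBA image -- the same addressing GL performs when
// GL_UNPACK_ROW_LENGTH is set to imgW and the data pointer is offset to
// the first pixel of the region.
std::vector<unsigned char> extractRegion(const std::vector<unsigned char>& img,
                                         std::size_t imgW, std::size_t x0, std::size_t y0,
                                         std::size_t w, std::size_t h) {
    const std::size_t bpp = 4; // RGBA
    std::vector<unsigned char> out;
    out.reserve(w * h * bpp);
    for (std::size_t y = 0; y < h; ++y) {
        // Row start: full-width stride from the region's first pixel.
        const unsigned char* row = img.data() + ((y0 + y) * imgW + x0) * bpp;
        out.insert(out.end(), row, row + w * bpp);
    }
    return out;
}
```
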
Q: wcf iis private msmq I have a wcf service hosted in iis. I have many clients connected to it via basicHttpBinding. On the same server I also have other service that is doing the business logic. The business service puts the messages on local private queue. The wcf service in a separate thread waits for a change in the private queue and if it sees new message it takes and remembers the message. Everything works as expected. The business server puts the message on the private queue and the wcf service takes the message and serves the clients. The problem begins when I restart the server. Then the clients does not receive the data they expect. If I restart the iis everything goes to normal. Can someone tell me what could be the problem? Regards A: The problem might be that the iis service is starting before the business service, you could establish a dependency so that the business service will always start before the iis service.
Q: Returning specific rows or all rows result I developed this function to help me quickly grab information from the database about my user(s). My question is what would be the best wasy to have it return only a single row when I only need one user or the result when I need more than one. /** * get_users function. * * @access public * @param array $params (default: array();) * possible array keys: * user_ids - either an array of user ids or a single user id * user_status_ids - either an array of user status ids or a single user status id * user_role_ids - either an array of user role ids or a single user role id * * @return object/NULL * Should return user_id, username, CONCAT(users.first_name, users.last_name, email_address, lock_date, user_status_name, user_role_name */ public function get_users($params = array()) { $this->db->select('users.user_id'); $this->db->select('users.username'); $this->db->select('CONCAT(users.first_name, " ", users.last_name) AS full_name', FALSE); $this->db->select('users.password'); $this->db->select('users.password_hash'); $this->db->select('users.email_address'); $this->db->select('users.lock_date'); $this->db->select('users.user_status_id'); $this->db->select('users.user_role_id'); $this->db->select('user_statuses.user_status_name'); $this->db->select('user_roles.user_role_name'); $this->db->from('users'); $this->db->join('user_statuses', 'user_statuses.user_status_id = users.user_status_id'); $this->db->join('user_roles', 'user_roles.user_role_id = users.user_role_id'); //checking to see if any $params are attempting to be passed if (count($params) > 0) { //start title specific selection if (isset($params['user_ids'])) { //if you only have one integer. 
if (is_numeric($params['user_ids'])) { $this->db->where('users.user_id', $params['user_ids']); } else { if (is_array($params['user_ids'])) { $a = 0; foreach($params['user_ids'] as $user_id) { if ($a == 0) { $this->db->where('users.user_id', $user_id); } else { $this->db->or_where('users.user_id', $user_id); } $a++; } } } } //start title specific selection if (isset($params['usernames'])) { //if you only have one integer. if (is_string($params['usernames'])) { $this->db->where('users.username', $params['usernames']); } else { if (is_array($params['usernames'])) { $a = 0; foreach($params['usernames'] as $username) { if ($a == 0) { $this->db->where('users.usernames', $username); } else { $this->db->or_where('users.username', $username); } $a++; } } } } //start title specific selection if (isset($params['user_status_ids'])) { //if you only have one integer. if (is_numeric($params['user_status_ids'])) { $this->db->where('users.user_status_id', $params['user_status_ids']); } else { if (is_array($params['user_status_ids'])) { $a = 0; foreach($params['user_status_ids'] as $user_status_id) { if ($a == 0) { $this->db->where('users.user_status_id', $user_status_id); } else { $this->db->or_where('users.user_status_id', $user_status_id); } $a++; } } } } //start title specific selection if (isset($params['user_role_ids'])) { //if you only have one integer. if (is_numeric($params['user_role_ids'])) { $this->db->where('users.user_role_id', $params['user_role_ids']); } else { if (is_array($params['user_role_ids'])) { $a = 0; foreach($params['user_role_ids'] as $user_role_id) { if ($a == 0) { $this->db->where('users.user_role_id', $user_role_id); } else { $this->db->or_where('users.user_role_id', $user_role_id); } $a++; } } } } } $query = $this->db->get(); return $query->result(); } A: Better split your function into several functions doing small easy to correct understand queries. 
Create a function with one argument, for example: /** * Return users that match the given ids * @param array $uids the list of users to find * @return array - the users that matched the list * @throws DatabaseException on any error */ public function listUserById( array $uids ) { } /** * Return users that match the given ids and statuses * @param array $uids the list of users to find * @param array $status the list of user statuses to also match * @return array - the users that matched the list * @throws DatabaseException on any error */ public function listUserByIdAndStatus( array $uids, array $status ) { ///code } Or you can update the documentation to indicate that your function returns "lists" (always arrays).
Q: Dying with a SMILE In the One Piece manga and anime it is shown that when a Devil Fruit user dies, the ability leaves the body and materialises in a nearby fruit (like how the abilities of Smiley the axolotl and of Ace's Flare-Flare Fruit became new fruits). But what happens if you die with a SMILE in your body? Does it become a proper Devil Fruit, a new SMILE fruit, or does it never become a new fruit at all? A: SMILE fruits are man-made Devil Fruits. That said, they have not been discussed in much depth. We have seen previews of side effects in users. The episodes on Punk Hazard with Caesar Clown talk a little about SMILEs, but say nothing about what happens when a user dies. We see that one side effect may be that the user has a hard time transforming back into a human, and may also grow black horns (island of Zou, Zou arc). Because the fruits are man-made and have obvious flaws, my guess is that they would not be recreated when someone dies; otherwise there would be no need for multiple factories. That is only an opinion, though.
Q: zle backward-char not working as expected I'm writing a simple ZLE widget to quickly create subshells with <C-j>. Here's what I have: function zle_subshell { zle -U '$()' zle .backward-char } # register as widget zle -N zle_subshell # create kbd bindkey '^j' zle_subshell However, it appears that zle .backward-char isn't working. What makes matters more confusing is that if I modify the script to be: function zle_subshell { zle -U '$(' zle -U ')' zle .backward-char } I get output like )$(... It appears that the zle_subshell function is being evaluated in reverse. Are there some obvious gotchas with ZLE widgets that I'm unaware of? A: The zle -U usage is the special case. It seems that the behavior is intended: zle -U string ... This pushes the characters in the string onto the input stack of ZLE. After the widget currently executed finishes ZLE will behave as if the characters in the string were typed by the user. As ZLE uses a stack, if this option is used repeatedly the last string pushed onto the stack will be processed first. However, the characters in each string will be processed in the order in which they appear in the string. -- zshzle(1), ZLE BUILTINS, zle -U So zsh will behave as if ) and $( were typed after zle_subshell finishes. We could modify the (R)BUFFER to change the editor buffer directly, like this: function zle_subshell { RBUFFER='$()'"$RBUFFER" repeat 2 do zle .forward-char; done # ((CURSOR=CURSOR+2)) # We could do this instead. }
Q: Python TypeError: Object of type 'ndarray' is not JSON serializable I was having some problem when trying to extract data out from array in Python. I got this array: [array([ 349.11759027]), array([ 306.51289706]), array([ 387.37637654]), array([ 348.15424288]), array([ 386.3088823]), array([ 356.0820971]), array([ 446.37942998]), array([ 394.73726333]), array([ 434.91548947]), array([ 507.92351186]), array([ 435.48301334]), array([ 652.74389728])] I am trying to extract out the value, and then add to firebase. The expected output as such: Jan: 349.11759027 Feb: 306.51289706 Mar: 387.37637654 ... Dec: 652.74389728 And my code: month = 0 for t in p: month = month + 1 result = firebase.post('/profit', {month : t}) Any ideas? Thanks! A: The input you mention - [array([ 349.11759027]), array([ 306.51289706]), array([ 387.37637654]), array([ 348.15424288]), array([ 386.3088823]), array([ 356.0820971]), array([ 446.37942998]), array([ 394.73726333]), array([ 434.91548947]), array([ 507.92351186]), array([ 435.48301334]), array([ 652.74389728])]` is a list with each element being a numpy array. Changing your code to this should work - for t in predictions: month = month + 1 result = firebase.post('/forecastProfit', {formatMonth(month) : t[0]})
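The error in the question's title has a general fix worth knowing beyond Firebase: the json module cannot serialize numpy arrays or numpy scalars, so unwrap them into native Python numbers first with .item() (or .tolist() for whole arrays). A sketch, assuming the same list-of-one-element-arrays shape as in the question:

```python
import json

import numpy as np

p = [np.array([349.11759027]), np.array([306.51289706]), np.array([652.74389728])]

# np.ndarray is not JSON serializable, but plain Python floats are.
# arr.item() unwraps a one-element array into a native float;
# arr.tolist() would likewise convert any array into nested lists.
months = {month: arr.item() for month, arr in enumerate(p, start=1)}

payload = json.dumps(months)  # works: only native ints/floats remain
```

The same unwrapping is why t[0] works in the accepted answer: indexing a one-element array yields a scalar the client library can handle.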
Q: adding long key and arraylist value to hashmap I am getting db data this way userid, dates 1125,3-05-2013 1125,4-05-2013 1125,5-05-2013 200,23-05-2013 200,24-05-2013 I need to add these to hashmap as hashmap(userid,dates).. i.e: hashmap(long,arraylist(string of dates)) and send to front end. I mean long value in hashmap should be unique, which is an key to retrive list of all dates for a particular user id, so if I try hashmap.get(1125) == i should get list of all dates for user 1125 like 3-05-2013,4-05-2013,5-05-2013 then if I try hashmap.get(200) == i should get list of all dates for user 200 like 23-05-2013,24-05-2013 I tried this way , but I am getting all the dates for single userid like, users200 dates[3-05-2013, 4-05-2013, 5-05-2013, 23-05-2013, 24-05-2013] Here is my code, // TODO Auto-generated method stub List<User> myEmpls = new ArrayList<User>(); User User1 = new User(); User1.setEmpcode((long) 1125); User1.setDate("3-05-2013"); myEmpls.add(User1); User User2 = new User(); User2.setEmpcode((long) 1125); User2.setDate("4-05-2013"); myEmpls.add(User2); User User5 = new User(); User5.setEmpcode((long) 1125); User5.setDate("5-05-2013"); myEmpls.add(User5); User User3 = new User(); User3.setEmpcode((long) 200); User3.setDate("23-05-2013"); myEmpls.add(User3); User User4 = new User(); User4.setEmpcode((long) 200); User4.setDate("24-05-2013"); myEmpls.add(User4); long prevUser=0; int cnt=1; long users =0; ArrayList<ArrayList<String>> lists = new ArrayList<ArrayList<String>>(); HashMap<Long, ArrayList> finalmap = new HashMap<>(); ArrayList<String> dates = new ArrayList<>(); for(User time : myEmpls) { if(prevUser==time.getEmpcode()) { users = time.getEmpcode(); System.out.println("users"+users); dates.add(time.getDate()); } else { dates.add(time.getDate()); } System.out.println("dates"+dates); finalmap.put(users, lists); prevUser = time.getEmpcode(); cnt++; } can some one help me in this issue? 
A: Map<Long,ArrayList<String>> map=new HashMap<Long,ArrayList<String>>(); public void addToMap(long id,String blaa) { ArrayList<String> ar=map.get(id) if(ar==null) { ar=new ArrayList<String>(); map.put(id,ar); } ar.add(blaa); } is this what you want? just call this for each row you receive A: Do like that... it is more simple : class User { private Long id; private String date; public User(Long id, String date) { this.id = id; this.date = date; } public Long getId() { return this.id; } public String getDate() { return this.date; } } List<User> listUsers = new ArrayList<User>(); listUsers.add(new User(new Long(2500), "03/05/2013")); listUsers.add(new User(new Long(2500), "04/05/2013")); listUsers.add(new User(new Long(2500), "05/05/2013")); listUsers.add(new User(new Long(200), "10/05/2013")); listUsers.add(new User(new Long(200), "18/05/2013")); HashMap<Long, ArrayList> map = new HashMap<Long, ArrayList>(); for(User user : listUsers) { if(map.containsKey(user.getId())) { map.get(user.getId()).add(user.getDate()); } else { ArrayList<String> dates = new ArrayList<String>(); dates.add(user.getDate()); map.put(user.getId(), dates); } } //just to check System.out.println("number of keys : " + map.size()); System.out.println("number of dates for 2500 : " + map.get(new Long(2500)).size()); System.out.println("number of dates for 200 : " + map.get(new Long(200)).size());
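Since Java 8, the get-or-create step in the first answer collapses into a single call, Map.computeIfAbsent. A self-contained sketch using the same Long-to-dates shape as the question (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupDates {
    public static void main(String[] args) {
        Map<Long, List<String>> map = new HashMap<>();
        long[] ids = {1125L, 1125L, 1125L, 200L, 200L};
        String[] dates = {"3-05-2013", "4-05-2013", "5-05-2013", "23-05-2013", "24-05-2013"};

        for (int i = 0; i < ids.length; i++) {
            // Creates the list on first sight of the key, then always appends.
            map.computeIfAbsent(ids[i], k -> new ArrayList<>()).add(dates[i]);
        }

        System.out.println(map.get(1125L)); // [3-05-2013, 4-05-2013, 5-05-2013]
        System.out.println(map.get(200L));  // [23-05-2013, 24-05-2013]
    }
}
```

This is behaviorally the same as the if (ar == null) pattern above, just with the null check done by the map itself.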
Q: Can't do outdoor training for 150km and only on trainer. Can anyone give tips and training plan? I just found out that I cannot train outdoors for the next two months before the 150 km ride I was planning to attempt in the first week of September. I am now planning to buy the In'Ride 100 trainer from B'Twin, as that's what my budget allows. Will I still be able to prepare for the 150 km on the trainer? What are the advantages and disadvantages? Can anyone give me a training plan and tips? Thanks. A: It should be possible, but you are going to need a lot of dedication to sit for hours on the trainer. You'll also need to put a trainer-specific or cheap disposable tire on your bike, as trainers wear rear tires fast. The training you need depends on your level of fitness, the maximum distance you can currently ride, the nature of the 150 km event (flat or lots of hills) and how you want to ride it (target average speed, how long you plan to take overall, number of stops, etc.). It will be easy to search for and build a training plan with 30-60 minute sessions 3-4 times a week that will increase your overall fitness and strength substantially. This should be your initial priority. The difficulty will be replicating the duration and nature of the long ride. I would not try to replicate the distance of the ride on the trainer, but instead the duration and intensity. Estimate the time the event will take you and plan sessions on the trainer working up to about 75% of that, at your planned intensity. A: This is possibly a duplicate of Training for 150 km race in 2 months practicality and plan, so you can still follow the advice in those answers to good effect. There are some advantages with a trainer: you can control the effort you make and target specific goals with more structure. There are lots of training plans published online.
If you use a heart rate (HR) monitor you can set personalised HR zones and spend time in specific zones to train endurance, power, etc. separately as you wish. You can still do shorter, intense rides during the week and longer, steady rides at the weekend, building the distances up week by week as you would have outside. Tip: use a specific indoor trainer tyre on the trainer, because it will not be damaged by the extra heat. Don't then use that indoor tyre outside! Disadvantages: you need to buy extra kit, and indoor riding can get boring and hot (no airflow). Resistance to pedalling is fairly linear, so estimated speed or distance is not representative of riding outside, but you can see how well you ride in week one and build up time, speed and distance week by week.
Q: C++ Overload operator << with something other than ostream I'm trying to show off variety on a piece of course work and hoped to use the << operator to easily add variables to a list. Eg: UpdateList<string> test; test << "one" << "two" << "three"; My problem, is that EVERY SINGLE example of the << operator is to do with ostream. My current attempt is: template <class T> class UpdateList { ...ect... UpdateList<T>& operator <<(T &value) { return out; } } Does anyone know how I can achieve this, or is it actually not possible inside C++? A: You should use const T& value. Following fragment of code should work fine UpdateList<T>& operator << (const T& value) { // push to list return *this; } or UpdateList<T>& operator << (T value) { // push to list return *this; } in C++11 (thanks to rightfold)
Q: Ruby on Rails on Azure for a new project we have been given the option to use Microsoft's Azure for free. We are mainly working in rails. Are there any ways to make rails work on Azure and talk to MSSQL? A: You can use activerecord-sqlserver-adapter, from NougakuDo, for the rails environment on x64 Windows. NougakuDo can be found here, http://www.artonx.org/data/nougakudo/.
Q: Weird postfix logs At the mail.log this appears ~2 times per minute. I didn't care what they were, but now I want to find the reason. Oct 2 20:27:02 sa-pd postfix/cleanup[31255]: 13A342CE301C: message-id=<[email protected]> Oct 2 20:27:02 sa-pd postfix/bounce[30816]: E41782CE301B: sender non-delivery notification: 13A342CE301C Oct 2 20:27:02 sa-pd postfix/qmgr[6230]: 13A342CE301C: from=<>, size=2454, nrcpt=1 (queue active) Oct 2 20:27:02 sa-pd postfix/qmgr[6230]: E41782CE301B: removed Oct 2 20:27:02 sa-pd postfix/virtual[27222]: 13A342CE301C: to=<www-data@sapd@CENSORED(itsmymaindomain)>, relay=virtual, delay=0.18, delays=0.09/0/0/0.09, dsn=5.1.1, status=bounced (unknown user: "www-data@sapd@CENSORED(itsmymaindomain)") Oct 2 20:27:02 sa-pd postfix/qmgr[6230]: 13A342CE301C: removed Oct 2 20:28:02 sa-pd postfix/pickup[24018]: 18D432CE301B: uid=33 from=<www-data> Oct 2 20:28:02 sa-pd postfix/cleanup[31255]: 18D432CE301B: message-id=<20121002182802.18D432CE301B@CENSORED(itsmymaindomain)> Oct 2 20:28:02 sa-pd postfix/qmgr[6230]: 18D432CE301B: from=<www-data@sapd@CENSORED(itsmymaindomain)>, size=657, nrcpt=1 (queue active) Oct 2 20:28:02 sa-pd postfix/virtual[30817]: 18D432CE301B: to=<www-data@sapd@CENSORED(itsmymaindomain)>, orig_to=<www-data>, relay=virtual, delay=0.33, delays=0.26/0/0/0.07, dsn=5.1.1, status=bounced (unknown user: "www-data@sapd@CENSORED(itsmymaindomain)") Does someone have an idea? A: There's some program running as the www-data user that's sending an email every minute. If it's indeed trying to send email to www-data@[email protected], the email address is ill-formed, presumably mistyped in some configuration file. Look for that address somewhere in your configuration files (try grep -r www-data@sapd@ /etc ~www-data/).
Q: How do I clean up machines in a dying state? I was doing some experimentation with a test charm on Juju on AWS and managed to get my service in to a completely hung state. juju service returns the following. environment: amazon machines: "0": agent-state: started agent-version: 1.16.5 dns-name: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx instance-id: i-7c2f4c52 instance-state: running series: precise hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M "5": agent-state: down agent-state-info: (started) agent-version: 1.16.5 instance-id: i-9cb9cbb2 instance-state: missing series: precise hardware: arch=amd64 cpu-cores=1 cpu-power=100 mem=1740M root-disk=8192M services: metest: charm: local:precise/metest-0 exposed: false life: dying relations: cluster: - metest units: metest/0: agent-state: down agent-state-info: (started) agent-version: 1.16.5 life: dying machine: "5" open-ports: - 80/tcp public-address: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx (I've removed the DNS names just in case!). The instance-id for machine 5 has been terminated according to the AWS management console. None of "destroy-unit metest/0", "destroy-service metest" and "destroy-machine 5" clear the problem, and I can't redeploy the service with it in this state. juju resolve seems to have no effect either. Googling the issue, the only resolution I can find is to completely blow away my environment - which is not a great option. Is there any way I can clear the problem otherwise? What is the general method for debugging this kind of issue? The root cause of the problem: We use Chef for most of our orchestration and have found that occasional failure between Chef and AWS API leave orphaned instances around. Since all the instances we launch from Chef are tagged with a name, and these orphaned instances are unnamed, to avoid giving Amazon money needlessly we added code to our knife plugins to terminate unnamed instances. I'm sure you can see where this is going ... 
Is there any way to clean up machines once they are in this state (--force doesn't help) - and I would also like to know if there are any plans to allow instances to be named so they are identifiable in the EC2 management console (something like juju-- would be ideal)? Things I have tried: destroy-machine --force doesn't seem to clean things up. I don't get an error, but it appears like nothing changed in the status. A: You could try: juju destroy-machine --force 5 The --force option of destroy-machine is available since 1.16.5 and should remove the hung machine and all units on it. Then you should be able to redeploy your service, but if it says "service already exists" just deploy it with a different name. If all else fails, juju destroy-environment -e <name> is always an option. I'm not sure if it did support --force as well in 1.16.5.
Q: How to make a progress bar work in Zenity? I am writing a simple basic bash script (starting with #! /bin/bash, file format is .sh) and I am trying to make a progress bar work: #!/bin/bash echo "You are running in LXDE mode. Answer 'yes' or 'no' on the following question to continue (or not) in LXDE mode." zenity --question --text='Do you want to continue in LXDE mode?' --ok-label=Yes --cancel-label=No echo "Please enter your username and password to continue because the following command needs root privileges." zenity --password --username echo "Please enter today's date:" zenity --calendar --text="Please enter today's date:" echo "Please enter your name:" zenity --entry --text='Please enter your name on the text entry below:' echo "Analyzing data..." zenity --info --text='Now begin analyzing data. If it takes more than 40 seconds, click on "Cancel".' zenity --progress --title='Analyzing data...' --pulsate I have tried to make it move from 0% to 100%, and nothing happened. It was stuck at 0%. I have also tried to make it pulsate by using the --pulsate option, still at 0% doing nothing. Can anyone please help me? Any help would be appreciated. A: The Zenity docs have a small code snippet that should do exactly what you're looking for. #!/bin/sh ( echo "10" ; sleep 1 echo "# Updating mail logs" ; sleep 1 echo "20" ; sleep 1 echo "# Resetting cron jobs" ; sleep 1 echo "50" ; sleep 1 echo "This line will just be ignored" ; sleep 1 echo "75" ; sleep 1 echo "# Rebooting system" ; sleep 1 echo "100" ; sleep 1 ) | zenity --progress \ --title="Update System Logs" \ --text="Scanning mail logs..." \ --percentage=0 if [ "$?" = -1 ] ; then zenity --error \ --text="Update canceled." fi First try just copying in the code that's there and running it and confirming that it works as intended, then modify to add in your code where appropriate. 
If your progress bar is stuck at zero, make sure to by-pass any sections of the script that may be hanging and making you think that it's actually working! Edit: As stated in the Answer below, the reason it's not working is because zenity expects the progress to be echoed to it, like in the code sample.
Q: How to remove the arrow from the default wordpress permalink Probably a basic question but how do I remove the arrow in the top tab on a wordpress permalink? ie: A: The separator is the first argument of wp_title(). You can use something like <title><?php wp_title(''); ?></title> to remove the separator.
Q: Can one replace double chainrings with a singlespeed chainring on the original crankset/spider? I have an 80's Peugeot steel-framed racer that I want to convert to singlespeed/fixed. It has a standard square-taper BB with a double chainring chainset. I can see the chainrings are removable from the spider and was thinking I could just remove them and replace with a singlespeed ring to save having to buy a whole new crankset. Will this work or will I have problems with chainline? I have yet to buy a new rear wheel with singlespeed hub so it's quite hard to check if chainline will work. I presume it is possible to bolt the chainring to either side of the spider giving me a choice of two positions. Beyond this I'm not sure how I would adjust chainline if it's out. Any general help/tips regarding conversion appreciated. The bike has horizontal dropouts (not a track rear fork) so tensioning shouldn't be an issue. Here's a (not very good) picture: A: Indeed It should work. The other answers address chainline correction options for the rear end, but I know at least two for the front side: Usually two big rings in a triple, or the two rings of a double would be fixed by a single set of bolts. That means the chainring bolts are long enough to hold 2 chainrings. When going to hold only one, it may happen that the bolts are too long, so you may need spacers to fill up the gap. These spacers can be used to fine tune the chainline in the front side. Also, if the bottom bracket is the old cup and bearing style, the cups can be adjusted to provide a few millimeters of adjustment. Shimano sealed BB's can be shifted side to side by adding Bottom Bracket spacers, which should be really cheap. I will mention a third one, although the OP clearly states that wants a lean budget approach, chainline can also be altered by swapping the bottom bracket cartridge or axle. (Can be done low budget if using secondhand parts). A: Yes it will work. 
Should you want a different-sized chainring, you'll just need to find one with the same BCD (bolt circle diameter), and you'll need shorter chainring bolts and/or chainring bolt spacers. In terms of chainline, you have a multitude of options. The easiest is to buy a cassette-style wheel and a singlespeed conversion kit, which consists of a cog and a bunch of spacers. By adjusting how many spacers you use to the inside vs. the outside of the cog, you should be able to dial in your chainline with no further adjustments. The cheapest but trickiest and most time-consuming option is to re-space your existing rear wheel's axle and re-dish the wheel accordingly, then slap a singlespeed freewheel on in place of the multi-speed one that's on there now. Finally, if you want a nice clean look and the extra security of a bolt-on axle, you can buy a rear wheel with a singlespeed or flip-flop hub and adjust your chainline with a different-length bottom bracket, assuming the chainline isn't good enough as-is when you change the wheel.
Q: A probability sum that I can't seem to solve Two friends, Adam and Eve, are throwing rocks at a mountain. Each rock thrown has probability $p$ of hitting the mountain. If both of them throw rocks at the same time and each throw is independent of the others, what are the chances that Adam hits the mountain $k$ times by the time Eve hits the mountain for the first time? My idea: Let $B_n$ be the event that Eve hits the mountain for the first time on try number $n$. Let $A_n$ be the event that Adam hits the mountain $k$ times in $n$ tries. $A_n \sim B(n,p)$, so $P(A_n) = {n \choose k}p^k(1-p)^{n-k}$; $B_n \sim G(p)$, so $P(B_n) = (1-p)^{n-1}p$. $P(A_n \cap B_n)=P(A_n)P(B_n)$ (since they're independent). Now, I am looking to solve this summation: $$\sum_{n=1}^{\infty}{n \choose k}p^k(1-p)^{n-k}(1-p)^{n-1}p=\sum_{n=1}^{\infty}{n \choose k}p^{k+1}(1-p)^{2n}(1-p)^{-k-1}=\left(\frac{p}{1-p}\right)^{k+1}\sum_{n=1}^{\infty}{n \choose k}(1-p)^{2n}$$ The problem is I have no idea how to evaluate the sum. It looks like I could attack it via the power series $x^n$, but then I would need to differentiate $k$ times, and $k$ is a fixed number while the series index is $n$.
I've verified this by simulation (assuming I've properly understood the rules), incidentally. Feel free to ask about details.
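The closed form can also be checked deterministically, without random simulation (a sketch, not part of the original answer): iterate the per-turn transition probabilities exactly, tracking the mass over "Adam has j hits and Eve hasn't hit yet", and accumulate the probability that Adam reaches his $k$-th hit strictly before Eve's first.

```python
p, k = 0.3, 4

# Per-turn outcomes while the game is alive:
p_adam_only = p * (1 - p)   # Adam hits, Eve misses -> j increases
p_both_miss = (1 - p) ** 2  # state unchanged
# (any turn where Eve hits ends the game as a loss in the strict version)

alive = {0: 1.0}  # probability mass per "Adam hits so far" state
win = 0.0
for _ in range(10_000):  # alive mass shrinks by (1 - p) per turn, so this converges
    nxt = {}
    for j, mass in alive.items():
        if j + 1 == k:
            win += mass * p_adam_only  # k-th hit before Eve's first hit
        else:
            nxt[j + 1] = nxt.get(j + 1, 0.0) + mass * p_adam_only
        nxt[j] = nxt.get(j, 0.0) + mass * p_both_miss
    alive = nxt

closed_form = ((1 - p) / (2 - p)) ** k
```

The two numbers agree to floating-point precision, matching the "ignore both-miss turns" argument above.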
Q: Replace Backgroundworkerclass with worker thread I have used Backgroundworkerclass to update my user interface element (Label). Please! Check my code below. It is working fine. But I am trying to replace Backgroundworkerclass and use simple worker thread to achieve the same goal. How can I achieve that? Any suggestion will be appreciated. Thanks public void backgroundWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) { label1.Text = e.ProgressPercentage.ToString(); } private void button1_Click(object sender, EventArgs e) { if (!backgroundWorker1.IsBusy) backgroundWorker1.RunWorkerAsync(); else label1.Text = "Busy Processing"; } private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e) { for (int i = 0; i <= 100; i++) { Thread.Sleep(100); backgroundWorker1.ReportProgress(i); if (backgroundWorker1.CancellationPending) { e.Cancel = true; backgroundWorker1.ReportProgress(0); return; } } } private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e) { label1.Text = e.ProgressPercentage.ToString(); } A: delegate void DelegateDoWork(int number); private void button1_Click(object sender, EventArgs e) { new Thread(DoWork).Start(); } public void ProgressBar(int i) { if (label1.InvokeRequired) { var d = new DelegateDoWork(ProgressBar); this.Invoke(d, i); } else label1.Text = i.ToString(); } public void DoWork() { for (int i = 0; i <= 100; i++) { Thread.Sleep(100); ProgressBar(i); } }
Q: Routing to a specific page within a lazy-loaded module in angular 2+ I have the following in my main app router: { path: 'users', loadChildren: 'app/modules/users/users.module#UsersModule', canLoad: [AuthGuard] } When the user goes to http://localhost:4200/users/1234 to see their profile, I try to save the full url (including the user ID above) so that I would route back to that page once they're logged in. The problem is, the Route parameter in the canLoad function only has a path field that does not include the user ID above, only the path users. Is there a way I can achieve this? EDIT AFTER FIRST COMMENT The users routing module does have a canActivate guard, but this never gets called except from the login component, since the first canLoad returned false and routed the caller to the login component previously. EDIT AFTER FIRST ANSWER canLoad(route: Route) { if (!AuthService.isAuthenticated()) { this.router.events.takeWhile(event => !(event instanceof NavigationCancel)) .subscribe(event => { console.log(event); if (event instanceof NavigationCancel) { AuthService.setRedirectUrl(event.url); } }); return false; } return true; } So I tried the above, but I think I'm doing something wrong still, since console never logs... how do I unsubscribe or stop receiving NavigationCancel events after I set store the redirect URL? A: Since canLoad is called during the construction of the router state it doesn't get activated route and the whole router state. It gets only the route. You can inject a router into a guard, subscribe to NavigationCancel event where you can access the URL. how do I unsubscribe or stop receiving NavigationCancel events after I set store the redirect URL? 
Since router.events is a Subject, you can unsubscribe from it like this. Note that the takeWhile(event => !(event instanceof NavigationCancel)) in your attempt completes the stream before the NavigationCancel event is ever delivered to the subscriber (which is why your console.log never fires), so subscribe directly and unsubscribe once you have handled the event: // get the subscription reference const subscription = this.router.events .subscribe(event => { console.log(event); if (event instanceof NavigationCancel) { AuthService.setRedirectUrl(event.url); // use it to unsubscribe subscription.unsubscribe(); <---------------------------- } });
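The subscribe-then-unsubscribe-on-first-match pattern itself is independent of Angular and RxJS. A minimal sketch with a hand-rolled subject (all names here are illustrative, not Angular APIs):

```javascript
// Minimal subject: subscribers receive every emitted event until they unsubscribe.
class Subject {
  constructor() { this.listeners = new Set(); }
  subscribe(fn) {
    this.listeners.add(fn);
    return { unsubscribe: () => this.listeners.delete(fn) };
  }
  next(event) { [...this.listeners].forEach(fn => fn(event)); }
}

// Stand-in for Angular's NavigationCancel router event.
class NavigationCancel {
  constructor(url) { this.url = url; }
}

const events = new Subject();
let redirectUrl = null;

// Subscribe, then unsubscribe once the first NavigationCancel is handled.
const subscription = events.subscribe(event => {
  if (event instanceof NavigationCancel) {
    redirectUrl = event.url;
    subscription.unsubscribe();
  }
});

events.next(new NavigationCancel('/users/1234')); // captured
events.next(new NavigationCancel('/other'));      // ignored: already unsubscribed
```

After the first matching event, the listener has removed itself, so later events leave the stored URL untouched.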
{ "pile_set_name": "StackExchange" }
Q: Uploading products from excel file I was wondering what is the best way of importing products into database. Product names have unique sku's. The Excel file may contain existing sku's. One way of doing import is: Read record from excel Check sku for existence in database table if already exists, update it or if not found, insert it Second way: 1. Read record from excel 2. Check sku for existence in database table if already exists, delete it (will surely change the create_data, auto_id) or if not found, insert it If I upload say 1000 records, then there will be 1000 x 2 (update/delete + insert) queries fired on database. Is there any other efficient solution? Thanks A: The most performant way would be option 1. As Joe R already mentioned option 2 causes unnecessary database calls. You could make it an option though, in a lot of cases deleting all products even is a probability. You could have a DELETE or UPDATE option available for the one uploading the data. For example deleting would be favourable if you have a lot of excess data in the database that you want to remove.
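A third option worth weighing: let the database do the update-or-insert in a single statement keyed on the unique SKU, so there is no separate existence check per record. A sketch of the idea using Python's built-in sqlite3 (table and column names are invented for illustration; SQLite needs version 3.24+ for this syntax, and MySQL's INSERT ... ON DUPLICATE KEY UPDATE is the equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE products (
        sku   TEXT PRIMARY KEY,
        name  TEXT NOT NULL,
        price REAL NOT NULL
    )
""")

UPSERT = (
    "INSERT INTO products (sku, name, price) VALUES (?, ?, ?) "
    "ON CONFLICT(sku) DO UPDATE SET name = excluded.name, price = excluded.price"
)

# Bulk-load rows as read from the spreadsheet: new SKUs are inserted.
rows = [("A-1", "Widget", 9.99), ("A-2", "Gadget", 19.99)]
conn.executemany(UPSERT, rows)

# Re-importing an existing SKU updates it in place instead of failing.
conn.execute(UPSERT, ("A-1", "Widget v2", 12.50))
price = conn.execute("SELECT price FROM products WHERE sku = 'A-1'").fetchone()[0]
```

One round trip per row (or per batch with executemany), and the auto id / create date of existing rows is preserved, which is the advantage of update-over-delete noted above.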
{ "pile_set_name": "StackExchange" }
Q: Does ng-app wait for document.ready? I've got a small ng-app/ng-controller block inside a large HTML file. I thought ng-app would be triggered after its div is loaded but in my case it waits for document.ready event. When does angular instantiate ng-app? A: No, it waits for DOM content to load. See the diagram from this documentation https://docs.angularjs.org/guide/bootstrap. You can wait for dom ready with ready(). angular.module("Foo") .controller("Bar", function () { angular.element(document).ready(function () { }); });
{ "pile_set_name": "StackExchange" }
Q: Initializing a Lazy list in Hibernate using a Criteria with a parameterized SetAlias I have the following code: criterio = session.createCriteria(Tecnico.class) .add(Restrictions.eq("expediente", usuario)) .add(Restrictions.eq("password", contrasenia)) .createAlias("empresa", "empresa", JoinType.LEFT_OUTER_JOIN) .setFetchMode("empresa", FetchMode.JOIN) .createAlias("grupos", "grupos", JoinType.LEFT_OUTER_JOIN, Restrictions.eq("grupos.estatus", true)) .setFetchMode("grupos", FetchMode.JOIN) When I retrieve the value of the grupos list, it has not been initialized, while empresa has. On the other hand, if I remove the condition Restrictions.eq("grupos.estatus", true), it does initialize the grupos list. What can I do to initialize the list without having to remove the restriction from the clause? Tecnico class: @Entity @Table(name = "CAT_TECNICOS", schema = "SATEC", uniqueConstraints = @UniqueConstraint(columnNames = "EXPEDIENTE")) public class Tecnico implements java.io.Serializable { private static final long serialVersionUID = 325857236592549406L; @Id @Column(name = "IDTECNICO", unique = true, nullable = false, precision = 8, scale = 0) private int idtecnico; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDEMPRESA", nullable = false) private Empresa empresa; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDESTATUS_OCUPACION", nullable = false) private EstatusOcupacion estatusOcupacion; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDHABILIDAD", nullable = false) private Habilidad habilidad; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDHORARIO", nullable = false) private Horario horario; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDPERFIL", nullable = false) private Perfil perfil; @Column(name = "EXPEDIENTE", unique = true, nullable = false, length = 16) private String expediente; @Column(name = "PASSWORD", nullable = false, length = 8) private String password; @Column(name = "NOMBRE", nullable = false, length = 200) 
private String nombre; @Column(name = "ACTIVO", nullable = false, length = 1) private char activo; @Column(name = "AGENDA", nullable = false, precision = 8, scale = 0) private int agenda; @SuppressWarnings({ "unchecked", "rawtypes" }) @OneToMany(fetch = FetchType.LAZY, mappedBy = "tecnico") private Set<EstadoAsignacion> estadoAsignaciones = new HashSet(0); @SuppressWarnings({ "unchecked", "rawtypes" }) @OneToMany(fetch = FetchType.LAZY, mappedBy = "tecnico") private Set<InventarioDispositivo> inventarioDispositivos = new HashSet(0); @SuppressWarnings({ "unchecked", "rawtypes" }) @OneToMany(fetch = FetchType.LAZY, mappedBy = "tecnico") private Set<Biometrico> biometricos = new HashSet(0); @SuppressWarnings({ "unchecked", "rawtypes" }) @OneToMany(fetch = FetchType.LAZY, mappedBy = "tecnico") private Set<Grupo> grupos = new HashSet(0); Grupo class: @Entity @Table(name = "TB_GRUPOS", schema = "SATEC", uniqueConstraints = { @UniqueConstraint(columnNames = "IDTECNICO"), @UniqueConstraint(columnNames = "IDGRUPO")} ) public class Grupo implements java.io.Serializable { private static final long serialVersionUID = 8241096129146860864L; @EmbeddedId @AttributeOverrides({ @AttributeOverride(name = "idGrupo", column = @Column(name = "IDGRUPO", unique = true, nullable = false, precision = 22, scale = 0)), @AttributeOverride(name = "idtecnico", column = @Column(name = "IDTECNICO", unique = true, nullable = false, precision = 22, scale = 0))}) private GrupoPk grupoPk; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDTECNICO", unique = true, nullable = false, insertable = false, updatable = false) private Tecnico tecnico; @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "IDGRUPO", unique = true, nullable = false, insertable = false, updatable = false) private GrupoMensaje grupoMensaje; @Temporal(TemporalType.DATE) @Column(name = "FEC_ACTUALIZACION", nullable = false, length = 7) private Date fecActualizacion; @Column(name = "ESTATUS", nullable = false, length = 1) 
private boolean estatus; A: Use the method: Hibernate.initialize(Object obj); to initialize the object you need while it is lazy. You can also change the fetch type from "LAZY" to "EAGER". That should load it into memory, but it is not the most efficient option, because it would load the collection every time you fetch a Tecnico object. As a last option, you could run a query for each Tecnico that returns its collection of groups. Select t.grupos from Tecnico t where t.id =?1; Sources: Hibernate: best practice to pull all lazy collections
{ "pile_set_name": "StackExchange" }
Q: Why does PowerShell not sort strings correctly I'm trying to sort a list of strings, but the result is not sorted at all: PS C:\Windows\system32> "bbb", "aaa", "ccc" | sort bbb ccc aaa What am I doing wrong? This is actually taken directly from "Windows PowerShell In Action" where it works. A: Make sure you are using the right cmdlet: "bbb", "aaa", "ccc" | Sort-Object Works for me: aaa bbb ccc Then verify your alias: PS> Get-Alias sort CommandType Name Version Source ----------- ---- ------- ------ Alias sort -> Sort-Object
{ "pile_set_name": "StackExchange" }
Q: Jersey Test Framework I have a non-Maven project. I would like to test my Jersey REST services using the Jersey Test Framework. The Jersey docs only cover Maven for the Jersey Test Framework. Is it possible to add a jar or library to the project to use this framework? A: Using the Jersey client instead of the Jersey Test Framework has two advantages: It's well documented and only needs the jersey-client JAR The written code is standard and can be used by the Java clients of your services
{ "pile_set_name": "StackExchange" }
Q: ESX servers in a DMZ I have two ESX 3.5 servers in a DMZ. I can access these servers on any port from my LAN via a VPN. Servers in the DMZ are unable to initiate connections back to the LAN, for obvious reasons. I have a vCenter server on my LAN and can initially connect to the ESX servers fine. However, the ESX servers then try to send a heartbeat back to the vCenter server on UDP/902 - obviously this will not get back to the vCenter server, which then marks the ESX servers as not responding and disconnects. There are two broad solutions I can think of: 1) Try to tell vCenter to ignore not getting heartbeats. The best I can do here is delay the disconnect by 3 mins. 2) Try some clever network solution. However, again I am at a loss. Note: The vCenter server is on a LAN, and cannot be given a public IP, so firewall rules back will not work. Also, I cannot set up a VPN from the DMZ to the LAN. I am adding the following explanation that I added to the comments OK, maybe this is the bit that I am not explaining well. The DMZ is on a remote site, an entirely independent network (network 1). The vCenter server is on our office LAN (network 2). Network 2 can connect to any machine on any port on network 1. But network 1 is not allowed to initiate a connection to network 2. Any traffic destined to network 2 from network 1 gets dropped by the firewall as it is traffic to a non-routable address. The only solution I can think of is setting up a VPN from network 1 to network 2, but this is not acceptable. So, any clever folk out there have any ideas? J A: James, why not configure the ESX hosts at the remote location so that their guests are in a DMZ, but the ESX service console etc are in a back-zone subnet that you can establish a VPN with? That way, your hosts are isolated from web connectivity (a good thing) but your guests can continue to operate front-facing. As for the remote site problem... 
you really need a site-to-site VPN link going on here, between your internal LAN and the remote (non-DMZ if possible) subnet.
{ "pile_set_name": "StackExchange" }
Q: Phase shifting the common mode signal from an instrumentation amplifier I was looking at the AD620 instrumentation amplifier data sheet. I'm looking at the ECG practical application on there (circuit shown below). Pins 1 and 8 of the instrumentation amp go through a resistor circuit that allows us to use the common mode signal. The voltage at the node between R2 and R3 is fed into some sort of phase shifter/amplifier. This phase shifter is supposed to shift the common mode signal out of phase so it can be used as a feedback into the body. My question is: How does this phase shifter work? What does each resistor do, and what is the point of the capacitor? I tried simulating this in pspice, but changing the value of the capacitor didn't seem to affect the phase shift at all. So in short, how does that phase shifter/amplifier work? What components control what exactly? Thanks in advance! A: You didn't show the all-important value of C1, but it together with R1 looks more like a low pass rolloff than a deliberate phase shift. Without knowing C1, we can't tell what the rolloff frequency is, and therefore whether it produces significant phase shift over the valid frequency range or is just there to cut down high frequency drive to the right leg. The basic principle is to drive the right leg so as to null out the common mode variations in the signal across the heart. The body is going to pick up whatever the ambient electric fields are, particularly the power line "hum". The signal to noise ratio of skin voltages is horrendously bad. The common mode power line hum can easily be a few orders of magnitude higher than the signal you are trying to measure. An ideal inamp will eliminate that, but nobody has made one of those yet. They all have some real upper limit on common mode rejection and common mode range. By trying to null out the common mode part of the signal, it helps the inamp do its job better. 
If nothing else, it cuts down on the common mode range that the inamp must be able to handle, even if common mode rejection ratio isn't the main issue.
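As a rough illustration of the answer's point: the R1-C1 low pass has corner frequency f_c = 1/(2πR1C1), and its phase lag at mains frequency is tiny unless that corner sits near 50/60 Hz. The component values below are assumed examples, not taken from the AD620 datasheet:

```python
import math

def corner_frequency(r_ohms: float, c_farads: float) -> float:
    """First-order RC low-pass corner frequency in Hz: 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def phase_shift_deg(f_hz: float, r_ohms: float, c_farads: float) -> float:
    """Phase lag of the RC low-pass at frequency f, in degrees (negative = lag)."""
    return -math.degrees(math.atan(2.0 * math.pi * f_hz * r_ohms * c_farads))

# Assumed example values: R1 = 10 kOhm, C1 = 1 nF.
fc = corner_frequency(10e3, 1e-9)              # ~15.9 kHz corner
lag_at_50hz = phase_shift_deg(50, 10e3, 1e-9)  # ~-0.18 deg: negligible at mains hum
```

With the corner that far above the signal band, the network contributes essentially no phase shift where it matters, which is consistent with the simulation observation that changing C barely moved the phase.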
{ "pile_set_name": "StackExchange" }
Q: unreported exception handling I tried a simple piece of code where the user has to input a number. If the user inputs a char, it will produce a NumberFormatException. That works fine. Now when I remove the try/catch block it shows an error. What is the meaning of the error? The code and error are as follows: import java.io.*; class execmain { public static void main(String[] args) { //try //{ int a; BufferedReader br=new BufferedReader(new InputStreamReader(System.in)); a=Integer.parseInt(br.readLine());// ---------error-unreported exception must be caught/declared to be thrown System.out.println(a); //} //catch(IOException e) //{ //System.out.println(e.getMessage()); //} } } Why does this error occur? A: The meaning of the error is that your application has not caught the IOException that might be thrown when you try to read characters from the input stream. An IOException is a checked exception, and Java insists that checked exceptions must either be caught or declared in the signature of the enclosing method. Either put the try ... catch stuff back, or change the signature of the main method by adding throws IOException.
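A minimal standalone sketch of the second fix - declaring the checked exception rather than catching it (class and method names here are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

class ExecMain {
    // readLine() can throw IOException, which is a checked exception:
    // every method on the call path must either catch it or declare it.
    static int readInt(Reader in) throws IOException {
        BufferedReader br = new BufferedReader(in);
        return Integer.parseInt(br.readLine());
    }

    // Declaring "throws IOException" is the alternative to try/catch;
    // the compiler error from the question disappears.
    public static void main(String[] args) throws IOException {
        // Fixed input here so the example runs without console interaction;
        // System.in would work the same way.
        int a = readInt(new StringReader("42\n"));
        System.out.println(a);
    }
}
```

Note the trade-off: with throws, an actual IOException now terminates main with a stack trace instead of printing a friendly message, which is why catching it is usually preferred in user-facing code.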
{ "pile_set_name": "StackExchange" }
Q: Cache list size seems limited to 128 items It looks like the CD object cache in our implementation is only allowed to contain 128 items. The logging below suggests that as soon as the broker tries to cache #129, an old item is removed before the new item is added. This is confirmed by other things we see: if we start the app and request page 1, the page and all its dependencies (about 80 items in total) are cached. If we then request page 2, that is also cached but page 1 is no longer in the cache. Questions: Is it really the case that the number of items in the cache is by default limited to 128 items? If so, how can we override this number? The logging below is taken from our web application server. 16:42:41.832 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUModel: listSize = 129 memSize = 465310 16:42:41.832 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: maximum list size exceeded 16:42:41.832 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - Removing node with key 319:/system/assets/js/Lib/jquery.min.js 16:42:41.832 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: reduced list size to 128 16:42:41.832 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUModel: listSize = 129 memSize = 476518 16:42:41.833 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: maximum list size exceeded 16:42:41.833 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - Removing node with key 319:173730:true 16:42:41.833 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: reduced list size to 128 16:42:41.835 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUModel: listSize = 129 memSize = 476257 16:42:41.835 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: maximum list size exceeded 16:42:41.835 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - Removing node with key 319:/system/assets/js/default.min.js 
16:42:41.836 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: reduced list size to 128 16:42:41.836 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUModel: listSize = 129 memSize = 476188 16:42:41.836 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: maximum list size exceeded 16:42:41.836 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - Removing node with key 319:174439:true 16:42:41.836 [WebContainer : 1] DEBUG com.tridion.cache.LRUPolicy - LRUPolicy.processPut: reduced list size to 128 16:42:41.837 [WebContainer : 1] DEBUG com.tridion.cache.CacheController - Adding a dependency from Object [319:172694] in Region [/com_tridion_linking_ComponentLinkInfo] to Object [false:false:319:-1:172694:-1:null:] in Region [/com.tridion.linking.ComponentLink] A: The default maximum is indeed 128 queue events. You can change this by specifying a different value for Queuesize attribute in the element RemoteSynchronization in the cd_storage_conf.xml file. Be generous with the queue size. Especially in a multilingual implementation. This number can easily exceed 1000000 and beyond. To figure out a good Queuesize number set an unrealistic high Queuesize value, then run in DEBUG mode for a while (e.g. a day or two), or better yet run a crawler to crawl the entire site, and then check the log. The listSize grows quick when there is high traffic on the website and when no publishing activities are going on (no cache invalidations). Check the latest listSize's. Add to it 10000. Then you have a decent Queuesize. A: Quirijn, documentation say that the size can be configured when defining the policy itself http://docs.sdl.com/LiveContent/content/en-US/SDL%20Web-v1/GUID-D10BB04E-192D-432D-A00D-01D74182A260 Example: <Policy Type="LRU" Class="com.tridion.cache.LRUPolicy"> <Param Name="Size" Value="128" /> <Param Name="MemSize" Value="32M" /> </Policy>
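The put/evict pattern visible in the DEBUG log - listSize reaches 129, the least-recently-used node is removed, and the list drops back to 128 - can be sketched as a size-capped LRU. This is a toy model for illustration, not Tridion's implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache with a maximum item count, mimicking the
    'maximum list size exceeded -> remove LRU node' log pattern."""

    def __init__(self, max_items):
        self.max_items = max_items
        self._items = OrderedDict()   # insertion/access order = recency order

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        while len(self._items) > self.max_items:   # listSize 129 -> evict back to 128
            self._items.popitem(last=False)        # drop least-recently-used node

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)               # mark as recently used
        return self._items[key]

cache = LRUCache(max_items=128)
for i in range(129):
    cache.put(f"319:item-{i}", i)
# item-0 was least recently used, so the 129th put evicted it.
```

This also explains the page-1/page-2 observation in the question: with ~80 dependencies per page, caching page 2 evicts most of page 1, so the real fix is raising the configured cap, not the eviction policy.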
{ "pile_set_name": "StackExchange" }
Q: Directory list program not opening correctly Here is the code (for the whole project) File: directoryReader.cpp // // directoryReader.cpp // appBetaServer // // Created by Ethan Laur on 3/21/14. // Copyright (c) 2014 Ethan Laur. All rights reserved. // #include "directoryReader.h" #include <stdlib.h> #include <syslog.h> #include <string.h> #include <stdio.h> directoryReader::directoryReader() { dir = NULL; syslog(LOG_NOTICE, "directoryReader spawned with no args!"); } directoryReader::directoryReader(char *d) { dir = NULL; setFileMode(S_IFREG); setDirectory(d); } directoryReader::directoryReader(char *d, mode_t m) { dir = NULL; setFileMode(m); setDirectory(d); } void directoryReader::setDirectory(char * newDir) { strcpy(dirName, newDir); if (dir != NULL) closedir(dir); dir = NULL; reset(); } void directoryReader::setFileMode(mode_t mode) { fileMode = mode; } void directoryReader::reset() { if (dir != NULL) closedir(dir); dir = opendir(dirName); } char * directoryReader::getNext() { struct stat st; char buf[1024]; if (dir == NULL) { printf("Error opening %s! Will try again\n", dirName); setDirectory(strdup(dirName)); if (dir == NULL) { printf("\tCould not! FAILED!\n"); return NULL; } } while ((ent = readdir(dir)) != NULL) { sprintf(buf, "%s/%s", dirName, ent->d_name); if (strstr(buf, "/.") == buf + (strlen(buf) - 1)) continue; if (strstr(buf, "/..") == buf + (strlen(buf) - 2)) continue; stat(buf, &st); if (st.st_mode & fileMode) return strdup(buf); } return NULL; } File: directoryReader.h // // directoryReader.h // appBetaServer // // Created by Ethan Laur on 3/21/14. // Copyright (c) 2014 Ethan Laur. All rights reserved. 
// #ifndef __appBetaServer__directoryReader__ #define __appBetaServer__directoryReader__ #include "dirent.h" #include <sys/stat.h> class directoryReader { protected: DIR *dir; struct dirent *ent; mode_t fileMode; char dirName[1024]; public: directoryReader(); directoryReader(char *); directoryReader(char *, mode_t); void setDirectory(char *); void setFileMode(mode_t); void reset(); char *getNext(); }; #endif /* defined(__appBetaServer__directoryReader__) */ File: main.cpp // // main.cpp // fdup // // Created by Ethan Laur on 5/9/14. // Copyright (c) 2014 Ethan Laur. All rights reserved. // #include <stdio.h> #include <sys/stat.h> #include <string.h> #include <stdlib.h> #include "directoryReader.h" char goodpath(char *p) { if (*(p + strlen(p) - 1) == '.') return 0; return 1; } void p_getfiles(char *basepath, FILE *f, char *filename) //filename is to ignore { directoryReader *dirr = new directoryReader(basepath, S_IFREG | S_IFDIR); char *tmppath = NULL; struct stat st; while ((tmppath = dirr->getNext()) != NULL) { if (strcmp(tmppath, filename) == 0) continue; if (goodpath(tmppath)) { stat(tmppath, &st); if (S_ISDIR(st.st_mode)) { if (strcmp(tmppath, filename) == 0) printf("uh oh...\n"); p_getfiles(tmppath, f, filename); } else if (S_ISREG(st.st_mode)); //fprintf(f, "%s\n", tmppath); } free(tmppath); } delete dirr; } void getfiles(char *basepath, char *filename) { FILE *f;// = fopen(filename, "w"); p_getfiles(basepath, f, filename); //fflush(f); //fclose(f); } int main(int argc, char * * argv) { getfiles(argv[1], argv[2]); } The issue is in directoryReader::getNext() or in p_getfiles(char *, FILE *, char *). What happens, is this (output); Error opening //.DocumentRevisions-V100/PerUID/501/83! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/84! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/85! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/86! 
Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/87! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/88! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/89! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8a! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8b! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8c! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8d! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8e! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/8f! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/90! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/91! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/92! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/93! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/94! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/95! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/96! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/97! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/98! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/99! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9a! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9b! Will try again Could not! FAILED! 
Error opening //.DocumentRevisions-V100/PerUID/501/9c! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9d! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9e! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/9f! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a0! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a1! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a2! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a4! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a5! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a7! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/a9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/aa! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b0! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b1! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b2! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b5! Will try again Could not! FAILED! 
Error opening //.DocumentRevisions-V100/PerUID/501/b6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/b9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ba! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/bb! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/bc! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/bd! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/be! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/bf! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c1! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c2! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c4! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c5! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c7! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/c9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ca! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/cb! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/cc! Will try again Could not! FAILED! 
Error opening //.DocumentRevisions-V100/PerUID/501/cd! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ce! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/cf! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d0! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d1! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d2! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d4! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d5! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d7! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/d9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/da! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/db! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/dc! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/dd! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/de! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e0! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e2! Will try again Could not! FAILED! 
Error opening //.DocumentRevisions-V100/PerUID/501/e3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e4! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e5! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e7! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/e9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ea! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/eb! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ec! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ed! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ee! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ef! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f0! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f1! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f2! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f3! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f4! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f5! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f6! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f7! Will try again Could not! FAILED! 
Error opening //.DocumentRevisions-V100/PerUID/501/f8! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/f9! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/fc! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/fd! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/fe! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/PerUID/501/ff! Will try again Could not! FAILED! Error opening //.DocumentRevisions-V100/staging! Will try again Could not! FAILED! Error opening //.fseventsd! Will try again Could not! FAILED! Error opening //.Spotlight-V100! Will try again Could not! FAILED! Error opening //.Trashes! Will try again Could not! FAILED! Error opening //.vol! Will try again Could not! FAILED! Error opening //Applications! Will try again Could not! FAILED! Error opening //bin! Will try again Could not! FAILED! Error opening //cores! Will try again Could not! FAILED! Error opening //dev! Will try again Could not! FAILED! Error opening //efi! Will try again Could not! FAILED! Error opening //etc! Will try again Could not! FAILED! Error opening //home! Will try again Could not! FAILED! Error opening //Library! Will try again Could not! FAILED! Error opening //net! Will try again Could not! FAILED! Error opening //Network! Will try again Could not! FAILED! Error opening //opt! Will try again Could not! FAILED! Error opening //private! Will try again Could not! FAILED! Error opening //sbin! Will try again Could not! FAILED! Error opening //System! Will try again Could not! FAILED! Error opening //tmp! Will try again Could not! FAILED! Error opening //Users! Will try again Could not! FAILED! Error opening //usr! Will try again Could not! FAILED! Error opening //usr0! Will try again Could not! FAILED! Error opening //var! Will try again Could not! FAILED! Error opening //Volumes! Will try again Could not! FAILED! 
Now, I don't know much about why this isn't working, though I do know it isn't because of the "//" at the beginning. If anybody could help me diagnose (at least) or fix this problem, that would be great. If I am missing any information, please leave a comment and I will make edits.
Edit 1: Arguments passed are / and blarg (blarg since the file is never written to nor opened)

A: I would change the line:
sprintf(buf, "%s/%s", dirName, ent->d_name);
to
if (strcmp(dirName, "/") == 0) {
    sprintf(buf, "/%s", ent->d_name);
} else {
    sprintf(buf, "%s/%s", dirName, ent->d_name);
}
That made a difference in my testing. Also,
if (strcmp(tmppath, filename) == 0) continue;
will result in a memory leak. I would change that to:
if (strcmp(tmppath, filename) == 0) {
    free(tmppath);
    continue;
}
Everything else looks good to me.
Update
It will be good to have a destructor of directoryReader that will close the open directory.
directoryReader::~directoryReader()
{
    if (dir != NULL)
        closedir(dir);
}
Also, p_getfiles can be reorganized so that you open a directory, gather all the files and sub-directories of the directory, close the directory, and then process the files and subdirectories. That way, you don't have to worry about too many open directories. You can create an instance of directoryReader on the stack instead of on the heap. Here's a little refactored version of p_getfiles.
void p_getfiles_and_directories(char const* basepath,
                                char const* filename, // filename is to ignore
                                std::vector<std::string>& files,
                                std::vector<std::string>& directories)
{
    // This function does not recurse directories.
    // It just returns the files and sub-directories of the given basepath.
    directoryReader dirr(basepath, S_IFREG | S_IFDIR);
    char *tmppath = NULL;
    struct stat st;

    while ((tmppath = dirr.getNext()) != NULL)
    {
        if (strcmp(tmppath, filename) == 0)
        {
            free(tmppath);
            continue;
        }
        if (goodpath(tmppath))
        {
            stat(tmppath, &st);
            if (S_ISDIR(st.st_mode))
            {
                directories.push_back(tmppath);
            }
            else if (S_ISREG(st.st_mode))
            {
                files.push_back(tmppath);
            }
        }
        free(tmppath);
    }
}

void p_getfiles(char const* basepath, FILE *f, char const* filename) // filename is to ignore
{
    // Get all the files and sub-directories in the given basepath.
    std::vector<std::string> files;
    std::vector<std::string> directories;
    p_getfiles_and_directories(basepath, filename, files, directories);

    // Recurse directories.
    std::vector<std::string>::iterator iter = directories.begin();
    std::vector<std::string>::iterator end = directories.end();
    for ( ; iter != end; ++iter )
    {
        if (strcmp((*iter).c_str(), filename) == 0)
            printf("uh oh...\n");
        p_getfiles((*iter).c_str(), f, filename);
    }

    // Process files.
    iter = files.begin();
    end = files.end();
    for ( ; iter != end; ++iter )
    {
        fprintf(stdout, "%s\n", (*iter).c_str());
    }
}
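The gather-close-then-recurse idea in the refactored answer is language-agnostic. As an aside (not part of the original answer), here is the same shape sketched in Python, where os.scandir plays the role of directoryReader and the directory handle is closed before any recursion happens:

```python
import os

def list_files(basepath, ignore_name):
    """Collect regular files under basepath, skipping entries named ignore_name.

    Each directory is fully read, and its handle closed, before any
    subdirectory is recursed into, so only one handle per depth level
    is ever open at a time.
    """
    files, directories = [], []
    with os.scandir(basepath) as entries:   # handle is closed on exiting the block
        for entry in entries:
            if entry.name == ignore_name:
                continue
            if entry.is_dir(follow_symlinks=False):
                directories.append(entry.path)
            elif entry.is_file(follow_symlinks=False):
                files.append(entry.path)
    found = []
    for d in directories:                   # recurse only after the handle is closed
        found.extend(list_files(d, ignore_name))
    found.extend(files)                     # then report this directory's files
    return found
```

Returning a list instead of printing keeps the sketch easy to check; printing each path, as the C++ version does, is a one-line change.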
Q: Adding Dictionary Key to a constructor in F#
Being as how I am new to F#, this may seem like some kind of elementary question. But here goes. I have a class with a constructor using the following code:
new () = { _index = 0; _inputString = ""; _tokens = new Dictionary<string, string>() { {"key", "value"} } }
Everything works except F# doesn't seem to allow me to add tokens to my dictionary. I can initialize it with a new Dictionary<> object, but if I try to populate it, it throws an error. I also can't do it with the .Add member. I have seen examples of F# constructors initializing field values, but is there no way to execute other code maybe?

A: Because Dictionary has a constructor taking an IDictionary instance, you can use the built-in dict function to help you here:
open System.Collections.Generic

type Foo =
    val _index : int
    val _inputString : string
    val _tokens : Dictionary<string, string>

    new () =
        { _index = 0
          _inputString = ""
          _tokens = Dictionary(dict [("fooKey", "fooValue")]) }
However, it's also possible to execute non-trivial code before or after your constructor's object initializer:
type Bar =
    val _index : int
    val _inputString : string
    val _tokens : Dictionary<string, string>

    new () =
        let tokens = Dictionary()
        tokens.Add ("barKey", "barValue")
        { _index = 0
          _inputString = ""
          _tokens = tokens }

type Baz =
    val _index : int
    val _inputString : string
    val _tokens : Dictionary<string, string>

    new () as this =
        { _index = 0
          _inputString = ""
          _tokens = Dictionary() }
        then
            this._tokens.Add ("bazKey", "bazValue")

A: Ildjarn already answered your question, but let me just add a note about coding style - I think that most F# programs these days prefer the implicit constructor syntax, where you define one implicit constructor as part of the type declaration. This usually makes the code a lot simpler.
You could write something like:
type Bah() =
    let index = 0
    let inputString = ""
    let tokens = new Dictionary<string, string>()
    do tokens.Add("bazKey", "barValue")
    member x.Foo = "!"
This defines a parameter-less constructor and private fields (e.g. index). In your sample, this doesn't make much sense (because all fields are immutable, so index will always be zero). I suppose you probably have another constructor, in which case you can write something like:
type Baf(index:int, inputString:string, tokens:Dictionary<string, string>) =
    new() =
        let tokens = new Dictionary<string, string>()
        tokens.Add("bazKey", "barValue")
        Baf(0, "", tokens)
Here, you get two constructors - one parameter-less and one that takes three parameters. You can also make the implicit constructor private and expose only more specific cases:
type Baf private (index:int, inputString:string, tokens:Dictionary<string, string>) =
    // (...)
As a side note, I also changed the naming from _index to index, because I don't think F# guidelines recommend using underscores (although it may make sense for fields declared using val).

A: In F#, everything is an expression, so you can initialize _tokens like so:
open System.Collections.Generic

type Foo =
    val _index : int
    val _inputString : string
    val _tokens : Dictionary<string, string>

    new () =
        { _index = 0
          _inputString = ""
          _tokens =
            let _tokens = Dictionary()
            _tokens.Add ("key", "value")
            _tokens }
The light syntax can trick you into thinking that let bindings and sequential expressions are statements, but if we write out the full verbose syntax for those expressions it's clear:
...
_tokens =
    let _tokens = Dictionary() in
        _tokens.Add ("key", "value") ;
        _tokens
...
Q: Access previous elements in lapply/sapply
I'm trying to replace a for loop with a sapply function. Inside the loop I do some optimization and therefore need the result of one optimization for the next loop. I figured out how to use sapply to run the optimization, but the problem is that I need to access the previous results from within the sapply. The following is just a random example of what I'm trying to achieve.
sapply(1:4, function(y){
  r <- y
  if(y != 1){z <- r[y-1]}
  else{z <- 9}
  return(z)
})
[1,]    9    2   NA   NA
What I expected to get was something like:
[1,]    9    1    2    3
What am I doing wrong? Or isn't there any way to access previous results of iterations in sapply?

A: Here is an example perhaps closer to the OP's use case:
f = function(x) x^2
g = function(x) abs(x)+rnorm(1)
yvec = 1:4
Here's the Reduce approach mentioned by @Andrie:
set.seed(1)
Reduce(function(z,y) if (is.na(z)) f(y) else g(z), yvec, init=NA_real_, accumulate=TRUE)[-1]
# [1]  1.0000000  0.3735462  0.5571895 -0.2784391
And here's a common-sense loop that everyone would use (mentioned by @digEmAll):
set.seed(1)
res <- rep(NA_real_, length(yvec))
for (i in seq_along(yvec)) res[i] = if (i==1) f(yvec[i]) else g(res[i-1])
res
# [1]  1.0000000  0.3735462  0.5571895 -0.2784391
The results are the same, so Reduce just hides the loop, as asserted by @Roland.

A: You cannot access the previous result with the apply family of functions. They are wrappers for for loops, so there is no reason to avoid the loops explicitly if that is what you are after.
To your question "What am I doing wrong?": with your function
sapply(1:4, function(y){
  r <- y
  if(y != 1){z <- r[y-1]}
  else{z <- 9}
  return(z)
})
NA's are produced by the expression r[y-1] after the first two iterations. When 1 is passed through, it goes to the else statement and z is assigned 9. When 2 is passed through, it goes to the expression r[y-1]. In that iteration r is equal to 2 and so is y, so it is equivalent to 2[2-1], which simplifies to 2[1]. That can be read as "the first element of the vector 2", the answer being 2. On the next round, r equals 3 and so does y. The expression is now 3[3-1], simplified to 3[2]. That's a problem, because what is the 2nd element of the vector 3? There is none; it only has one element. So NA is returned. That same effect happens for the rest of the loop.
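The carry-the-previous-result pattern exists outside R too; as a side comparison (not part of the original answers), Python's itertools.accumulate is the direct analogue of Reduce(..., accumulate=TRUE). Here g is made deterministic (abs(prev) + 1 instead of abs(x) + rnorm(1)) so the output is reproducible; the initial= keyword needs Python 3.8+:

```python
from itertools import accumulate

def f(x):                 # applied only to the first input value
    return x ** 2

def g(prev):              # every later step sees only the previous result
    return abs(prev) + 1  # deterministic stand-in for abs(x) + rnorm(1)

yvec = [1, 2, 3, 4]

SEED = object()           # sentinel playing the role of R's init=NA
step = lambda z, y: f(y) if z is SEED else g(z)
res = list(accumulate(yvec, step, initial=SEED))[1:]  # drop the seed, like [-1] in R
print(res)  # [1, 2, 3, 4]
```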
Q: How do I make a quiz so that at the end it outputs the correct result as a profession?
I want to build a career-guidance quiz that asks questions and, at the end, shows a profession as the result. I want to output the profession according to the maximum of six values. The problem is that I need to find the maximum of those six values, and I don't know how to compare them and find the maximum when they are located in different activities. On top of that, these values are always different, i.e. they change.

A: To collect the data for the final result, I can suggest using a singleton. Create an object:
object GeneralClass {
    var answers:ArrayList<Int> = ArrayList()
}
This class will hold the array with your numbers. Another option is to use a HashMap where the key is the activity and the value is the result of your answer:
var answers = HashMap<Int, Int>()
To record a value, use:
GeneralClass.answers.add(yourAnswer)
Then, using Collections, you can extract the maximum value from the saved array:
Collections.max(GeneralClass.answers);
P.S. To make it easier for people to give you an answer, provide at least some code, for example what you do in the activity and how.
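The approach above collects the per-activity scores in one shared object and then takes the maximum. Whatever the language, that last step is a single call; here is the idea sketched in Python, with made-up profession names and scores standing in for the six values:

```python
# Six running scores keyed by profession; in the quiz these values
# change from run to run, but the lookup below does not.
scores = {"artist": 3, "engineer": 7, "teacher": 5,
          "doctor": 2, "chef": 6, "pilot": 4}

# max() with a key function returns the key with the largest value.
# Note: on a tie it returns the first maximal key it encounters,
# so decide on a tie-breaking rule if ties are possible.
best = max(scores, key=scores.get)
print(best)  # engineer
```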
Q: Fine Grained CRUD with Subsonic's SimpleRepository
Let's say I have a TestClass in my C# app with property A and property B. I change the value of B in my code; I leave property A unchanged. I update TestClass in the database via SimpleRepository's Update method. As I see it, this also updates property A's value in the database. It is easy to test: I change value A in my database outside my app ('by hand'), then I make the update from my app. The value of property A changes back to its value according to TestClass's state in my app.
So, my question: is it possible to make updates only to some properties, not for the whole class, with SimpleRepository? Are there some 'IgnoreFields' possibilities?

A: What you need is optimistic concurrency on your UPDATE statement, not to exclude certain fields. In short, what that means is that when updating a table, a WHERE clause is appended to your UPDATE statement that ensures the values of the fields in the row are in fact what they were when the last SELECT was run.
So, let's assume in your example I selected some data and the values for A and B were 1 and 2 respectively. Now let's assume I wanted to update B (the statement below is just an example):
UPDATE TestClass SET B = '3' WHERE Id = 1;
However, instead of running that statement (because there's no concurrency there), let's run this one:
UPDATE TestClass SET B = '3' WHERE Id = 1 AND A = '1' AND B = '2';
That statement now ensures the record hasn't been changed by anybody. However, at the moment it doesn't appear that Subsonic's SimpleRepository supports any type of concurrency, and so that's going to be a major downfall. If you're looking for a very straightforward repository library where you can use POCOs, I would recommend Dapper. In fact, Dapper is used by Stack Overflow. It's extremely fast and will easily allow you to build concurrency into your update statements, because you send down parameterized SQL statements. Simple.
This Stackoverflow article is an overall article on how to use Dapper for all CRUD ops. This Stackoverflow article shows how to perform inserts and updates with Dapper.
NOTE: with Dapper you could actually do what you're wanting to as well, because you send down basic SQL statements, but I just wouldn't recommend skipping the concurrency.
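The guarded UPDATE is easy to demonstrate in miniature. Below is a sketch using Python's built-in sqlite3 module (the table name and values are borrowed from the example above; this illustrates the optimistic-concurrency pattern itself, not Dapper or SimpleRepository):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestClass (Id INTEGER PRIMARY KEY, A TEXT, B TEXT)")
conn.execute("INSERT INTO TestClass (Id, A, B) VALUES (1, '1', '2')")

def update_b(conn, row_id, old_a, old_b, new_b):
    """Update B, but only if A and B still hold the values we last read."""
    cur = conn.execute(
        "UPDATE TestClass SET B = ? WHERE Id = ? AND A = ? AND B = ?",
        (new_b, row_id, old_a, old_b))
    return cur.rowcount == 1  # False: someone changed the row since we read it

print(update_b(conn, 1, "1", "2", "3"))  # True  -> update applied
print(update_b(conn, 1, "1", "2", "4"))  # False -> stale values, update rejected
```

The second call fails because the first one already moved B from '2' to '3'; the caller would then re-read the row and retry, which is the whole point of the pattern.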
Q: Windows Forms - Single Instance - Include Statement
When I try to use an #include "CFIS_Main.h" statement in the form "For_Student_Details.h", it's not accepted. Can anybody point out my mistake? Thanks for the help.
MyProject.cpp
// MyProject.cpp : main project file.
#include "stdafx.h"

#ifndef CFIS_Main_h
#define CFIS_Main_h
#include "CFIS_Main.h"
#endif

using namespace MyProject;

[STAThreadAttribute]
int main(array<System::String ^> ^args)
{
    Application::EnableVisualStyles();
    Application::SetCompatibleTextRenderingDefault(false);

    // Create the main window and run it
    Application::Run(gcnew CFIS_Main());
    return 0;
}
My code from the MDI parent:
//CFIS_Main.h   IsMdiContainer = True
#include "For_Student_Detials"

private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    For_Student_Detials^ MyStudentDet = For_Student_Detials::GetForm(true, this);
    MyStudentDet->MdiParent = this;
    MyStudentDet->FormBorderStyle = System::Windows::Forms::FormBorderStyle::None;
    MyStudentDet->Dock = DockStyle::Fill;
    MyStudentDet->Show();
}
My code from the MDI child For_Student_Details:
#include "CFIS_Main.h"   // Why not included...?????

public: static For_Student_Details^ For_Student_Details::_instance = nullptr;
public: static For_Student_Details^ For_Student_Details::GetForm(bool^ IsMDIChild, CFIS_Main^ MyInstFrm)
{
    if (_instance == nullptr)
        _instance = gcnew For_Student_Details();
    if (_instance->IsDisposed)
        _instance = gcnew For_Student_Details();
    if (IsMDIChild)
        _instance->MdiParent = MyInstFrm;
    return _instance;
}
I am receiving the errors below:
error C2061: syntax error : identifier 'CFIS_Main'
error C2065: 'MyInstFrm' : undeclared identifier
error C2660: 'CashFlow_InformationsSystem::For_Loan_Details::GetForm' : function does not take 2 arguments
From the above code, it's not including CFIS_Main, and I can't identify my mistake. Can anybody point it out?
Thanks for the help.

A: You have a circular header reference:
"For_Student_Details" includes "CFIS_Main.h"
"CFIS_Main.h" includes "For_Student_Details"
You will need to resolve this circular dependency. The easiest way to do so is to leave only the function declaration for button1_Click() in "CFIS_Main.h" and move the definition into "MyProject.cpp", where you also include "For_Student_Details".
You will also have to define (or include the right header for) the type CFIS_Main referenced in For_Student_Details::GetForm() (this might be resolved once you fix the circular include problem).
Also, place the include guards in your header files, not the .cpp files.
Q: Simple programming practice (Fizz Buzz, Print Primes)
I want to practice my skills away from a keyboard (i.e. pen and paper) and I'm after simple practice questions like Fizz Buzz or printing the first N primes. What are your favourite simple programming questions?

A: I've been working on http://projecteuler.net/

A: Problem: Insert + or - signs anywhere between the digits 123456789 in such a way that the expression evaluates to 100. The condition is that the order of the digits must not be changed.
e.g.: 1 + 2 + 3 - 4 + 5 + 6 + 78 + 9 = 100
Programming Problem: Write a program in your favorite language which outputs all possible solutions of the above problem.

A: If you want pen-and-paper kinds of exercises, I'd recommend more designing than coding. Actually, coding on paper sucks and lets you learn almost nothing. Your working environment does matter: typing on a computer, compiling, seeing what errors you've made, refactoring here and there just doesn't compare to working on a piece of paper. So while what you can do on a piece of paper is an interesting mental exercise, it is not practical and will not improve your coding skills much.
On the other hand, you can design the architecture of a medium or even complex application by hand on paper. In fact, I usually do. Engineering tools (such as Enterprise Architect) are not good enough to replace the good old by-hand diagrams. Good projects could be:
How would you design a game engine? Classes, threads, storage, physics, the data structures which will hold everything, and so on.
How would you start a search engine?
How would you design a pattern recognition system?
I find those kinds of problems much more rewarding than any paper coding you can do.
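The digits puzzle posed above is small enough to brute-force once you are back at a keyboard: with eight gaps and three choices per gap (glue the digits together, insert +, or insert -) there are only 3^8 = 6561 candidate expressions. A Python sketch:

```python
from itertools import product

def solutions(target=100):
    digits = "123456789"
    found = []
    for ops in product(("", "+", "-"), repeat=8):
        # "" glues adjacent digits into a multi-digit number, e.g. "78"
        expr = digits[0] + "".join(op + d for op, d in zip(ops, digits[1:]))
        if eval(expr) == target:   # safe here: we built expr ourselves
            found.append(expr)
    return found

for expr in solutions():
    print(expr, "= 100")
```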
Q: How to translate "end-to-end encryption"?
How to translate "end-to-end encryption" to Esperanto? This term is missing in Komputeko. Can I translate this literally as "fin-al-fina ĉifrado"? Is that even a correct compound word? Or is there a more suitable or common way to translate this term?

A: Aiming to be understood, I would not translate the English term word-for-word. If you want to express that the message is encrypted the whole way, from the beginning to the end of its transmission across the network, I would use "tutvoja ĉifrado" or "tutvoje ĉifrita" (roughly, "whole-way encryption" / "encrypted the whole way").
"fin(o)-al-fin(o)" (like "end-to-end") is confusing. If one goes from the end to the end, where is the beginning? Is there a beginning at all? That technical term is understandable only to technical people, who know that it refers to the "endpoints" of a network, the places where information can enter or leave a network. One can probably use "fino" for that, but in my opinion "ekstremo" is clearer.
In Komputeko, you can find "finpunkto" as a translation of "endpoint". I consider that translation bad, chiefly because "finpunkto" is already used a great deal for something else that has nothing to do with networks: it is used for the full stop of the last sentence of a text, or for the last stage of a project. I did not find enough uses of "finpunkto" as "endpoint" to think that Komputeko has descriptive value about usage there. Any clearer word would be better, in my opinion. But that is another problem, and it is entirely possible not to mention any translation of "endpoint" at all.
That is why I would choose "tutvoja ĉifrado", where the way ("vojo") goes from one "endpoint" to the other.
Q: Rendering Diffuse Color pass in Cycles
Can I render just the Diffuse Color Pass in Cycles? I tried to check just Diffuse Color Pass, but it rendered the whole image anyway.

A: To the best of my knowledge, it is not possible to directly render only a single color pass in Cycles. However, after rendering your image, you can access the different color passes by using the compositor.
Before rendering, make sure to go to the Render > Layers menu and select "Direct", "Indirect", and "Color" for the material type you want to single out. This makes sure that you have all the necessary color passes when you get to the compositor.
In the compositor, instead of using "Image" as input to the Composite node, you want to combine in some way the three color passes "Direct", "Indirect", and "Color" for your material type. In the example below, I used two Color Overlay nodes as shown in my compositor setup.
My original image:
Diffuse Color Passes only:
Glossy Color Passes only:
My Compositor Setup:
Q: SQL Server 2008 - Replacing csv substrings in view-row using data from other table
Using SQL Server 2008, is there a way to perform a SELECT query in a view that replaces a row containing comma-separated values with their corresponding text values from another table? STRING_SPLIT and STRING_AGG are not available in the 2008 version.
EDIT: Added create and insert script
CREATE TABLE Data(
    Id int,
    Value1 varchar(50) NULL,
    Value2 int NULL,
    Value3 datetime
)
GO
CREATE TABLE CodeValue(
    Id int,
    Code varchar(50) NULL
)
GO
INSERT [dbo].[Data] ([Id], [Value1], [Value2], [Value3]) VALUES (1, N'0;1;2', 43, CAST(N'2020-07-09T00:00:00.000' AS DateTime))
GO
INSERT [dbo].[Data] ([Id], [Value1], [Value2], [Value3]) VALUES (2, N'0;2;3', 652, CAST(N'2020-07-03T00:00:00.000' AS DateTime))
GO
INSERT [dbo].[Data] ([Id], [Value1], [Value2], [Value3]) VALUES (3, N'2', 1234, CAST(N'2020-07-02T00:00:00.000' AS DateTime))
GO
INSERT [dbo].[CodeValue] ([Id], [Code]) VALUES (0, N'Apple')
GO
INSERT [dbo].[CodeValue] ([Id], [Code]) VALUES (1, N'Orange')
GO
INSERT [dbo].[CodeValue] ([Id], [Code]) VALUES (2, N'Banana')
GO
INSERT [dbo].[CodeValue] ([Id], [Code]) VALUES (3, N'Dogmeat')
GO
Consider that my view contains data from two tables, Data and CodeValue, that would look like this:
Data
Id | Value1 | Value2 | Value3
==============================
 1 | 0;1;2  | (some other data)
 2 | 0;2;3  |
 3 | 2      |
CodeValue
Id | Code
=============
 0 | Apple
 1 | Orange
 2 | Banana
 3 | Dogmeat
So the actual output from the SELECT query in my view would be:
View
Id | Value
============
 1 | Apple, Orange, Banana
 2 | Apple, Banana, Dogmeat
 3 | Banana
I've messed around with stored procedures and functions, but can't wrap my head around those and how to actually implement this.
EDIT 2: Tried using STUFF() using the following template:
WITH CTE_TableName AS (
    SELECT FieldA, FieldB FROM TableName)
SELECT t0.FieldA
     , STUFF((
         SELECT ',' + t1.FieldB
         FROM CTE_TableName t1
         WHERE t1.FieldA = t0.FieldA
         ORDER BY t1.FieldB
         FOR XML PATH('')), 1, LEN(','), '') AS FieldBs
FROM CTE_TableName t0
GROUP BY t0.FieldA
ORDER BY FieldA;
However, I can't seem to join CodeValues on split values using my homebrew split_string function:
CREATE FUNCTION dbo.tvf_SplitString (@stringToSplit VARCHAR(100))
RETURNS @returnList TABLE(Id VARCHAR(5))
AS
BEGIN
    DECLARE @splitValue VARCHAR(5)
    DECLARE @pos INT

    WHILE CHARINDEX(';', @stringToSplit) > 0
    BEGIN
        SELECT @pos = CHARINDEX(';', @stringToSplit)
        SELECT @splitValue = SUBSTRING(@stringToSplit, 1, @pos - 1)

        INSERT INTO @returnList
        SELECT @splitValue

        SELECT @stringToSplit = SUBSTRING(@stringToSplit, @pos + 1, LEN(@stringToSplit) - @pos)
    END

    INSERT INTO @returnList
    SELECT @stringToSplit

    RETURN
END

A: As a joke (but it works nevertheless):
WITH cte1 AS (
    SELECT id, ';'+value1+';' value1, value2, value3
    FROM data
), cte2 AS (
    SELECT id, ';'+CAST(id AS VARCHAR)+';' sid, ';'+code+';' code
    FROM codevalue
), cte3 AS (
    SELECT cte1.id, REPLACE(cte1.value1, cte2.sid, cte2.code) value1,
           cte1.value2, cte1.value3, cte2.id cid
    FROM cte1
    JOIN cte2 ON cte2.id = 0
    UNION ALL
    SELECT cte3.id, REPLACE(cte3.value1, cte2.sid, cte2.code) value1,
           cte3.value2, cte3.value3, cte2.id
    FROM cte3
    JOIN cte2 ON cte2.id = cte3.cid + 1
)
SELECT id, SUBSTRING(value1, 2, LEN(value1) - 2) value1, value2, value3
FROM cte3
WHERE cid = ( SELECT MAX(id) FROM codevalue )
ORDER BY id
fiddle
This needs CodeValue.id to have no gaps. If it has gaps, then add a ROW_NUMBER() column to cte2 and use it for the next codevalue row selection (do not forget to alter the starting value in the static part from 0 to 1).
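For comparison (outside the database, so not an answer for the view itself), the transformation the view has to perform is just split, look up, and join. The Python below uses the sample rows from the question and reproduces the expected output, which shows how little work the SQL Server 2008 workarounds above are actually encoding:

```python
# Sample rows from the question.
codes = {0: "Apple", 1: "Orange", 2: "Banana", 3: "Dogmeat"}
data = {1: "0;1;2", 2: "0;2;3", 3: "2"}

# Split each ;-separated id list, map the ids to names, join with ", ".
view = {row_id: ", ".join(codes[int(tok)] for tok in value1.split(";"))
        for row_id, value1 in data.items()}

for row_id in sorted(view):
    print(row_id, "|", view[row_id])
# 1 | Apple, Orange, Banana
# 2 | Apple, Banana, Dogmeat
# 3 | Banana
```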
Q: How to create a multiple page invoice in asp.net c#?
I am thoroughly confused with something I want to do and am looking for some advice. One of my clients has to produce a monthly invoice detailing all of the company's expenditure, plus two other such invoices. The client is sure that he only needs these invoices, and they are extremely simple to produce as far as logic is concerned.
Now, to make the actual invoice, I don't really want to use reporting solutions like Telerik, SSRS, etc., as I think they are overkill for my purpose. At the same time, I am not sure how I can get the printer to print the invoices in neat pages without cutting anything off. I am very tempted to just give the output in a webpage and ask my client to print them off from there.
Am I not looking at this the right way? Is this possible? I could use iTextSharp or something to produce PDFs... In fact, I think I will go ahead with this if it isn't possible to just output to an HTML page and get the printer to recognize the page breaks somehow. Because this is a very small job, I don't want to spend too much time on it, as the cost of this freelance project is minimal too.
The reason printing to a new page is important is that my client has a few shops he deals with, and he would want to print each of his customers their own invoices. I can get him to produce each customer's invoice separately and print them, but that is not an ideal way to deal with it.
thanks

A: SSRS has a drag and drop interface for designing reports and has a PDF output option. If the data is in a SQL Server database then, even with the learning curve, it should be easier to do SSRS reports.
Q: getting illegal argument exception on adding JSpinner in JTable
So I am adding a JSpinner inside the cell of a JTable using the AbstractCellEditor and TableCellEditor classes. My SpinnerEditor class is pretty simple and the code is below:
public class SpinnerEditor extends AbstractCellEditor implements TableCellEditor {

    final JSpinner spinner;

    SpinnerEditor(){
        spinner = new JSpinner();
    }

    @Override
    public Object getCellEditorValue() {
        return spinner.getValue();
    }

    @Override
    public Component getTableCellEditorComponent(JTable table, Object value, boolean isSelected, int row, int column) {
        spinner.setValue(value);
        return spinner;
    }

    @Override
    public boolean isCellEditable(EventObject evt){
        return true;
    }
}
The problem is I am getting an IllegalArgumentException when I try to edit the cell by clicking on it:
Exception in thread "AWT-EventQueue-0" java.lang.IllegalArgumentException: illegal value
    at javax.swing.SpinnerNumberModel.setValue(SpinnerNumberModel.java:443)
    at javax.swing.JSpinner.setValue(JSpinner.java:354)
    at timetablemgmt.SpinnerEditor.getTableCellEditorComponent(SpinnerEditor.java:39)
    at javax.swing.JTable.prepareEditor(JTable.java:5778)
    at javax.swing.JTable.editCellAt(JTable.java:3512)
    at javax.swing.plaf.basic.BasicTableUI$Handler.adjustSelection(BasicTableUI.java:1108)
    at javax.swing.plaf.basic.BasicTableUI$Handler.mousePressed(BasicTableUI.java:1038)
    at java.awt.AWTEventMulticaster.mousePressed(AWTEventMulticaster.java:280)
    at java.awt.Component.processMouseEvent(Component.java:6530)
    at javax.swing.JComponent.processMouseEvent(JComponent.java:3324)
    at java.awt.Component.processEvent(Component.java:6298)
    at java.awt.Container.processEvent(Container.java:2237)
    at java.awt.Component.dispatchEventImpl(Component.java:4889)
    at java.awt.Container.dispatchEventImpl(Container.java:2295)
    at java.awt.Component.dispatchEvent(Component.java:4711)
    at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4889)
    at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4523)
    at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4467)
    at java.awt.Container.dispatchEventImpl(Container.java:2281)
    at java.awt.Window.dispatchEventImpl(Window.java:2746)
    at java.awt.Component.dispatchEvent(Component.java:4711)
    at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:760)
    at java.awt.EventQueue.access$500(EventQueue.java:97)
    at java.awt.EventQueue$3.run(EventQueue.java:709)
    at java.awt.EventQueue$3.run(EventQueue.java:703)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
    at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:90)
    at java.awt.EventQueue$4.run(EventQueue.java:733)
    at java.awt.EventQueue$4.run(EventQueue.java:731)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.security.ProtectionDomain$JavaSecurityAccessImpl.doIntersectionPrivilege(ProtectionDomain.java:80)
    at java.awt.EventQueue.dispatchEvent(EventQueue.java:730)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:205)
    at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:116)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:105)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:101)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:93)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:82)
I am not able to figure out what the problem is!
Thanks in advance.

A: You should try changing:
public Component getTableCellEditorComponent(JTable table, Object value, boolean isSelected, int row, int column) {
    spinner.setValue(value);
    return spinner;
}
to:
public Component getTableCellEditorComponent(JTable table, Object value, boolean isSelected, int row, int column) {
    if (value == null)
        spinner.setValue(0);
    else
        spinner.setValue(value);
    return spinner;
}
Q: Is this good design for Web User Control?
I am coming from an ASP.Net MVC background, so I like to design things well. I just started getting back into ASP.Net web forms again because my job requires it. I have a user control which I would like to use to encapsulate an entire component (layout and code). Depending on certain settings for this control the layout may change, or I may call events, etc... Is it good practice to do something like this?
<%# if(this.SomeProperty) .... %>
or
<%# if(this.something) this.runSomeMethod()) %>
I have no clue what is good design regarding web user controls.

A: Since this is rather subjective I'll give you my subjective opinion: except in very few cases, I prefer to use codebehind to drive control and page logic. I don't mind "tag soup" per se, but I think it's far clearer to just do everything in proper code. When you have codebehind you also get the benefit of having a concrete class to refer to, which is useful in many ways, especially in larger projects.
That said, there's nothing fundamentally wrong with what you're doing.
Q: What can I do in order to inform users of potential errors in my software in order to minimize liability?
I'm an independent software developer that's spent the last few months creating software for viewing and searching map data. The software has some navigation functionality as well (mapping, directions, etc.). The eventual goal is to sell it in mobile app markets. I use OpenStreetMap as my data source.
I'm concerned about liability for erroneous map data / routing instructions, etc. that might result when someone uses the application. There are a lot of stories on the internet where someone gets into an accident or gets stuck or gets lost because of their GPS unit/Google Maps/mapping app... I myself have come across incorrect map data as well in a GPS unit I have in my car. While I try to make my own software as bug free as possible, no software is truly bug free. And moving beyond what I can control, OpenStreetMap data (and street map data in general) is prone to errors as well.
What steps can I take to clearly inform the user that results from the software aren't always perfect, and to minimize my liability?

A: Fairly straightforward steps:
Include boilerplate generic warnings, as you might see in many other pieces of software, and include them as part of a license agreement for users to agree to before they use your software
Add a specific clause to the license agreement, informing users that you are not responsible for the map data that you're providing, and that it should not be used for emergency or other uses that require high availability and correctness
Form a limited liability or other corporation, and publish the software as that entity rather than under your own name.
Q: How to read the excel file from cloud or server?
I have the following code:
Wbnwsheet = pd.read_excel(r"C:/Users/SHI/ingrendient.xlsm", sheetname="Step1", sep='\t')
This code reads directly from my desktop. How about if I want to read the Excel file from the cloud or a server, where the paths start with "\ingredient\....\"? Any idea about this?

A: If you read the documentation of Pandas.read_excel here, you will notice that the io parameter also accepts the following:
http urls
ftp urls
Parameters:
io : string, path object (pathlib.Path or py._path.local.LocalPath), file-like object, pandas ExcelFile, or xlrd workbook. The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/workbook.xlsx
Hence, you could use pd.read_excel() in the following way to achieve your purpose:
pd.read_excel('http://yourwebsite.com/path_to_excel_doc/excel_doc.xlsm')
Cheers!
Q: Emulator frontend for controllers?
I'm working on building a MAME cabinet-type project. This is my first endeavour of this type and I am still relatively new to using Ubuntu. I have a Zotac MAG HD-ND01 (Intel Atom/2GB RAM/160GB HDD) that I've installed Lubuntu 14.04 on. My biggest issue is that I'm looking for an emulator front-end that works well with many emulator types and is great with using a USB controller (PS2/PS3 or generic controller). Researching online doesn't really provide me with the information that I'm looking for without all the comments being 5-8 years old. I plan on putting this Zotac in my entertainment system, where the hope is that I could just turn it on and grab a controller without needing a mouse/keyboard. Any help would be greatly appreciated.

A: Just to note, I ended up using AntiMicro. It can autoload controller profiles, launch in the tray, and was stupid easy to set up and integrate. https://github.com/Ryochan7/antimicro