Q: Getting availability from datepicker for x months for a site i am trying to get the availability/price for each day in airbnb by clicking the next button in the datepicker calendar but with no luck. My current code is something like: def handle(self, *args, **options): def airbnb(): display = Display(visible=0, size=(1024, 768)) display.start() driver = webdriver.Firefox() driver.maximize_window() driver.get("https://www.airbnb.pt/rooms/265820") # wait for the check in input to load wait = WebDriverWait(driver, 10) elem = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.book-it-panel input[name=checkin]"))) elem.click() # wait for datepicker to load wait.until( EC.visibility_of_element_located((By.CSS_SELECTOR, '.ui-datepicker:not(.loading)')) ) days = driver.find_elements_by_css_selector(".ui-datepicker table.ui-datepicker-calendar tr td") for cell in days: day = cell.text.strip() if not day: continue if "ui-datepicker-unselectable" in cell.get_attribute("class"): status = "Unavailable" else: status = "Available" price = "n/a" if status == "Available": # hover the cell and wait for the tooltip ActionChains(driver).move_to_element(cell).perform() price = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.datepicker-tooltip'))).text print(day, status, price) They both work but only for 1 month. I want to be able to set X months instead. For example for homeaway i tried with self.driver.find_element_by_css_selector('.ui-datepicker-next.ui-corner-all').c‌​lick() right after the first open calendar click but i got a ElementNotVisibleException Thanks in advance A: First of all, I would locate the "next month" button with a.ui-datepicker-next CSS selector which is both readable and reliable. Here is the implementation - processing as many months as the MONTH_COUNT variable defines: from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC MONTH_COUNT = 3 driver = webdriver.Firefox() driver.maximize_window() driver.get("https://www.airbnb.pt/rooms/265820") # wait for the check in input to load wait = WebDriverWait(driver, 10) elem = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.book-it-panel input[name=checkin]"))) elem.click() # iterate over the month count for month in range(MONTH_COUNT): # wait for datepicker to load wait.until( EC.visibility_of_element_located((By.CSS_SELECTOR, '.ui-datepicker:not(.loading)')) ) # getting current month for displaying purposes current_month = driver.find_element_by_css_selector(".ui-datepicker-month").text print(current_month) # iterate over days days = driver.find_elements_by_css_selector(".ui-datepicker table.ui-datepicker-calendar tr td") for cell in days: day = cell.text.strip() if not day: continue if "ui-datepicker-unselectable" in cell.get_attribute("class"): status = "Unavailable" else: status = "Available" price = "n/a" if status == "Available": # hover the cell and wait for the tooltip ActionChains(driver).move_to_element(cell).perform() price = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, '.datepicker-tooltip'))).text print(day, status, price) print("-----") # click next month driver.find_element_by_css_selector("a.ui-datepicker-next").click() driver.close() Prints: Maio (u'1', 'Unavailable', 'n/a') (u'2', 'Unavailable', 'n/a') (u'3', 'Unavailable', 'n/a') ... 
(u'30', 'Unavailable', 'n/a') (u'31', 'Unavailable', 'n/a') ----- Junho (u'1', 'Unavailable', 'n/a') (u'2', 'Unavailable', 'n/a') (u'3', 'Unavailable', 'n/a') ... (u'28', 'Unavailable', 'n/a') (u'29', 'Unavailable', 'n/a') (u'30', 'Unavailable', 'n/a') ----- Julho (u'1', 'Unavailable', 'n/a') (u'2', 'Unavailable', 'n/a') (u'3', 'Unavailable', 'n/a') ... (u'29', 'Unavailable', 'n/a') (u'30', 'Available', u'\u20ac36') (u'31', 'Available', u'\u20ac36') -----
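As for the ElementNotVisibleException seen when clicking the next-month arrow right after opening the calendar: a common workaround is to wait for the arrow to become clickable before each click. A minimal sketch, meant as a drop-in replacement for the plain click at the end of the month loop above (it reuses the wait object and imports already defined there):

next_button = wait.until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "a.ui-datepicker-next"))
)
next_button.click()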
{ "pile_set_name": "StackExchange" }
Q: Speed up ListLinePlot I am trying to use Mathematica to analyse some raw data. I'd like to be able to dynamically display the range of data I'm interested in using Manipulate and ListLinePlot, but the plot rendering is extremely slow. How can I speed it up? Here are some additional details. An external text file stores the raw data: the first column is a timestamp, the second, third and fourth columns are data readings, for example: 1309555993069, -2.369941, 6.129157, 6.823794 1309555993122, -2.260978, 6.170018, 7.014479 1309555993183, -2.070293, 6.129157, 6.823794 1309555993242, -1.988571, 6.238119, 7.123442 A single data file contains up to 2·106 lines. To display, for example, the second column, I use: x = Import["path/to/datafile"]; ListLinePlot[x[[All, {1, 2}]]] The execution time of this operation is unbearably long. To display a variable range of data I tried to use Manipulate: Manipulate[ListLinePlot[Take[x, numrows][[All, {1, 2}]]], {numrows, 1, Length[x]}] This instruction works, but it quickly crawls when I try to display more than few thousand lines. How can I speed it up? Some additional details: MATLAB displays the same amount of data on the same computer almost instantaneously, thus the raw data size shouldn't be an issue. I already tried to turn off graphics antialiasing, but it didn't impact rendering speed at all. Using DataRange to avoid Take doesn't help. Using MaxPlotPoints distorts too much the plot to be useful. Not using Take in Manipulate doesn't help. The rendering seems to take huge amount of time. Running Timing[ListLinePlot[Take[x,100000][[All, {1, 2}]]]] returns 0.33: this means that the evaluation of Take by itself is almost instantaneous, is the plot rendering that slows everything down. I am running Mathematica on Ubuntu Linux 11.10 using the fglrx drivers. Forcing Mathematica to use mesa drivers didn't help. Any hint? A: If your goal is to just visualize your data quickly but properly, you can use the following trick, which I am constantly using. I partition the data into a number of blocks corresponding roughly to the resolution of my screen (usually 1000 or less), more detail cannot be displayed anyway. Then I determine the Min and Max of each block, and draw a zig-zag line from min to max to min to max... The result will look exactly like the original data. You can however not "zoom in", as you would then see the zig-zag line (e.g. when exporting to high-res pdf). Then you need to use a larger number of blocks. rv = RandomVariate[ExponentialDistribution[2], 100000]; ListLinePlot[rv, PlotRange -> All] (* original, slow *) ListLinePlot[rv, PlotRange -> All, MaxPlotPoints -> 1000] (* fast but distorted *) numberOfBlocks = 1000; ListLinePlot[Riffle @@ Through[{Min /@ # &, Max /@ # &}[ Partition[rv,Floor[Length[rv]/numberOfBlocks]]]], PlotRange -> All] You can add the DataRange->{...} option to label the x-axis appropriately. Hope this helps! EDIT: See also this similar question on Mathematica Stackexchange: https://mathematica.stackexchange.com/q/140/58 A: I haven't tested extensively this on my machine (I have a Mac, so I can't rule out Linux-specific issues). but a couple of points occur to me. The following was pretty quick for me, but obviously slower than if the data set was smaller. You are plotting hundreds of thousands of data points. data = Accumulate@RandomVariate[NormalDistribution[], 200000]; Manipulate[ListLinePlot[Take[data, n]], {n, 1, Length[data]}] In a Manipulate, you are allowing the amount of data shown with Take to vary arbitrarily. 
Try only incrementing numrows every 100 or so points, so there is less to render. Try using the ContinuousAction->False option (see documentation) (I see @Szabolcs had the same idea as I was typing. I was about to suggest MaxPlotPoints, but instead try the PerformanceGoal ->"Speed" option. (see documentation) A: I also noticed that occasionally Mathematica will take too long to render graphics. Actually it must be some translation step from a Mathematica Graphics expression to some other representation that takes long because once rendered, resizing (and thus re-rendering) the graphic is much faster. Pre-version-6 graphics rendering used to be faster for many examples (but also lacks a lot of functionality that 6+ have). Some ideas about what you could do: Use the MaxPlotPoints option of ListLinePlot to reduce the data before plotting. It might not make a difference in looks if its downsampled. The Method option should choose the downsample algorithm, but I can't find any docs for it (anyone?) Use ContinuousAction -> False in Manipulate to stop it from recomputing everything in real time as you drag the sliders.
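The min/max-envelope trick described above is not Mathematica-specific; as a point of comparison only, here is a rough NumPy sketch of the same reduction (the data and block count are made up for illustration):

import numpy as np

rv = np.random.exponential(scale=0.5, size=100000)
number_of_blocks = 1000
block = len(rv) // number_of_blocks

# per-block min and max, interleaved to reproduce the zig-zag envelope
trimmed = rv[:block * number_of_blocks].reshape(number_of_blocks, block)
envelope = np.empty(2 * number_of_blocks)
envelope[0::2] = trimmed.min(axis=1)
envelope[1::2] = trimmed.max(axis=1)
# plotting 'envelope' (about 2000 points) looks the same as plotting all 100000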
{ "pile_set_name": "StackExchange" }
Q: jQuery Selector issue finding href with text category1 in it I have a set of hyperlinks with href = javascript:method('category1') and likewise category2, category3 ... I want to select the hyperlink whose href contains category1, so I have written jQuery(a[href*='category1']) but I don't know why it also selects the hyperlinks with category10, category11, category12 ... I understand that category1 is a common substring of all of them, but 'category1' (with the quotes) shouldn't be. Do I need to escape the ' characters? A: jQuery("a[href=\"javascript:method('category1')\"]"); Just use = if you only want category1, and give the entire href. Also, you're missing the outer quotation marks around the selector string. EDIT: Alternatively, you could use the 'attribute ends with' selector if you want to shorten it up a bit: jQuery("a[href$=\"('category1')\"]");
{ "pile_set_name": "StackExchange" }
Q: Executing external program in Java I am trying to run "cut" inside a java program, but I am lost in terms of how to split the array of commands. My program in the command line is the following: cut file.txt -d' ' -f1-2 > hits.txt And I am trying to run it inside java like this Runtime rt = Runtime.getRuntime(); Process pr = rt.exec(new String[]{"file.txt"," -d' ' -f1-2 "," > hits.txt"}); pr.waitFor(); But I get the following runtime error Exception in thread "main" java.io.IOException: Cannot run program "cut file.txt": java.io.IOException: error=2, No such file or directory I attribute this error to the array of Strings I am using as exec commands. Any ideas on how to do this? Also any known documentation on the issue. Thanks A: Make either an script for bash: "/bin/bash" "-c" "cut file.txt -d' ' -f1-2 > hits.txt" or split "cut" "file.txt" "-d" "' '" "-f" "1-2" The error message clearly says: Cannot run program "cut file.txt" so it interprets "cut file.txt" as a single programname with a blank inside. Your problem starts with the redirection, because you can't redirect the output that way: "cut" "file.txt" "-d" "' '" "-f" "1-2" ">" "hits.txt" You have to handle input and output streams. It might be a better idea to implement cut in Java, to get a portable solution, or call a script which the user may specify on commandline or in a config file, so that it can be adapted for Windows or other platforms. Calling /bin/bash and redirecting there should work - on unix like systems.
{ "pile_set_name": "StackExchange" }
Q: dc.js add up only some values in numberDisplay For the numberDisplay I only see examples in which all values are summed. But I would like to apply a filter so that I only see the values for a certain type. In the example JSFIDDLE there is now a numberDisplay where all values are summed. Is it also possible to only summarize the values of which for example the type is "cash"? So in this case I would like to see the number 6 if nothing is selected. Thank you in advance for your help. HTML <div id="demo1"> <h2>Markers with clustering, popups and single selection</h2> <i>Renewable energy plants in Bulgaria in 2012</i> <div class="lineChart"></div> <div class="pie"></div> <h5>I want here onlny the total number for type "cash" (nothing selected 6)</h5> <div class="number_cash"></div> </div> <pre id="data">date type value 1/1/2020 cash 1 1/1/2020 tab 1 1/1/2020 visa 1 1/2/2020 cash 2 1/2/2020 tab 2 1/2/2020 visa 2 1/3/2020 cash 3 1/3/2020 tab 3 1/3/2020 visa 3 </pre> JavaScript function drawMarkerSelect(data) { var xf = crossfilter(data); const dateFormatSpecifier = '%m/%d/%Y'; const dateFormat = d3.timeFormat(dateFormatSpecifier); const dateFormatParser = d3.timeParse(dateFormatSpecifier); data.forEach(function(d) { var tempDate = new Date(d.date); d.date = tempDate; }) var groupname = "marker-select"; var facilities = xf.dimension(function(d) { return d.date; }); var facilitiesGroup = facilities.group().reduceSum(d => +d.value); var typeDimension = xf.dimension(function(d) { return d.type; }); var typeGroup = typeDimension.group().reduceSum(d => +d.value); var dateDimension = xf.dimension(function(d) { return d.date; }); var dateGroup = dateDimension.group().reduceSum(d => +d.value); var minDate = dateDimension.bottom(1)[0].date; var maxDate = dateDimension.top(1)[0].date; //numberDisplay cash var all = xf.groupAll(); var ndGroup = all.reduceSum(function(d) { return d.value; //6 }); //numbers var number_cash = dc.numberDisplay("#demo1 .number_cash", groupname) .group(ndGroup) .valueAccessor(function(d) { return d; }); var pie = dc.pieChart("#demo1 .pie", groupname) .dimension(typeDimension) .group(typeGroup) .width(200) .height(200) .renderLabel(true) .renderTitle(true) .ordering(function(p) { return -p.value; }); var lineChart = dc.lineChart("#demo1 .lineChart", groupname) .width(450) .height(200) .margins({ top: 10, bottom: 30, right: 10, left: 70 }) .dimension(dateDimension) .group(dateGroup, "total spend") .yAxisLabel("Transaction spend") .renderHorizontalGridLines(true) .renderArea(true) .legend(dc.legend().x(1200).y(5).itemHeight(12).gap(5)) .x(d3.scaleTime().domain([minDate, maxDate])); lineChart.yAxis().ticks(5); lineChart.xAxis().ticks(4); dc.renderAll(groupname); } const data = d3.tsvParse(d3.select('pre#data').text()); drawMarkerSelect(data); CSS pre#data { display: none; } .marker-cluster-indiv { background-color: #9ecae1; } .marker-cluster-indiv div { background-color: #6baed6; } A: Crossfilter is a framework for mapping and reducing in JavaScript. The dimension and group key functions determine the mapping to group bins, and the group reduce functions determine how to add or remove a row of data to/from a bin. When you use a groupAll, there is just one bin. With that in mind, you can count cash rows at face value, and count other rows as zero, by writing your reduceSum function as var ndGroup = all.reduceSum(function(d) { return d.type === 'cash' ? d.value : 0; }); Fork of your fiddle.
{ "pile_set_name": "StackExchange" }
Q: How to iterate through 'nested' dataframes without 'for' loops in pandas (python)? I'm trying to check the cartesian distance between each set of points in one dataframe to sets of scattered points in another dataframe, to see if the input gets above a threshold 'distance' of my checking points. I have this working with nested for loops, but is painfully slow (~7 mins for 40k input rows, each checked vs ~180 other rows, + some overhead operations). Here is what I'm attempting in vectorialized format - 'for every pair of points (a,b) from df1, if the distance to ANY point (d,e) from df2 is > threshold, print "yes" into df1.c, next to input points. ..but I'm getting unexpected behavior from this. With given data, all but one distances are > 1, but only df1.1c is getting 'yes'. Thanks for any ideas - the problem is probably in the 'df1.loc...' line: import numpy as np from pandas import DataFrame inp1 = [{'a':1, 'b':2, 'c':0}, {'a':1,'b':3,'c':0}, {'a':0,'b':3,'c':0}] df1 = DataFrame(inp1) inp2 = [{'d':2, 'e':0}, {'d':0,'e':3}, {'d':0,'e':4}] df2 = DataFrame(inp2) threshold = 1 df1.loc[np.sqrt((df1.a - df2.d) ** 2 + (df1.b - df2.e) ** 2) > threshold, 'c'] = "yes" print(df1) print(df2) a b c 0 1 2 yes 1 1 3 0 2 0 3 0 d e 0 2 0 1 0 3 2 0 4 A: Here is an idea to help you to start... Source DFs: In [170]: df1 Out[170]: c x y 0 0 1 2 1 0 1 3 2 0 0 3 In [171]: df2 Out[171]: x y 0 2 0 1 0 3 2 0 4 Helper DF with cartesian product: In [172]: x = df1[['x','y']] \ .reset_index() \ .assign(k=0).merge(df2.assign(k=0).reset_index(), on='k', suffixes=['1','2']) \ .drop('k',1) In [173]: x Out[173]: index1 x1 y1 index2 x2 y2 0 0 1 2 0 2 0 1 0 1 2 1 0 3 2 0 1 2 2 0 4 3 1 1 3 0 2 0 4 1 1 3 1 0 3 5 1 1 3 2 0 4 6 2 0 3 0 2 0 7 2 0 3 1 0 3 8 2 0 3 2 0 4 now we can calculate the distance: In [169]: x.eval("D=sqrt((x1 - x2)**2 + (y1 - y2)**2)", inplace=False) Out[169]: index1 x1 y1 index2 x2 y2 D 0 0 1 2 0 2 0 2.236068 1 0 1 2 1 0 3 1.414214 2 0 1 2 2 0 4 2.236068 3 1 1 3 0 2 0 3.162278 4 1 1 3 1 0 3 1.000000 5 1 1 3 2 0 4 1.414214 6 2 0 3 0 2 0 3.605551 7 2 0 3 1 0 3 0.000000 8 2 0 3 2 0 4 1.000000 or filter: In [175]: x.query("sqrt((x1 - x2)**2 + (y1 - y2)**2) > @threshold") Out[175]: index1 x1 y1 index2 x2 y2 0 0 1 2 0 2 0 1 0 1 2 1 0 3 2 0 1 2 2 0 4 3 1 1 3 0 2 0 5 1 1 3 2 0 4 6 2 0 3 0 2 0
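If both frames are small enough that the full distance matrix fits in memory, the same check can also be written with plain NumPy broadcasting instead of a cartesian-product merge. A minimal sketch using the column names from the question (flagging rows of df1 whose distance to ANY df2 point exceeds the threshold):

import numpy as np
from pandas import DataFrame

df1 = DataFrame([{'a': 1, 'b': 2, 'c': 0}, {'a': 1, 'b': 3, 'c': 0}, {'a': 0, 'b': 3, 'c': 0}])
df2 = DataFrame([{'d': 2, 'e': 0}, {'d': 0, 'e': 3}, {'d': 0, 'e': 4}])
threshold = 1

# shape (len(df1), len(df2)): distance from every df1 point to every df2 point
dx = df1['a'].values[:, None] - df2['d'].values[None, :]
dy = df1['b'].values[:, None] - df2['e'].values[None, :]
dist = np.sqrt(dx ** 2 + dy ** 2)

# write 'yes' into df1.c wherever at least one distance is above the threshold
df1.loc[(dist > threshold).any(axis=1), 'c'] = 'yes'
print(df1)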
{ "pile_set_name": "StackExchange" }
Q: C++ GetModuleHandle with a path Can someone tell me where my mistake is? HMODULE client = GetModuleHandle ("D:\mylib\file.dll"); It returns 0 (false). I could not find any information, or even the simplest working example of this function. P.S. For example, if I pass kernel32.dll as the lpModuleName argument, everything works fine. A: Maybe you first need to load it with LoadLibraryEx, and only then ask WinAPI for a handle to it?
{ "pile_set_name": "StackExchange" }
Q: Getting Incompitable Types with the Navigation Drawer I am trying to create Navigation drawer in Android Studio. My code is as follows: public class NavigationActivity extends AppCompatActivity { @SuppressWarnings("deprecation") private ActionBarDrawerToggle mDrawerToggle; private DrawerLayout mDrawerLayout; private ListView mList; private ArrayList<com.zaptech.webdata.model.MenuItem> listMenu; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_navigation); findViews(); Toolbar mTool = (Toolbar) findViewById(R.id.toolbar); setSupportActionBar(mTool); getSupportActionBar(). setDisplayHomeAsUpEnabled(true); getSupportActionBar().setHomeButtonEnabled(true); //noinspection deprecation mDrawerToggle = new ActionBarDrawerToggle(NavigationActivity.this, mDrawerLayout, R.drawable.ic_drawer, R.string.app_name, R.string.app_name) { public void onDrawerClosed(View view) { invalidateOptionsMenu(); } public void onDrawerOpened(View drawerView) { invalidateOptionsMenu(); } }; mDrawerLayout.setDrawerListener(mDrawerToggle); //mList.setAdapter(new CustomAdapter(NavigationActivity.this,)); //mList.setOnItemClickListener(this); FloatingActionButton floatingActionButton = (FloatingActionButton) findViewById(R.id.fab); floatingActionButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { //noinspection SpellCheckingInspection Snackbar.make(view, "Developed By Bandish", Snackbar.LENGTH_LONG) .setAction("Action", null).show(); } }); } @Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.menu_navigation, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { int id = item.getItemId(); return id == R.id.action_settings || super.onOptionsItemSelected(item); } private void findViews() { mDrawerToggle = (DrawerLayout) findViewById(R.id.drawer_layout); mList = (ListView) findViewById(R.id.list_slidermenu); } When I import the following: import android.support.v4.app.ActionBarDrawerToggle; import android.support.v4.widget.DrawerLayout; I get the these errors: Required import android.support.v4.widget.DrawerLayout; Found import android.support.v4.app.ActionBarDrawerToggle; Does anybody know what the problem is? Thanks! A: My friend you have declared private DrawerLayout mDrawerToggle; Instead you should use ActionBarDrawerToggle and your reference to drawerlayout is wrong private void findViews() { mDrawerToggle = (DrawerLayout) findViewById(R.id.drawer_layout); it should be mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
{ "pile_set_name": "StackExchange" }
Q: sql server query to Linq to entites conversion doesn't retrieve records I have table called "AttendanceTracker"... below is the snapshot of the table and SQL query to retrieve records from this table is as follows select EmployeeId, Dates, cast(datediff(hour, INTIME, OUTIME) as varchar)+':'+ cast(datediff(minute, INTIME, OUTIME) - datediff(hour, INTIME, OUTIME) * 60 as varchar)+':'+ cast(datediff(second, INTIME, OUTIME) - (datediff(minute, INTIME, OUTIME) * 60) as varchar) TotalTime FROM ( SELECT INN.EmployeeId AS EmployeeId,Convert(DATE,INN.Time,101) AS Dates,MIN(INN.Time) AS INTIME, MAX(OUTT.Time) AS OUTIME FROM [dbo].[AttendanceTrackers] AS INN, [dbo].[AttendanceTrackers] AS OUTT WHERE INN.Type = 'IN' AND OUTT.Type = 'OUT' AND INN.EmployeeId = 1 AND OUTT.EmployeeId = INN.EmployeeId AND Convert(DATE,INN.Time,101) = Convert(DATE,OUTT.Time,101) GROUP BY INN.EmployeeId, INN.TenantID, CONVERT(DATE,INN.Time,101) ) X; Query Output is as follows LINQ to entities query is as follows var query = from X in ( (from INN in db.AttendanceTrackers from OUTT in db.AttendanceTrackers where INN.Type == "IN" && OUTT.Type == "OUT" && INN.EmployeeId == 1 && OUTT.EmployeeId == INN.EmployeeId && (DateTime)INN.Time == (DateTime)OUTT.Time group new {INN, OUTT} by new { INN.EmployeeId, INN.TenantId, Column1 = (DateTime?)(DateTime)INN.Time } into g select new { EmployeeId = g.Key.EmployeeId, Dates = g.Key.Column1, INTIME = (DateTime?)g.Min(p => p.INN.Time), OUTIME = (DateTime?)g.Max(p => p.OUTT.Time) })) select new { X.EmployeeId, X.Dates, TotalTime = (SqlFunctions.StringConvert((Double)SqlFunctions.DateDiff("hour",X.INTIME,X.OUTIME)) + ":" + SqlFunctions.StringConvert((Double)(SqlFunctions.DateDiff("minute",X.INTIME,X.OUTIME) - SqlFunctions.DateDiff("hour",X.INTIME,X.OUTIME) * 60)) + ":" + SqlFunctions.StringConvert((Double)(SqlFunctions.DateDiff("second",X.INTIME,X.OUTIME) - (SqlFunctions.DateDiff("minute",X.INTIME,X.OUTIME) * 60)))) } The LINQ query doesnt return any records... can someone please help ??? A: So, so, so... First, your first query (and the linq version) should use the join syntax, not the old "cartesian product with a where clause". This makes things much easier to understand. Than, you can use EntityFunctions.TruncateTime to get the date part of a DateTime in linq to entities. Then, I would enumerate before doing the totalTime display part, as it's much easier to do in linq2 objects than linq 2 entities. (Datetime2 - DateTime1), which gives you a Timestamp that you can easily format. Which would do something like that (sorry for comment formatting, looks like it's not recognized as c#) var query = (from inn in db.AttendanceTrackers.Where(m => m.EmployeeId == 1 && m.Type == "IN") //of course, you could put the where clause on employeeId in another place join outt in db.AttendanceTrackers.Where(m => m.Type == "OUT") //join on EmployeeId and Date part of the datetime on new { inn.EmployeeId, time = EntityFunctions.TruncateTime(inn.Time) } equals new{ outt.EmployeeId, time = EntityFunctions.TruncateTime(outt.Time) } group new{inn,outt} //group by EmployeeId, TenantId and Date part of the DateTime by new { inn.EmployeeId, inn.TenantId, day = EntityFunctions.TruncateTime(Time) } into g //select first inTime and last outTime for the day + employeeId and day select new { employeeId = g.Key.EmployeeId, day = g.Key.day, inTime = g.Min(x => x.inn.Time), outTime = g.Max(x => x.outt.Time) }) //enumerate, as we filtered all we needed .ToList() //easier way to format the desired values. 
.Select(x => new { x.employeeId, day = x.day, TotalTime = (x.outTime - x.inTime).ToString("hh:mm:ss") });
{ "pile_set_name": "StackExchange" }
Q: not able to add custom block in cart page above coupon block I am adding the block through XML on the cart page. I have searched a lot but could not find where I am going wrong. <checkout_cart_index> <reference name="checkout.cart"> <block type="test/test" before="coupon" template="test/cart_test.phtml" /> </reference> </checkout_cart_index> The block is added to the cart page, but I want to add it before the coupon block. Any help will be appreciated. A: Try like this, <reference name="checkout.cart"> <block type="test/test" before="coupon" name="custom-block" template="test/cart_test.phtml" /> </reference> then call your custom block before the coupon in app\design\frontend\package\theme\template\checkout\cart.phtml, <?php echo $this->getChildHtml('custom-block'); ?> <?php echo $this->getChildHtml('coupon') ?> A: This is what worked for me: <?php echo Mage::app()->getLayout() ->createBlock('test/test') ->setTemplate('test/test.phtml') ->toHtml(); ?>
{ "pile_set_name": "StackExchange" }
Q: Collaborative installable IDE I am looking for a collaborative real-time writing IDE that can be installed in my own server machine. The machine is an Ubuntu 14.04 server, any dependencies that might be needed will be installed. I want a solution with a free license and open-source if possible. It is basically needed only for collaboratively writing code, executing will be handled outside the IDE. I am working with only 1 other partner, so scalability is not that big of an issue. I want to use this software on a semi-professional project, which is however mostly done for educational reasons. Therefore, real-time code writing is a core part of the software I am asking for, since we will be writing, examining and correcting code together, exactly in order to share each other's code writing methods and patterns and make changes on the spot, while both of us are watching the project "live". And we want to install that said IDE on our own server, 1.because we already have a number of files from the project there 2.we don't want to use a host system given by a free account on a Web IDE, which will unsurprisingly be of limited capabilities, but instead use our own server system which has the capabilities we need and which we have customised. The desired languages to be supported by the IDE will be C,Python,Perl and Java. However, all that we want about these languages is the syntax highlighting, and from my experience non-specific code editors have such support for most of the actually used langauges. A: https://docs.c9.io/run_your_own_workspace.html This tutorial should be pretty helpful for a simple solution, you can "install" a Cloud9 workspace on your system. You can then proceed to use that workspace through your web browser from your account, just like any other Cloud9 workspace, but the files created, edited, deleted etc. will all be on your server, on the specified path. The system terminal you will get in the Web IDE will also be that of your server system. Cloud9 is an online platform-IDE that lets you have acccess to a project -named "workspace"- from anywhere with an internet access, and you can add other users to those projects to work together with. The owner of the project decides the rights of the collaborators. The features of the Cloud9 IDE involve editing your code real-time together with your other partners, in the fashion of Google Docs. The basic features are available for free, but you can pay subscriptions for extended features. The "Default" option in C9 is that the system you work on is hosted by Cloud9, but the tutorial above gives you the ability to use the capabilities of C9 with your own system's resources and special features.
{ "pile_set_name": "StackExchange" }
Q: boost::lockfree::spsc_queue allocator maximum size? I need to buffer a bunch of incoming 10GigE data so that I can write it to disk later. Doing so sequentially is an issue since the device that I am reading from will overflow if I don't service it quickly enough. In search of a solution, I stumbled across boost::lockfree::spsc_queue. In particular, I like the fact that I can pre-allocate the memory for the queue, since resizing during a push() could potentially cause a slowdown that could lead to an overflow. However, given the data rate, I need a LARGE buffer. As a result, I'm wondering what is the maximum size that I can allocate for the queue (in both # of items and in terms of bytes). The system that I am planning on deploying on has 24GB available, so I was hoping to allocate as much as 16GB to ensure that the queue never fills up. A final note is that the code will reside on a Linux machine (x86-64 architecture), so if there's any required kernel parameters to alter this size, that would be good to know as well. Thanks in advance for the help. A: After experimenting with the queue, I was able to allocate (dynamically) a huge queue. There appears to be no limit. Statically though, you are limited and I was receiving errors when I was creating large, statically allocated buffers. I didn't play around with it enough to find the exact value though.
{ "pile_set_name": "StackExchange" }
Q: accidentally deleted /usr/local/bin/youtube-dl I accidentally deleted the /usr/local/bin/youtube-dl file. Now I am unable to either install or update youtube-dl. How do I fix it or undo it? I tried: sudo wget https://yt-dl.org/downloads/2016.02.05.1/youtube-dl -O /usr/local/bin/youtube-dl sudo chmod a+rx /usr/local/bin/youtube-dl Now the /usr/local/bin/youtube-dl file has come back but I am unable to download any videos using youtube-dl. A: Try - sudo mkdir -v -p /usr/local/bin sudo curl https://yt-dl.org/downloads/2016.02.05.1/youtube-dl -o /usr/local/bin/youtube-dl sudo chmod a+rx /usr/local/bin/youtube-dl
{ "pile_set_name": "StackExchange" }
Q: Calculation of $ \frac{\partial^2 g(f) }{\partial x_1 \partial x_2} $ Let $f : \mathbb R^2 \to \mathbb R$. $ g : \mathbb R \to \mathbb R$. Assume that all partial derivatives exist. Then is this statement right? $$ \frac{\partial^2 g\circ f }{\partial x_1 \partial x_2} = \frac{\partial}{\partial x_1} \left( \frac{\partial g}{\partial f} \frac{\partial f}{\partial x_2} \right) = \frac{\partial g}{\partial f} \frac{\partial^2 f}{\partial x_1 \partial x_2} + \frac{\partial^2 g}{\partial f^2} \frac{\partial f}{\partial x_1} \frac{\partial f}{\partial x_2} $$ A: While the equation is in principle correct, the notation you are using is kind of unusual (I did see it before, though), which may cause irritation, as you may infer from Siminore's comment. I'd prefer (abbreviating $x=(x_1, x_2)$) one (any) of the following: $$ \frac{\partial^2 (g(f(x)))}{\partial x_1 \partial x_2} = \frac{\partial^2 (g\circ f)}{\partial x_1 \partial x_2} (x) $$ for the left-hand side of your identity and, assuming $g=g(u)$, $$ \frac{dg}{du}(f(x))\frac{\partial^2 f(x)}{\partial x_1 \partial x_2} + \frac{d^2g}{du^2}(f(x)) \frac{\partial f(x)}{\partial x_1} \frac{\partial f(x)}{\partial x_2} $$ or $$ g^{\prime}(f(x))\frac{\partial^2 f}{\partial x_1 \partial x_2} (x)+ g^{\prime \prime}(f(x)) \frac{\partial f}{\partial x_1}(x) \frac{\partial f}{\partial x_2} (x) $$ for the right-hand side. However, this works only if $g$ depends only on one variable. In that case $\frac{\partial g}{\partial u}$ is not wrong but not common. There are other notation conventions in use.
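A quick sanity check of the identity with a concrete pair of functions (my own choice, not from the original post): take $f(x_1,x_2)=x_1x_2$ and $g(u)=u^2$, so $g\circ f=x_1^2x_2^2$. Then $$ \frac{\partial^2 (g\circ f)}{\partial x_1 \partial x_2}=\frac{\partial}{\partial x_1}\left(2x_1^2x_2\right)=4x_1x_2, $$ while the right-hand side gives $$ g'(f)\,\frac{\partial^2 f}{\partial x_1\partial x_2}+g''(f)\,\frac{\partial f}{\partial x_1}\frac{\partial f}{\partial x_2}=2x_1x_2\cdot 1+2\cdot x_2\cdot x_1=4x_1x_2, $$ so both sides agree for this example.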
{ "pile_set_name": "StackExchange" }
Q: What happens in an empty microwave oven? What it we don't put any food in a microwave oven? I.e. nothing to absorb the microwaves? Would the standing microwave modes in the 3D cavity be reinforced? Would there be too much energy in the microwave? A: From Quora: This is what happens when you run it without a load. At first, everything is fine because the cavity, the door, the tray (or shelf) absorb the excess energy. Things will start to get hot, but it's not critical yet. After a few minutes, something will start to get really hot and the excess energy? It starts reflecting back into the magnetron (the device that provides the microwave energy) and that starts to heat up too. If it is designed well enough, it will trip a thermal fuse before anything actually breaks. The thermal fuse can be resettable, but usually not. You will need to replace this fuse if the oven no longer works. If the oven is designed poorly, you'll experience what is called "thermal runaway". At this point, some random location in the oven will begin to superheat. And by super heat, I mean hot enough to liquefy porcelain. This will continue until something finally gives out. It could be a breaker tripping, a standard fuse blowing, or in the worst case, the insulation on the wiring or the plastic components might literally burst into flames consuming everything flammable until the oven finally loses power. Don't do this. Some ovens should not be operated when empty. Refer to the instruction manual for your oven. I would not ignore this instruction, issued by the FDA. Also you would be wasting money, and I would guess there is some kind of limiter built in as hyportnex says, BUT it's not worth the risk. But you could do this experiment instead, and eat the results. Image source: Measure the speed of light using chocolate Use a bar of chocolate to check that the speed of light is 300,000 km/s, rather than let all that energy go to waste. Apologies if you know this already. Measure the distance between the melted spots, after they have formed and then double it to get the wavelength of the microwave radiation. The wave frequency is around 2.45 gigahertz. Velocity of light = wavelength x frequency The distance between each melted spot should be around 6 cm. 6 x 2 x 2450000000 = 29400000000 cm/s, pretty close to the speed of light. A: Re: chocolate experiment and microwaves. All the microwave ovens I have ever come across, dismantled or repaired have always included a "stirrer" which is in the waveguide path between the magnetron and the oven cavity. Usually it is in the form of a metal rotating shape like a fan. This chops up the otherwise nicely formed e-m microwaves into a jumble and hence there can be virtually no standing waves inside the oven. The reason for this is simple. To ensure that your chocolate is NOT melted/vaporized in little spots but heated evenly. Pretty well all the owners of microwave ovens I know prefer evenly heated food as opposed to generally still frozen food with black burnt blobs in it.
{ "pile_set_name": "StackExchange" }
Q: jQueryMobile alternative for Windows Phone 7? I have a problem: I'm using jQuery Mobile for some Android apps and it runs very nicely. After trying it on Windows Phone, which doesn't use WebKit, it really disappointed me. When you click a button, for example, it looks very ugly. Does anyone know a framework that is roughly equivalent to jQuery Mobile but optimized for Windows Phone? A: Have you tried using the jQuery Mobile Metro theme? It is, as far as I know, the only HTML5 mobile framework optimised for WP7.
{ "pile_set_name": "StackExchange" }
Q: Redirecting subprocesses' output (stdout and stderr) to the logging module I'm working on a Python script and I was searching for a method to redirect stdout and stderr of a subprocess to the logging module. The subprocess is created using the subprocess.call() method. The difficulty I faced is that with subprocess I can redirect stdout and stderr only using a file descriptor. I did not find any other method, but if there is one please let me know! To solve this problem I wrote the following code which basically creates a pipe and uses a thread to read from the pipe and generates a log message using the Python logging method: import subprocess import logging import os import threading class LoggerWrapper(threading.Thread): """ Read text message from a pipe and redirect them to a logger (see python's logger module), the object itself is able to supply a file descriptor to be used for writing fdWrite ==> fdRead ==> pipeReader """ def __init__(self, logger, level): """ Setup the object with a logger and a loglevel and start the thread """ # Initialize the superclass threading.Thread.__init__(self) # Make the thread a Daemon Thread (program will exit when only daemon # threads are alive) self.daemon = True # Set the logger object where messages will be redirected self.logger = logger # Set the log level self.level = level # Create the pipe and store read and write file descriptors self.fdRead, self.fdWrite = os.pipe() # Create a file-like wrapper around the read file descriptor # of the pipe, this has been done to simplify read operations self.pipeReader = os.fdopen(self.fdRead) # Start the thread self.start() # end __init__ def fileno(self): """ Return the write file descriptor of the pipe """ return self.fdWrite # end fileno def run(self): """ This is the method executed by the thread, it simply read from the pipe (using a file-like wrapper) and write the text to log. 
NB the trailing newline character of the string read from the pipe is removed """ # Endless loop, the method will exit this loop only # when the pipe is close that is when a call to # self.pipeReader.readline() returns an empty string while True: # Read a line of text from the pipe messageFromPipe = self.pipeReader.readline() # If the line read is empty the pipe has been # closed, do a cleanup and exit # WARNING: I don't know if this method is correct, # further study needed if len(messageFromPipe) == 0: self.pipeReader.close() os.close(self.fdRead) return # end if # Remove the trailing newline character frm the string # before sending it to the logger if messageFromPipe[-1] == os.linesep: messageToLog = messageFromPipe[:-1] else: messageToLog = messageFromPipe # end if # Send the text to the logger self._write(messageToLog) # end while print 'Redirection thread terminated' # end run def _write(self, message): """ Utility method to send the message to the logger with the correct loglevel """ self.logger.log(self.level, message) # end write # end class LoggerWrapper # # # # # # # # # # # # # # # Code to test the class # # # # # # # # # # # # # # # logging.basicConfig(filename='command.log',level=logging.INFO) logWrap = LoggerWrapper( logging, logging.INFO) subprocess.call(['cat', 'file_full_of_text.txt'], stdout = logWrap, stderr = logWrap) print 'Script terminated' For logging subprocesses' output, Google suggests to directly redirect the output to a file in a way similar to this: sobprocess.call( ['ls'] stdout = open( 'logfile.log', 'w') ) This is not an option for me since I need to use the formatting and loglevel facilities of the logging module. I also suppose that having a file open in write mode but two different entities is not permitted / not a sane thing to do. I would now like to see your comments and enhancement proposal. I would also like to know if there is already a similar object in the Python library since I found nothing to accomplish this task! A: Great idea. I was having the same problem and this helped me solve it. Your method for doing cleanup though is wrong (as you mentioned it might be). Basically, you need to close the write end of the pipes after passing them to the subprocess. That way when the child process exits and closes it's end of the pipes, the logging thread will get a SIGPIPE and return a zero length message as you expected. Otherwise, the main process will keep the write end of the pipe open forever, causing readline to block indefinitely, which will cause your thread to live forever as well as the pipe. This becomes a major problem after a while because you'll reach the limit on the number of open file descriptors. Also, the thread shouldn't be a daemon thread because that creates the risk of losing log data during process shutdown. If you properly cleanup as a described, all the threads will exit properly removing the need to mark them as daemons. Lastly, the while loop can be simplified using a for loop. 
Implementing all of these changes gives: import logging import threading import os import subprocess logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO) class LogPipe(threading.Thread): def __init__(self, level): """Setup the object with a logger and a loglevel and start the thread """ threading.Thread.__init__(self) self.daemon = False self.level = level self.fdRead, self.fdWrite = os.pipe() self.pipeReader = os.fdopen(self.fdRead) self.start() def fileno(self): """Return the write file descriptor of the pipe """ return self.fdWrite def run(self): """Run the thread, logging everything. """ for line in iter(self.pipeReader.readline, ''): logging.log(self.level, line.strip('\n')) self.pipeReader.close() def close(self): """Close the write end of the pipe. """ os.close(self.fdWrite) # For testing if __name__ == "__main__": import sys logpipe = LogPipe(logging.INFO) with subprocess.Popen(['/bin/ls'], stdout=logpipe, stderr=logpipe) as s: logpipe.close() sys.exit() I used different names in a couple of spots, but otherwise it's the same idea, except a little cleaner and more robust. Setting close_fds=True for the subprocess call (which is actually the default) won't help because that causes the file descriptor to be closed in the forked (child) process before calling exec. We need the file descriptor to be closed in the parent process (i.e. before the fork) though. The two streams still end up not being synchronized correctly. I'm pretty sure the reason is that we're using two separate threads. I think if we only used one thread underneath for the logging, the problem would be solved. The problem is that we're dealing with two different buffers (pipes). Having two threads (now I remember) gives an approximate synchronization by writing the data as it becomes available. It's still a race condition, but there are two "servers", so it's normally not a big deal. With only one thread, there's only one "server" so the race condition shows up pretty bad in the form of unsynchronized output. The only way I can think to solve the problem is to extend os.pipe() instead, but I have no idea how feasible that is.
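On the synchronization point at the end: one way to sidestep it entirely, at the cost of no longer being able to log stdout and stderr at different levels, is to merge the child's stderr into its stdout and read the single pipe with one loop in the parent; with only one buffer and one reader, lines are logged in the order the child wrote them. A minimal sketch of that alternative (the helper name is just for illustration):

import logging
import subprocess

logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.INFO)

def run_and_log(cmd, level=logging.INFO):
    """Run cmd, folding stderr into stdout, and log each line of output."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    for line in proc.stdout:          # single reader, single buffer
        logging.log(level, line.rstrip('\n'))
    proc.stdout.close()
    return proc.wait()

if __name__ == '__main__':
    run_and_log(['/bin/ls'])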
{ "pile_set_name": "StackExchange" }
Q: displaying values from database on the basis of dropdown menu selction hi i am creating a JSF application . In fact I made a dropdown list and want to display the results according to the value selected from the dropdown. if somebody can help.... thanks here is my drop down <h:form> <h:commandButton action="sample?faces-redirect=true" value="submit"> <h:selectOneMenu id="sampleSearch" value="#{cBean.id}"> <f:selectItem id="id" itemLable="idText" itemValue="By Text" /> <f:selectItem id="idnumeric" itemLable="idNumeric" itemValue="Number" /> <f:selectItem id="product" itemLable="Product" itemValue="Main Product" /> <f:selectItem id="lonumber" itemLable="loNumber" itemValue="LoNumber" /> <f:selectItem id="formula" itemLable="formula" itemValue="By Formula" /> </h:selectOneMenu> </h:commandButton> </h:form> A: First, you're not allowed to nest <h:selectOneMenu> component(s) within <h:commandButton>! Here's a proper structure of your <h:form> <h:form> <h:commandButton action="sample?faces-redirect=true" value="submit" /> <h:selectOneMenu id="sampleSearch" value="#{cBean.id}"> <f:selectItem id="id" itemLable="idText" itemValue="By Text" /> <f:selectItem id="idnumeric" itemLable="idNumeric" itemValue="Number" /> <f:selectItem id="product" itemLable="Product" itemValue="Main Product" /> <f:selectItem id="lonumber" itemLable="loNumber" itemValue="LoNumber" /> <f:selectItem id="formula" itemLable="formula" itemValue="By Formula" /> </h:selectOneMenu> </h:form> Then, in order to get the dropdown list options from the database, you can consider using the <f:selectItems> component (and get rid of those <f:selectItem>s) and pass a List<T> from the managed bean to the components value property. The selectOneMenu would then look like this: <h:selectOneMenu value="#{cBean.id}"> <f:selectItems value="#{cBean.values}" var="item" itemLabel="#{item.label}" itemValue="#{item.value}"/> </h:selectOneMenu> As for the managed-bean, it's now supposed to provide a public List<T> getValues() method, which will return a list with the objects that will populate the dropdown. When T is a complex Java object, such as Item which has a String property of label and value, then you could use the var attribute to get hold of the iteration variable which you in turn can use in itemValue and/or itemLabel attribtues (if you omit the itemLabel, then the label becomes the same as the value). Let's say: @ManagedBean @RequestScoped public class CBean { public List<Item> getValues() { List<Item> result = new ArrayList<Item>(); //..call-back to web-service, db, etc. and populate the result variable. return result; } } The Item class would look like this: public class Item { private String label; private String value; //getters, setters. } You can read more here: How to prepopulate a from a DB?
{ "pile_set_name": "StackExchange" }
Q: Centering text in a trapezium div which uses unsorted list trying to get each trapezium to have text centered in them - but not sure how to! http://jsfiddle.net/fYkq4/ HTML structure follows: <ul id="menu"> <ul><div class="fade-div" id="trapezium1"> HOME </div></ul> <ul><div class="fade-div" id="trapezium2"> HOME </div></ul> <ul><div class="fade-div" id="trapezium3"> HOME </div></ul> <ul><div class="fade-div" id="trapezium4"> HOME </div></ul> <ul></ul> </ul> I use the CSS border fidgeting technique to get the trapeziums, however as the shapes get inverted I have to change between using border-bottom: 80px solid blue; and border-top: 80px solid blue; - i.e. if I use top the text is below the trap, if i use bottom the text is inside (but really below) the trap. How can I make it so all the HOME's are consistently inside each trapezium? Do I need to move the text out of the div and make the divs float? Or put the text into another div and make it float? A: You could do it like this fiddle: http://jsfiddle.net/xonium/PqXbu/ Where the important part is the blocker <div id="menu"> <ul> <li class="fade-div" id="trapezium1"> HOME1 </li> <li class="fade-div" id="trapezium2"> HOME2 </li> <li class="fade-div" id="trapezium3"> HOME3 <div class="blocker"></div> </li> <li class="fade-div" id="trapezium4"> HOME4 <div class="blocker"></div> </li> </ul> </div> which acts as just like your upper two trapezium, just inverse colors. .blocker { background-color: blue; height: 0; width:0; right:0; border-bottom: 80px solid red; border-left: 40px solid transparent; border-right: 0px solid transparent; float: right; }
{ "pile_set_name": "StackExchange" }
Q: How to scale data for a log graph I'm writing a program to "printer plot" some data (printing "*" characters), and I'd like to scale the data on the vertical axis logarithmically (ie, semi-log chart). I have data values ranging from zero to about 10000, and I'd like to scale the data to fit on 30-40 lines. For a linear plot it's fairly simple -- divide 10000 by 30 (number of lines) and then, for each row being printed, calculate row# * (10000/30) and (row# + 1) * (10000/30), then print a "*" if the value in the data vector is between those two values. How do I do the same conceptual thing with log scaling? (I realize that on a log chart you never really have zero, but let's say I want to plot the data on a 2 decade chart spread over the 30 lines.) I've got the following C code: float max = 10000; int xDim = 200; int yDim = 30; for (int yIndex = yDim - 1; yIndex >= 0; yIndex--) { --<clear "buffer">-- float top = exp((float) yDim); float upper = exp((float) yIndex + 1); float lower = exp((float) yIndex); float topThresh = (max / top) * upper; float bottomThresh = (max / top) * lower; for (int xIndex = 0; xIndex < xDim; xIndex++) { if (data[xIndex] > bottomThresh && data[xIndex] <= topThresh) { buffer[xIndex] = '*'; } } --<print "buffer">-- } This does print something resembling a semi-log chart, but all the data (five decades, more or less) is compressed into the top 12 or so lines (out of 30), and obviously I've not factored in the number of decades I want to display (since I can't figure out how to do that). So how do I scale to plot two decades? Update I've discovered that if I divide the arguments to the three exp calls with a value of about 6.5 (or multiply by about 0.15), I get close to two decades on the graph. But I have no idea how to compute this value. It kind of made sense to do log(pow(10,2)) (ie, natural log of 10 raised to the number of decades desired), but that yields a value near 4 (and it's moving in the wrong direction). How should I compute the value? Update #2 If I multiply the arguments to the exp calls times the value (log(10) * decades)/yDim I get the right result. A: If I multiply the arguments to the exp calls times the value (log(10) * decades)/yDim I get the right result.
{ "pile_set_name": "StackExchange" }
Q: If output contains "File Not Found", then log statement A, otherwise statement B (batch file) I have a batch file that is checking the parition to see if there is a hiberation file. If the directory listing finishes and the file is not found, it will output: File Not Found, but for some reason that text is not going into my log as I wish. If there is way to have that go to my log, let me know. Otherwise, I am looking to have some code that will basically say, if the output contains "Files Not Found" then log the text "File Not Found" in my output log. If it does, it automatically logs the info I need, so nothing needed. set output=mylog.txt echo.=================================>> "%output%" echo.Performing Hibernation file Check... echo.Hibernation File Check: >> "%output%" C: cd / dir /a /s hiberfil.sys >> "%output%" echo. echo. echo.=================================>> "%output%" A: Change your dir command in the batch file to the following to also output stderr to the same file you are outputting stdout to. dir /a /s hiberfil.sys >> "%output%" 2>>&1
{ "pile_set_name": "StackExchange" }
Q: How do I authenticate to a WCF service via ACS integration with Windows Live ID? I have a WCF service that uses UserName authentication via ACS. This works great when I'm using Service Identities but when I try to use my Windows Live ID credentials I get the following error: System.ServiceModel.FaultException: ACS10002: An error occurred while processing the SOAP body. ACS50012: Authentication failed. ACS50026: Principal with name '[email protected]' is not a known principal. Unfortunately I've yet to find an example of how one uses Windows Live ID with a WCF service. The only examples I could find seem to be focused on integrating multiple identity providers with ASP.NET or MVC websites. Any help in this regard would be greatly appreciated.... A: ACS won't authenticate your Live ID username and password directly. ACS acts as a federation provider for Live ID, it's a go-between, so it will only consume tokens issued by Windows Live ID. ACS supports Live ID authentication out of the box in passive (browser redirect) based scenarios but for a WCF service you might consider using Live Connect APIs instead. To use LiveID with your service, your client first authenticates itself to LiveID, and then presents a LiveID-issued token to your WCF service. Brace yourself though, there would be some hoops to jump through to set all of this up. To use the Live Connect APIs, you would register your WCF service as an application with Live ID. Clients that consume your WCF service would then need to be capable of handling the web based login page and user consent pages that Live ID will prompt. The docs below are a good start http://msdn.microsoft.com/en-us/library/hh243641.aspx http://msdn.microsoft.com/en-us/library/hh243647.aspx http://msdn.microsoft.com/en-us/library/windows/apps/hh465098.aspx The next problem is the token you'll get from Live Connect will be in JWT (JSON Web Token) format. I'm not sure if you can request a different token format from live connect, but if your WCF service authentication is WIF based, it most likely expects SAML tokens. JWT is a rather new token format that WIF doesn't yet support so you would have to configure a WIF SecurityTokenHandler on your service that understands JWT tokens. The third link above has some code for reading JWTs, which is a start at least.
{ "pile_set_name": "StackExchange" }
Q: How to combine date and time from different MySQL columns to compare to a full DateTime? Column d is DATE, column t is time, column v is, for example, INT. Let's say I need all the values recorded after 15:00 of 01 Feb 2012 and on. If I write SELECT * FROM `mytable` WHERE `d` > '2012-02-01' AND `t` > '15:00' all the records made before 15:00 at any date are going to be excluded from the result set (as well as all made at 2012-02-01) while I want to see them. It seems it would be easy if there were a single DATETIME column, but there are separate columns for date and time instead in the case of mine. The best I can see now is something like SELECT * FROM `mytable` WHERE `d` >= '2012-02-02' OR (`d` = '2012-02-01' AND `t` > '15:00') Any better ideas? Maybe there is a function for this in MySQL? Isn't there something like SELECT * FROM `mytable` WHERE DateTime(`d`, `t`) > '2012-02-01 15:00' possible? A: You can use the mysql CONCAT() function to add the two columns together into one, and then compare them like this: SELECT * FROM `mytable` WHERE CONCAT(`d`,' ',`t`) > '2012-02-01 15:00' A: The TIMESTAMP(expr1,expr2) function is explicitly for combining date and time values: With a single argument, this function returns the date or datetime expression expr as a datetime value. With two arguments, it adds the time expression expr2 to the date or datetime expression expr1 and returns the result as a datetime value. This resulting usage is just what you predicted: SELECT * FROM `mytable` WHERE TIMESTAMP(`d`, `t`) > '2012-02-01 15:00' A: Here's a clean version that doesn't require string operations or conversion to to UTC timestamps across time zones. DATE_ADD(date, INTERVAL time HOUR_SECOND)
{ "pile_set_name": "StackExchange" }
Q: Using FilePath to access workspace on slave in Jenkins pipeline I need to check for the existence of a certain .exe file in my workspace as part of my pipeline build job. I tried to use the below Groovy script from my Jenkinsfile to do the same. But I think the File class by default tries to look for the workspace directory on jenkins master and fails. @com.cloudbees.groovy.cps.NonCPS def checkJacoco(isJacocoEnabled) { new File(pwd()).eachFileRecurse(FILES) { it -> if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec') isJacocoEnabled = true } } How to access the file system on slave using Groovy from inside the Jenkinsfile? I also tried the below code. But I am getting No such property: build for class: groovy.lang.Binding error. I also tried to use the manager object instead. But get the same error. @com.cloudbees.groovy.cps.NonCPS def checkJacoco(isJacocoEnabled) { channel = build.workspace.channel rootDirRemote = new FilePath(channel, pwd()) println "rootDirRemote::$rootDirRemote" rootDirRemote.eachFileRecurse(FILES) { it -> if (it.name == 'jacoco.exec' || it.name == 'Jacoco.exec') { println "Jacoco Exists:: ${it.path}" isJacocoEnabled = true } } A: Had the same problem, found this solution: import hudson.FilePath; import jenkins.model.Jenkins; node("aSlave") { writeFile file: 'a.txt', text: 'Hello World!'; listFiles(createFilePath(pwd())); } def createFilePath(path) { if (env['NODE_NAME'] == null) { error "envvar NODE_NAME is not set, probably not inside an node {} or running an older version of Jenkins!"; } else if (env['NODE_NAME'].equals("master")) { return new FilePath(path); } else { return new FilePath(Jenkins.getInstance().getComputer(env['NODE_NAME']).getChannel(), path); } } @NonCPS def listFiles(rootPath) { print "Files in ${rootPath}:"; for (subPath in rootPath.list()) { echo " ${subPath.getName()}"; } } The important thing here is that createFilePath() ins't annotated with @NonCPS since it needs access to the env variable. Using @NonCPS removes access to the "Pipeline goodness", but on the other hand it doesn't require that all local variables are serializable. You should then be able to do the search for the file inside the listFiles() method.
{ "pile_set_name": "StackExchange" }
Q: What is the easiest way to plot a function and its tangent lines at the turning points? I want to plot graphs like this and embed them as Tikz or Pstricks code in my handouts. I know that there are software such as Ipe, LaTeXDraw etc. But none of them can satisfy me. Because Ipe doesn't give me code and LaTeXDraw doesn't give smooth curves. What is the easiest way to plot graphs like this? Please keep in mind that the smoothness of the curve is very important to me. A: I don’t know how exact the curve should be but if the shapt isn’t that important you could use the bend or the in and out options to draw. The first thing is to set up a TikZ environment and draw the axes. Use \draw to draw a line, ad the tip with -> and the label with a node. \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} % Axes \draw [->] (-1,0) -- (11,0) node [right] {$x$}; \draw [->] (0,-1) -- (0,6) node [above] {$y$}; % Origin \node at (0,0) [below left] {$0$}; \end{tikzpicture} \end{document} The next thing could be the start, end and extreme points using coordinates % Points \coordinate (start) at (1,-0.8); \coordinate (c1) at (4,3); \coordinate (c2) at (6,2); \coordinate (c3) at (8,4); \coordinate (end) at (10.5,-0.8); % show the points \foreach \n in {start,c1,c2,c3,end} \fill [blue] (\n) circle (1pt) node [below] {\n}; Then join the single points with \draw and the to construction, where you can give the in and out angles to reach a point. % join the coordinates \draw [thick] (start) to[out=70,in=180] (c1) to[out=0,in=180] (c2) to[out=0,in=180] (c3) to[out=0,in=150] (end); Now add the dashed lines and the tangents using a \foreach loop through c1, c2 and c3. The letoperation allows to use components of a coordinate, but need the calc library (add \usetikzlibrary{calc} to the preamble). % add tangets and dashed lines \foreach \c in {c1,c2,c3} { \draw [dashed] let \p1=(\c) in (\c) -- (\x1,0); \draw ($(\c)-(0.75,0)$) -- ($(\c)+(0.75,0)$); } An as the last thing add the labels using nodes again. \foreach \c in {1,2,3} { \draw [dashed] let \p1=(c\c) in (c\c) -- (\x1,0) node [below] {$c_\c$}; \draw ($(c\c)-(0.75,0)$) -- ($(c\c)+(0.75,0)$) node [midway,above=4mm] {$f'(c_\c)=0$}; } To get a and b use the intersections library and name the x axis and the curve with name path. Then use the intersection to add the nodes as shown in the following full example. 
\documentclass{article} \usepackage{tikz} \usetikzlibrary{calc,intersections} \begin{document} \begin{tikzpicture} % Axes \draw [->, name path=x] (-1,0) -- (11,0) node [right] {$x$}; \draw [->] (0,-1) -- (0,6) node [above] {$y$}; % Origin \node at (0,0) [below left] {$0$}; % Points \coordinate (start) at (1,-0.8); \coordinate (c1) at (3,3); \coordinate (c2) at (5.5,1.5); \coordinate (c3) at (8,4); \coordinate (end) at (10.5,-0.8); % show the points % \foreach \n in {start,c1,c2,c3,end} \fill [blue] (\n) % circle (1pt) node [below] {\n}; % join the coordinates \draw [thick,name path=curve] (start) to[out=70,in=180] (c1) to[out=0,in=180] (c2) to[out=0,in=180] (c3) to[out=0,in=150] (end); % add tangets and dashed lines \foreach \c in {1,2,3} { \draw [dashed] let \p1=(c\c) in (c\c) -- (\x1,0) node [below] {$c_\c$}; \draw ($(c\c)-(0.75,0)$) -- ($(c\c)+(0.75,0)$) node [midway,above=4mm] {$f'(c_\c)=0$}; } % add a and b \path [name intersections={of={x and curve}, by={a,b}}] (a) node [below left] {$a$} (b) node [above right] {$b$}; \end{tikzpicture} \end{document} The shape of the curve may be improved by using the controls construction instead of to, e.g. \draw [thick,name path=curve] (start) .. controls +(70:1) and +(180:0.75) .. (c1) .. controls +(0:0.75) and +(180:1) .. (c2) .. controls +(0:1) and +(180:1) .. (c3) .. controls +(0:1) and +(150:1) .. (end); Have a look at the TikZ manual for mor information ;-) … It is also possible to use the plot operation as Harish Kumar shows but in this cas you can’t be sure that f'(c_n) = 0 and it needs mor manual calculations etc. to get the right points … \draw [thick, name path=curve] plot[smooth, tension=.7] coordinates{(start) (c1) (c2) (c3) (end)}; A: \documentclass{article} \usepackage{tikz} \begin{document} \begin{tikzpicture} \draw [->] (-1,0) -- (11,0) node [right] {$x$}; \draw [->] (0,-1) -- (0,6) node [above] {$y$}; \node at (0,0) [below left] {$0$}; \draw plot[smooth, tension=.7] coordinates{(1.5,-0.5) (3,3) (5,1.5) (7.5,4) (10,-1)}; \node at (1.75,-0.25) {$a$}; \node at (9.5,-0.25) {$b$}; \draw[dashed] (3.2,3.05) -- (3.2,0); \draw[dashed] (4.9,1.5) -- (4.9,0); \draw[dashed] (7.3,4.05) -- (7.3,0); \node at (3.2,-0.25) {$c_{1}$}; \node at (4.9,-0.25) {$c_{2}$}; \node at (7.3,-0.25) {$c_{3}$}; \draw (2.5,3.05) -- (4,3.05); \draw (4,1.5) -- (6,1.5); \draw (6.5,4.05) -- (8.25,4.05); \node at (3.2,3.5) {$f'(c_{1})=0$}; \node at (4.9,2.2) {$f'(c_{2})=0$}; \node at (7.3,4.5) {$f'(c_{3})=0$}; \node at (9.5,2.5) {$y=f(x)$}; \end{tikzpicture} \end{document} A: It needs the newest pst-eucl.sty which is version 1.49. 
\documentclass[pstricks,border=12pt]{standalone} \usepackage{pstricks-add,pst-eucl} \def\f(#1){((#1-1)*(#1-2)*(#1-4)*(#1-7)*(#1-9)/80+2)} \def\fp(#1){Derive(1,\f(x))}% first derivative \begin{document} \begin{pspicture}[algebraic,saveNodeCoors,PointNameSep=7pt,PointSymbol=none,PosAngle=-90,CodeFig=true](-0.75,-0.75)(9,4.5) % Determine the x-intercepts \pstInterFF[PosAngle=-45]{\f(x)}{0}{0}{a} \pstInterFF[PosAngle=-135]{\f(x)}{0}{8}{b} % Determine the abscissca of critical points \pstInterFF{\fp(x)}{0}{1.5}{c_1} \pstInterFF{\fp(x)}{0}{3}{c_2} \pstInterFF{\fp(x)}{0}{5.5}{c_3} % Determine the turning points \pstGeonode [ PointName={f'(c_1)=0,f'(c_2)=0,f'(c_3)=0}, PosAngle=90, PointNameSep={7pt,16pt,7pt}, ] (*N-c_1.x {\f(x)}){C_1} (*N-c_2.x {\f(x)}){C_2} (*N-c_3.x {\f(x)}){C_3} % Draw auxiliary dashed lines \bgroup \psset{linestyle=dashed,linecolor=gray} \psline(c_1)(C_1) \psline(c_2)(C_2) \psline(c_3)(C_3) \egroup % Draw the tangent line at the turning points \psline([nodesep=-0.5]C_1)([nodesep=0.5]C_1) \psline([nodesep=-0.5]C_2)([nodesep=0.5]C_2) \psline([nodesep=-0.5]C_3)([nodesep=0.5]C_3) % Plot the function \psplot[plotpoints=100]{0.4}{8.2}{\f(x)} % Attach the function label \rput(*7.5 {\f(x)}){$y=f(x)$} % Draw the coordinate axes \psaxes[labels=none,ticks=none]{->}(0,0)(-0.5,-0.5)(8.5,4)[$x$,0][$y$,90] \end{pspicture} \end{document}
{ "pile_set_name": "StackExchange" }
Q: My Brainfuck interpreter in F# I'm very new to functional world. I've written a simple brainfuck interpreter as my first F# program. What I would like to know: Am I using the right data structure for each situation? Is my code easy to understand? Is my code style similar to common F# codes (naming, organization etc)? Is my code "very declarative" or I'm still thinking in the imperative way? What is not so relevant: Performance (but if I've made something very bad, let me now!) More features Obs.: I assume input is always a valid brainfuck code and that's ok to me. open System open System.IO let getValue (position:int) (memory:Map<int, int>) = match Map.tryFind position memory with | None -> 0 | value -> value.Value let calcAccumulator (acc:int) (instruction:char) (search:char) (miss:char) = match instruction with | x when x = search -> acc+1 | x when x = miss -> acc-1 | _ -> acc let rec findMatch (code:string) (search:char) (miss:char) (inc:int) (current:int) (acc:int) = let instruction = code.[current] match instruction, acc with | x, 0 when x = search -> current+1 | _ -> findMatch code search miss inc (current+inc) (calcAccumulator acc instruction search miss) let updateMemory (instruction:char) (position:int) (memory:Map<int, int>) = let oldValue = getValue position memory let newValue = match instruction with | '+' -> oldValue + 1 | '-' -> oldValue - 1 | ',' -> Console.ReadKey().KeyChar |> Convert.ToInt32 | _ -> oldValue Map.add position newValue memory let updateOutput (instruction) (value:int) = if instruction = '.' then Console.Write ((char) value) let updatePosition (instruction:char) (position:int) = match instruction with | '>' -> position+1 | '<' -> position-1 | _ -> position let updateIndex (code:string) (index:int) (value:int) = match code.[index], value with | '[', 0 -> findMatch code ']' '[' 1 (index+1) 0 | ']', x when x <> 0 -> findMatch code '[' ']' -1 (index-1) 0 | _ -> index+1 let rec interpretHelper (code:string) (index:int) (memory:Map<int, int>) (position:int) = match code.Length with | x when x = index -> memory | _ -> let instruction = code.[index] let newPosition = updatePosition instruction position let newMemory = updateMemory instruction position memory let newIndex = updateIndex code index (getValue position memory) updateOutput instruction (getValue position memory) interpretHelper code newIndex newMemory newPosition let interpret (code:string) = interpretHelper code 0 Map.empty 0 let options (args:string array) = let pathToFile = args.[0] let showMemory = match args with | [| _ ; "-memory" |] -> true | _ -> false pathToFile, showMemory let onlyCode (text:string) = let validInstructions = "+-.,][<>".ToCharArray() String.filter (fun x -> Array.contains x validInstructions) text [<EntryPoint>] let main args = let pathToFile, showMemory = options args let code = onlyCode (IO.File.ReadAllText pathToFile) let memory = interpret code if showMemory then Map.iter (fun a b -> printf "\n%3d: %d" a b) memory 0 A: Let me make a few comments: F# has a good system of type inference. You don't need to specify the type explicitly, the compiler in many cases will do it for you. You can read more about it here. In function getValue. 
It is usually matched with None | Some value:
match Map.tryFind position memory with
| None -> 0
| Some value -> value
Although you can rewrite this code using defaultArg:
let getValue position memory = defaultArg (Map.tryFind position memory) 0
In the updateIndex function you can compare the value with 0 directly in the match:
let updateIndex (code:string) index value =
    match code.[index], value = 0 with
    | '[', true -> findMatch code ']' '[' 1 (index + 1) 0
    | ']', false -> findMatch code '[' ']' -1 (index - 1) 0
    | _ -> index + 1
Helper functions are better written as nested functions:
let interpret (code:string) =
    let rec interpretHelper index memory position =
You can read more about it here. In interpretHelper, use if-then-else instead of pattern matching:
let rec interpretHelper index memory position =
    if code |> String.length = index then memory
    else ...
{ "pile_set_name": "StackExchange" }
Q: Ruby on Rails escape umlauts in url I try to post some parameters containing umlauts to a url (PHP Script). So I've to escape the parameters. But Ruby returns me an unexpected string. PHP: urlencode("äöü"); output: %E4%F6%FC and RoR: URI.escape("äöü") output: %C3%A4%C3%B6%C3%BC or: CGI.escape("äöü") output: %C3%A4%C3%B6%C3%BC I'm working on Rails 3.0.5 and Ruby 1.9.2 and my application is setup for UTF-8. Where is my fault or what should I do? Thanx andi A: Welcome to the wonderful world of String encodings. As you noted, Ruby is configured for UTF-8, whereas your installation of PHP looks like it's trying to encode using ISO 8859-1. To solve this, you need to make sure both of your scripts are operating using the same encoding, or explicitly convert your URL paramaters from UTF-8 to ISO 8859-1.
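The same difference is easy to reproduce outside Ruby and PHP. As a rough cross-check (not part of the original question), Python 3's urllib.parse.quote takes an encoding argument, so percent-encoding the same string under Latin-1 and under UTF-8 yields exactly the two outputs shown above:
from urllib.parse import quote

s = "äöü"
print(quote(s, encoding="latin-1"))  # %E4%F6%FC, what PHP's urlencode() produces under ISO 8859-1
print(quote(s, encoding="utf-8"))    # %C3%A4%C3%B6%C3%BC, what URI.escape / CGI.escape produce under UTF-8
In other words, both runtimes encode correctly; they just start from different byte representations of the same characters.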
{ "pile_set_name": "StackExchange" }
Q: swap values between two String values in minimum iterations I have:
String a1 = "Hello";
String a2 = "World";
I want a2 to become a1 and a1 to become a2, i.e. they interchange values. Doing so without writing a function:
a1 = "World";
a2 = "Hello";
I want to write a function that can do it in the minimum number of iterations.

A: Since String is immutable in Java, you cannot change the value of a1 and a2, and the question is quite confusing. What you can do is swap the variables:
String tmp = a1;
a1 = a2;
a2 = tmp;
But I suspect this is an exercise, and perhaps it was intended more like this: Given char[] a1 = {'h', 'e', 'l', 'l', 'o'} and char[] a2 = ..., swap their values in the minimum number of steps without reassigning these variables. Then you would iterate over the elements of the arrays, and perform the swapping character by character just like I did earlier with the strings.
{ "pile_set_name": "StackExchange" }
Q: Display details in a textbox I have a list box.Whenever I search a person from the database, the result will be displayed in a listbox. Then what I want is whenever I click on the name of the person from the listbox is that the persons detail will be displayed in textboxes. I have my code but the problem is that only the details of the person I first click are displayed in the textboxes. private void listBox1_SelectedIndexChanged(object sender, EventArgs e) { connection.Open(); OleDbCommand select = new OleDbCommand(); select.Connection = connection; select.CommandText = "Select * From Accounts"; OleDbDataReader reader = select.ExecuteReader(); while (reader.Read()) { if (reader[0].ToString() == listBox1.Tag.ToString()) { fnametb.Text = reader[1].ToString(); lnametb.Text = reader[2].ToString(); agetb.Text = reader[3].ToString(); addresstb.Text = reader[4].ToString(); coursetb.Text = reader[5].ToString(); } } connection.Close(); } A: If you want to use your code you need to refresh listBox1.Tag and put into Tag Selected list box item key. Or you need to use something like reader[0].ToString() ==listBox1.SelectedValue
{ "pile_set_name": "StackExchange" }
Q: MVVM WPF: Reflecting a controls property to the viewmodel, when an events get triggered Okay i'm trying to understand WPF and the popular MVVM Pattern. Now i have this issue. I'm using a ribbon control with several tabs. In my ViewModel i have a property "ActiveTab (string)" Which should reflect the currently active tab. Since ribboncontrol doesn't have any property that shows this information i can't bind to it. So i was thinking: I could bind the selected event like this: <r:RibbonTab Label="tab1" Selected="RibbonTab_Selected"></r:RibbonTab> <r:RibbonTab Label="tab2" Selected="RibbonTab_Selected"></r:RibbonTab> <r:RibbonTab Label="tab3" Selected="RibbonTab_Selected"></r:RibbonTab> <r:RibbonTab Label="tab4" Selected="RibbonTab_Selected"></r:RibbonTab> <r:RibbonTab Label="tab5" Selected="RibbonTab_Selected"></r:RibbonTab> Then in codebehind set the property in the viewmodel by using Activetab = sender.Label But Then i would need a refference to my viewmodel in the codebehind of my view. I'm trying to solve this problem without using any code behind files. (MVVM). Now the real question: Is it somehow possible to use an eventtrigger or eventsetter. that when the selected event gets fired. A setter automaticly sets the activetab property to the sender.Label value?. Using xaml only. -- My excuses for my rather bad english and maybe noobish question. I'm very new at wpf =) UPDATE: As i just found out, there is a isSelected property on a ribbonTab. Now i have some issues on how to bind it to the property in my viewmodel. I tried the following code: <Style TargetType="{x:Type r:RibbonTab}"> <Style.Triggers> <Trigger Property="IsSelected" Value="True"> <Setter Property="{Binding SelectedTab}" Value="{Binding RelativeSource=Self, Path=Label}" /> </Trigger> </Style.Triggers> </Style> But this doesn't work: Error 1 Cannot find the Style Property 'SelectedTab' on the type 'Microsoft.Windows.Controls.Ribbon.RibbonTab'. SelectedTab offcourse is in my viewmodel and not in ribbonTab ... How can i make the setter, set the property on my viewmodel with the value of the tab? =) Thanks in advance!! A: The August release of the Microsoft Ribbon, the RibbonTab has a IsSelected dependency property, so you should be able to bind to that.
{ "pile_set_name": "StackExchange" }
Q: How to substract current utc time from php iso utc? I want to get current time in UTC, substract it from ISO UTC string which I'm getting from backend in the following format: 2016-11-29T17:53:29+0000 (ISO 8601 date, i guess?) and get an answer in milliseconds. What is the shortest way to do it in javascript? I'm using angular 2 and typescript frontend. A: You can get the current time using: var now = Date.now() And convert the string to a milliseconds timestamp using: var ts = (new Date('2016-11-29T17:53:29+0000')).getTime() And then subtract the values: var diff = ts - now
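The same arithmetic can be sanity-checked outside the browser. A small Python 3 sketch (illustrative only, not part of the Angular code) parses the same ISO 8601 string via the %z offset directive and subtracts the current UTC time, reporting milliseconds like the JavaScript version:
from datetime import datetime, timezone

ts = datetime.strptime("2016-11-29T17:53:29+0000", "%Y-%m-%dT%H:%M:%S%z")
now = datetime.now(timezone.utc)

# difference in milliseconds, same sign convention as ts - now above
diff_ms = (ts - now).total_seconds() * 1000
print(diff_ms)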
{ "pile_set_name": "StackExchange" }
Q: Dynamic creation of input elements There is a function:
$("input[name='file_img[]']").change(function () {
    var max = 5;
    var total = $("input[name='file_img[]']").length;
    if (total < max) {
        total = total + 1;
        $("#gallery").append('<input type="file" class="form-control" name="file_img[]">');
    }
});
In theory, as you can see from the code, it should fire 4 times, i.e. so that there are 5 inputs, but it fires only once. Please help. It should not run as a loop; it should fire only on a click.

A: In js there is a concept called "event delegation". You cannot attach a handler directly to dynamically added elements, but you can attach it to a parent:
$('body').on('change', 'input', function() {
    if($('input').size() < 5) {
        $('body').append('<input type="file" /><br />');
    }
});
<input type="file" /><br />
<script src="https://code.jquery.com/jquery-2.2.0.min.js"></script>
{ "pile_set_name": "StackExchange" }
Q: Python Beautifulsoup Getting Attribute Value I'm having difficulty getting the proper syntax to extract the value of an attribute in Beautifulsoup with HTML 5.0. So I've isolated the occurrence of a tag in my soup using the proper syntax where there is an HTML 5 issue: tags = soup.find_all(attrs={"data-topic":"recUpgrade"}) Taking just tags[1]: date = tags[1].find(attrs={"data-datenews":True}) and date here is: <span class="invisible" data-datenews="2018-05-25 06:02:19" data-idnews="2736625" id="horaCompleta"></span> But now I want to extract the date time "2018-05-25 06:02:19". Can't get the syntax. Insight/help please. A: You can access the attrs using key-value pair Ex: from bs4 import BeautifulSoup s = """<span class="invisible" data-datenews="2018-05-25 06:02:19" data-idnews="2736625" id="horaCompleta"></span>""" soup = BeautifulSoup(s, "html.parser") print(soup.span["data-datenews"]) Output: 2018-05-25 06:02:19
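Two related access patterns are worth knowing when the attribute might be absent (same tag as above, nothing new assumed beyond the find returning a result):
from bs4 import BeautifulSoup

s = """<span class="invisible" data-datenews="2018-05-25 06:02:19" data-idnews="2736625" id="horaCompleta"></span>"""
soup = BeautifulSoup(s, "html.parser")
span = soup.find("span", attrs={"data-datenews": True})

print(span["data-datenews"])      # raises KeyError if the attribute is missing
print(span.get("data-datenews"))  # returns None if the attribute is missing
print(span.attrs)                 # dict of every attribute on the tag
Note that find() itself returns None when no tag matches, so guard for that before indexing.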
{ "pile_set_name": "StackExchange" }
Q: String Value is not recognized by select statement passing identical string value I'm struggling with pulling data using query below. This problem occurs on all string values in that column. Even if I copy the actual value in SSMS from this table and the paste it into the select statement (where string = 'MyStringVlaue'). LTRIM/RTRIM functions also did not help. Some info: The database table source column data type is VARCHAR(500). My database compatibility level is 130. Using SSMS 2016. My Database Collation is SQL_Latin1_General_CP1_CI_AS And my string Column Collation is SQL_Latin1_General_CP1_CI_AS But it does not create any issues when querying the same string values which were original copied over from the problematic source table into other tables. The problematic table was imported using SQL Server Import Wizard from an Excel file recognized by the Wizard as 2007-2010 type. drop table #T select 'MyStringVlaue' as String into #t select * from #t where String ='MyStringVlaue' -- this does not return anything when executed on the Real table! select * from #t -- example select ASCII('MEDICAL SERVICES DISTRICT') -- output is 65 -- P.S. I created a copy of the problematic table by running select * into from originalSourceTable Still has the same issue. A: TENTATIVE SOLUTION: It maybe a bug in SSMS itself. Once I opened a new query editor window and typed manually the same select statement the issue was gone. Though note, that I copied and pasted the code from old SSMS query editor window the issue still persists. It only helped when I type the code manually from the scratch.
{ "pile_set_name": "StackExchange" }
Q: How to inspect iframes in Chrome DevTools? I'd like to point the developer tools at a specific iframe within a document. In Firefox, there is a button in the toolbar. In Chrome, I found this: But I don't know how to use this feature in panels other than the Console. In Firefox, "If you select an entry in the list, all the tools in the toolbox - the Inspector, the Console, the Debugger and so on - will now target only that iframe, and will essentially behave as if the rest of the page does not exist." How to inspect elements in an iframe as if the rest of the page does not exist? I need to see how the iframe fits in the parent page, but don't want to see the elements of the parent page in the Elements panel (because of the depth of the elements). A: One possible workaround is to enable the still-in-development Out-of-process iframes (OOPIF) using chrome://flags/#enable-site-per-process flag: A new devtools floating window will open when an iframe is inspected via rightclick menu. To inspect a youtube-like iframe with a custom context menu just rightclick again on that menu. IFRAME contents won't be shown in the parent Inspector since it's in a different process. You may want to do it on a separate installation of Chrome like Canary or a portable because the feature breaks iframes on some sites (these flags affect the entire data folder with all profiles inside).
{ "pile_set_name": "StackExchange" }
Q: How do I extract login history? I need to know the login history for specific user (i.e. login and logout time), How do I extract this history for a specific date range in Linux ? A: You can try the last command: last john It prints out the login/out history of user john. Whereas running just last prints out the login/out history of all users. A: If you need to go further back in history than one month, you can read the /var/log/wtmp.1 file with the last command. last -f wtmp.1 john will show the previous month's history of logins for user john. The last log output isn't too heavy and relatively easy to parse, so I would probably pipe the output to grep to look for a specific date pattern. last john | grep -E 'Aug (2[0-9]|30) ' to show August 20-30. Or something like: last -f /var/log/wtmp.1 john | grep -E 'Jul (1[0-9]|2[0-9]|30) ' to acquire July 10-30 for user john. A: How to extract login history for specific date range in Linux? An example to list all users login from 25 to 28/Aug: last | while read line do date=`date -d "$(echo $line | awk '{ print $5" "$6" "$7 }')" +%s` [[ $date -ge `date -d "Aug 25 00:00" +%s` && $date -le `date -d "Aug 28 00:00" +%s` ]] && echo $line done awk '{ print $5" "$6" "$7 }' to extract the date time at corresponding column from last output +%s to convert datetime to Epoch time -ge stand for greater than or equal -le stand for less than or equal You can also do it for specific user with last <username>.
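If the grep patterns get unwieldy for longer ranges, the same filtering can be scripted. A rough Python 3 sketch (the user name and date range are placeholders; it assumes last output where month, day and time sit in columns 5-7, the same fields the awk answer extracts, and that the entries fall in a known year, since last does not print one by default):
import subprocess
from datetime import datetime

user = "john"                   # placeholder user
start = datetime(2018, 8, 25)   # placeholder range
end = datetime(2018, 8, 28)
year = start.year

out = subprocess.run(["last", user], capture_output=True, text=True).stdout

for line in out.splitlines():
    parts = line.split()
    if len(parts) < 7:
        continue
    try:
        # e.g. "Aug 25 09:13" -> a datetime in the assumed year
        when = datetime.strptime(f"{parts[4]} {parts[5]} {year} {parts[6]}", "%b %d %Y %H:%M")
    except ValueError:
        continue   # header/footer lines such as "wtmp begins ..." are skipped here
    if start <= when <= end:
        print(line)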
{ "pile_set_name": "StackExchange" }
Q: Is $E(XY) = E(XE(Y|Z))$ true? Is $E(XY) = E(XE(Y|Z))$ always true if I don't know the relationship between these 3 random variables. I think when $\sigma(X) \subset \sigma(Z)$, the equation is true, but what will happen otherwise? A: Suppose $X$ is $+1$ or $-1$ with equal probability, $Y=X$, and $Z$ is anything independent of $X$ and $Y$. Then $E(XY) = E(X^2) = 1$. On the other hand, $E(Y|Z) = E(Y) = 0$, so $E(XE(Y|Z)) = 0$ as well.
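For completeness, the direction guessed at in the question can be written out; this is the standard tower-property argument, assuming the relevant expectations exist (say $Y$ integrable and $X$ bounded). If $\sigma(X) \subseteq \sigma(Z)$, then $X$ is $\sigma(Z)$-measurable, so it can be pulled inside the conditional expectation, $X\,E(Y|Z) = E(XY|Z)$ almost surely, and hence $$E\big(X\,E(Y|Z)\big) = E\big(E(XY|Z)\big) = E(XY).$$ The counterexample above shows the identity can fail as soon as $X$ carries information about $Y$ that $Z$ does not.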
{ "pile_set_name": "StackExchange" }
Q: Why is my ImageView not showing the image? Trying to display an image using ImageView but it's not being displayed. The source image is kept in the drawable directory. <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/constrainLayout" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <ImageView android:id="@+id/imageView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:scaleType="centerInside" android:visibility="visible" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintHorizontal_bias="0.485" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" tools:ignore="ContentDescription" tools:srcCompat="@drawable/nature"/> </androidx.constraintlayout.widget.ConstraintLayout> A: The tools:srcCompat="@drawable/nature" is using for showing the image in design time. When you need to see it on run time, you should add android:src="@drawable/nature" as below in your ImageView: <?xml version="1.0" encoding="utf-8"?> <androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:id="@+id/constrainLayout" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <ImageView android:id="@+id/imageView" android:layout_width="wrap_content" android:layout_height="wrap_content" android:scaleType="centerInside" android:visibility="visible" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintHorizontal_bias="0.485" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" android:src="@drawable/nature" tools:ignore="ContentDescription" tools:srcCompat="@drawable/nature"/> </androidx.constraintlayout.widget.ConstraintLayout>
{ "pile_set_name": "StackExchange" }
Q: Ruby Rails--easy way to get animated front end? I am developing a web-based simulation where I want the front-end to be "animated" in real-time--it will be a mapping application, and I want to have little icons (representing the components of my simulation) moving all over the map as the simulation runs. I am developing the back-end in Rails, but I am wondering what are good packages to use for the front-end / animation part? I used Graphviz to generate the base map (a directed graph), but it doesn't seem well suited for live animations. Something like Hans Rosling's Gapminder (if it's even possible to do in real-time). Or should I be doing something similar and "recording" the data then playing it back? What packages should I consider in that case? Gapminder Currently using Rails 3.0, Ruby 1.9.2, Graphviz 2.28. A: Here are 2 javascript libraries that may help. You could use ajax calls to your rails backend to populate their data. I'm unsure if D3 has mapping capability, but I believe Raphael does. As far as real-time, you'll have to check out their documenation. In Rails 4, I believe the streaming capability may benefit you as well, but I've not investigated. Raphael D3
{ "pile_set_name": "StackExchange" }
Q: enum in stringWithFormat provokes incompatible pointer type warning I have an enum property: typedef enum syncCodeTypes { kCodeNull, kCodeFoo, kCodeBar, kCodeDone } syncCodeType; //... @property syncCodeType syncCode; I use it in a stringWithFormat:: [self showAlertWithMessage:NSLocalizedString(@"Sync Error", @"Sync Error") andInfo:[NSString stringWithFormat:NSLocalizedString("Heads up re foobar code %d.", "Heads up re foobar code %d."), self.syncCode]]; …and get this warning: Passing argument 1 of localizedStringForKey:value:table from incompatible pointer type. Same thing happens if I substitue the unsigned conversion specifier (%u instead of %d). The compiler doesn’t like %lu, %ld, %llu, or %lld either. Other posts regarding a related language advise that enums are neither signed nor unsigned, so I tried explicitly casting the enum to a signed and to an unsigned integer — and got exactly the same error message: NSInteger iSyncCode = self.syncCode; [self showAlertWithMessage:NSLocalizedString(@"Sync Error", @"Sync Error") andInfo:[NSString stringWithFormat:NSLocalizedString(“Heads up re foobar code %d.", “Heads up re foobar code %d."), iSyncCode]]; // compiler still annoyed NSUInteger uSyncCode = self.syncCode; [self showAlertWithMessage:NSLocalizedString(@"Sync Error", @"Sync Error") andInfo:[NSString stringWithFormat:NSLocalizedString(“Heads up re foobar code %u.”, “Heads up re foobar code %u.”), uSyncCode]]; // compiler still annoyed In runtime there’s no problem — for now. But I'd like to be kosher. Any suggestions? A: You forgot the @-sign before the strings in NSLocalizedString. Replace "Heads up re foobar code %d." with @"Heads up re foobar code %d.".
{ "pile_set_name": "StackExchange" }
Q: AES mode which verifies data integrity AFTER decryption I wonder if there is an AES mode which would verify data integrity after decryption operation. Due to the reasons which does not matter here, there is a possibility that AES implementation I use may produce wrong results under certain conditions (particular data length, memory alignment etc). On the platform, two authenticated encryption modes are available: GCM and CCM. There is an answer on Crypto which explains that CCM does MAC on plaintext whereas GCM does on cyphertext. Does it mean that if AES should malfunction during decryption, CCM will detect it whereas GCM won't? A: Yes, you are correct, as long as it really only is the AES part which could dysfunction. Quoting another source, for example Wikipedia's CCM page: These two primitives are applied in an "authenticate-then-encrypt" manner, that is, CBC-MAC is first computed on the message to obtain a tag t; the message and the tag are then encrypted using counter mode. Note that we generally do not recommend to do "Mac then Encrypt", however CCM is a mode of encryption which features its very own proof of security, so you are good to go.
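Whichever mode is used, it is the authentication-tag check at decryption time that reports a mismatch. As a rough illustration only (Python's cryptography package; key and nonce handling is simplified here, and this demonstrates detection of a modified ciphertext, not of a faulty AES implementation, so it does not settle the CCM-versus-GCM nuance above):
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag
import os

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)            # 96-bit nonce; never reuse with the same key
aesgcm = AESGCM(key)

ct = aesgcm.encrypt(nonce, b"payload to protect", b"associated data")

# flip one bit of the ciphertext to simulate corruption
tampered = bytes([ct[0] ^ 0x01]) + ct[1:]

try:
    aesgcm.decrypt(nonce, tampered, b"associated data")
except InvalidTag:
    print("integrity check failed, as expected")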
{ "pile_set_name": "StackExchange" }
Q: Localstorage browsers It is known that the localstorage maximum size for Google Chrome is 10 MB. If my website is filling that localstorage with data of roughly 10 MB or less. In other words, my website data fits exactly so it can fill the localstorage to the limit so now the localstorage on my browser is at its maximum (cannot add data no more). Can other websites still store their data into my localstorage? If yes, how so? If no, isn't that considered a huge draw back? I mean someone can visit a website once and it saturates his browser's localstorage BAM ruined his browsing experience for all other websites! A: The limit is per origin, not overall. So your site can store its 10MB, another site can store its 10MB, etc. Can other websites still store their data into my localstorage? If you mean "my localstorage" as in, your website's, they can't. Only your website can access or store anything in the local storage maintained by he browser for your website ("website" is more accurately "origin," see the Web Storage spec and the section of the HTML spec it links to for details).
{ "pile_set_name": "StackExchange" }
Q: postfix rate-limit for locally submitted mails I would like to implement per-user rate and size limits (i.e. certain maximum number of mails/volume per hour) for all outgoing mail. So far I've implemented that via postfwd policy daemon for sasl-authenticated users. However, some users also have accounts on the box making it possible for them to send mails from their web apps using the /usr/sbin/sendmail command. Is there any way to implement per-user rate-limiting for that case as well or is my only option to forbid submitting mails this way through authorized_submit_users and require submission via sasl-authenticated SMTP? A: You can use a sendmail milter for non-smtp traffic using the non_smtpd_milters parameter. If that doesn't solve it for you, the safest way is to prohibit local sendmail(1) submission and force SMTP submission.
{ "pile_set_name": "StackExchange" }
Q: css hover visibility out div I have one problem about visibility. I have to create this DEMO from fiddle. If you click my demo you can see there is a one image. When you hoverover mouse this image then bubble will open. But if you come with the mouse to the left of the image blank bubble opens again. How can I fix this problem ? .balon { position: absolute; width: 345px; height: auto; padding: 3px; background: #FFFFFF; -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; border: #d8dbdf solid 1px; -webkit-box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); -moz-box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); visiblility: hidden; opacity:0; margin-left: -345px; z-index:5; -webkit-transition: all .3s ; -moz-transition: all .3s ; -ms-transition: all .3s ; -o-transition: all .3s ; transition: all .3s ; } .balon:after { content: ''; position: absolute; border-style: solid; border-width: 10px 0 10px 10px; border-color: transparent #fff; display: block; width: 0; z-index: 1; right: -10px; top: 16px; } .vizyon_bg:hover .balon { opacity:1; visibility:visible; z-index:5; transition: opacity .5s linear .5s; -webkit-transition: opacity .5s linear .5s; -moz-transition: opacity .5s linear .5s; -ms-transition: opacity .5s linear .5s; } A: Dude i found the solution! just replace your balon's CSS and you're done! you have wrong z-index :} I've created JSFiddle for you! http://jsfiddle.net/9pgqc24c/ .balon { position: absolute; width: 345px; z-index:-99999; height: auto; padding: 3px; background: #FFFFFF; -webkit-border-radius: 2px; -moz-border-radius: 2px; border-radius: 2px; border: #d8dbdf solid 1px; -webkit-box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); -moz-box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); box-shadow: -1px 1px 1px 0px rgba(216, 219, 223, 0.52); visiblility:hidden; opacity:0; margin-left:-345px; z-index:-1; -webkit-transition: all .3s ; -moz-transition: all .3s ; -ms-transition: all .3s ; -o-transition: all .3s ; transition: all .3s ; }
{ "pile_set_name": "StackExchange" }
Q: Visual C# Read Fonts From Custom Directory I can enumerate the installed fonts on the system by this code: InstalledFontCollection ifc = new InstalledFontCollection(); foreach(FontFamily font in ifc.Families) { if (font.IsStyleAvailable(FontStyle.Regular)) { // Code } } But I want to read fonts from a custom directory. For instance I will create this folder structure. C:\MyFonts C:\MyFonts\Handwriting C:\MyFonts\Gothic .. .. I will copy true type or open type font files to these folders according to its category. And let say I want to enumerate fonts only in the C:\MyFonts\Gothic folder in my program. How can I do this? A: You need a PrivateFontCollection.
{ "pile_set_name": "StackExchange" }
Q: Dynamic arrays in VBScript with Split(). Is there a better way? A lot of the scripts I write at my job depend on the creation of dynamically-sizable arrays. Arrays in VBScript make this a pretty arduous task, as one has to Redim arrays every time one wants to resize them. To work around this, I've started making comma-delimited strings and using Split(...) to create 1D arrays out of it. While this works fantastic for me, I've wondered whether VBScript has a more efficient way of handling this. So I ask StackOverflow; are there? Disclaimer: I'm fully aware that VBScript is a pretty substandard scripting language, but Python requires extra software, which is a bit of a hassle for server automation, and PowerShell isn't a core component yet. I'm learning them both, though! A: The solution I usually go for is to resize the array each time I add new item to it. In that way the end array will never have any unused entries. ReDim aArray(-1) For i = 1 To 10 ReDim Preserve aArray(UBound(aArray) + 1) aArray(UBound(aArray)) = i Next MsgBox Join(aArray, "," & vbNewLine) Other solution proposed by Carlos is to do it using Dictionary object which is probably cleaner solution: Set dic = CreateObject("Scripting.Dictionary") dic.Add "Item1", "" dic.Add "Item2", "" dic.Add "Item3", "" msgbox Join(dic.Keys, "," & vbNewLine) Thanks, Maciej
{ "pile_set_name": "StackExchange" }
Q: Is order of iteration for Qt’s QHash repeatable across multiple identical runs of a program? Assume that a program is run several times in identical fashion. In each run, the same set of objects is insert into a QHash in the same insertion order; then the objects in the QHash are iterated. The question is will the objects be iterated in the same order in each run of the program? A: Probably, but you can't absolutely rely on it. QHash like QSet requires that any type used as a key provide an overload of the qHash function that converts an object into a hash code. Inside the hash, the items are ordered by hash code. Normally, this conversion into a hash code would be stable and deterministic, and so the objects would receive the same hash codes and would thus be in the same order, even between runs. However, there's nothing to stop someone from creating a type where the output qHash depends on some value (e.g. a pointer address held within the object) that would be constant for a particular run, but not consistent between runs.
{ "pile_set_name": "StackExchange" }
Q: TCL pattern matching I am new to TCL programming. I want to write a tcl code that check if any of the patterns HAT GET DOT present in the given string and if it does, we should display which among the patterns HAT GET DOT is present in the given string. If more than one pattern is present in the string all the matched patterns should be displayed. I wrote the following code, but it only displays a single pattern even if more than one pattern matches the given string. Can anyone help? Thank you in advance Code: set data1 {asdGETdf ferGETfhgDOT} #data1 is the given string foreach index $test_data1 { set result [regexp {ABC|ACC|ADC|AXC} $index match] puts "\n$index" if { $result==1} { puts "MATCH:$match" } else { puts "NO MATCH" } } output:-asdGETdf MATCH:GET ferGETfhgDOT MATCH:GET For the second string I expect it to display GET and DOT (not GET alone as in the output). I think this is because regexp end the search once a match is found. But how to display all pattern matches? A: Simply by using the -all flag. I would also change your script a bit, by using the -inline flag as well to get the results directly instead of relying on match variable because when you get more than one match, it will only keep the last match. I also fixed a few errors from your code snippet. set data1 {asdGETdf ferGETfhgDOT} ;#data1 is the given string foreach index $data1 { set result [regexp -all -inline -- {HAT|GET|DOT} $index] puts "\n$index" if {$result != ""} { puts "MATCH: $result" } else { puts "NO MATCH" } } regexp manual
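The "return every match" idea is not Tcl-specific; as a quick cross-check in another language (purely illustrative), Python's re.findall behaves like regexp -all -inline on the same data:
import re

for item in ["asdGETdf", "ferGETfhgDOT"]:
    result = re.findall(r"HAT|GET|DOT", item)
    print(item, "MATCH:", result if result else "NO MATCH")
which prints ['GET'] for the first string and ['GET', 'DOT'] for the second.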
{ "pile_set_name": "StackExchange" }
Q: Several complex variables tag? I saw the following question, Sources on Several Complex Variables, and given the tag wiki: The theory of functions of one complex variable with an emphasis on the theory of complex analytic functions of one complex variables I figure perhaps there should be another tag for several complex variables? Or instead, maybe the complex variables tag should not be a 'synonym' for complex analysis with the wiki stating so explicitly the focus is on a single complex variable? A: FWIW, rather than repurposing (complex-variables), I would prefer creating (several-complex-variables) (I think that fits the length limit). There would probably be a bit fewer questions on that topic than on just complex analysis, so giving it a longer and more descriptive name may (I hope) help prevent mistags.
{ "pile_set_name": "StackExchange" }
Q: Is it possible to print a preprocessor variable in C? Is is possible to print to stderr the value of a preprocessor variable in C? For example, what I have right now is: #define PP_VAR (10) #if (PP_VAR > 10) #warning PP_VAR is greater than 10 #endif But what I'd like to do is: #define PP_VAR (10) #if (PP_VAR > 10) #warning PP_VAR=%PP_VAR% #endif Is something like this possible in C? A: You can print out the value of a preprocessor variable under visual studio. The following prints out the value of _MSC_VER: #define STRING2(x) #x #define STRING(x) STRING2(x) #pragma message(STRING(_MSC_VER)) Not sure how standard this is though. A: This works with GCC 4.4.3: #define STRING2(x) #x #define STRING(x) STRING2(x) #pragma message "LIBMEMCACHED_VERSION_HEX = " STRING(LIBMEMCACHED_VERSION_HEX) yields: src/_pylibmcmodule.c:1843: note: #pragma message: LIBMEMCACHED_VERSION_HEX = 0x01000017 A: Many C compilers support #warning (but it is not defined by the C standard). However, GCC at least does not do pre-processing on the data that follows, which means it is hard to see the value of a variable. #define PP_VAR 123 #warning "Value of PP_VAR = " PP_VAR #warning "Value of PP_VAR = " #PP_VAR #warning "Value of PP_VAR = " ##PP_VAR GCC produces: x.c:2:2: warning: #warning "Value of PP_VAR = " PP_VAR x.c:3:2: warning: #warning "Value of PP_VAR = " #PP_VAR x.c:4:2: warning: #warning "Value of PP_VAR = " ##PP_VAR
{ "pile_set_name": "StackExchange" }
Q: Is it possible to go through documents in cloud firestore to see if a value of a property is equal to a comparing one? I have website written in plain javascript to keep daily to-do tasks and the app crashed lately because different tasks of the same date was created on accident. My question is... how can i write an if statement that checks if a document from a collection has a property (in my case the date) that is equal to the one in the input field of my form. i guess it should check after i click submit? if it exists, creation should be denyed, if not, ok to proceed. i am using cloud firestore by the way... many thanks in advance for the help! A: First, make a query to get a document that has same date: var query = db.collection("yourCollectionName").where("date", "==", dateInInputfield); query.get().then(function(querySnapshot) { if (querySnapshot.empty) { //empty } else { // not empty } }); If empty{you can proceed}, if notEmpty{some other task already exist on same date} If you are making an app like this, a cleaner approach will be to name the id of a document as it's date, for eg. if a task is created at timestamp of 1234567, create a document named 1234567 and inside it, store all the necessary information. By following this approach, if you create a new task, simply fetch a document by the name in inputfield, var docRef = db.collection("yourCollectionName").doc("date"); docRef.get().then(function(doc) { if (doc.exists) { //this means some other document already exists } else { //safe to create a new document by this date. } }).catch(function(error) { console.log("Error:", error); });
{ "pile_set_name": "StackExchange" }
Q: XSLT transformation from XML to XML document I need help to get correct XSL transformation, I am expecting the source XML to be copied as is and do the required updates to the target XML file. Right now i am trying 2 things, first is to copy source to target XML and second is to update the namespace URL and version attribute of root element. Please find the code below and let me know what is going wrong because i am getting only root element in the target xml, the content is missing, the end tag for root element is missing and also the attribute version was not updated. Source XML- <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://www.abc.com/s/v1.0"> <soapenv:Header/> <soapenv:Body> <v1:QueryRequest version="1"> <subject> <dataList> <!--1 or more repetitions:--> <dataset> <type>company</type> <value>abc</value> </dataset> <dataset> <type>user</type> <value>xyz</value> </dataset> </dataList> </subject> <!--Optional:--> <testList> <!--1 or more repetitions:--> <criteria> <type>test</type> <value>972</value> </criteria> <criteria> <type>test2</type> <value>false</value> </criteria> </testList> </v1:QueryRequest> </soapenv:Body> </soapenv:Envelope> XSL file :- <?xml version="1.0" encoding="utf-8"?> <xsl:transform version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:old="http://www.abc.com/s/v1.0" exclude-result-prefixes="old"> <xsl:output method="xml" encoding="utf-8" indent="yes" version="1.0" /> <xsl:param name="newversion" select="2.0"> </xsl:param> <!-- replace namespace of elements in old namespace --> <xsl:template match="old:*"> <xsl:element name="{local-name()}" namespace="http://www.abc.com/s/v2.0"> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*"/> <xsl:apply-templates/> </xsl:copy> </xsl:template> <xsl:template match="//*[local-name()='QueryRequest']/@version"> <xsl:attribute name="version"> <xsl:value-of select="$newversion"/> </xsl:attribute> </xsl:template> Output using above XSL file:- <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Header> </soapenv:Header> <soapenv:Body><ns7:QueryRequest version="1" xmlns="" xmlns:ns6="http://www.abc.com/s/v1.0" xmlns:ns7="http://www.abc.com/s/v2.0"/> </soapenv:Body> </soapenv:Envelope> " Expected Output: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v2="http://www.abc.com/s/v2.0"> <soapenv:Header/> <soapenv:Body> <v2:QueryRequest version="2.0"> <subject> <dataList> <!--1 or more repetitions:--> <dataset> <type>company</type> <value>abc</value> </dataset> <dataset> <type>user</type> <value>xyz</value> </dataset> </dataList> </subject> <!--Optional:--> <testList> <!--1 or more repetitions:--> <criteria> <type>test</type> <value>972</value> </criteria> <criteria> <type>test2</type> <value>false</value> </criteria> </testList> </v2:QueryRequest> </soapenv:Body> </soapenv:Envelope> Spring config Code which triggers this transformation - <int-xml:xslt-transformer id="v2transformer" xsl-resource="classpath:transformtoversion2.xslt" input-channel="AChannel" output-channel="BChannel" result-transformer="resultTransformer"> </int-xml:xslt-transformer> A: For me, once I correct the XSLT by renaming the root element to xsl:stylesheet instead of xsl:transform, I get 
almost the correct output <?xml version="1.0" encoding="utf-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://www.abc.com/s/v1.0"> <soapenv:Header/> <soapenv:Body> <QueryRequest xmlns="http://www.abc.com/s/v2.0" version="2"> <subject xmlns=""> <dataList> <!--1 or more repetitions:--> <dataset> <type>company</type> <value>abc</value> </dataset> <dataset> <type>user</type> <value>xyz</value> </dataset> </dataList> </subject> <!--Optional:--> <testList xmlns=""> <!--1 or more repetitions:--> <criteria> <type>test</type> <value>972</value> </criteria> <criteria> <type>test2</type> <value>false</value> </criteria> </testList> </QueryRequest> </soapenv:Body> </soapenv:Envelope> the only semantically significant difference from what you require being version="2" rather than version="2.0", which can be fixed by adding some quotes to <xsl:param name="newversion" select="'2.0'" /> to set the param value to the string 2.0 instead of the number 2. For the cosmetic differences, if you want to remove the unused xmlns:v1 from the root element and use a prefix for the QueryRequest element instead of a default namespace (which is then countermanded by xmlns="" for the children) then you need something more like this <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:old="http://www.abc.com/s/v1.0" exclude-result-prefixes="old" xmlns:v2="http://www.abc.com/s/v2.0"> <xsl:output method="xml" encoding="utf-8" indent="yes" version="1.0" /> <xsl:param name="newversion" select="'2.0'"/> <!-- fix namespace declarations on root element --> <xsl:template match="/*"> <xsl:element name="{name()}" namespace="{namespace-uri()}"> <!-- copy the v2: binding from the stylesheet document --> <xsl:copy-of select="document('')/xsl:stylesheet/namespace::v2" /> <xsl:apply-templates select="@* | node()" /> </xsl:element> </xsl:template> <!-- replace namespace of elements in old namespace --> <xsl:template match="old:*"> <xsl:element name="v2:{local-name()}"> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@* | node()"/> </xsl:copy> </xsl:template> <xsl:template match="old:QueryRequest/@version"> <xsl:attribute name="version"> <xsl:value-of select="$newversion"/> </xsl:attribute> </xsl:template> </xsl:stylesheet> I use <xsl:element> instead of <xsl:copy> in the /* template because copy copies the namespace bindings from the original tag. From the comments, you now want to upper-case all the <type> values. You can achieve this simply by adding one extra template <xsl:template match="type/text()"> <xsl:value-of select="translate(., 'abcdefghijklmnopqrstuvwxyz', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ')" /> </xsl:template> There isn't a simple to-upper-case function as such in XPath 1.0, but you can use translate(X, Y, Z), which goes through the string X replacing each character in Y with the character at the same position in Z. This will convert all type elements, if you only want to convert the dataset types and not the criteria types then just use a more specific match="dataset/type/text()".
{ "pile_set_name": "StackExchange" }
Q: Microsoft Excel Macro: Bulk Reading and Writing First Line and Rest of File I am trying to make a macro that will bulk perform on all .txt files in a given directory. I would like the first line to be copied into the first cell (A1). And then I would like the rest of the contents to be pasted into B1. The macro would perform that for all the .txt files in a directory, except it would go to A2, B2...A3,B3 etc Can anyone help? A: This should work for you: Sub Mrig_GettxtData() Dim strFile As String, strPath As String, MyData As String, tempStr As String Dim filePath As Variant Dim strData() As String Dim lineNo As Long Dim myCell As Range strPath = "C:\test_folder\test" '--> write your path here (without "\") filePath = strPath & "\" Set myCell = ThisWorkbook.Sheets("Sheet1").Range("A1") '-->change Sheet1 as required strFile = Dir(filePath & "*.txt") Do While Len(strFile) > 0 Open filePath & strFile For Binary As #1 MyData = Space$(LOF(1)) Get #1, , MyData Close #1 strData() = Split(MyData, vbCrLf) lineNo = 0 tempStr = "" For Each a In strData lineNo = lineNo + 1 If lineNo = 1 Then 'tempStr = "" Then myCell.Value = a Set myCell = myCell.Offset(0, 1) ElseIf lineNo = 2 Then tempStr = a Else tempStr = tempStr & vbCrLf & a End If Next If lineNo <> 1 Then myCell.Value = tempStr Set myCell = myCell.Offset(1, -1) End If strFile = Dir() Loop End Sub
{ "pile_set_name": "StackExchange" }
Q: Ternary operator associativity in C# - can I rely on it? Ahh, don't you just love a good ternary abuse? :) Consider the following expression: true ? true : true ? false : false For those of you who are now utterly perplexed, I can tell you that this evaluates to true. In other words, it's equivalent to this: true ? true : (true ? false : false) But is this reliable? Can I be certain that under some circumstances it won't come to this: (true ? true : true) ? false : false Some might say - well, just add parenthesis then or don't use it altogether - after all, it's a well known fact that ternary operators are evil! Sure they are, but there are some circumstances when they actually make sense. For the curious ones - I'm wring code that compares two objects by a series of properties. It would be pretty nice if I cold write it like this: obj1.Prop1 != obj2.Prop1 ? obj1.Prop1.CompareTo(obj2.Prop1) : obj1.Prop2 != obj2.Prop2 ? obj1.Prop2.CompareTo(obj2.Prop2) : obj1.Prop3 != obj2.Prop3 ? obj1.Prop3.CompareTo(obj2.Prop3) : obj1.Prop4.CompareTo(obj2.Prop4) Clear and concise. But it does depend on the ternary operator associativity working like in the first case. Parenthesis would just make spaghetti out of it. So - is this specified anywhere? I couldn't find it. A: Yes, you can rely on this (not only in C# but in all (that I know) other languages (except PHP … go figure) with a conditional operator) and your use-case is actually a pretty common practice although some people abhor it. The relevant section in ECMA-334 (the C# standard) is 14.13 §3: The conditional operator is right-associative, meaning that operations are grouped from right to left. [Example: An expression of the form a ? b : c ? d : e is evaluated as a ? b : (c ? d : e). end example] A: If you have to ask, don't. Anyone reading your code will just have to go through the same process you did, over and over again, any time that code needs to be looked at. Debugging such code is not fun. Eventually it'll just be changed to use parentheses anyway. Re: "Try to write the whole thing WITH parentheses." result = (obj1.Prop1 != obj2.Prop1 ? obj1.Prop1.CompareTo(obj2.Prop1) : (obj1.Prop2 != obj2.Prop2 ? obj1.Prop2.CompareTo(obj2.Prop2) : (obj1.Prop3 != obj2.Prop3 ? obj1.Prop3.CompareTo(obj2.Prop3) : obj1.Prop4.CompareTo(obj2.Prop4)))) Clarification: "If you have to ask, don't." "Anyone reading your code..." Following the conventions common in a project is how you maintain consistency, which improves readability. It would be a fool's errand to think you can write code readable to everyone—including those who don't even know the language! Maintaining consistency within a project, however, is a useful goal, and not following a project's accepted conventions leads to debate that detracts from solving the real problem. Those reading your code are expected to be aware of the common and accepted conventions used in the project, and are even likely to be someone else working directly on it. If they don't know them, then they are expected to be learning them and should know where to turn for help. That said—if using ternary expressions without parentheses is a common and accepted convention in your project, then use it, by all means! That you had to ask indicates that it isn't common or accepted in your project. If you want to change the conventions in your project, then do the obviously unambiguous, mark it down as something to discuss with other project members, and move on. Here that means using parentheses or using if-else. 
A final point to ponder, if some of your code seems clever to you: Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. — Brian W. Kernighan A: The assertion that parentheses detract from the readability of the code is a false assumption. I find the parenthetical expression much more clear. Personally, I would use the parentheses and/or reformat over several lines to improve readability. Reformatting over several lines and using indenting can even obviate the need for parentheses. And, yes, you can rely on the fact that the order of association is deterministic, right to left. This allows the expression to evaluate left to right in the expected fashion. obj1.Prop1 != obj2.Prop1 ? obj1.Prop1.CompareTo(obj2.Prop1) : obj1.Prop2 != obj2.Prop2 ? obj1.Prop2.CompareTo(obj2.Prop2) : obj1.Prop3 != obj2.Prop3 ? obj1.Prop3.CompareTo(obj2.Prop3) : obj1.Prop4.CompareTo(obj2.Prop4);
{ "pile_set_name": "StackExchange" }
Q: Qt Application switches focus to a different application after using a file open dialog I'm porting one of my Qt apps to Windows after developing on OSX. I didn't have this problem with OSX, but it happens now under Windows 8. When I open and select a file with QFileDialog::getOpenFileName(), the focus switches to some other app and I have to alt-tab back into mine. How do I make Qt revert to the main app once the file is selected in the dialog instead of doing this? A: You probably just forgot to set the parent for the QFileDialog - you need to pass the pointer to the current window to getOpenFileName: QString fileName = QFileDialog::getOpenFileName(this, tr("Open File"), "", tr("Images (*.png *.xpm *.jpg)"));
{ "pile_set_name": "StackExchange" }
Q: Why do I Get Different Results Using Array? I've made a simple program that counts matrices, here's the code: #include <iostream> #include <math.h> using namespace std; int main() { int result[3] = {0,0,0}; int matrixc[3][6] = { {0,0,0,0,0,1}, {0,0,0,1,1,1}, {1,0,1,0,0,1} }; for(int x=0;x <3;x++) { for(int y=0;y < 6;y++) { result[x] += (matrixc[x][y] * pow(2,6-y)); } cout << result[x] << endl; } } The output is what I wanted, it is: 2,14,and 82. But, when I delete the initialization in the integer array of result: #include <iostream> #include <math.h> using namespace std; int main() { int result[3]; //deleted initialization int matrixc[3][6] = { {0,0,0,0,0,1}, {0,0,0,1,1,1}, {1,0,1,0,0,1} }; for(int x=0;x <3;x++) { for(int y=0;y < 6;y++) { result[x] += (matrixc[x][y] * pow(2,6-y)); } cout << result[x] << endl; } } I got odd outputs: 1335484418,32618, and 65617. Would you like to explain me why would the output be different between an array with and without an initialization? Actually, I don't want to initialize all result array, because I have a huge data of matrices. Is it possible if I use std::vector without initializing all of the result array? A: Would you like to explain me why would the output be different between an array with and without an initialization? Without initialisation, automatic variables aren't initialised. They will have an indeterminate value, depending on what happened to be in the memory they occupy. Actually, I don't want to initialize all "result" array, because I have a huge data of matrices. You can zero-initialise the whole array, even if it's huge, like this: int result[huge] = {}; although, if it is huge, then it shouldn't be an automatic variable. These are typically kept on the stack, which is typically not huge and liable to overflow if you put too much stuff on it. Is it possible if I use std::vector without initializing all of the "result" array? Yes, a vector will zero-initialise its elements by default.
{ "pile_set_name": "StackExchange" }
Q: How do I copy only one file (the newest one) from a directory using a shell script on Mac OS X? I'm looking to periodically run a script which copies only the most recently edited file over from one directory, over to another one. How do I do this with a shell script on OSX? A: The easiest way would be to do this directly through cron. For example, to copy the file once a week, create a crontab like this: @weekly cp "$(ls -t /path/to/source | head -1)" /path/to/target DETAILS: The -t flag of ls means sort by time, so by printing only the first file (head -1) I know I am getting the newest. Running ls -t /path/to/source | head -1 will return the newest file in the directory /path/to/source so cp "$(ls -t /path/to/source | head -1)" /path/to/target will copy the newest file from source to target. The quotes around the expression are needed in order to deal with file names that contain spaces. Wikipedia explains that Cron is the time-based job scheduler in Unix-like computer operating systems. Cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration. [...] Cron is driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. To create a new crontab, open a terminal and run crontab -e This will launch your default editor ($EDITOR) and present you with a text file. Paste the line above into that file, save and exit and that's it, your crontab has been created. The format of crontabs is (taken from here): * * * * * command to be executed - - - - - | | | | | | | | | +----- day of week (0 - 6) (Sunday=0) | | | +------- month (1 - 12) | | +--------- day of month (1 - 31) | +----------- hour (0 - 23) +------------- min (0 - 59) So, for example, to run cp /foo /bar at 14:35 on October 12th you would write: 35 14 12 11 cp /foo /bar The cron daemon also understands some shorthand commands such as: string meaning ------ ------- @reboot Run once, at startup. @yearly Run once a year, "0 0 1 1 *". @annually (same as @yearly) @monthly Run once a month, "0 0 1 * *". @weekly Run once a week, "0 0 * * 0". @daily Run once a day, "0 0 * * *". @midnight (same as @daily) @hourly Run once an hour, "0 * * * *". So, the crontab I gave above means run the cp command once a week at midnight on Sunday morning.
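If the quoting around ls -t ever becomes awkward (file names with newlines, very large directories), the same job can be done by a short script that cron calls instead. A minimal Python 3 sketch, reusing the /path/to/source and /path/to/target placeholders from above:
import os
import shutil

src = "/path/to/source"
dst = "/path/to/target"

# consider regular files only and pick the newest by modification time
files = [os.path.join(src, f) for f in os.listdir(src)
         if os.path.isfile(os.path.join(src, f))]
if files:
    newest = max(files, key=os.path.getmtime)
    shutil.copy2(newest, dst)
The crontab entry then just runs the script, for example @weekly /usr/bin/python3 /path/to/copy_newest.py (script path hypothetical).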
{ "pile_set_name": "StackExchange" }
Q: Django running wrong view method

I have two view files stored in the mysite folder. One is named views.py and the other is named request_view.py. In urls.py, I have used the 'answer' method for views.py and the 'display_meta' method for request_view.py. (Django version: 1.5 and Python version: 2.7.3)

This is the URL pattern:

url(r'^twitter/$', answer),
url(r'request/$', display_meta)

When I call http://127.0.0.1:8000/request/, the first view (i.e. /twitter/) is also called! Any help?

One more thing: in my views.py I have some unbound code (i.e. code which is not inside any method or class):

l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)
keyword = input('enter the keyword you want to search for?')
stream.filter(track = [keyword])

Apart from this code, everything is inside a class or method. Could this be the cause of the problem? One thing I noticed is that first the code in views.py runs, then display_meta runs.

Thanks in advance.

SOLVED
The problem was with the way I was importing the views. Since my code was unbound (module-level) in one of the view files, the import always executed that code, regardless of the URL I chose.

Suggestion
Always use the nomenclature mentioned in this example. Many books suggest importing the views directly, but that can cause problems if you have unbound module-level code like I had.

A: I don't know exactly why the /twitter/ view is called, but I can see two things to change:

You should use a string as the second parameter for url(), as you can see in this example [1]. You can use the 'myapp.views.my_method' nomenclature.

You forgot to start the request URL with ^, which indicates the start of the URL.

About the unbound code, I don't know if that could be causing the problem, but I can't see why you are putting that code at module level. I am not sure when that code would be executed; I guess the first time you call a view in that file and Django loads the module (I'm guessing, I don't know exactly), but I don't think that is a good way to do it. Think about when you want that code to execute, put it in a method, and call it.

[1] https://docs.djangoproject.com/en/1.5/topics/http/urls/#example
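Putting both suggestions together, a sketch of what urls.py could look like (the 'mysite' module path is assumed from the folder name mentioned in the question):

# urls.py -- Django 1.5 style, string view paths and anchored patterns
from django.conf.urls import patterns, url

urlpatterns = patterns('',
    url(r'^twitter/$', 'mysite.views.answer'),
    url(r'^request/$', 'mysite.request_view.display_meta'),
)

Because the views are referenced as strings, the view modules are only imported when one of their URLs is actually resolved, so module-level code in views.py no longer runs just because urls.py was loaded.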
{ "pile_set_name": "StackExchange" }
Q: Call non static method from another page

How do I call a non-static method from another page? I can't use a static method. I have a method in the master page, and I want to call a method of another page (01.aspx) from that master page method.

Master Page:

protected void Pagination_Click(object sender, EventArgs e)
{
    int Count = Convert.ToInt32(DRCount.Text);
    LinkButton LinkButton = (LinkButton)sender;
    int Select = Convert.ToInt32(LinkButton.Text);
    int Num2 = Count * Select;
    int Num1 = Num2 - Count;
    // Calling GetData method in 01.aspx
}

01.aspx.cs page:

public void GetData(int Num1, int Num2)
{
    int Count = Convert.ToInt32(this.Master.Count);
    int PriceSort = Convert.ToInt32(this.Master.Price);
    string NameSort = this.Master.Name.ToString();...
}

A: You don't want to new up another ASP.NET page from code-behind. Instead, move the code into a separate class so that both the master page and the page can use the shared logic.
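A minimal sketch of that refactoring — the class name DataHelper and the extra parameters are purely illustrative, not part of the original code:

// Shared logic, no longer tied to 01.aspx
public class DataHelper
{
    public void GetData(int num1, int num2, int count, int priceSort, string nameSort)
    {
        // ... the query/binding code that used to live in 01.aspx.cs ...
    }
}

// Master page
protected void Pagination_Click(object sender, EventArgs e)
{
    int count = Convert.ToInt32(DRCount.Text);
    LinkButton linkButton = (LinkButton)sender;
    int select = Convert.ToInt32(linkButton.Text);
    int num2 = count * select;
    int num1 = num2 - count;

    var helper = new DataHelper();
    helper.GetData(num1, num2, count, 0, string.Empty); // pass whatever 01.aspx used to read from this.Master
}

The page (01.aspx.cs) can call the same DataHelper, so the logic lives in one place instead of inside either page.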
{ "pile_set_name": "StackExchange" }
Q: Sniffing at work- How to detect Because of the place I work has some real issues (people) especially in IT and the owner, I wonder if we are being sniffed. Is there any way to tell if on a Vista 64-bit machine: 1) In system logs some identification that would tell me that someone might log into my PC such as an Admin 2) Something in the logs that would give me a flag about maybe I'm being monitored some other way? 3) How can I be sure that my gmail, hotmail, and chat is not being sniffed. I know there are things like Simp, etc. I'm talking about specific hidden system signs either in registry or logs. Obviously I'm not going to raise any suspicion by me asking our network admin. I don't trust anyone at this company. is there a good way to basically monitor for this as an end user? Could someone log in and basically watch me work and if so, would there be any goodies left behind for me to find out if this has happened other than visual signs which would not be present...maybe some running processes? A: Nothing you do on your local area network is private. Nothing. If someone is sniffing traffic at the router, you can't tell. If someone has attached a hub and is using a promiscuous sniffer, you can't tell. This is the reality of being on a corporate network. That said, there are usually some exceptions. If you are visiting a website that uses SSL or TLS encryption, then the content of your messages is probably safe. They will know WHERE the content is heading, but not what is in it. This can be compromised by something called 'man-in-the-middle' attack, but that requires intimate knowledge of the network. That said, if it's your own IT manager who's doing it, it's a possibility. The fact of the matter is that all this monitoring happens outside the realm of your local machine, which means that it's undetectable. Whether or not it is legal for your employer to do this to you though is another matter, and it varies GREATLY depending on where you live (UK, USA, Australia, etc) A: For preventing them sniffing elsewhere on the network you can run a web proxy on an external machine you do trust that lets you connect over SSL. That'll let you browse non SSL sites without anyone on the LAN being able to sniff it. Beyond that, if they've tampered with their computer that you're using, I'm not sure you can ever detect that. You also can't really detect if they've put pinhole cameras or microphones around the place, or are listening through laser mics or watching you through telescopes. At some point you just have to trust your employer and, if you don't, find one you can trust. I've had employers who knew that I would occasionally have a rant on IRC or spend an hour reading blogs. As long as my work was done they didn't care. I've had other employers (briefly) where if you accessed anything that wasn't directly, provably, work related, it'd be a serious disciplinary matter. That's their call, not yours. This also goes both ways, if you distrust them that much, you'll find they'll start to distrust you.
{ "pile_set_name": "StackExchange" }
Q: using wpf datagridcomboboxcolumn's IsSynchronizedWithCurrentItem (see below for my own answer that I came up with after letting this percolate for days & days) I am trying to achieve the following scenario in WPF. I have a datagrid that is displaying rows of data for viewing and additional data entry. It is a new app but there is legacy data. One particular field in the past has had data randomly entered into it. Now we want to limit that field's values to a particular list. So I'm using a DataGridComboBoxColumn. FWIW I have alternatively tried this with a DataGridTemplateColumn containing a ComboBox. At runtime, if the existing value is not on the list, I want it to display anyway. I just cannot seem to get that to happen. While I have attempted a vast array of solutions (all failures) here is the one that is most logical as a starting point. The list of values for the drop down are defined in a windows resource called "months". <DataGridComboBoxColumn x:Name="frequencyCombo" MinWidth="100" Header="Frequency" ItemsSource="{Binding Source={StaticResource months}}" SelectedValueBinding="{Binding Path=Frequency,UpdateSourceTrigger=PropertyChanged}"> <DataGridComboBoxColumn.ElementStyle> <Style TargetType="ComboBox"> <Setter Property="IsSynchronizedWithCurrentItem" Value="False" /> </Style> </DataGridComboBoxColumn.ElementStyle> </DataGridComboBoxColumn> What is happening is that if a value is not on the list then the display is blank. I have verified at runtime that the IsSynchronizedWithCurrentItem element is indeed False. It is just not doing what I am expecting. Perhaps I am just going down the wrong path here. Maybe I need to use a textbox in combination with the combobox. Maybe I need to write some code, not just XAML. I have spent hours trying different things and would be really appreciative of a solution. I have had a few suggestions to use this class or that control but without explanation of how to use it. Thanks a bunch! A: I have finally solved this. The trick is to get rid of the comboboxcolumn and use a template that has a textbox for display and a combobox for editing. However, I still spent hours with a new problem...when making a selection in the combobox, it would modify any other rows where I had also used the combobox in the grid. Guess what solved the problem! The IsSynchronizedWithCurrentItem property that I was trying to use before. :) <DataGridTemplateColumn x:Name="frequencyCombo" Header="Frequency"> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Path=Frequency}" /> </DataTemplate> </DataGridTemplateColumn.CellTemplate> <DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <ComboBox ItemsSource="{Binding Source={StaticResource frequencyViewSource}, TargetNullValue=''}" SelectedItem="{Binding Path=Frequency}" IsSynchronizedWithCurrentItem="False" /> </DataTemplate> </DataGridTemplateColumn.CellEditingTemplate> </DataGridTemplateColumn> No ugly hacks. No non-usable choices hanging around at the bottom of the dropdown. No code to add those extra values and then clean them up. I am not going to take away the "Answer" on Mark's suggestion since it enabled me to get the app into my client's hands, but this is the solution I was looking for. I found it buried in a "connect" item after hours of web searching. Thanks for everyones help!
{ "pile_set_name": "StackExchange" }
Q: Chrome - Script tag isn't blocking In the following example: <script src="1.js"></script> <script src="2.js"></script> <img src="FDA.PNG" alt="" /> The first tag should block the parsing of the html, but according to the timeline, it's not. All the files are loading at the same time, why? A: Scripts are executed in the order they appear in the HTML (unless you use the async or defer attributes). Browsers are perfectly welcome, however, to download them in any order they like, including in parallel, and including in parallel with other resources such as CSS files and images they find in the HTML. This is a Good Thing(tm), it helps our pages load faster. Scripts downloaded before it's their turn to run are held until it's their turn. The first tag should block the parsing of the html... Not the parsing. Just the building of the DOM and execution of the scripts.
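For completeness, a small sketch of how the markup above changes if you do want to opt out of the in-order, parse-blocking execution:

<script src="1.js" defer></script>   <!-- downloads in parallel, runs after parsing, still in order -->
<script src="2.js" async></script>   <!-- downloads in parallel, runs as soon as it arrives, order not guaranteed -->
<img src="FDA.PNG" alt="" />

With plain script tags as in the question, only the downloads overlap; execution (and DOM building past each tag) still waits for each script in turn.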
{ "pile_set_name": "StackExchange" }
Q: Remove extra char in the string in shell script

line="premon D0000070 0x201 0x40"   # it has 26 chars
echo $line | wc -c                  # giving output 27 chars

I want to remove the extra char in the string. Please help?

A: Actually you don't have an extra char in your string. echo puts a '\n' at the end of the line. If you don't want to echo that char, you can do

echo -n $line | wc -c
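Another common option is printf, which never appends a newline on its own:

line="premon D0000070 0x201 0x40"
printf '%s' "$line" | wc -c   # 26
echo -n "$line" | wc -c       # 26

Quoting "$line" also keeps the count from being affected by word splitting if the string ever contains runs of spaces.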
{ "pile_set_name": "StackExchange" }
Q: different result between phpinfo.php and php -v

I was using AppServ 5.8 and in my phpinfo.php the PHP version was 5.6.26. Then I installed Laravel 5.5, which requires PHP 7, so I changed the PHP version from 5 to 7. Now phpinfo.php reports PHP Version 7.0.11, but when I run php -v on the command line it gives me

PHP 5.6.26 (cli) (built: Sep 15 2016 18:12:07)
Copyright (c) 1997-2016 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2016 Zend Technologies

I can't install the packages for Laravel 5.5 because the version on the command line is 5.6, not 7, even though phpinfo shows 7. I have read that php -v takes its version from the PHP CLI, so how can I make php -v report 7.0.11, like phpinfo.php does? Thanks.

A: phpinfo.php shows what version of PHP Apache is using. php -v shows what's in your $PATH. If you're on a Mac I recommend using Homebrew to install PHP 7, as described here.

To clarify, PHP can be run in 3 ways: behind a web server, for command line scripting, and for GUI building. You have 2 versions: the web server one, which Apache is calling and which produces phpinfo.php, and PHP-CLI, which is invoked from the command line with php -v.

A: It seems like your PHP CLI version is different than the PHP web version. Upgrade your PHP CLI package.
{ "pile_set_name": "StackExchange" }
Q: Make a method report progress in Winform C# I have two classes, form1.cs and test.cs Form1.cs calls some public method in test.cs. Is it possible to somehow make the program report the progress? For example, In Form1.cs test.CallTestMethod(); In test.cs public void CallTestMethod() { // Read excel file line by line (~5000 lines) // I used double for loops to iterate row and col } I know how to report progress if the method is in the form element, but how would I report progress if im calling an external method? Is it even possible? Thanks A: You're going to need CallTestMethod() to execute in a non-UI thread. give it a parameter Action<double> reportProgressPercent. Have it call reportProgressPercent as appropriate. When Form1 calls CallTestMethod(), have it pass in an appropriate lambda that invokes into the UI thread to report progress. public void CallTestMethod(Action<double> reportProgressPcnt) { foreach (var blah in whatever) { foreach (var foo in innerLoopWhatever) { // do stuff. On every nth iteration or whatever, figure out what // your completed percentage is and pass it to reportProgressPcnt double progress = (curRow / totalRows) * 100; reportProgressPcnt(progress); } } } Form1.cs progBar1.Maximum = 100; progBar1.Step = 1; Task.Run(() => { test.CallTestMethod(pcnt => { Invoke(new Action(() => progBar1.Value = (int)pcnt)); }) }); If you want to report progress in some other way, change the parameters to your Action; for example: public void CallTestMethod(Action<int, int> reportCurrentRowAndColumn) { int curRow = 0; int curCol = 0; //...blah blah loop stuff, update values of curRow & curCol as needed... reportCurrentRowAndColumn(curRow, curCol); Then maybe your Action could update a pair of labels displaying current row and current column.
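An alternative worth knowing is the built-in IProgress<T>/Progress<T> pair, which captures the UI context for you so the explicit Invoke is not needed. A sketch — the row count and reporting interval are illustrative, not from the original code:

// test.cs
public void CallTestMethod(IProgress<double> progress)
{
    int totalRows = 5000;                          // however many Excel rows you have
    for (int curRow = 0; curRow < totalRows; curRow++)
    {
        // ... read/process one row ...
        if (curRow % 100 == 0)
            progress.Report(curRow * 100.0 / totalRows);
    }
    progress.Report(100);
}

// Form1.cs -- a Progress<T> created on the UI thread raises its callback there.
var progress = new Progress<double>(pcnt => progBar1.Value = (int)pcnt);
await Task.Run(() => test.CallTestMethod(progress));   // inside an async event handler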
{ "pile_set_name": "StackExchange" }
Q: How to set up filter for isort as external tool in PyCharm I'm trying to set up isort as external tool in PyCharm. I'm unable to set up filter so that file paths are links. Output from isort is: ERROR: C:\dev\path\to\a\project\file.py Imports are incorrectly sorted. According to docs putting $FILE_PATH$ should be sufficient yet it does not work for me. I've tried several regex styles without any success. A: tl;dr use $FILE_PATH$(?<=\.py)( |$) as filter. So (^|[\W])(?<file>(?:\p{Alpha}\:|/)[0-9 a-z_A-Z\-\\./]+)(?<=\.py) is regexp used for $FILE_PATH. Source: https://github.com/JetBrains/intellij-community/blob/d29c4fa1a73e03b852353186d792540150336b9f/platform/lang-api/src/com/intellij/execution/filters/RegexpFilter.java#L39 See how it allows spaces in there? Meaning it will grab C:\dev\path\to\a\project\file.py Imports are incorrectly sorted. and as it does not point to a real file it won't be converted to a link. So you can either modify isort output format to something with clear filepath boundaries, or use something more fancy in regexp like positive look behind, which would make your filter look like this: $FILE_PATH$(?<=\.py)( |$) For testing java regexps you can use https://www.freeformatter.com/java-regex-tester.html if the provided filter does not meet your particular needs.
{ "pile_set_name": "StackExchange" }
Q: C# - executing 200 http get requests and output the results

I have a console app where the user inputs a menu option (1-5), I execute some feature and output the result. One of the features is to execute 200 HTTP GET requests to some URL, get all the results back, do some work on them and output to the user.

This is my current code:

Parallel.For(0, 200, i =>
{
    String[] words = webApi.getSplittedClassName();
    for (int j = 0; j < words.Length; j++)
    {
        wordsList.Add(words[j]);
    }
});

and getSplittedClassName:

public string[] getSplittedClassName()
{
    HttpResponseMessage response = null;
    try
    {
        response = httpClient.GetAsync(url).Result;
    }
    catch (WebException e)
    {
        return null;
    }
    return parser.breakdownClassName(response);
}

Now, since the user inputs an option number, the program executes the required feature and then I output the result. I thought there was no point in doing the HTTP work asynchronously, so it is all synchronous.

The issue is that it takes a LOT of time to do the requests, about 30-40 seconds. Does that make sense? There are 3 features basically: do 1 request, do 3 requests and do 200 requests.

What is the best way of doing the 200 requests and waiting for all the results? Should it be synchronous, like when I only send out one request?

Thanks

A: Parallel.For() tends to assume that your operations are mostly CPU-bound, so it'll use a degree of parallelism that's tuned to how many CPU cores your machine has. But HTTP requests tend to be IO-bound, so most of your time is spent just waiting for the target machine to send information back to you.

That means that this is a good opportunity to use asynchronous processing. Try something like this:

public async Task<string[]> getSplittedClassName()
{
    HttpResponseMessage response = await httpClient.GetAsync(url);
    return parser.breakdownClassName(response);
}

and this:

var classNameTasks = Enumerable.Range(1, 200)
    .Select(i => webApi.getSplittedClassName())
    .ToArray();

wordsList.AddRange(
    Task.WhenAll(classNameTasks).Result
        .SelectMany(g => g));

Explanation:

Make getSplittedClassName() async so that rather than getting the stuff it needs synchronously and then returning the result, it immediately returns a Task<> that will be completed when the result is available.

I removed the code that eats all exceptions, because that's generally a bad practice. You should think about what you'd really want to do if there was an exception here: should you retry the request? Just let the exception be thrown? It's typically a bad idea to just ignore problems like this.

Task.WhenAll() will return a Task<> that will return all of the results of the given tasks. You can synchronously wait for all those tasks to complete, then add them all to wordsList as a batch. This is thread-safe because all the items are added to wordsList on a single thread, whereas your original code had multiple threads potentially trying to add values to wordsList at the same time.

Also, I'm assuming this is just a homework assignment, but if this were a real-world scenario, the fact that you're doing 200 GET requests to the same URL at the same time would be a big red flag.
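If the 200-requests-at-once concern matters in practice, a common refinement is to cap the concurrency with SemaphoreSlim. A sketch, meant to sit inside an async method and reusing the async getSplittedClassName() above (the limit of 10 is arbitrary):

var throttle = new SemaphoreSlim(10);   // at most 10 requests in flight

var classNameTasks = Enumerable.Range(1, 200).Select(async i =>
{
    await throttle.WaitAsync();
    try
    {
        return await webApi.getSplittedClassName();
    }
    finally
    {
        throttle.Release();
    }
}).ToArray();

string[][] results = await Task.WhenAll(classNameTasks);
wordsList.AddRange(results.SelectMany(g => g));

All 200 tasks are still created up front; the semaphore just keeps only 10 of them talking to the server at any given moment.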
{ "pile_set_name": "StackExchange" }
Q: Visual C++ error LNK1120 compiling Good afternoon. I am starting with Visual c++ and I have a compilation problem I dont understand. The Errors I get are the following : error LNK1120 external links unresolved error LNK2019 I paste the code: C++TestingConsole.CPP #include "stdafx.h" #include "StringUtils.h" #include <iostream> int _tmain(int argc, _TCHAR* argv[]) { using namespace std; string res = StringUtils::GetProperSalute("Carlos").c_str(); cout << res; return 0; } StringUtils.cpp #include "StdAfx.h" #include <stdio.h> #include <ostream> #include "StringUtils.h" #include <string> #include <sstream> using namespace std; static string GetProperSalute(string name) { return "Hello" + name; } Header: StringUtils.h #pragma once #include <string> using namespace std; class StringUtils { public: static string GetProperSalute(string name); }; A: You only need to declare the method static in the class definition and qualify it with the class name when you define it: static string GetProperSalute(string name) { return "Hello" + name; } should be string StringUtils::GetProperSalute(string name) { return "Hello" + name; } Other notes: remove using namespace std;. Prefer full qualifications (e.g. std::string) your class StringUtils seems like it would be better suited as a namespace (this will imply more changes to the code) string res = StringUtils::GetProperSalute("Carlos").c_str(); is useless, you can just do: string res = StringUtils::GetProperSalute("Carlos"); pass strings by const reference instead of by value: std::string GetProperSalute(std::string const& name)
{ "pile_set_name": "StackExchange" }
Q: Replace wrapping tags and reverse We have a legacy system that requires a format which is not html but is close enough to be confusing. On our shiny front end website we have an instance of CKEditor that allows users to edit this a-bit-like-html-but-not-really format. The big difference is that our format does not understand <p> tags. It expects new lines to be formatted with <br /> instead. CKEditor can be set to operate in BR mode but, perhaps unsurprisingly, this causes some annoying user interface bugs. As an alternative, I'm considering allowing it to run in its default P mode and replacing the tags on the server with some XSLT. This is easy enough in the one direction: Transforming: <root> <p>Test</p><p>Test</p><p>Test</p> <p><b>Test</b></p> </root> With: <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <!-- Replace `[p]contents[/p]` with `contents[br /]` --> <xsl:template match="p"> <xsl:apply-templates/><br/> </xsl:template> Results in: <root>Test<br/>Test<br/>Test<br/><b>Test</b><br/></root> The question is, have I lost too much information to do the same process in reverse? And if not, what's the best way of approaching this? Is XSLT even the right option? A: How about: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:key name="k1" match="root/node()[not(self::br)]" use="generate-id(following-sibling::br[1])"/> <xsl:template match="@* | node()"> <xsl:copy> <xsl:apply-templates select="@* | node()"/> </xsl:copy> </xsl:template> <xsl:template match="root"> <xsl:copy> <xsl:apply-templates select="br" mode="wrap"/> </xsl:copy> </xsl:template> <xsl:template match="br" mode="wrap"> <p> <xsl:apply-templates select="key('k1', generate-id())"/> </p> </xsl:template> </xsl:stylesheet> With Saxon 6.5.5 that converts <root>Test<br/>Test<br/>Test<br/><b>Test</b><br/></root> into <root><p>Test</p><p>Test</p><p>Test</p><p><b>Test</b></p></root>
{ "pile_set_name": "StackExchange" }
Q: Is it possible to cast a String to an int in the Spring xml-config Say I have a properties file 'config.properties' that contains this line: myIntValue=4711 and a little java bean: public MyLittleJavaBean { private int theInt; public void setTheInt(int theInt) { this.theInt = theInt } } In my applicationContext.xml I read the properties file: <context:property-placeholder location="config.properties"/> and then I want to wire the stuff together like this: <bean id="theJavaBean" class="MyLittleJavaBean"> <property name="theInt" value="${myIntValue}"/> </bean> Then I'll get this errorMessage: org.springframework.beans.TypeMismatchException: Failed to convert property value of type 'java.lang.String' to required type 'int' for property 'theInt' Is it possible to cast ${myIntValue} to an int in the spring-xml? A: hmm... there must be something funny with your setup, because for me spring does the String-to-Int convertion without any effort from my side. Here is a example which works for me: xml configuration: <util:properties id="props"> <prop key="foobar">23</prop> </util:properties> <context:property-placeholder properties-ref="props" /> <bean class="Foo" p:bar="${foobar}" /> Foo.java public class Foo { private int bar; public void setBar(int bar) { this.bar = bar; } } UPDATE tested with spring 3.1.2
{ "pile_set_name": "StackExchange" }
Q: Why we have separate Interface called Entry which is nested in Map Interface in JAVA We know Map is an Interface which is being implemented by classes HashMap, TreeMap... Since all these implementing classes have same entry pattern (i.e. key-value pair), why should not we have this Entry pattern within Map Interface itself? What is the purpose to have this Entry pattern separately as Interface that is nested inside Map Interface? Thanks in advance. A: The reason Map.Entry is encapsulated within Map is because it is a very intimate strongly coupled interface that is purposely designed to be used with a Map exclusively. For your intents and purposes you can see it as a pair (key and value) representing one single entry in the Map. Different Map implementations have different requirements about how to store the entries. A HashMap computes the hash code of the key and stores it in its Node implementation (which extends Map.Entry), while TreeMap's Entry has information like the parent entry, the left and right children and the 'colour' of the node (since it is a red-black tree). Each Map implementation has its own requirements, so the Entry was kept as an interface.
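For reference, the usual place you meet the interface directly is entrySet(), where each element is one Map.Entry (a key/value pair). A small self-contained example:

import java.util.HashMap;
import java.util.Map;

public class EntryDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("Alice", 30);
        ages.put("Bob", 25);

        // Each element of entrySet() is a Map.Entry -- one key/value pair,
        // however the particular Map implementation chooses to store it internally.
        for (Map.Entry<String, Integer> entry : ages.entrySet()) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}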
{ "pile_set_name": "StackExchange" }
Q: Proper hand placement for chest flys? I was doing butterfly chest excercise with machine as shown here But, my trainer said I am doing it wrong as my hands should be straight when coming to front. While earlier, my elbow was bent when I bring my hands together. I was able to do 110 pounds with my prior technique. However, with the technique told by trainer I was hardly able to do 60 pound. So, if the trainer told me the correct form, which I think he did, which muscle was I training earlier? He told me it was shoulders, but, I have my doubts. Can someone clarify proper method of butterfly fly on this machine? A: It's hard to answer this question since you did not describe your form very well. The picture provided is the correct technique, and I don't know what your trainer is telling you. Chest flys primarily target the chest. Secondary muscles used in this exercise include the front deltoids and the biceps. Biceps act only as a stabilizer. The chest is much bigger than the front deltoids, so if you are lifting less weight, chances are that you're using your deltoids. This means that you are performing the exercise incorrectly. Go by feel. If you feel a squeeze in the middle of your chest, you're on the right track. If your shoulders are burning, or even in pain, then you're probably doing it wrong. P.S. Chest flys are an accessory movement. You shouldn't really be concerned with how much weight you're moving. Stick to bench presses to build strength.
{ "pile_set_name": "StackExchange" }
Q: CSS Layout with full size left navbar and header I would like to have the following layout +++++++++++++++++++++++ +Header + +++++++++++++++++++++++ +Nav+ + + + + + + + + + Content + + + + +++++++++++++++++++++++ so basically a two column layout with a header. I've checked many CSS layout generators on the net, but they just produced me a result where the left navbar is as big as the content in it. I can scale it with "height:500px" or whatever, but i want it to be fullsize (from top to bottom of browser window) all the time. Changing the value with "height:100%" does not work. If you want to try it out yourself: http://guidefordesign.com/css_generator.php and then select full page, two column layout, with header to see what i mean. If you want you can tell me which property i have to adjust in the generated css file to make it work A: You can try this. It works on the browsers I tested (Firefox, IE7+8, Opera, Safari, Chrome). Just play around with the percentage units for header and columns. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>for stackoverflow</title> <style> body, html { padding : 0px; margin : 0px; height : 100%; } #wrapper { width:900px; height:100%; margin: 0px; padding: 0px; } #header { height:10%; background-color:#930; width:900px; } #nav { background-color:#999; width:200px; height:90%; float:left; } #content { height:90%; background-color:#363; width:700px; float:left; } </style> </head> <body> <div id="wrapper"> <div id="header"></div> <div id="nav"></div> <div id="content"></div> </div> </body>
{ "pile_set_name": "StackExchange" }
Q: SQL Server incrementing a column's value depending on a trigger from another table I'm working on a WebApp with the following tables in it's database: CREATE TABLE Store ( [Name_of_Store], [Number_of_Ratings_Received] TINYINT, [Average_Rating] TINYINT, ) ; CREATE TABLE Employee ( [Name_of_Employee] ,[Number_of_Ratings_Received] TINYINT, [Average_Rating] TINYINT, ) ; CREATE TABLE Rating ( [Rating_for_Employee_out_of_Ten] TINYINT, [Rating_for_Store_out_of_Ten] TINYINT, ) ; For arguments sake, the tables contain the following data: INSERT INTO Store ([Name_of_Store], [Average_Rating], [Number_of_Ratings_Received]) VALUES ('Med', '5', '1'), ('Chem', '4', '3'), ; INSERT INTO Employee ([Name_of_Employee], [Average_Rating], [Number_of_Ratings_Received]) VALUES ('John', '5', '1'), ('Stephen', '1', '8'), ; Assuming there are primary and foreign keys that link the tables accordingly. The Webapp updates the Rating table, but then I need the Rating table (using it's corresponding foreign keys to the Store and Employee table's primary keys) to trigger the ratings fields in the Store and Employee tables. For example, every time a 'Employee' rates a Store, I need the value contained in the 'Number_of_Ratings_Received' field of that particular Store to increase by 1 and the 'Average_Rating' field to adjust accordingly. Bear in mind that this is my first attempt after watching tutorial videos. I just can't get the syntax right. So far I have: GO CREATE TRIGGER NumberOfRatingsReceived1 AFTER INSERT ON Store BEGIN UPDATE Store SET Number_of_Ratings_Received = AUTO_INCREMENT END GO CREATE TRIGGER NumberOfRatingsReceived2 AFTER INSERT ON Employee BEGIN UPDATE Employee SET Number_of_Ratings_Received = AUTO_INCREMENT END I'm struggling with getting the auto increment working, let alone the average calc. Please assist or point me into the right direction. A: I would suggest not storing the number of ratings and average. Instead create a view like the following to calculate the information on the fly and not have data duplication built into your model. CREATE VIEW StoreWithStatistics AS SELECT s.*, COUNT(r.StoreRating) OVER (PARTITION BY s.StoreID) AS Number_of_Ratings_Recieved, AVG(r.Rating_for_Store_out_of_Ten) OVER (PARTITION BY s.StoreID) AS Average_Rating FROM STORE s LEFT JOIN Rating r on s.StoreID = r.StoreID
{ "pile_set_name": "StackExchange" }
Q: Visual Studio locks output file on build I have a simple WinForms solution in VS 2010. Whenever I build it, output file (bin\debug\app.exe) ends up locked, and subsequent builds fail with a message like "The process cannot access the file 'bin\Debug\app.exe' because it is being used by another process." The only way to build the project is to restart VS after every build, which is very awkward. I have found this old blog post http://blogs.geekdojo.net/brian/archive/2006/02/17/VS2005FileLocking.aspx - it seems that the problem is really old. Does anyone know what is happening here, or at least some workaround? Update I don't actually run the file. Locking happens after build, not after debug (i.e. start VS - build - build - fail!) And I tried turning antivirus off. It doesn't help. Update 2 Process Explorer shows devenv.exe having loaded the file (in DLLs, not in Handles). It seems like some glitch during build prevented the unloading, but the (first) build completes without any messages other then "1 succeeded, o failed"/ A: Had the same issue, but found a solution (thanks to Keyvan Nayyeri): But how to solve this? There are various ways based on your project type but one simple solution that I recommend to Visual Studio add-in developers is to add a simple code to their project's build events. You can add following lines of code to the pre-build event command line of your project. if exist "$(TargetPath).locked" del "$(TargetPath).locked" if exist "$(TargetPath)" if not exist "$(TargetPath).locked" move "$(TargetPath)" "$(TargetPath).locked" A: It is not a virus issue. It is visual studio 2010 bug. It seems the issue is related to using visual studio gui designer. The workaround here is to move locked output file into another temporary one in pre-build event. It make sense to generate temporary file name randomly. del "$(TargetPath).locked.*" /q if exist "$(TargetPath)" move "$(TargetPath)" "$(TargetPath).locked.%random%" exit /B 0 In case of using constant temporary file names you will just postpone locks: That workaround works exactly once if exist "$(TargetPath).locked" del "$(TargetPath).locked" if exist "$(TargetPath)" if not exist "$(TargetPath).locked" move "$(TargetPath)" "$(TargetPath).locked" I have also found a solution with 2 temporary files that works exactly 2 times. A: The problem occurred to me too. My scenario was this : Running windows 7 (But might also happened in Windows XP) and while working on a project with WPF User Control I could build all of the times, Until opening the XAML file of the User Control - From there, I've got one build, and then the files are locked. Also, I've noticed that I was running Visual Studio (Devenv.exe) as Administrator, I've started to run Visual Studio without Administrator privileges and the problem was gone!. Let me know if it helped you too. Good luck.
{ "pile_set_name": "StackExchange" }
Q: Caught In Potential Infinite Loop I am simply trying to make a list of district names and District objects from a pandas DataFrame, but for some reason, the code never finishes running. I can't see anywhere that could become an infinite loop, so it is beyond me as to why it gets stuck every time I run it. Here is the section that is getting stuck (particularly the j-iterated for loop): import numpy as np import pandas as pd #make dataframe data = pd.read_csv('gun-violence-data_01-2013_03-2018.csv', header=0, delimiter=',') #drop data points with null condressional district values data = data[data.congressional_district != 0] data.dropna(axis=0,how='any',subset=['congressional_district'],inplace= True) #constructing working table table = data[['incident_id','state','congressional_district']] #list of districts. Formatting in original file must be corrected to analyze data districtNames = ['filler1','filler2'] districts = [] s = table.shape #loop thru the rows of the table for i in range(s[0]): check = True #build strings for each district ds = table.iloc[i,1] + str(table.iloc[i,2]) #testString = str(table.iloc[i,2]) #append ds to districtNames if it isnt in already #make array of District Objects for j in range(len(districtNames)): if(ds == districtNames[j]): check = False if(check): districtNames.append(ds) districts.append(District(ds,0)) For reference, here is the District class: class District: def __init__(self, name, count): self._name = name self._count = count def get_name(self): return name def get_count(self): return count def updateCount(self,amount): self._count += amount The initial .csv file is quite large, and after cutting out some of the data points in the 8th and 9th lines, I have 227,312 data points left. I understand this is quite a few, but the code doesn't even finish after running for 5 minutes. What am I doing wrong? A: It's not that it won't terminate, but that it is inefficient in its current state. Try something like this: import numpy as np import pandas as pd class District: def __init__(self, name, count): self._name = name self._count = count def get_name(self): return name def get_count(self): return count def updateCount(self,amount): self._count += amount #make dataframe data = pd.read_csv('gun-violence-data_01-2013_03-2018.csv', header=0, delimiter=',') #drop data points with null condressional district values data = data[data.congressional_district != 0] data.dropna(axis=0,how='any',subset=['congressional_district'],inplace= True) #constructing working table table = data[['incident_id','state','congressional_district']] #list of districts. Formatting in original file must be corrected to analyze data districtNames = (table.state + table.congressional_district.astype(str)).unique() districts = list(map(lambda districtName: District(districtName, 0), districtNames))
{ "pile_set_name": "StackExchange" }
Q: Can my characters age? Can I level up my characters without the game time affecting their age? I'm asking that because before going into a new area, I level up my character skills. When they are strong enough, I go to a new area. The problem is the process takes a lot of in-game time. So do my characters age? Can they die of old age? A: In-game time has no effect on the story or characters in Final Fantasy Tactics. My sister got near 100 hours and was still in the first part of the game with no consequences. Generally, unless the game clock is brought up as a real-time clock in the game's dialogue, you don't have to worry about things like that. A: The characters in-game do not age, nor does the calendar affect the story. The in-game calendar is only used for two purposes. One is for Propositions you can do in the Bars found in various cities. Propositions are non-interactive side content which you can send up to three of your units to do, and require a number of in-game days to pass for completion. The other purpose of the calendar is in monster breeding. The in-game date at which a monster egg hatches determines that monster's zodiac sign, as determined by the following table: --------------------------------------------- Sign | Start Date | End Date | --------------------------------------------- Capricorn | December 23 | January 19 | Aquarius | January 20 | February 18 | Pisces | February 19 | March 20 | Aries | March 21 | April 19 | Taurus | April 20 | May 20 | Gemini | May 21 | June 21 | Cancer | June 22 | July 22 | Leo | July 23 | August 22 | Virgo | August 23 | September 22 | Libra | September 23 | October 23 | Scorpio | October 24 | November 22 | Sagittarius | November 23 | December 22 | --------------------------------------------- Apparently, the in-game day being in a particular zodiac or on the birthday you select for Ramza has no effect on the game. Source: http://www.gamefaqs.com/ps/197339-final-fantasy-tactics/faqs/23143
{ "pile_set_name": "StackExchange" }
Q: Bootstrap gridding, columns proportions

Can I change Bootstrap's column percentages to achieve different proportions, or is that not advised? What I mean:

default

.col-lg-3 { width: 25%; }
.col-lg-9 { width: 75%; }

changing to

.col-lg-3 { width: 20%; }
.col-lg-9 { width: 80%; }

If so, how and where do I change the less variables?

A: If .col-X-3 is 20%, you are really describing a 15-column grid, not a 12-column one (20% is 3/15, and the 80% block would then be a col-X-12, NOT a col-X-9). You can do two things:

Go to the customizer (http://getbootstrap.com/customize/#grid-system) and set the number of grid columns to 15. Then you will need to use different classes in your HTML, since this is 100/15. You will see the new classes in the un-minified version of your download (at the bottom of the page).

If you use Less, you would open up variables.less and locate the variable:

@grid-columns: 12;

COPY THAT. Create your OWN custom-variables.less file, import that after the Bootstrap variables.less file in your import file, and change the value:

@grid-columns: 15;

Then recompile with your application.

Otherwise, create your own columns in the min-width media query of your choice:

@media (min-width:1200px) {
    .col-custom {float:left;padding-left:15px;padding-right:15px;}
    .col-20p {width:20%;}
    .col-80p {width:80%;}
}

<div class="container">
    <div class="row">
        <div class="col-custom col-20p">...</div>
        <div class="col-custom col-80p">...</div>
    </div>
</div>
{ "pile_set_name": "StackExchange" }
Q: How do I compile all C files to object files at once to the same directory? I want to compile all the c files at once. ex) aaa.c,bbb.c -> aaa.o, bbb.o each independent. My Make CFLAGS=-std=c99 SRCS=$(wildcard *.c) all:$(SRCS) gcc -o $@ $^ $(CFLAGS) but make all command does not work. How to compile all c file at once? I changed to under the code, but still not working CFLAGS=-std=c99 SRCS=$(wildcard *.c) OBJS:=$(patsubst %.c,%.o,%(SRCS)) all:$(OBJS) A: Simplest way: change gcc -o ... to gcc -c -o ... Better way: make a list of the object files you want to build: OBJS := $(patsubst %.c,%.o,$(SRCS)) and then all you have to do is: all: $(OBJS)
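Putting the answer together, a minimal complete Makefile along those lines (remember that in a real Makefile each recipe line must be indented with a TAB character):

CFLAGS = -std=c99
SRCS   := $(wildcard *.c)
OBJS   := $(patsubst %.c,%.o,$(SRCS))

all: $(OBJS)

# One .o per .c; $< is the source file, $@ the object file.
%.o: %.c
	gcc $(CFLAGS) -c -o $@ $<

clean:
	rm -f $(OBJS)

Each .c file is compiled independently into its own .o in the same directory, and make only rebuilds the objects whose sources have changed.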
{ "pile_set_name": "StackExchange" }
Q: Where can i find "Shop By" file Magento 2 Where can I find the file that contains "Shop By" on the category pages? A: You can find in these files: vendor/magento/module-catalog/view/frontend/templates/navigation/left.phtml vendor/magento/module-layered-navigation/view/frontend/templates/layer/view.phtml vendor/magento/theme-frontend-luma/Magento_LayeredNavigation/templates/layer/view.phtml
{ "pile_set_name": "StackExchange" }
Q: Difference between も and だって Both of those particles can mean "too", "even" or "any" (when used with wh-question). Do they have any difference in meaning or connotation when used like this? E.g. 私もフランスに行きたい vs 私だってフランスに行きたい. A: だって doesn't mean "too" or "also" but only "even" or "any". So, you can't say あなただって行きたいですか? for "Do you want to go too?" or reply as 私だって行きたい to "Do you want to go together?" though you can say あなただって行きたいでしょう? for "You would like to go there even if you were in the position, wouldn't you?". You might want translate 私だって行きたい as "I want to go too" but more precisely it's "even if I were in your position, I'd like to go". A: も and だって are different. だって is an informal version of でも and is used in the same way, i.e. Noun+でも = even (the noun). も merely expresses 'too' or 'also' when combined with a noun. So the difference is as follows: 私もフランスに行きたい。 I also want to go to France. 私だってフランスに行きたい。Even I (would) want to go to France.
{ "pile_set_name": "StackExchange" }
Q: JavaScript in button onclick not working I have a webpage (ASP.NET C#) that has a button: <asp:Button ID="btnHide" runat="server" OnClick="HidePanel" Text="Hide"/> and I'm doing a JavaScript alert like: public void HidePanel(object sender, EventArgs e) { Page.ClientScript.RegisterStartupScript(this.GetType(),"Hello","<script type=text/javascript> alert('Hello') </script>"); } If I modify the function to not have object sender and EventArgs e then I can call it from Page_Load, and it works fine, I get the alert. As written above I expect to see the alert when I click the button, but it's not happening. I'm sure it's something obvious, but I don't see it. A: Use OnClientClick instead of OnClick. And add a return false to avoid a postback on the page. <asp:Button ID="btnHide" runat="server" OnClientClick="alert('Hello'); return false;" Text="Hide"/> A: You can remove the code to register the JavaScript code and instead do this: <asp:Button ID="btnHide" runat="server" OnClick="HidePanel" Text="Hide" OnClientClick="alert('Hello');" UseSubmitBehavior="false" /> This: UseSubmitBehavior="false" will cause the button to fire the client-side script, and it will run the server-side code (post-back).
{ "pile_set_name": "StackExchange" }
Q: How to find identical byte[]-objects in two arrays concurrently? I'm trying to implement an collision attack on hashes (I'm visiting the course 'cryptography'). Therefore I have two arrays of hashes (= byte-sequences byte[]) and want to find hashes which are present in both arrays. After some research and a lot of thinking I am sure that the best solution on a single-core machine would be a HashSet (add all elements of the first array and check via contains if elements of the second array are already present). However, I want to implement a concurrent solution, since I have access to a machine with 8 cores and 12 GB RAM. The best solution I can think of is ConcurrentHashSet, which could be created via Collections.newSetFromMap(new ConcurrentHashMap<A,B>()). Using this data structure I could add all elements of the first array in parallel and - after all elements where added - I can concurrently check via contains for identical hashes. So my question is: Do you know an algorithm designed for this exact problem? If not, do you have experience using such a ConcurrentHashSet concerning problems and effective runtime complexity? Or can you recommend another prebuilt data structure which could help me? PS: If anyone is interested in the details: I plan to use Skandium to parallelize my program. A: I think it would be a complete waste of time to use any form of HashMap. I am guessing you are calculating multi-byte hashes of various data, these are already hashes, there is no need to perform any more hashing on them. Although you do not state it, I am guessing your hashes are byte sequences. Clearly either a trie or a dawg would be ideal to store these. I would suggest therefore you implement a trie/dawg and use it to store all of the hashes in the first array. You could then use all of your computing power in parallel to lookup each element in your second array in this trie. No locks would be required. Added Here's a simple Dawg implementation I knocked together. It seems to work. public class Dawg { // All my children. Dawg[] children = new Dawg[256]; // Am I a leaf. boolean isLeaf = false; // Add a new word. public void add ( byte[] word ) { // Finds its location, growing as necessary. Dawg loc = find ( word, 0, true ); loc.isLeaf = true; } // String form. public void add ( String word ) { add(word.getBytes()); } // Returns true if word is in the dawg. public boolean contains ( byte [] word ) { // Finds its location, no growing allowed. Dawg d = find ( word, 0, false ); return d != null && d.isLeaf; } // String form. public boolean contains ( String word ) { return contains(word.getBytes()); } // Find the Dawg - growing the tree as necessary if requested. private Dawg find ( byte [] word, int i, boolean grow ) { Dawg child = children[word[i]]; if ( child == null ) { // Not present! if ( grow ) { // Grow the tree. child = new Dawg(); children[word[i]] = child; } } // Found it? if ( child != null ) { // More to find? 
if ( i < word.length - 1 ) { child = child.find(word, i+1, grow); } } return child; } public static void main ( String[] args ) { Dawg d = new Dawg(); d.add("H"); d.add("Hello"); d.add("World"); d.add("Hell"); System.out.println("Hello is "+(d.contains("Hello")?"in":"out")); System.out.println("World is "+(d.contains("World")?"in":"out")); System.out.println("Hell is "+(d.contains("Hell")?"in":"out")); System.out.println("Hal is "+(d.contains("Hal")?"in":"out")); System.out.println("Hel is "+(d.contains("Hel")?"in":"out")); System.out.println("H is "+(d.contains("H")?"in":"out")); } } Added This could be a good start at a concurrent lock-free version. These things are notoriously difficult to test so I cannot guarantee this will work but to my mind it certainly should. import java.util.concurrent.atomic.AtomicReferenceArray; public class LFDawg { // All my children. AtomicReferenceArray<LFDawg> children = new AtomicReferenceArray<LFDawg> ( 256 ); // Am I a leaf. boolean isLeaf = false; // Add a new word. public void add ( byte[] word ) { // Finds its location, growing as necessary. LFDawg loc = find( word, 0, true ); loc.isLeaf = true; } // String form. public void add ( String word ) { add( word.getBytes() ); } // Returns true if word is in the dawg. public boolean contains ( byte[] word ) { // Finds its location, no growing allowed. LFDawg d = find( word, 0, false ); return d != null && d.isLeaf; } // String form. public boolean contains ( String word ) { return contains( word.getBytes() ); } // Find the Dawg - growing the tree as necessary if requested. private LFDawg find ( byte[] word, int i, boolean grow ) { LFDawg child = children.get( word[i] ); if ( child == null ) { // Not present! if ( grow ) { // Grow the tree. child = new LFDawg(); if ( !children.compareAndSet( word[i], null, child ) ) { // Someone else got there before me. Get the one they set. child = children.get( word[i] ); } } } // Found it? if ( child != null ) { // More to find? if ( i < word.length - 1 ) { child = child.find( word, i + 1, grow ); } } return child; } public static void main ( String[] args ) { LFDawg d = new LFDawg(); d.add( "H" ); d.add( "Hello" ); d.add( "World" ); d.add( "Hell" ); System.out.println( "Hello is " + ( d.contains( "Hello" ) ? "in" : "out" ) ); System.out.println( "World is " + ( d.contains( "World" ) ? "in" : "out" ) ); System.out.println( "Hell is " + ( d.contains( "Hell" ) ? "in" : "out" ) ); System.out.println( "Hal is " + ( d.contains( "Hal" ) ? "in" : "out" ) ); System.out.println( "Hel is " + ( d.contains( "Hel" ) ? "in" : "out" ) ); System.out.println( "H is " + ( d.contains( "H" ) ? "in" : "out" ) ); } }
{ "pile_set_name": "StackExchange" }
Q: PHP routing — what is the advantage of using it?

Hi everyone. When using routing, why is the approach

www.my.site/docs/writes/news?id=7

preferred over a plain set of GET parameters

www.my.site/index.php?docs=writes&news=7

After all, it is simpler to check for the existence of a GET parameter (isset($_GET['X'])) than to parse the URL and then check whether a particular path is present in it. What is the advantage of the first option?

A: The main reasons for using "human-readable" (pretty) URLs:

Usability: they are more natural and intuitive. Such links usually let you work out the structure of the application, and even the content, just from the name.

SEO: pretty URLs are one of the factors search engines take into account when ranking a site. This factor alone is enough to decide whether to use them or not.

Many applications use the FrontController pattern with a single entry point, and routing — which you write yourself — is delegated to it. There is no reason to write the clumsy ?docs=writes&news=7 when you can simply write /docs/writes/news/7.

As for parsing the URL — you don't need to parse it yourself. You configure rules either on the server (for example, .htaccess for Apache) or in your own single entry point. Either way you write a simple set of rules: 'url' => 'what to run'.

And by the way, the example given is not quite right: you say people write www.my.site/docs/writes/news?id=7, but they don't. It would be either news/7 or news/article_header — that is, the identifier or the title of the particular news item, not a query-string id.
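To make the 'url' => 'what to run' idea concrete, a very small front-controller sketch (the route table and handler names are illustrative):

<?php
// index.php -- single entry point; the web server rewrites all requests here.
$routes = [
    '/docs/writes/news' => 'showNews',
    '/'                 => 'showHome',
];

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (isset($routes[$path])) {
    $handler = $routes[$path];
    $handler();                       // e.g. showNews()
} else {
    http_response_code(404);
    echo 'Not found';
}

A real router would also pull segments such as a trailing /7 out of the path and pass them to the handler, but the principle — one lookup table mapping pretty paths to code — stays the same.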
{ "pile_set_name": "StackExchange" }
Q: Defending a non-idempotent post operation against being rapidly called in Node.js? Short question: assuming a non-idempotent post operation, how do you defend your post request handlers in node.js from being called multiple times before they can respond, and hence cause data corruption? Specific case: I have a matching API, which takes about 2-3 seconds to return (due to having to run through a large userbase). There are a number of operations where user can simply double call this within the same second (this is a bug, but not under my control, and therefore answering this part does not constitue an answer to the root question). Under these conditions, multiple matches are selected for the user, which is not desirable. Desirable outcome for this would be for all of these rapid requests to have the same end result. Specific constrains: node.js / express / sequelize. If we add a queue, every single user's request will be on top of all other users' request, which might have drastic implications during heavy traffic. A: You can push all your requests into a queue. In this case all your responses will have to wait for the preceding ones to finish. The other solution is to use sequelize transactions, but that would cause lock_wait_timeout errors in DB.
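To sketch the queue idea without making every user wait behind every other user, the requests can be serialised per user with a small promise chain (matchUser, req.user.id and the route are placeholders for the existing matching logic):

// Express-style sketch: one chain of pending work per user.
const userQueues = new Map();

function enqueueForUser(userId, task) {
  const tail = userQueues.get(userId) || Promise.resolve();
  const next = tail.then(task, task);   // run even if the previous call failed
  userQueues.set(userId, next);
  return next;
}

app.post('/match', (req, res) => {
  enqueueForUser(req.user.id, () => matchUser(req.user.id))
    .then(result => res.json(result))
    .catch(err => res.status(500).send(err.message));
});

Rapid duplicate calls from the same user then run one after another (so the second call sees the first call's result), while different users' requests still run in parallel. Entries should eventually be removed from the map so it does not grow without bound.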
{ "pile_set_name": "StackExchange" }
Q: Find the orthogonal projection of b onto col A When finding the orthogonal projection for this problem, why were those vectors added? Aren't the vectors normally subtracted for Gram-Schmidt and finding projections? Also, how do you carry out the Gram Schmidt process for doing part (a)? A: The column space of $A$ is $\operatorname{span}\left(\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}\right)$. Those two vectors are a basis for $\operatorname{col}(A)$, but they are not normalized. NOTE: In this case, the columns of $A$ are already orthogonal so you don't need to use the Gram-Schmidt process, but since in general they won't be, I'll just explain it anyway. To make them orthogonal, we use the Gram-Schmidt process: $w_1 = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$ and $w_2 = \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix} - \operatorname{proj}_{w_1} \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$, where $\operatorname{proj}_{w_1} \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$ is the orthogonal projection of $\begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$ onto the subspace $\operatorname{span}(w_1)$. In general, $\operatorname{proj}_vu = \dfrac {u \cdot v}{v\cdot v}v$. Then to normalize a vector, you divide it by its norm: $u_1 = \dfrac {w_1}{\|w_1\|}$ and $u_2 = \dfrac{w_2}{\|w_2\|}$. The norm of a vector $v$, denoted $\|v\|$, is given by $\|v\|=\sqrt{v\cdot v}$. This is how $u_1$ and $u_2$ were obtained from the columns of $A$. Then the orthogonal projection of $b$ onto the subspace $\operatorname{col}(A)$ is given by $\operatorname{proj}_{\operatorname{col}(A)}b = \operatorname{proj}_{u_1}b + \operatorname{proj}_{u_2}b$.
{ "pile_set_name": "StackExchange" }
Q: Errors when implementing a fwrite to get data from socket

I see plenty of examples, but none addresses what I want to accomplish. I need to read the bytes from a socket and write them into a file. In this Code Project blog I see where, in the client script, a while loop iterates through a read call:

while((n = read(sockfd, recvBuff, sizeof(recvBuff)-1)) > 0)

So I modified the code to do fputs(recvBuff, f1), where f1 is a pointer to a pdf file. A pdf file is the kind of file I'm fetching from the server, so I need to reassemble it; however fputs operates on a string and corrupts the file, so I need a byte "writer" — fwrite would have been the choice, but I can't get fwrite to work. I ended up modifying my code to resemble some of the examples to test it out, but to no avail. If in fwrite the first parameter is the 'data', how would I pass it? I've tried the read() call as in the while loop above, but that seems to return an integer rather than a byte stream. Any ideas? I'm not new to programming, but I am new to C, and would appreciate a little push in the right direction. Thanks.

A: You want something more like this. fwrite doesn't return a stream; it returns the number of items (i.e. the 3rd parameter) successfully written. In this case the "item" is a single char and you are attempting to write "bytesRead" number of them. Good form dictates that you should check that the result fwrite returns is the same count you requested be written, but this rarely fails on a disk file, so many people skip it in non-critical situations. You may want to add that check yourself.

FILE *f1;
int sockfd;
char recvBuff[4096];
size_t bytesWritten;
ssize_t bytesRead;

while((bytesRead = read(sockfd, recvBuff, sizeof(recvBuff))) > 0)
    bytesWritten = fwrite(recvBuff, 1, bytesRead, f1);
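Adding the check the answer mentions, a sketch of the same loop with both the read and the write verified (sockfd and f1 are assumed to be already opened, as above):

ssize_t bytesRead;
char recvBuff[4096];

while ((bytesRead = read(sockfd, recvBuff, sizeof(recvBuff))) > 0) {
    size_t written = fwrite(recvBuff, 1, (size_t)bytesRead, f1);
    if (written != (size_t)bytesRead) {
        perror("fwrite");    /* disk full, stream closed, ... */
        break;
    }
}
if (bytesRead < 0) {
    perror("read");          /* the socket read itself failed */
}
fflush(f1);                  /* make sure buffered bytes reach the file */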
{ "pile_set_name": "StackExchange" }
Q: Relationship conditional expectation and random variable under specific constraint on its values I am trying to establish a relationship between the following conditional expectation and random variable based on the a given identity: Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Let $X,Y \in \{0,\dots, n\}, n \in \mathbb{N}$ and $Z \in [0,1]$ be random variables on said space. Suppose it holds that \begin{align} \tag{1} \label{1} P(X=Y \mid Z = z) = z \quad \forall z \in [0,1] \end{align} What is the relationship between $\mathbb{E}[1_{X=Y} \mid Z]$ and $Z$? I can show this implies equality in distribution. Let $W = \mathbb{E}[1_{X=Y} \mid Z]$, then for $w \in [0,1]$ \begin{align} P(W \leq w) = \int_0^w \mathbb{E}[1_{X=Y} \mid Z = z] \,f(z) \,dz \overset{\ref{1}}{=} \int_0^w z \,f(z) \,dz = P(Z \leq z) \end{align} My questions are: Is the other direction also true, i.e. \ref{1} $\Leftrightarrow \mathbb{E}[1_{X=Y} \mid Z] \overset{P}{=} Z$? Does \ref{1} $\Leftrightarrow \mathbb{E}[1_{X=Y} \mid Z] \overset{a.s.}{=} Z$ hold? A: The answer on 2. is: "yes" (hence also the answer of 1. is: "yes"). $$\mathbb E\mathsf1_{X=Y}=P(X=Y)\tag2$$ and according to the same principle: $$\mathbb E[\mathsf1_{X=Y}\mid Z=z]=P(X=Y\mid Z=z)\text{ for all }z\in[0,1]\tag3$$ So $(1)$ in your question can be translated into: $$\mathbb E[\mathsf1_{X=Y}\mid Z=z]=z\text{ for all }z\in[0,1]\tag4$$ which on its turn comes to the same as:$$\mathbb E[\mathsf1_{X=Y}\mid Z)=Z\tag5$$
{ "pile_set_name": "StackExchange" }
Q: Text overflowing cell in PDF generated using TCPDF

I have an issue trying to display large chunks of text, coming from a SQL table, in a PDF file created with TCPDF. The layout of the PDF consists of a header, a footer and several cells of text. When the last cell of text overflows the page, the next page shows the remaining text over the header of that page. The problem is that the text isn't put into the cell line by line but is dumped into it all at once.

Is there a way to prevent this behavior? Any idea of how to trim the text so it can fit in two cells on two pages? Any help or idea will be appreciated.

A: Are you using TCPDF's writeHTML() method to generate your PDF? If so, I'd highly recommend using TCPDF's built-in functions for laying out your page - TCPDF is a decent library, but in my experience, if you attempt to lay out with a half implementation of HTML then it's always a headache.

If you're not using HTML, then try setting the page margins, or split up your text using PHP's substr() and then set the AutoPageBreak in TCPDF.
{ "pile_set_name": "StackExchange" }
Q: Subdataframes of Subdataframes

If I have a dataframe df_i and I want to split it into sub-dataframes based on unique values of 'Cycle Number', I use:

dfs = {k: df_i[df_i['Cycle Number'] == k] for k in df_i['Cycle Number'].unique()}

Assuming the 'Cycle Number' ranges from 1 to 50 and in each cycle I have steps ranging from 1 to 15, how do I split each data frame into 15 further data frames? I am presuming something of this type would work:

for i in range(1,51):
    dsfs = {k: dfs[i][dfs[i]['Step Number'] == k] for k in dfs[i]['Step Number'].unique()}

But this will return me 15 data frames only from the cycle number corresponding to 50, not the ones before. If I want to access a sub-dataframe in the 20th cycle with step number 10, is there a way of generating the sub-dataframes such that I can access one using something like dfs[20][10]?

A simple parallel:

Step Number    Cycle Number    Desired Access
1              1               dfs[1][1]
2              1               dfs[1][2]
3              1               dfs[1][3]
4              1               dfs[1][4]
5              1               dfs[1][5]
1              2               dfs[2][1]
2              2               dfs[2][2]
3              2               dfs[2][3]
4              2               dfs[2][4]
5              2               dfs[2][5]
1              3               dfs[3][1]
2              3               dfs[3][2]
3              3               dfs[3][3]
4              3               dfs[3][4]
5              3               dfs[3][5]
1              4               dfs[4][1]
2              4               dfs[4][2]
3              4               dfs[4][3]
4              4               dfs[4][4]
5              4               dfs[4][5]

A: You can use tuple keys instead and utilize groupby. Here's a minimal example:

import pandas as pd

df = pd.DataFrame([[0, 1, 2], [0, 1, 3], [1, 2, 4], [1, 2, 5], [1, 3, 6], [1, 3, 7]], columns=['col1', 'col2', 'col3'])
dfs = dict(tuple(df.groupby(['col1', 'col2'])))

for k, v in dfs.items():
    print(k)
    print(v)

(0, 1)
   col1  col2  col3
0     0     1     2
1     0     1     3
(1, 2)
   col1  col2  col3
2     1     2     4
3     1     2     5
(1, 3)
   col1  col2  col3
4     1     3     6
5     1     3     7
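Applied to the column names from the question, the lookup the asker wants looks like this (an added sketch, not part of the original answer; the tiny df_i below is hypothetical and only stands in for the real data, and the dfs[20][10]-style access becomes a tuple key, dfs[(20, 10)]):

import pandas as pd

# Hypothetical frame shaped like the one described in the question.
df_i = pd.DataFrame({
    "Cycle Number": [1, 1, 2, 2],
    "Step Number":  [1, 2, 1, 2],
    "Value":        [10.0, 11.0, 12.0, 13.0],
})

# One dict keyed by (cycle, step) tuples.
dfs = dict(tuple(df_i.groupby(["Cycle Number", "Step Number"])))

# Sub-dataframe for cycle 2, step 1.
print(dfs[(2, 1)])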
{ "pile_set_name": "StackExchange" }
Q: Client side application help requested

I am testing an application. In the TableAdapter Configuration Wizard, on the server side, while choosing the data source I chose "Microsoft SQL Server Database File". This gave the connection string as:

Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Database1.mdf;Integrated Security=True;User Instance=True

Is this correct? In any case, this works the way I want. Then in SQL Server Management Studio Express, I attached the database to the instance of my SQL Server Express (to "MyhomeServer\SQLExpress").

Now I want to use the same application on the client side (of course, the database is stored in the Data Directory of my application on the server side). Now in the TableAdapter wizard, I choose "Microsoft SQL Server Express". Is that correct? I have done all the configuration for remote connections etc., and also done the firewall settings. When I run this test on the client side, it returns the error:

Cannot open database "Database1.mdf" requested by the login. The login failed. Login failed for user 'MYHOMESERVER\Kh. Furqn'.

Why does it go to Kh. Furqan while I am giving it SQLExpress, where the DB is attached? My server is MyHomeServer\SQLExpress and the connection is MyHomeServer\Kh. Furqan (authentication is Windows Authentication, and there is no password for it).

A: Since you chose integrated security, the program will try to log onto SQL Server with the currently logged-on user's credentials - 'MYHOMESERVER\Kh. Furqn'. So the first place I would check is to make sure that you can log onto SQL Server Mgmt Studio (SSMS) Express with the Windows logon option, logged into the server as Kh. Furqn. Let me know if that works.

Wade
{ "pile_set_name": "StackExchange" }
Q: How to use date and Time API in android development?

I am new to Android programming, so please help me. I want to access the time and date of a country, and I have that API too, but I don't know how to use it in my Android code. Can anyone help me, please?

A: Use the standard Java DateFormat class. For example, to display the current date and time do the following:

Date date = new Date();
DateFormat dateFormat = android.text.format.DateFormat.getDateFormat(getApplicationContext());
System.out.println("Time: " + dateFormat.format(date));

You can initialise a Date object with your own values; however, you should be aware that the constructors have been deprecated and you should really be using a Java Calendar object.
{ "pile_set_name": "StackExchange" }
Q: Android Studio Locale.SWEDISH not found

I'm trying to make a multilingual app for Android, but there is no Finnish or Swedish language among the Locale constants. What's wrong? I also tried to use

Locale.forLanguageTag("se")

but it didn't work on my Samsung Galaxy SII.

http://i.imgur.com/AA1G5l2.png

A: It is not wrong, it simply does not exist.

http://developer.android.com/reference/java/util/Locale.html
{ "pile_set_name": "StackExchange" }
Q: Declined Rude/Abusive flag on nonsense post that doesn't involve keyboard-mashing

This is a question regarding a declined Rude/Abusive flag I raised against this answer (10k+) earlier today. The contents of the answer, for those under 10k rep:

Body must be at least 30 characters; you entered 11.

So nothing particularly foul, and it does not even attempt to answer the question, so at the very least Not An Answer. But where have I seen that text before? Copy-pasting the body length warning from the answer box (in my opinion) constitutes a gibberish answer as much as mashing the keyboard does. There is already a consensus on MSE that nonsensical answers should be flagged as Rude/Abusive, if only to get the content off the site quicker. For me this falls under:

This includes posts by new users that contain no useful content at all

The user was also named "Test", which implies that this account was never really going to provide any serious content. What confuses me further is that (presumably) the moderator who handled this flag decided to delete the user in question, something which to my knowledge is not normally done if a post is marked NAA instead.

I'm just confused about how the outcomes of this flag seem so inconsistent. If the post wasn't R/A, why delete the user? If it's borderline, why decline instead of dispute? If gibberish or nonsense is posted, should it be left to go through the (fairly busy) Low Quality Posts queue, against the already-established consensus? What's the best way of handling nonsense posts that don't involve keyboard-mashing?

A: TL;DR: A rude/abusive, not an answer or low quality flag are all suitable for this type of post. A custom mod flag is only suitable if the post has been on the site for some time, i.e. the automatic deletion provided by the other flags hasn't worked.

Disclaimer: I didn't handle the flag, so I can't answer for the mod who did.

What happened

The not an answer flags were marked helpful, the rude/abusive flags were declined. The account was then destroyed.

Reason

This user was created to post spam or nonsense and has no other positive participation

this is a canned reason

Abusive of the site

Posts like that are abusive of the site and it's recommended to flag them as rude/abusive. They're time wasters and often the pattern that trolls use on the site. There is the caveat that sometimes people will test the waters (literally at times), but it's a fine line to know which is which. I wouldn't penalise the flagger for flagging a nonsense post as rude/abusive; I'd dispute the flag if there was ambivalence, for example if the user has also posted useful content.

The not an answer or very low quality flags also suit this type of post. The post would most likely be deleted before a custom mod flag would be handled, so it's probably of no practical use to raise a custom flag. The mod will still have to go to the post even if it's been deleted by standard user flags, so it's a waste of time, unless the post has been sitting there for some time.

When handling posts like that, I check the user's activity for other posts. If there are no sensible posts and the account has been opened recently (often they will be opened minutes before posting), the account will be destroyed. If it's an older account I usually check to see if there's been any suspicious activity re login before destroying, as people's accounts can sometimes be hacked.

After discussion with another mod, I've cleared the flags and re-deleted it as rude/abusive. This has now marked the rude/abusive flags as disputed, rather than declined.

There's some controversy over the intent of the linked Meta posts. I've included some of Shog's answer here:

Abusive means what it says. Don't overthink this. Look... The problem folks have with these is that they see the pile of nonsense and try to extract meaning from it. "Surely if I can determine what the author's intent was," you might imagine them saying to themselves, "...I can then pick the exact right type of flag." This is an utter waste of time. There is no meaning to the post! It's VLQ, it's abuse, it's Not An Answer, heck it might even be a spammer, testing the waters... There's no metric you can apply that'll narrow that down, because there is no meaningful content to apply metrics to. So pick the flag that speaks to you. I'm partial to "rude or abusive", because enough of them immediately delete and lock the post, which is handy in those rare scenarios where someone's flooding the site with a lot of these... But VLQ or NAA work just as well in the vast majority of cases. The important thing to remember here is that when the post clearly means nothing, you shouldn't be wasting too much thought trying to decipher it; flag it and move on with your life.

Please note "I'm partial to "rude or abusive"", but Shog also states that any flag would work as well.

A: This includes posts by new users that contain no useful content at all

This is my suggestion to keep it simple (and it was also probably Animuson♦'s original intent*): flag as rude/abusive only for gibberish (cat on keyboard) when the OP has no reasonable posts elsewhere.

All other "no useful content at all" must somehow be evaluated; in your case, a moderator needs to notice that it was a copy and paste from the Q/A interface and that it was not related to the question. If it's no useful content at all you can still flag:

"In need of moderator intervention", if you think the matter needs to be handled fast; explain the issue, and as a moderator arrives they will quickly get the context.

"Not an answer" or "very low quality" if we are in no hurry, to let the community delete it and leave moderators to handle more urgent issues.

* The meta you quoted specifically indicates what the phrase "no useful content at all" is; it uses i.e. and the original phrase is "It contains only gibberish, such as "fsdguejgkfdlk"". See also how "I don't care about your problem" is only NAA.

A: The "rude or abusive" flag description says,

A reasonable person would find this content inappropriate for respectful discourse.

It links to the Code of Conduct, which only talks about behaviors against other users. The answer said,

Body must be at least 30 characters; you entered 11.

There is obviously no rude language nor abuse of another user here. In fact, this might even be an attempt to answer if the question was, "Why am I getting an error attempting to submit this content to my website?" Maybe the user posted this answer on the wrong question and meant to post it on one like I suggest. Who knows? Now, that would be a poor question and a poor answer indeed, but it clearly doesn't qualify as "rude or abusive."

Declining the flag and then deleting it separately is clearly an appropriate response. It would of course be inappropriate (and likely a mistake) for a mod to decline and then not delete the post, but the post was deleted in this case.

As for what flag you should use, the FAQ you linked suggests Not an Answer is the most appropriate:

The post contains no useful information, such as an answer that says "I don't care about your problem". Flag as not an answer instead.

And logically, I agree. However, NAA has a long history of being evaluated out of context. It gets declined more frequently than it ought to (or at least has historically). If NAA fails (which it may), then you'd be better served by raising a custom flag for a post like this. Then you can include an explanation, explicitly telling the moderator that the post makes no sense in context. Apparently, it's discouraged to do this from the get-go, but it might also give you a better result with less effort in practice. You may wish to apply your judgement about how obvious it is that the post is Not an Answer.

Since it's human readable and doesn't contain any offensive language, rude/abusive isn't appropriate. Neither is spam, since it's not undisclosed promotion.

Lastly, the "consensus" you cite is based on this post by Shog, which has several notable qualities:

It's talking about pure gibberish. For example, the text "dfajiojaifojadiofjadhigaowkokaomdiovnuiyhioqejgioqejgio". The post you flagged is not this. It is readable English, even if it doesn't make a lot of sense in context.

Shog is suggesting that any flag is okay for pure gibberish, and he personally prefers the "abusive" reason only because of the side effect of post locking. Once it's in front of a moderator's eyes, locking no longer matters. It follows then that a moderator may decline the flag if they determine that it's an inappropriate type. The important thing is that the gibberish post gets deleted. This is a far cry from the advice that rude/abusive is the correct flag type.

It explicitly excludes "broken English":

Note that this advice does not apply to questions or answers posted in horribly broken English; while those may well be Very Low Quality, in most cases they're still a slight step up from the sort of "cat on a keyboard" nonsense you're referring to.

The post in question isn't even as bad as broken English. It just doesn't answer the question.

Furthermore, the advice was edited into the post you linked in 2015, long, long after the answer was deemed to be consensus by votes, and a long time before the current incarnation of our flagging system. I'm not even convinced it's valid advice anymore. And given that the answer you linked differs significantly in insisting on a particular kind of flag, I'm not sure that it was actually consensus at the time it was edited in.
{ "pile_set_name": "StackExchange" }
Q: Command timed out after no response trying to deploy Hugo blog to GitHub Pages with Wercker

I am trying to set up an automatic deployment of my Hugo blog to GitHub Pages using Wercker. The build phase is OK - I can build my public directory with my blog's static files - but I get a "Command timed out after no response" error while trying to deploy my Hugo blog to the GitHub page.

This is my wercker.yml file:

box: python:wheezy
no-response-timeout: 15
build:
  steps:
    - arjen/hugo-build:
        theme: hd-theme
        flags: --disableSitemap=true
deploy:
  steps:
    - lukevivier/[email protected]:
        token: $GIT_TOKEN
        repo: herveDarritchon/herveDarritchon.github.io
        basedir: public

### My log

Error during deployment

Running wercker version: 1.0.152 (Compiled at: 2015-06-02T19:21:14Z, Git commit: 12391582ed7323e803e15b277b9da3a65f7dde7c)
Using config:
box: python:wheezy
no-response-timeout: 15
build:
  steps:
    - arjen/hugo-build:
        theme: hd-theme
        flags: --disableSitemap=true
deploy:
  steps:
    - lukevivier/[email protected]:
        token: $GIT_TOKEN
        repo: herveDarritchon/herveDarritchon.github.io
        basedir: public
Pulling repository python
Pulling image (wheezy) from python: 169d81d45993
Pulling image (wheezy) from python, endpoint: https://registry-1.docker.io/v1/: 169d81d45993
Pulling dependent layers: 169d81d45993
Download complete: 7a3e804ed6c0
Download complete: b96d1548a24e
Download complete: 0f57835aec39
Download complete: 7d22d0f990bc
Download complete: be6ffc9d87fc
Download complete: 6cb13f325b61
Download complete: b394be4f3c52
Download complete: ddc8488da9fa
Download complete: 13700980fafa
Download complete: 7f729a93d07e
Download complete: 089f6d0ff231
Download complete: 7c67244ee4eb
Download complete: 169d81d45993
Download complete: 169d81d45993
Status: Image is up to date for python:wheezy
export WERCKER="true"
export WERCKER_ROOT="/pipeline/source"
export WERCKER_SOURCE_DIR="/pipeline/source"
export WERCKER_CACHE_DIR="/cache"
export WERCKER_OUTPUT_DIR="/pipeline/output"
export WERCKER_PIPELINE_DIR="/pipeline"
export WERCKER_REPORT_DIR="/pipeline/report"
export WERCKER_APPLICATION_ID="556eaec700bccd884305010b"
export WERCKER_APPLICATION_NAME="software-the-good-parts"
export WERCKER_APPLICATION_OWNER_NAME="herveDarritchon"
export WERCKER_APPLICATION_URL="https://app.wercker.com/#application/556eaec700bccd884305010b"
export TERM="xterm-256color"
export DEPLOY="true"
export WERCKER_DEPLOY_ID="556eced1453eb1bb0500347f"
export WERCKER_DEPLOY_URL="https://app.wercker.com/#deploy/556eced1453eb1bb0500347f"
export WERCKER_GIT_DOMAIN="github.com"
export WERCKER_GIT_OWNER="herveDarritchon"
export WERCKER_GIT_REPOSITORY="software-the-good-parts"
export WERCKER_GIT_BRANCH="master"
export WERCKER_GIT_COMMIT="9c247dfd78daa8897f4ef73cf050f6a72a35ffbb"
export WERCKER_DEPLOYTARGET_NAME="software-the-good-part"
export WERCKER_STARTED_BY="herveDarritchon"
export WERCKER_MAIN_PIPELINE_STARTED="1433325265"

I get a timeout message. I tried raising the timeout duration just in case, but even with 15 minutes I still run into the timeout.

Any help appreciated,
Hervé

A: I have fixed my problem. It comes from my wercker.yml file and the content of the deploy step:

deploy:
  steps:
    - lukevivier/[email protected]:
        token: $GIT_TOKEN
        repo: herveDarritchon/herveDarritchon.github.io
        basedir: public

In fact, the domain tag is not optional; I had to set it to github.com to get my deploy step to run. I think you should change your documentation to be more straightforward for newbies ;) Thanks anyway for this step.

So the new wercker.yml file is:

box: python:wheezy
build:
  steps:
    - arjen/hugo-build:
        theme: hd-theme
        flags: --disableSitemap=true
deploy:
  steps:
    - lukevivier/[email protected]:
        token: $GIT_TOKEN
        domain: github.com
        repo: herveDarritchon/herveDarritchon.github.io
        basedir: public

Anyway, with this wercker.yml file you can build and deploy a Hugo site to GitHub Pages automatically.
{ "pile_set_name": "StackExchange" }
Q: Update date and time in text file via Linux script or Falcon : Hadoop

I have a text file with entries like the ones below:

Name    type    startTime     Endtime       comments
my      I       01-03-2016    02-03-2016    zoom
my      F       01-03-2016    02-03-2016    zoom2
abd     F       03-03-2016    04-03-2016    zoom5
my      I       01-03-2016    02-03-2016    zoom6

If the current date is March 18, the output should be:

Output :

Name    type    startTime         Endtime             comments
my      I       **02-03-2016**    ***18-03-2016***    zoom
my      F       01-03-2016        02-03-2016          zoom2
abd     F       03-03-2016        04-03-2016          zoom5
my      I       **02-03-2016**    ***18-03-2016***    zoom6

The conditions are: if Name == my && type == I, then the start time needs to be updated with the end time, and the end time becomes the current date on which the file is processed.

Can anyone help me in choosing the best methodology to process that file with the above requirements? I hope my requirement is clear :)

Thanks,
Madhu

A: A pure Perl solution will look like:

#!/usr/bin/env perl
use strict;
use warnings;

open(my $fh, "<", "file.txt") || die $!;
my ($header, @lines) = <$fh>;
close($fh);

my @keys = split(/[\s\t]+/, $header);

open($fh, ">", "file.txt") || die $!;
print $fh join("\t",@keys), "\n";

my @cdate = (localtime)[3,4,5];
$cdate[1] += 1;
$cdate[2] += 1900;

foreach my $line (@lines) {
    my %tmp;
    @tmp{@keys} = split(/[\s\t]+/, $line);
    if($tmp{'Name'} eq 'my' && $tmp{'type'} eq 'I') {
        $tmp{'Endtime'} = sprintf("%02d-%02d-%04d", @cdate)
    }
    print $fh join("\t", @tmp{@keys} ),"\n"
}
close($fh)
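For anyone who would rather do this in Python than Perl, here is a rough sketch of the same idea (an added illustration, not from the original answer; it assumes the file is whitespace-separated exactly as shown, is small enough to read into memory, and it rewrites "file.txt" in place with tab-separated columns, mirroring the Perl approach of only stamping Endtime with the processing date):

from datetime import date

with open("file.txt") as fh:
    header, *lines = fh.read().splitlines()

keys = header.split()
today = date.today().strftime("%d-%m-%Y")

with open("file.txt", "w") as fh:
    fh.write("\t".join(keys) + "\n")
    for line in lines:
        if not line.strip():
            continue
        row = dict(zip(keys, line.split()))
        if row.get("Name") == "my" and row.get("type") == "I":
            row["Endtime"] = today          # same update as the Perl answer
        fh.write("\t".join(row[k] for k in keys) + "\n")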
{ "pile_set_name": "StackExchange" }
Q: jQuery add class to middle elements in group of 3

I have a set of div boxes that are dynamically created - sometimes there will be lots of boxes, sometimes there won't be many. They are laid out in rows of 3. When I click on a box it fades out and the box next to it fills its space.

What I need is to give the middle box in each row a class "middle" - the issue I'm having is that when a box fades out, the middle box obviously changes.

Here is a JSFiddle demonstrating my issue. When you click on a box, if one from the middle moves, it should lose its class and the new middle box should gain the class "middle": http://jsfiddle.net/xmq2x/

Here is the code I'm currently using:

$('.box:nth-child(3n+2)').addClass('middle');

$( ".box" ).click(function() {
    $(this).fadeOut( "slow" );
});

A: Filter your divs by visibility and then do some math:

$( ".box" ).click(function() {
    $(this).fadeOut( "slow", function(){
        $( ".middle" ).removeClass("middle");
        $('.box:visible').addClass(function(i){
            if((i-1) % 3 == 0) return 'middle';
        })
    })
});

You can pass a function as the argument to addClass. Here i is the index of the current div. Just make sure the function is called after the fadeOut occurs!

Fiddle: http://jsfiddle.net/xmq2x/7/
{ "pile_set_name": "StackExchange" }
Q: How to resize Image/IconImage in JLabel?

Here's my code:

String s = "/Applications/Asphalt6.app";
JFileChooser chooser = new JFileChooser();
File file = new File(s);
Icon icon = chooser.getIcon(file);

// show the icon
JLabel ficon = new JLabel(s, icon, SwingConstants.LEFT);

Now, the image extracted from the icon is really small. How can I resize it?

A:

import java.awt.*;
import java.awt.image.*;
import javax.swing.*;
import java.io.*;

class BigIcon {

    public static void main(String[] args) {
        JFileChooser chooser = new JFileChooser();
        File f = new File("BigIcon.java");
        Icon icon = chooser.getIcon(f);

        int scale = 4;
        BufferedImage bi = new BufferedImage(
            scale*icon.getIconWidth(),
            scale*icon.getIconHeight(),
            BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = bi.createGraphics();
        g.scale(scale, scale);
        icon.paintIcon(null, g, 0, 0);
        g.dispose();

        JOptionPane.showMessageDialog(
            null, new JLabel(new ImageIcon(bi)));
    }
}
{ "pile_set_name": "StackExchange" }
Q: Return type as IEnumerable instead of just List?

I am doing a .NET MVC tutorial. With that being said, I've come across code like this:

public class MoviesController : Controller
{
    public ActionResult Index()
    {
        var movies = GetMovies();
        return View(movies);
    }

    private IEnumerable<Movie> GetMovies()
    {
        return new List<Movie>
        {
            new Movie {Id = 1, Name = "Shrek"},
            new Movie {Id = 2, Name = "LotR"}
        };
    }
}

The Index view for Movies looks like:

@model IEnumerable<VideoStore.Models.Movie>

@{
    ViewBag.Title = "Index";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

<h2>Movies</h2>

<table class="table table-bordered table-hover">
    <thead>
        <tr>
            <th>Movie</th>
        </tr>
    </thead>
    <tbody>
        @foreach (var movie in Model)
        {
            <tr>
                <td>@movie.Name</td>
            </tr>
        }
    </tbody>
</table>

So my question is: why is the IEnumerable return type used in the private method GetMovies() of the MoviesController? Why not use the List return type?

A: IEnumerable<> is the interface relevant to iterating over a List<>. Returning a List<> would immediately expose more operations to the consumer of GetMovies() than is strictly necessary--such as adding or removing from the collection--which could make it easier to introduce errors. There is no technical reason to choose IEnumerable<> over List<>, because under the hood it will behave the same. The decision is purely practical.

A: Using IEnumerable weakens the coupling, which allows it to be forward compatible with other concrete types should the implementation change. All of the following changes could be made without changing the interface:

//Original
private IEnumerable<Movie> GetMovies()
{
    return new List<Movie>
    {
        new Movie {Id = 1, Name = "Shrek"},
        new Movie {Id = 2, Name = "LotR"}
    };
}

//Using an array
private IEnumerable<Movie> GetMovies()
{
    return new Movie[]
    {
        new Movie {Id = 1, Name = "Shrek"},
        new Movie {Id = 2, Name = "LotR"}
    };
}

//From EF
private IEnumerable<Movie> GetMovies()
{
    return dbContext.Movies.Where( m => m.Name == "Shrek" || m.Name == "Lotr" );
}

//Covariant
class NerdMovie : Movie {}

private IEnumerable<Movie> GetMovies()
{
    return new List<NerdMovie>
    {
        new NerdMovie {Id = 1, Name = "Shrek"},
        new NerdMovie {Id = 2, Name = "LotR"}
    };
}

//Custom type
class MovieList : List<Movie> { }

private IEnumerable<Movie> GetMovies()
{
    return new MovieList
    {
        new Movie {Id = 1, Name = "Shrek"},
        new Movie {Id = 2, Name = "LotR"}
    };
}

//Using yield
private IEnumerable<Movie> GetMovies()
{
    yield return new Movie {Id = 1, Name = "Shrek"};
    yield return new Movie {Id = 2, Name = "LotR"};
}

A: Return type as IEnumerable instead of just List?

//controller
var a = new Dictionary<string, string>();
return View(a);

var b = new List<string>();
return View(b);

var c = new LinkedList<string>();
return View(c);

// All work with:
@model IEnumerable<string>

While using an IEnumerable<> is more Open sOlid Principles, I rarely recommend passing any collection type interface/class to a view. While in C# arrays/collections are First-Class Citizens, the issue is that they are not extensible while maintaining the Single Responsibility Principle. For example:

// Controller Returns:
var people = .... as IEnumerable<Person>;
return View(people);

@model IEnumerable<Person>

Now suppose you want to add any information to the view that has nothing to do with the group (like a title to the page)... how do you do that? You could extend and make your own class that derives from IEnumerable<T>, but that breaks the SRP, because the title of the page has nothing to do with the group of people.

Instead you should create a First-Class Model that represents everything the view needs:

public class MyViewModel
{
    public string Title { get; set; }
    public IEnumerable<Person> People { get; set; }
}

return View(myViewModel);

@model MyViewModel

I suggest always doing this. As soon as you start using partials or templates in MVC, or want to post back the same object, it becomes increasingly difficult to move away from IEnumerable<> because you need to change Partials and/or Templates and/or Javascript...

So my question is, why in the MoviesController in the private method GetMovies(), the IEnumerable<> return type is used?

Generally it's good practice to Program against an Interface and not an Implementation, also referred to as Design by Contract (DbC), also known as contract programming, programming by contract and design-by-contract programming. It's a soLid Principle, specifically the Liskov substitution principle. Excerpt:

Substitutability is a principle in object-oriented programming stating that, in a computer program, if S is a subtype of T, then objects of type T may be replaced with objects of type S (i.e. an object of type T may be substituted with any object of a subtype S) without altering any of the desirable properties of the program (correctness, task performed, etc.). More formally, the Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address titled Data abstraction and hierarchy. It is a semantic rather than merely syntactic relation, because it intends to guarantee semantic interoperability of types in a hierarchy, object types in particular. Barbara Liskov and Jeannette Wing described the principle succinctly in a 1994 paper as follows...

In practice it means the current code:

public ActionResult Index()
{
    var movies = GetMovies();
    return View(movies);
}

private IEnumerable<Movie> GetMovies()
{
    return new List<Movie>
    {
        new Movie {Id = 1, Name = "Shrek"},
        new Movie {Id = 2, Name = "LotR"}
    };
}

Could change to:

public class MoviesController : Controller
{
    private readonly IMovieDb _movieDb;

    // Dependency Injecting access to movies
    public MoviesController(IMovieDb movieDb)
    {
        _movieDb = movieDb;
    }

    public ActionResult Index()
    {
        var movies = _movieDb.GetMovies();
        return View(movies);
    }

    // ....

public interface IMovieDb
{
    IEnumerable<Movie> GetMovies();
}

Now we have no idea how the Movies are retrieved... and we shouldn't care as long as the contract/interface fulfills our data needs.
Q: NetBeans 7.1 and Subversion 1.7

We're moving our version control from some old VCS to SVN, for ease of integration with IDEs (we're using both NetBeans and IBM RAD). In doing so, I set up a local repository with SlikSVN for Win64 and started a server with the command svnserve -d -r c:\repo\test. I defined a basic group with a user-password pair (no anonymous access allowed). My authz is as follows:

[groups]
li_users=alessandro

[/]
@li_users=rw

I then created a test project on both RAD (fitted with Subclipse 1.8) and NetBeans and tried to import it into the newly created repository, with the following outcomes:

On RAD, I didn't have any problem accessing/importing into repositories both via file:///c:/repo/test and svn://localhost/.

On NetBeans, I could import the project "TestProject" using file:///c:/repo/test, but I couldn't using the svn://localhost/ link. After I'm presented with the import comment page and the directory suggestion using the project name, it gives me this error:

org.tigris.subversion.javahl.ClientException: URL 'svn://localhost/TestProject' doesn't exist

The funny thing is, when I browse my repository by any means, including clicking the "Browse" button on the import wizard, it shows the "TestProject" directory and I can't create another with the same name. Also, if I create a new directory (with the "Into a new Folder" option) and try to use it, it gives me the same error.

What's wrong there? I searched and it seems to be a bug in NetBeans, but I can't find a way around it. Thanks in advance.

A: Apparently, falling back to Subversion 1.6 is the only way to go, at least until NetBeans catches up with 1.7.
{ "pile_set_name": "StackExchange" }