Q: Django Haystack - How to filter search results by a boolean field? Trying to filter a SearchQuerySet by a boolean value doesn't work for me. (I am using the provided "Simple" backend search engine while testing.) I have an index like: class MyIndex(indexes.SearchIndex, indexes.Indexable): text = indexes.CharField(document=True, use_template=True) has_been_sent = indexes.BooleanField(model_attr='has_been_sent') # other fields def get_model(self): return MyModel And I use a custom form for the search: BOOLEAN_OPTIONS = [ ('either', 'Either'), ('yes', 'Yes'), ('no', 'No') ] class MyModelSearchForm(SearchForm): # other fields has_been_sent = forms.ChoiceField( widget = forms.Select(), label = 'Sent?', choices=BOOLEAN_OPTIONS ) def search(self): sqs = super(MyModelSearchForm, self).search() if not self.is_valid(): return self.no_query_found() sqs = sqs.models(MyModel) # cuts out other models from the search results if self.cleaned_data['has_been_sent'] != 'either': if self.cleaned_data['has_been_sent'] == 'yes': sent = True else: sent = False sqs = sqs.filter(has_been_sent=sent) return sqs If I set the has_been_sent option to Yes or No in the form, I always get 0 results, which is clearly wrong. I've also tried in the shell, with no luck. sqs.filter(has_been_sent=True) and sqs.filter(has_been_sent=False) both return an empty list, EVEN THOUGH sqs.values('has_been_sent') clearly shows a bunch of records with True values for has_been_sent. And even stranger, sqs.filter(has_been_sent='t') returns a subset of records, along with 'f', 'a', and unrelated letters like 'j'! I'm at a total loss. Does anybody have experience with this sort of problem with Haystack? On a related note, are the fields you filter on through SearchQuerySet().filter() from the index fields (in search_indexes.py) or the model fields (in their respective models.py)? EDIT: I've been attempting to test my filters through Django's manage.py shell, but I think I'm doing it wrong. 
It doesn't seem to be following my search_indexes.py, since I limited it to a subset of MyModel with the index_queryset() method there, but I get ALL objects of MyModel in the shell. >>> from haystack.query import SearchQuerySet >>> from myapp.models import MyModel >>> sqs = SearchQuerySet().models(MyModel) And then some testing: >>> len(sqs) # SHOULD be 5, due to the index_queryset() method I defined in search_indexes.py 17794 >>> sqs.filter(has_been_sent='true') # Same happens for True, 'TRUE', and all forms of false [] >>> len(sqs.filter(not_a_real_field='g')) # Made-up filter field, returns a subset somehow?? 2591 >>> len(sqs.filter(has_been_sent='t')) 3621 >>> len(sqs.filter(has_been_sent='f')) 2812 Because I get a subset when filtering on the fake field, I don't think it's recognizing has_been_sent as one of my filter fields. Especially since the results for 't' and 'f' don't add up to the total, which they SHOULD, as that boolean field is required for all records. Am I missing a step in my testing? A: Try filtering with the string 'true' or 'false' in the query; this is a known limitation in Haystack and I am not sure whether it has been fixed. Instead of doing: sqs.filter(has_been_sent=True) do this: sqs.filter(has_been_sent='true') # 'true' or 'false', in lowercase P.S. When you do SearchQuerySet().filter(), you filter based on the fields defined in the search_indexes.py file. A: It appears that the problem was in the Simple backend. I installed and switched Haystack over to Whoosh, and this problem cleared up. (Now SearchQuerySet().models() doesn't work, but that's apparently a documented bug with Haystack + Whoosh.) Edit: Due to further troubles with Whoosh, I switched to using Solr 4.5.1 as my backend. Everything is working as expected now.
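The accepted workaround boils down to translating the form's three-way choice into the string values the backend matches on. A minimal sketch of that mapping (the helper name sent_filter_kwargs is invented here for illustration; the only fact taken from the answer above is that the backend matches the lowercase strings 'true' and 'false'):

```python
# Hypothetical helper: map the form's three-way choice onto the lowercase
# string values ('true'/'false') that the backend is reported to match on.
CHOICE_TO_INDEX_VALUE = {'yes': 'true', 'no': 'false'}

def sent_filter_kwargs(choice):
    """Return kwargs for sqs.filter(), or {} when no filtering is wanted."""
    if choice == 'either':
        return {}
    return {'has_been_sent': CHOICE_TO_INDEX_VALUE[choice]}
```

Inside the form's search() this would be used as sqs = sqs.filter(**sent_filter_kwargs(self.cleaned_data['has_been_sent'])), keeping the branching out of the search method itself.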
{ "pile_set_name": "StackExchange" }
Q: Proving seemingly obvious equations with measure theory I want to prove very simple equations, but am having some trouble because I have no clue. The definition of conditional expectation is given as follows: Definition $E(Y|X)$ is a conditional expectation of $Y$ given $X$ if it is a $\sigma (X)$-measurable random variable and for any Borel set $S \subseteq R$, we have $E(E(Y|X)1_{X \in S})=E(Y1_{X \in S})$ I want to solve these two problems based on the above definition. Suppose $X$ and $Y$ are independent. (a) Prove that $E(Y|X)=E(Y)$ with probability 1. (b) Prove that $Var(Y|X)=Var(Y)$ with probability 1. (c) Explicitly verify the following theorem (with $\mathcal{G}=\sigma (X)$ in this case). Theorem: Let $Y$ be a random variable, and $\mathcal{G}$ a sub-$\sigma$-algebra. If $Var(Y)<\infty$, then $Var(Y)=E(Var(Y|\mathcal{G}))+Var(E(Y|\mathcal{G}))$ Let X and Y be jointly defined random variables. (a) Suppose $E(Y|X)=E(Y)$ with probability 1. Prove that $E(XY)=E(X)E(Y)$ (b) Give an example where $E(XY)=E(X)E(Y)$, but it is NOT the case that $E(Y|X)=E(Y)$ with probability 1. 1. For this part, I think I could deal with this by myself if I understand how to prove (a). So please help me with just (a). In order to prove that $E(Y|X)=E(Y)$ with probability 1, I need to show that \begin{equation} E(E(Y|X)1_{X \in S})=E(E(Y)1_{X \in S}) \end{equation} for every Borel set $S$, if $X$ and $Y$ are independent. From the above definition, the conditional expectation $E(Y|X)$ is defined such that $E(E(Y|X)1_{X \in S})=E(Y1_{X \in S})$. So maybe it suffices to show that $E(Y1_{X \in S})=E(E(Y)1_{X \in S})$. But it looks too obvious. My guess is that there is a more educated way of showing why $E(Y1_{X \in S})=E(E(Y)1_{X \in S})$ holds if $X$ and $Y$ are independent. 2. I can do (b) on my own. So for (a), "$E(Y|X)=E(Y)$ with probability 1" means that $E(E(Y|X)1_{X \in S})=E(E(Y)1_{X \in S})$ for every Borel set $S$. But how can I prove $E(XY)=E(X)E(Y)$ with that information? I know how to prove this equation with an integral sign or summation sign. 
But how does information about a.s. equality give any implication about that equation? A: $\newcommand{\PM}{\mathbb{P}}\newcommand{\E}{\mathbb{E}}$I understand from your post that you only need help with 1a and 2a. Furthermore you did not specify what $R$ is, etc., but I think you mean what I think you mean. Furthermore, the integrability of $Y$ is assumed by talking about the conditional expectation of $Y$ given something. Let $(\Omega,\mathcal F,\PM)$ be the probability space to get started. 1.a) Let $A\in \sigma(X)$; then $\mathbf{1}_A$ and $Y$ are independent, because $\sigma(X)$ and $\sigma(Y)$ are independent by assumption. So: \begin{align} \int_A Y\,d\PM = \int_\Omega \mathbf{1}_AY\,d\PM= \E[\mathbf{1}_AY]=\E[\mathbf{1}_A]\E[Y]=\E[Y]\int_A\,d\PM=\int_A\E[Y]\,d\PM \end{align} So we have for all $A\in\sigma(X)$: \begin{align} \int_A\E[Y]\,d\PM =\int_A Y\,d\PM=\int_A\E[Y|X]\,d\PM \end{align} We know that $\E[Y]$ is just a number, hence it is surely $\sigma(X)$-measurable. By the uniqueness, up to a null set, of a $\sigma(X)$-measurable $\E[Y|X]$, we get $\E[Y]=\E[Y|X]$ a.s. 2.a) For this question it makes sense if we assume the integrability of $X$ and $XY$. Otherwise why would someone be interested in $\E[X]$...? Okay, now I hope you know the pullout property. That says: \begin{align} \E[XY|X]=X\E[Y|X] \end{align} On one hand we have: \begin{align} \E[\E[XY|X]]=\E[XY] \end{align} And on the other hand: \begin{align} \E[X\E[Y|X]]=\int_\Omega X\E[Y|X]\,d\PM=\int_\Omega X \E[Y]\,d\PM=\E[X]\E[Y] \end{align} So: \begin{align} \E[XY]=\E[X]\E[Y] \end{align} I hope you see where we have used $\E[Y|X]=\E[Y]$ a.s.
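For part 2(b), which neither the question nor the answer works out, a standard counterexample can be sketched as follows (this is an added illustration, not part of the original answer). Take $X$ uniform on $\{-1,0,1\}$ and $Y=X^2$; then $Y$ is $\sigma(X)$-measurable, so $\E[Y|X]=X^2$, and

\begin{align} \E[XY]=\E[X^3]=\E[X]=0=\E[X]\E[Y], \qquad\text{but}\qquad \E[Y|X]=X^2\neq\tfrac{2}{3}=\E[Y]\ \text{a.s.} \end{align}

since $X^2$ only takes the values $0$ and $1$, it never equals $\tfrac{2}{3}$.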
Q: The best way to register process start time? I am writing a program that must register the start time of a process such as notepad. I thought it would be good to create a Timer that checks all processes every second, but I think that would slow down the user's computer. Is there a better way of doing this? A: Initially, determine the creation time for all running processes. Then use WMI to register for process creation events. See the code below for a small example on how to use WMI for process creation events: static void Main(string[] args) { using (ManagementEventWatcher eventWatcher = new ManagementEventWatcher(@"SELECT * FROM __InstanceCreationEvent WITHIN 1 WHERE TargetInstance ISA 'Win32_Process'")) { // Subscribe for process creation notification. eventWatcher.EventArrived += ProcessStarted_EventArrived; eventWatcher.Start(); Console.In.ReadLine(); eventWatcher.EventArrived -= ProcessStarted_EventArrived; eventWatcher.Stop(); } } static void ProcessStarted_EventArrived(object sender, EventArrivedEventArgs e) { ManagementBaseObject obj = e.NewEvent["TargetInstance"] as ManagementBaseObject; // The Win32_Process class also contains a CreationDate property. Console.Out.WriteLine("ProcessName: {0} " + obj.Properties["Name"].Value); } BEGIN EDIT: I've further investigated process creation detection with WMI and there is a (more) resource-friendly solution (but it needs administrative privileges) using the Win32_ProcessStartTrace class (please see TECHNET for further information): using (ManagementEventWatcher eventWatcher = new ManagementEventWatcher(@"SELECT * FROM Win32_ProcessStartTrace")) { // Subscribe for process creation notification. 
eventWatcher.EventArrived += ProcessStarted_EventArrived; eventWatcher.Start(); Console.Out.WriteLine("started"); Console.In.ReadLine(); eventWatcher.EventArrived -= ProcessStarted_EventArrived; eventWatcher.Stop(); } static void ProcessStarted_EventArrived(object sender, EventArrivedEventArgs e) { Console.Out.WriteLine("ProcessName: {0} " + e.NewEvent.Properties["ProcessName"].Value); } In this solution you do not have to set a polling interval. END EDIT BEGIN EDIT 2: You could use the Win32_ProcessStopTrace class to monitor process stop events. To combine both process start and process stop events, use the Win32_ProcessTrace class. In the event handler, use the ClassPath property to distinguish between start/stop events: using (ManagementEventWatcher eventWatcher = new ManagementEventWatcher(@"SELECT * FROM Win32_ProcessTrace")) { eventWatcher.EventArrived += Process_EventArrived; eventWatcher.Start(); Console.Out.WriteLine("started"); Console.In.ReadLine(); eventWatcher.EventArrived -= Process_EventArrived; eventWatcher.Stop(); } static void Process_EventArrived(object sender, EventArrivedEventArgs e) { Console.Out.WriteLine(e.NewEvent.ClassPath); // Use class path to distinguish // between start/stop process events. Console.Out.WriteLine("ProcessName: {0} " + e.NewEvent.Properties["ProcessName"].Value); } END EDIT 2
Q: Issue with the coding mode I am using Eclipse for PHP Developers Version 3.0.2. Today while coding, the mode seems to have changed; it looks like command mode, see the picture below: The normal mode should be like this: if you look at the status of the cursor, you can see the difference. So my question is: how can I change the first mode to the second one? I want to use the normal one. A: To change the cursor status, press the Insert key. It will change the first mode into the second.
Q: How to register push notifications in Swift 3? Push notifications worked correctly in versions prior to Xcode 8, but after migrating to Swift 3 the token is no longer registered. My code is the following: application.registerUserNotificationSettings(UIUserNotificationSettings(types: [.badge, .sound, .alert], categories: nil)); application.registerForRemoteNotifications() How do I register push notifications in Swift 3? A: You have to import the UserNotifications framework and add the UNUserNotificationCenterDelegate delegate in the AppDelegate.swift file. Request permission from the user: func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool { let center = UNUserNotificationCenter.current() center.requestAuthorization(options:[.badge, .alert, .sound]) { (granted, error) in // Enable or disable features based on authorization. } application.registerForRemoteNotifications() return true } Get the token: func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) { let deviceTokenString = deviceToken.reduce("", {$0 + String(format: "%02X", $1)}) print(deviceTokenString) }
Q: Foreground Notification in service not working in Android 8.1 After upgrading my phone to 8.1 Developer Preview my background service no longer starts up properly. I still see a difference, in android oreo I don't see my custom foreground notification (I only see the "app is running in the background" notification). It works on android < 26 and on android 26 (Oreo) as well. Do I have to adjust anything there as well? Tks Bro! My Service: public class ForegroundService extends Service { private static final String LOG_TAG = "ForegroundService"; public static boolean IS_SERVICE_RUNNING = false; @Override public void onCreate() { super.onCreate(); } @Override public int onStartCommand(Intent intent, int flags, int startId) { if (intent != null && intent.getAction().equals(Constants.ACTION.STARTFOREGROUND_ACTION)) { showNotification(); } else if (intent != null && intent.getAction().equals(Constants.ACTION.STOPFOREGROUND_ACTION)) { MainActivity.exoPlayer.setPlayWhenReady(false); stopForeground(true); stopSelf(); } return START_STICKY; } private void showNotification() { Intent notificationIntent = new Intent(this, MainActivity.class); notificationIntent.setAction(Constants.ACTION.MAIN_ACTION); notificationIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK); PendingIntent pendingIntent = PendingIntent.getActivity(this, 0, notificationIntent, 0); Intent playIntent = new Intent(this, ForegroundService.class); playIntent.setAction(Constants.ACTION.STOPFOREGROUND_ACTION); PendingIntent pplayIntent = PendingIntent.getService(this, 0, playIntent, 0); Bitmap icon = BitmapFactory.decodeResource(getResources(), R.drawable.radio); Notification notification = new NotificationCompat.Builder(this) .setContentTitle("Background") .setContentText("is Playing...") .setSmallIcon(R.drawable.background) .setLargeIcon(Bitmap.createScaledBitmap(icon, 128, 128, false)) .setContentIntent(pendingIntent) .setOngoing(true) .addAction(android.R.drawable.ic_delete, 
"Turn Off", pplayIntent).build(); startForeground(Constants.NOTIFICATION_ID.FOREGROUND_SERVICE, notification); } @Override public void onDestroy() { super.onDestroy(); } @Override public IBinder onBind(Intent intent) { // Used only in case if services are bound (Bound Services). return null; } } My Constants: public class Constants { public interface ACTION { public static String MAIN_ACTION = "com.marothiatechs.foregroundservice.action.main"; public static String PLAY_ACTION = "com.marothiatechs.foregroundservice.action.play"; public static String STARTFOREGROUND_ACTION = "com.marothiatechs.foregroundservice.action.startforeground"; public static String STOPFOREGROUND_ACTION = "com.marothiatechs.foregroundservice.action.stopforeground"; } public interface NOTIFICATION_ID { public static int FOREGROUND_SERVICE = 101; } } A: public class MyFirebaseMessagingService extends FirebaseMessagingService { private static final String TAG = "MyFMService"; String CHANNEL_ID = "com.app.app"; NotificationChannel mChannel; private NotificationManager mManager; private String title, msg, actionCode; private int badge = 0; @RequiresApi(api = Build.VERSION_CODES.O) @Override public void onMessageReceived(RemoteMessage remoteMessage) { // Handle data payload of FCM messages. Log.d(TAG, "FCM Message Id: " + remoteMessage.getMessageId()); Log.d(TAG, "FCM Notification Message: " + remoteMessage.getData() + "...." 
+ remoteMessage.getFrom()); if (remoteMessage.getData() != null) { Map<String, String> params = remoteMessage.getData(); JSONObject object = new JSONObject(params); //Log.e("JSON_OBJECT", object.toString()); title = object.optString("title",""); actionCode = object.optString("action_code", ""); msg = object.optString("body", ""); if (remoteMessage.getData().containsKey("badge")) { badge = Integer.parseInt(remoteMessage.getData().get("badge")); //Log.d("notificationNUmber", ":" + badge); setBadge(getApplicationContext(), badge); Prefs.putBoolean(Constant.HAS_BADGE,true); } if (!(title.equals("") && msg.equals("") && actionCode.equals(""))) { createNotification(actionCode, msg, title); } else { //Log.e("Notification", "Invalid Data"); } } } public void createNotification(String action_code, String msg, String title) { Intent intent = null; intent = new Intent(this, HomeActivity.class); intent.putExtra(Constant.ACTION_CODE, action_code); PendingIntent contentIntent = PendingIntent.getActivity(this, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT); if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { NotificationChannel androidChannel = new NotificationChannel(CHANNEL_ID, title, NotificationManager.IMPORTANCE_DEFAULT); // Sets whether notifications posted to this channel should display notification lights androidChannel.enableLights(true); // Sets whether notification posted to this channel should vibrate. 
androidChannel.enableVibration(true); // Sets the notification light color for notifications posted to this channel androidChannel.setLightColor(Color.GREEN); // Sets whether notifications posted to this channel appear on the lockscreen or not androidChannel.setLockscreenVisibility(Notification.VISIBILITY_PRIVATE); getManager().createNotificationChannel(androidChannel); Notification.Builder nb = new Notification.Builder(getApplicationContext(), CHANNEL_ID) .setContentTitle(title) .setContentText(msg) .setTicker(title) .setShowWhen(true) .setSmallIcon(R.mipmap.ic_small_notification) .setLargeIcon(BitmapFactory.decodeResource(this.getResources(), R.mipmap.ic_launcher_round)) .setAutoCancel(true) .setContentIntent(contentIntent); getManager().notify(101, nb.build()); } else { try { @SuppressLint({"NewApi", "LocalSuppress"}) android.support.v4.app.NotificationCompat.Builder notificationBuilder = new android.support.v4.app.NotificationCompat.Builder(this).setLargeIcon(BitmapFactory.decodeResource(getResources(), R.mipmap.ic_launcher)) .setSmallIcon(R.mipmap.ic_small_notification) .setLargeIcon(BitmapFactory.decodeResource(this.getResources(), R.mipmap.ic_launcher_round)) .setContentTitle(title) .setTicker(title) .setContentText(msg) .setShowWhen(true) .setContentIntent(contentIntent) .setLights(0xFF760193, 300, 1000) .setAutoCancel(true).setVibrate(new long[]{200, 400}); /*.setSound(Uri.parse("android.resource://" + getApplicationContext().getPackageName() + "/" + R.raw.tone));*/ NotificationManager notificationManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE); notificationManager.notify((int) System.currentTimeMillis() /* ID of notification */, notificationBuilder.build()); } catch (SecurityException se) { se.printStackTrace(); } } } private NotificationManager getManager() { if (mManager == null) { mManager = (NotificationManager) getSystemService(Context.NOTIFICATION_SERVICE); } return mManager; } }
Q: Get dates between two months Can I know why I am getting weird output in my SQL statement? My table has two dates in the month of May, but only one appears. The same kind of test with a date range works fine. select distinct To_Char (attendance_date,'dd/MM/yyyy') from DIT_2010MAR_CIT4114A_FYP1_NO where attendance_Date between To_Date ('05', 'MM') and To_Date ('05', 'MM'); A: Actually your query doesn't return anything in my test :(. The reason is that attendance_date doesn't fall within the condition you have specified (To_date('05','MM') and To_date('05','MM') return the same value). You can change your code to ...where attendance_date between To_date('05','MM') and To_date('06','MM'); to select all the dates of the month of May. SQL> desc atten; Name Null? Type ----------------------------------------- -------- ---------------------------- ATTENDANCE_DATE DATE SQL> select * from atten; ATTENDANC --------- 05-MAY-16 03-JUN-16 03-JUL-16 03-MAY-16 SQL> select distinct To_Char (attendance_date,'dd/MM/yyyy') from atten where attendance_Date between To_Date ('05', 'MM') and To_Date ('05', 'MM'); no rows selected You can try the following to get the result. SQL> select to_char(attendance_date, 'dd/mm/yyyy') from atten where extract(month from attendance_date) = 5; 2 TO_CHAR(AT ---------- 05/05/2016 03/05/2016 OR If you wish to display attendance dates of months between 5 and 6 (as your question's title suggests): select to_char(attendance_date, 'dd/mm/yyyy') from atten where extract(month from attendance_date) = 5 OR extract(month from attendance_date)=6; TO_CHAR(AT ---------- 05/05/2016 03/06/2016 03/05/2016
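The same idea — filtering on the extracted month rather than on a degenerate BETWEEN range — can be sketched outside Oracle with Python's standard-library sqlite3, where strftime('%m', ...) plays the role of EXTRACT(MONTH FROM ...). The table and dates below are illustrative, not the asker's actual data:

```python
import sqlite3

# In-memory SQLite analogue of the Oracle query: keep rows whose month is May
# by extracting the month, instead of BETWEEN with two identical endpoints.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atten (attendance_date TEXT)")
conn.executemany(
    "INSERT INTO atten VALUES (?)",
    [("2016-05-05",), ("2016-06-03",), ("2016-07-03",), ("2016-05-03",)],
)
may_rows = conn.execute(
    "SELECT attendance_date FROM atten"
    " WHERE strftime('%m', attendance_date) = '05'"
).fetchall()
```

Both May rows come back, matching what EXTRACT(MONTH FROM attendance_date) = 5 does in the Oracle answer.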
Q: Crater equation I found this crater equation $$ D=0.07 \cdot C_f \cdot (g_e/g)^{1/6} \cdot (W p_a/p_t)^{1/3.4} $$ on a website, where $$ \begin{align} D &= \text{Crater Diameter}\\ C_f &= \text{Crater Collapse Factor (this is equal to 1.3 for craters > 4km on Earth)}\\ g_e &= \text{Gravitational Acceleration at the surface of Earth}\\ g &= \text{Acceleration at the surface of the body on which the crater is formed}\\ W &= \text{Kinetic Energy of the impacting body (in kilotons TNT equivalent)}\\ p_a &= \text{Density of the impactor (ranging from 1.8g/cm3 for a comet to 7.3g/cm3 for an iron meteorite).}\\ p_t &= \text{Density of the target rock} \end{align} $$ Can someone explain to me what the crater collapse factor is? A: I can't find any description of how the equation you cite is derived, so I can only speculate. With that caveat, I would guess the factor of 1.3 is the ratio of the rim diameter to the excavation diameter. The bolide will excavate an initial bowl shaped crater, and the diameter of this is the excavation diameter. Immediately after the impact various processes can occur, including a subsidence of the ground immediately outside the initial crater: (image is from this paper) The result of this is that the final crater diameter will be greater than the excavation diameter by about a factor of 1.3 (see for example this review). I would guess that this is what the author means by the crater collapse factor i.e. it describes the increase in the crater size due to subsidence of the ground outside the initial excavation crater.
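To get a feel for the scaling law, the equation can be evaluated directly. The sketch below is an added illustration: the default densities are arbitrary sample values within the stated ranges, W is in kilotons as the question states, and D is presumably in kilometres given the "craters > 4 km" note, though the source does not say so explicitly — treat the numbers as relative rather than absolute.

```python
def crater_diameter(W, g_ratio=1.0, p_a=3.0, p_t=2.5, C_f=1.3):
    """D = 0.07 * C_f * (g_e/g)**(1/6) * (W * p_a/p_t)**(1/3.4)

    W       -- impactor kinetic energy in kilotons TNT equivalent
    g_ratio -- g_e/g, i.e. 1.0 for an impact on Earth
    p_a/p_t -- impactor/target densities (g/cm^3); C_f = 1.3 for
               craters larger than 4 km on Earth
    """
    return 0.07 * C_f * g_ratio ** (1 / 6) * (W * p_a / p_t) ** (1 / 3.4)
```

Note the weak gravity dependence: because of the 1/6 exponent, the same impact on a body with 1/64th of Earth's surface gravity only doubles the crater diameter.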
Q: onchange event doesn't invoke inside <apex:repeat> My onchange event is not firing when it is inside the <apex:repeat>; if I remove the repeat, the onchange fires and works fine. What might be the issue? I do not find any log, so it seems the action method is not even being invoked. VFP: <apex:repeat value="{!events}" var="itr"> <apex:selectList id="rt" label="Job Types" value="{!typeId}" size="1"> <apex:actionSupport event="onchange" reRender="new" action="{!onChangeSelect}" /> <apex:selectOptions value="{!selecttypes}"/></apex:selectList> </apex:repeat> APEX: public PageReference onChangeSelect() { system.debug(':::onChangeSelect'); return null; } A: Try adding an <apex:actionRegion> before the <apex:selectList> element.
Q: Custom SmoothHistogram3D I decided to make my own SmoothHistogram3D function because I wanted to be able to specify my own bin sizes and not rely on their distribution functions, so, taking a cue from this answer and bringing it into the 3rd dimension, I created this function: Create3DHist[histData_] := Module[{xAxis, yAxis, binCounts, yAndI, xYAndI}, xAxis = histData[[1, 1]]; yAxis = histData[[1, 2]]; binCounts = histData[[2]]; (* Sorry it gets a little messy here *) yAndI = Map[{yAxis[[1 ;; -2]], #}\[Transpose] &, binCounts]; xYAndI = Partition[Flatten[MapThread[Thread[{##}] &, {xAxis[[1 ;; -2]], yAndI}]], 3]; ListPlot3D[xYAndI, InterpolationOrder -> 3, PlotRange -> All] ] (I got the idea for the xYAndI line from this answer.) It takes the result of 2d HistogramList data and creates a 3D plot with it, like this: data = RandomVariate[BinormalDistribution[.5], 1000]; histBinnedData = HistogramList[data, {{0.2}, {0.2}}]; Create3DHist[histBinnedData] This produces a plot that looks like this: However, I found I could create a much more appealing plot if I just use the binCounts and put the plot range on top of it (which is essentially the same as using BinCounts instead of HistogramList). ListPlot3D[histBinnedData[[2]], InterpolationOrder -> 3, PlotRange -> All, DataRange -> {{-3, 3}, {-3, 3}}] which produces: which is a lot smoother. However, my issue is that my smoothing algorithm uses non-equal bin sizes, so using DataRange to place the numbers onto the axes is imperfect, because the bin counts aren't evenly spaced but are treated as such by DataRange. My Create3DHist function doesn't have that issue. So after all that, my question is: is there a way to combine the smoothness of the 2nd method with the x and y axis accuracy of the first method? Also, what causes this difference in smoothing even though they both use an interpolation order of 3? 
Small Update: I realized the reason the second method comes out so much smoother is that Mathematica uses a lot more points than just the array you give it to plot the second method. I used mesh->All to compare how many points each plot had and got this result: Is there any way to extract the points in the second way and fix masses to them, or is there a way to fill in points in the first method so that it is more like the second method? A: You can apply post-processing to the regular ListPlot3D data = RandomVariate[BinormalDistribution[.5], 1000]; {{x, y}, hist} = HistogramList[data, {{0.1}, {0.1}}, "PDF"]; {fx, fy} = Interpolation@Transpose@{Range[0.5, Length@#], #} & /@ {x, y}; ListPlot3D[GaussianFilter[Transpose@hist, {3, 1}], InterpolationOrder -> 3, PlotRange -> All] /. GraphicsComplex[p_, d_, opts___] :> GraphicsComplex[Transpose@{fx@#, fy@#2, #3} & @@ Transpose@p, d, Sequence @@ ({opts} /. (VertexNormals -> n_) :> (VertexNormals -> n/Transpose@{fx'@#, fy'@#2, 1 + 0 #3} & @@ Transpose@p))] P.S. Without GaussianFilter or another sort of smoothing it is just an interpolated histogram, not a smoothed histogram.
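For comparison, the bookkeeping that makes the first method axis-accurate — assigning each sample to unequal bins so counts can be paired with the true bin edges instead of an evenly spaced DataRange — looks like this in a plain Python sketch (an illustrative analogue, not Mathematica code):

```python
from bisect import bisect_right

def bin_counts_2d(points, x_edges, y_edges):
    """Count points on a 2-D grid with possibly unequal bin edges.

    counts[i][j] covers [x_edges[i], x_edges[i+1]) x [y_edges[j], y_edges[j+1]);
    points outside the outermost edges are ignored.
    """
    nx, ny = len(x_edges) - 1, len(y_edges) - 1
    counts = [[0] * ny for _ in range(nx)]
    for x, y in points:
        i = bisect_right(x_edges, x) - 1   # index of the bin containing x
        j = bisect_right(y_edges, y) - 1   # index of the bin containing y
        if 0 <= i < nx and 0 <= j < ny:
            counts[i][j] += 1
    return counts
```

Plotting counts[i][j] against the true bin centers ((x_edges[i] + x_edges[i+1]) / 2, and likewise for y) keeps the axes honest even when the edges are unevenly spaced.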
Q: Policies on counterproductive retagging sprees I have just noticed that, right now, old questions in field theory and Galois theory are being brought up because someone is tagging them ring-theory, for instance here, here, or here. This somewhat suppresses new questions and seems unnecessary, but what strikes me is: Field and Galois theory is hardly ring theory in most situations, certainly not in the ones named in the examples. One could just as well tag these group-theory. Tagging them galois-extensions would be much more justifiable. So I think this is actually cluttering the ring-theory tag. Don’t we have policies against possibly counterproductive retag sprees? Evil suspicion: Does retagging old questions one has answered add to the specific tag badge process? A: The rule is not to do too many retags at the same time (where three to five is a rule of thumb). This was respected by the retagger as they did four and then stopped. As you mention correctly for smaller tags it can still be an issue. Of course retagging poorly should always be avoided but it is harder to have a simple guideline there (I did not check the posts in detail). If you think a retag is really wrong, undo it. You can also comment-notify the user that did the retag (auto-complete does not suggest it but it works for users that edited the post). Regarding your suspicion, technically that's true, I'd doubt it was the motivation. That it concerns their posts could also be explained by them cleaning up in their profile.
Q: Calculate the azimuth between two points given the latitude and longitude are known with VBA I have to calculate the azimuth between two points given in latitude and longitude. Is this function correct? Function azimut(lat1, lat2, lon1, lon2) azimut = WorksheetFunction.Degrees(WorksheetFunction.Atan2( Cos(Application.WorksheetFunction.Radians(lat1)) * Sin(Application.WorksheetFunction.Radians(lat2)) - Sin(Application.WorksheetFunction.Radians(lat1)) * Cos(Application.WorksheetFunction.Radians(lat2)) * Cos(Application.WorksheetFunction.Radians(lon2 - lon1)), Sin(Application.WorksheetFunction.Radians(lon2 - lon1)) * Cos(Application.WorksheetFunction.Radians(lat2)))) End Function A: Assuming your formula is correct (I translated it into the code below without checking it), here is the code: Function Azimuth(lat1 As Single, lat2 As Single, lon1 As Single, lon2 As Single) As Single Dim X1 As Single, X2 As Single, Y As Single, dX As Single, dY As Single With Application.WorksheetFunction X1 = .Radians(lat1) X2 = .Radians(lat2) Y = .Radians(lon2 - lon1) End With dX = Math.Cos(X1) * Math.Sin(X2) - Math.Sin(X1) * Math.Cos(X2) * Math.Cos(Y) dY = Math.Cos(X2) * Math.Sin(Y) With Application.WorksheetFunction Azimuth = .Degrees(.Atan2(dX, dY)) End With End Function Well, even if the formula turns out to be incorrect, at least the code above should give you the idea to start with.
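As a cross-check of the formula itself, independent of VBA, here is the same initial-bearing computation sketched in Python. One pitfall worth noting: Excel's ATAN2 takes (x_num, y_num) while Python's math.atan2 takes (y, x), so the arguments are swapped relative to the VBA call.

```python
import math

def azimuth(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, 90 = east) from point 1 to 2.

    Same great-circle formula as the VBA version; the two-argument
    arctangent is applied to
      y = cos(lat2) * sin(dlon)
      x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dlon)
    """
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.cos(p2) * math.sin(dlon)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x))
```

For example, heading from the origin to a point due north gives 0°, and to a point due east along the equator gives 90°.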
Q: Does the limit $\lim_{(x,y)\to(0,0)}\frac{x^2y}{x^2+y^4}$ exist? I am trying to evaluate $$\lim_{(x,y)\to(0,0)}\frac{x^2y}{x^2+y^4}$$ I was thinking of using $$0\leq\frac{x^2y}{x^2+y^4}<\frac{(x^2+y^4)\cdot y}{x^2+y^4}=y$$ which tends to $0$ as $(x,y)\to(0,0)$, which means that the limit is $0$ by the squeeze theorem. Is that correct? A: $$\left|\frac{x^2y}{x^2+y^4}\right|\le\frac{x^2|y|}{x^2}=|y|\xrightarrow[(x,y)\to(0,0)]{}0$$
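As an added sanity check on the bound in the answer, the value along the parabola $x=y^2$ (often the troublesome path when the denominator mixes $x^2$ and $y^4$) also tends to $0$:

$$\frac{x^2y}{x^2+y^4}\bigg|_{x=y^2}=\frac{y^5}{2y^4}=\frac{y}{2}\xrightarrow[y\to 0]{}0,$$

which is consistent with the estimate $\left|\frac{x^2y}{x^2+y^4}\right|\le|y|$.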
Q: How to stack several pandas DataFrames in a for loop Since I have multiple pandas DataFrames, I want to run the .stack() method on all of them using a for loop. Other methods like labeling columns and setting indexes work, but for some reason the stack method doesn't lead to any changes: for df in [df1, df2, df3, df4]: df = df.stack() Result: print(df1.head()) Q1 1990 Q2 1990 Q3 1990 ... Q2 2018 Q3 2018 Q4 2018 EC ... C13840 NaN NaN NaN ... NaN NaN NaN C28525 NaN NaN NaN ... 8480.00 8125.00 NaN C06541 NaN NaN NaN ... NaN NaN NaN C51345 NaN NaN NaN ... 13.75 15.00 NaN C44265 NaN NaN NaN ... 141.90 129.54 133.44 Expected result: print(df1.head(10)) EC C13840 Q1 1990 NaN Q2 1990 NaN Q3 1990 NaN Q4 1990 NaN Q1 1991 NaN Q2 1991 NaN Q3 1991 NaN Q4 1991 NaN Q1 1992 NaN Q2 1992 NaN ... ... Thank you. A: Assign the output to a new list of Series, because stack does not work in place: dfs = [df.stack() for df in [df1, df2, df3, df4]] And then, if needed, assign back: df1, df2, df3, df4 = dfs Or join them together: df = pd.concat(dfs, axis=1)
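The pitfall here is not pandas-specific: assigning to the loop variable merely rebinds the name inside the loop and never touches the original objects. A minimal pure-Python sketch of the same mistake and the fix, with plain lists standing in for the DataFrames:

```python
# Rebinding the loop variable: 'item' is just a name, so assigning to it
# points the name at a new object and leaves a and b unchanged.
a, b = [1, 2], [3, 4]
for item in [a, b]:
    item = item + [99]   # new list bound to 'item'; originals untouched

# Collecting results instead -- the shape of [df.stack() for df in ...]:
extended = [item + [99] for item in [a, b]]
```

After the loop, a and b are exactly as before, while the comprehension captures the transformed copies, which is why the answer collects the stacked frames into a new list.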
Q: Configuring and installing python2.6.7 and mod_wsgi3.3 on RHEL for production This is a long question detailing all that I did from the start. Hope it helps. I am working on a django application and need to deploy it to the production server. The production server is a virtual server managed by IT, and I do not have root access. They have given me rights to manage the installations of my modules in /swadm and /home/swadm. So I have planned to create the following arrangement: /swadm/etc/httpd/conf where I maintain httpd.conf /swadm/etc/httpd/user-modules where I maintain my apache modules (mod_wsgi) /swadm/var/www/django/app where I maintain my django code /swadm/usr/local/python/2.6 where I will maintain my python 2.6.7 installation with modules like django, south etc. /home/swadm/setup where I will be storing the required source tarballs and doing all the building and installing out of. /home/swadm/workspace where I will be maintaining application code that is in development. The system has python2.4.3 and python2.6.5 installed, but IT recommended that I maintain my own python installation if I required a lot of custom modules to be installed (which I would be). So I downloaded the python2.6.7 source. I needed to ensure python is installed such that its shared library is available. When I ran the configure script with only the options --enable-shared and --prefix=/swadm/usr/local/python/2.6, it would get installed but surprisingly point to the system's installation of python2.6.5. $ /swadm/usr/local/python/2.6/bin/python Python 2.6.5 (r265:79063, Feb 28 2011, 21:55:45) [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> So I ran the configure script following instructions from Building Python with --enable-shared in non-standard location as ./configure --enable-shared --prefix=/swadm/usr/local/python/2.6 LDFLAGS="-Wl,-rpath /swadm/usr/local/python/2.6/lib" Also making sure I had created the directories beforehand ( as the link suggests) to avoid the errors expected. Now typing /swadm/usr/local/python/2.6/bin/python would start the correct python version 2.6.7. So I moved on to configuring and installing mod_wsgi. I configured it as ./configure --with-python=/swadm/usr/local/python/2.6/bin/python the Makefile that was created tries to install the module into /usr/lib64/httpd/modules and I have no write permissions there, so I modified the makefile to install into /swadm/etc/httpd/user-modules. (There might be a command argument but I could not figure it out). The module got created fine. A test wsgi script which I used was import sys def application(environ, start_response): status = '200 OK' output = 'Hello World!' output = output + str(sys.version_info) output = output + '\nsys.prefix = %s' % repr(sys.prefix) output = output + '\nsys.path = %s' % repr(sys.path) response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] And the output shown was, surprisingly Hello World!(2, 6, 5, 'final', 0) sys.prefix = '/swadm/usr/local/python/2.6' sys.path = ['/swadm/usr/local/python/2.6/lib64/python26.zip', '/swadm/usr/local/python/2.6/lib64/python2.6/', '/swadm/usr/local/python/2.6/lib64/python2.6/plat-linux2', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-tk', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-old', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-dynload']` So you see somehow the mod_wsgi module still got configured with the system's python 2.6.5 installation and not my custom one. 
I tried various things detailed in the mod_wsgi documentation Set WSGIPythonHome in httpd.conf to /swadm/usr/local/python/2.6 and WSGIPythonPath to /swadm/usr/local/python/2.6/lib/python2.6 Created a symlink in the python config directory to point to the libpython2.6.so file $ ln -s ../../libpython2.6.so When I do ldd libpython2.6.so this is what I see: $ ldd libpython2.6.so linux-vdso.so.1 => (0x00007fffc47fc000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b666ed62000) libdl.so.2 => /lib64/libdl.so.2 (0x00002b666ef7e000) libutil.so.1 => /lib64/libutil.so.1 (0x00002b666f182000) libm.so.6 => /lib64/libm.so.6 (0x00002b666f385000) libc.so.6 => /lib64/libc.so.6 (0x00002b666f609000) /lib64/ld-linux-x86-64.so.2 (0x00000031aba00000) And ldd mod_wsgi.so gives $ ldd /swadm/etc/httpd/user-modules/mod_wsgi.so linux-vdso.so.1 => (0x00007fff1ad6e000) libpython2.6.so.1.0 => /usr/lib64/libpython2.6.so.1.0 (0x00002af03aec7000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00002af03b270000) libdl.so.2 => /lib64/libdl.so.2 (0x00002af03b48c000) libutil.so.1 => /lib64/libutil.so.1 (0x00002af03b690000) libm.so.6 => /lib64/libm.so.6 (0x00002af03b893000) libc.so.6 => /lib64/libc.so.6 (0x00002af03bb17000) /lib64/ld-linux-x86-64.so.2 (0x00000031aba00000) I have been trying re-installing and re-configuring python and mod_wsgi but to no avail. Please let me know where I am going wrong. (Sorry for the very long post) TLDR; System with non-root access has default python installation. I am maintaining my own python and python modules. mod_wsgi configured and built with the custom python, still points to the system's python when I run a test script that prints out the sys version_info and path. UPDATE: On Browsing through the stackoverflow (should have done it earlier) I found this answer by Graham Dumpleton on mod_wsgi python2.5 ubuntu 11.04 problem which solved the error for me. Now when I do ldd mod_wsgi.so I see that it is linked to the correct shared library of python. 
I now installed Django and MySQLdb using my custom python install. And now I am facing this error: The following error occurred while trying to extract file(s) to the Python egg cache: [Errno 13] Permission denied: '/var/www/.python-eggs' The Python egg cache directory is currently set to: /var/www/.python-eggs Perhaps your account does not have write access to this directory? You can change the cache directory by setting the PYTHON_EGG_CACHE environment variable to point to an accessible directory. So I changed the value of PYTHON_EGG_CACHE by doing export PYTHON_EGG_CACHE=/swadm/var/www/.python-eggs, but I am still getting the same error. I am investigating more. Will update when I solve this. A: Egg cache issue solved by setting the environment variable in the WSGI script: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Access_Rights_Of_Apache_User or in the Apache configuration: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess Which of the latter two is used depends on whether you are using embedded mode or daemon mode.
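For reference, the WSGI-script variant of the fix from the first link can be sketched as below. This is a hedged illustration, not the poster's actual file: the egg-cache path is the one from the question, and the variable must be set before any egg-using package is imported.

```python
import os

# Hypothetical wrapper around the application's WSGI entry point.
# PYTHON_EGG_CACHE must point somewhere the Apache user can write,
# and must be set before any egg-using package gets imported.
os.environ['PYTHON_EGG_CACHE'] = '/swadm/var/www/.python-eggs'

def application(environ, start_response):
    body = ('egg cache: ' + os.environ['PYTHON_EGG_CACHE']).encode()
    start_response('200 OK', [('Content-type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

Setting it with export in a shell, as tried in the question, does not reach Apache's environment, which is why the in-script (or WSGIDaemonProcess python-eggs) approach is the one that works.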
Q: What is area in equation of lift? Does it remain constant during flight? I am a beginner & learning aerodynamics concepts. What is the area in the equation of lift, L = coefficient of lift x area x density x square of velocity / 2? Does an increase in angle of attack increase the area & thus increase lift? A: The area $A$ in the lift equation $L=\frac{1}{2} \rho AV^{2}C_{L}$ is a reference area, so you can choose the one you prefer, but some choices are more appropriate than others depending on what you want to compute. For example, if you are going to relate the lift coefficient with the drag one, the reference area should be the same; in this case a good choice is the projected area in the lift direction (aka planform area). It has the drawback that it depends on angle of attack/incidence, since the true area is the same but the lift direction changes. Another well-chosen area could be the true area (aka wetted area) when working with drag, especially when friction drag is dominant, considering that viscous stresses are proportional to this area. Here are some references where this is better explained: There are several different areas from which to choose when developing the reference area used in the drag equation. If we think of drag as being caused by friction between the air and the body, a logical choice would be the total surface area (As) of the body. If we think of drag as being a resistance to the flow, a more logical choice would be the frontal area (Af) of the body which is perpendicular to the flow direction. This is the area shown in blue on the figure. Finally, if we want to compare with the lift coefficient, we should use the same area used to derive the lift coefficient, the wing area, (Aw). Each of the various areas are proportional to the other areas, as designated by the "~" sign on the figure. 
Since the drag coefficient is determined experimentally, by measuring the drag and measuring the area and performing the necessary math to produce the coefficient, we are free to use any area which can be easily measured. If we choose the wing area, the computed coefficient has a different value than if we choose the cross-sectional area, but the drag is the same, and the coefficients are related by the ratio of the areas. In practice, drag coefficients are reported based on a wide variety of object areas. Size effects on drag - NASA A comment is in order regarding the reference area S in Eqs. (2.3) to (2.5). This is nothing other than just a reference area, suitably chosen for the definition of the force and moment coefficients. Beginning students in aerodynamics frequently want to think that S should be the total wetted area of the airplane. (Wetted area is the actual surface area of the material making up the skin of the airplane-it is the total surface area that is in actual contact with, i.e., wetted by, the fluid in which the body is immersed.) Indeed, the wetted surface area is the surface on which the pressure and shear stress distributions are acting; hence it is a meaningful geometric quantity when one is discussing aerodynamic force. However, the wetted surface area is not easily calculated, especially for complex body shapes. In contrast, it is much easier to calculate the planform area of a wing, that is, the projected area that we see when we look down on the wing. Aircraft performance and design - John D. Anderson
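The key point of both quotes, that the coefficient absorbs whatever reference area you pick, can be checked numerically. The values below are made up purely for illustration: for a fixed measured lift, switching reference areas rescales $C_{L}$ by the area ratio while the product $A \cdot C_{L}$ stays fixed.

```python
def lift_coefficient(lift, rho, area, velocity):
    """Invert L = 0.5 * rho * A * V^2 * C_L for C_L."""
    return lift / (0.5 * rho * area * velocity ** 2)

# Illustrative, made-up numbers (SI units).
rho, v, lift = 1.225, 50.0, 10000.0
planform, wetted = 16.0, 34.0   # two candidate reference areas

cl_planform = lift_coefficient(lift, rho, planform, v)
cl_wetted = lift_coefficient(lift, rho, wetted, v)

# Same physical lift either way; the coefficients differ only by
# the ratio of the chosen reference areas.
ratio_check = cl_planform / cl_wetted  # equals wetted / planform
```

This is exactly the NASA quote's statement that "the coefficients are related by the ratio of the areas".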
Q: Is it possible to view a list of what components you have worked on recently within the Tridion interface? I have had a request from a content editor on whether it is possible to see a list of components they have edited recently (maybe sortable by date). They have a large number of components to edit and thought it might be a good way to track which ones they have already worked on/completed. I have suggested they can use the publishing queue to see what pages they have published recently (and therefore what pages/components they have likely worked on), and I also suggested they could leave the ones they have not yet worked on unlocalised and only localise once they are working on that component (meaning they can simply search for unlocalised components to see the ones that are still to be worked on), but I am unaware of any way to do exactly what they want. Does anyone have any suggestions or is this not possible? A: I guess the easiest way to do this (i.e., without customization) is to create a search folder listing the components that were last modified by the user. Users can create this themselves by searching, then saving the search as a "Search Folder". Otherwise, if development is an option, create a custom page that lists items last modified by %current user% - using a search query to get the results. A: To be very frank, if I were a content author, I might not be happy with your suggestions. I might not even be happy with a custom solution (as others have also suggested) for the requirement that you have given. I would be in line with Nuno's suggestion of having a "Search Folder" for your search query in the Content Manager Explorer itself. 
If your search query is getting changed frequently, maybe you can train your content author to use the "Advanced Search" feature of the CME as shown below: The only downside (I am not sure if it is really a downside) is that your search indexes need to be maintained frequently, as per the SDL Tridion Installation and Maintenance document. I hope it helps
Q: Downloading Pyclip python module on Ubuntu I need to install the Pyclip module for python but I can't find directions on how to do this on Ubuntu. Does anyone here know? Thanks! A: Run these commands: sudo apt-get update sudo apt-get install python-clips
Q: Fast way to extract data from a list I used data=Table[{i,f[i]},{i,1,n}] to produce a list, here n is greater than 2^20 = 1048576. The function f(N) runs in time O(N*Log(N)), it is defined as: Mod[PowerMod[i,n,n]-i,n] (n is an argument in a function which use this) Now I want to give a table which shows the values of i that f(i) is 0, and another table for f(i) non-zero. I used zero = Select[data,#[[2]]==0&], but it is slow in the following sense: n=2^22, timing for data = 10.171, timing for zero = 4.508 n=2^23, timing for data = 21.606, timing for zero = 9.250 n=2^24, timing for data = 43.399, timing for zero = 17.971 n=2^25, timing for data = 84.209, timing for zero = 34.523 n=2^26, timing for data = 167.420, timing for zero = 71.885 The hardest computation is the data, But after that I want to have a much faster way to know the zeros of the function f. Of course I can use For or Do to append the zeros i each time f(i) is zero. But we know that AppendTo is slow, and For or Do is slower than Table. Is there any way to construct a list + exact data fast? Update: Thanks for all the suggestion. Here is a table of comparison. The green columns is to find i such that f[i]=0 and the white columns (excluding the 1st and 2nd column) is to find i such that f[i]!=0. The last 2 columns are in fact using "NonzeroPositions" (the last column) as mentioned by ubpdqn, then do the complement (the second last column). This method is faster. A: Positions of zeros and non-zeros: dataZeros = SparseArray[1 - Unitize[data[[All, 2]]]]["AdjacencyLists"]; // Timing dataNonZeros = SparseArray[Unitize[data[[All, 2]]]]["AdjacencyLists"]; // Timing (* {0.249602,Null} {0.374402,Null} *) That's for a 1M entry table, on a netbook. Just use the result with Part or Extract to get the actual data. 
The first for zero positions can also be done (with slightly faster results via verbosity) as: dataZeros = SparseArray[Unitize[data[[All, 2]]], Automatic, 1]["AdjacencyLists"] You can also use Pick[data, Unitize[data[[All, 2]]], 0] Pick[data, Unitize[data[[All, 2]]], 1] to do the same, and get datasets directly. For ideas on how the (sparsely documented) feature of SparseArray and things like Unitize have potentially huge performance benefits, search for the terms on the site, have a look an Mr. Wizard's posts regarding them, and see my answer here where using it made a problem solvable thousands of times more quickly. As far as creating it more quickly, short of going parallel with multiple kernels, doing it as Array[{#,f[#]}&,n] might save 10% or so in time. Edit: I've found this quite a bit faster for creation: n = 1000000 data = Table[{i, Mod[PowerMod[i, n, n] - i, n]}, {i, 1, n}]; // Timing data2 = Transpose[{l = Range[n], Mod[PowerMod[l, n, n] - l, n]}]; // Timing data == data2 (* {18.064916,Null} {8.205653,Null} True *) Timings again on lounge-netbook, curious what your results are...
Q: CORS issue - No 'Access-Control-Allow-Origin' header is present on the requested resource I have created two web applications - client and service apps.The interaction between client and service apps goes fine when they are deployed in same Tomcat instance. But when the apps are deployed into seperate Tomcat instances (different machines), I get the below error when request to sent service app. Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:8080' is therefore not allowed access. The response had HTTP status code 401 My Client application uses JQuery, HTML5 and Bootstrap. AJAX call is made to service as shown below: var auth = "Basic " + btoa({usname} + ":" + {password}); var service_url = {serviceAppDomainName}/services; if($("#registrationForm").valid()){ var formData = JSON.stringify(getFormData(registrationForm)); $.ajax({ url: service_url+action, dataType: 'json', async: false, type: 'POST', headers:{ "Authorization":auth }, contentType: 'application/json', data: formData, success: function(data){ //success code }, error: function( jqXhr, textStatus, errorThrown ){ alert( errorThrown ); }); } My service application uses Spring MVC, Spring Data JPA and Spring Security. 
I have included CorsConfiguration class as shown below: CORSConfig.java: @Configuration @EnableWebMvc public class CORSConfig extends WebMvcConfigurerAdapter { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("*"); } } SecurityConfig.java: @Configuration @EnableGlobalMethodSecurity(prePostEnabled = true) @EnableWebSecurity @ComponentScan(basePackages = "com.services", scopedProxy = ScopedProxyMode.INTERFACES) public class SecurityConfig extends WebSecurityConfigurerAdapter { @Autowired @Qualifier("authenticationService") private UserDetailsService userDetailsService; @Bean @Override public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Override protected void configure(AuthenticationManagerBuilder auth) throws Exception { auth.userDetailsService(userDetailsService); auth.authenticationProvider(authenticationProvider()); } @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/login").permitAll() .anyRequest().fullyAuthenticated(); http.httpBasic(); http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS); http.csrf().disable(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } @Bean public DaoAuthenticationProvider authenticationProvider() { DaoAuthenticationProvider authenticationProvider = new DaoAuthenticationProvider(); authenticationProvider.setUserDetailsService(userDetailsService); authenticationProvider.setPasswordEncoder(passwordEncoder()); return authenticationProvider; } } Spring Security dependencies: <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-config</artifactId> <version>3.2.3.RELEASE</version> </dependency> <dependency> <groupId>org.springframework.security</groupId> <artifactId>spring-security-web</artifactId> <version>3.2.3.RELEASE</version> </dependency> I am using Apache Tomcat server 
for deployment. A: CORS' preflight request uses HTTP OPTIONS without credentials, see Cross-Origin Resource Sharing: Otherwise, make a preflight request. Fetch the request URL from origin source origin using referrer source as override referrer source with the manual redirect flag and the block cookies flag set, using the method OPTIONS, and with the following additional constraints: Include an Access-Control-Request-Method header with as header field value the request method (even when that is a simple method). If author request headers is not empty include an Access-Control-Request-Headers header with as header field value a comma-separated list of the header field names from author request headers in lexicographical order, each converted to ASCII lowercase (even when one or more are a simple header). Exclude the author request headers. Exclude user credentials. Exclude the request entity body. You have to allow anonymous access for HTTP OPTIONS. Your modified (and simplified) code: @Override protected void configure(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers(HttpMethod.OPTIONS, "/**").permitAll() .antMatchers("/login").permitAll() .anyRequest().fullyAuthenticated() .and() .httpBasic() .and() .sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and() .csrf().disable(); } Since Spring Security 4.2.0 you can use the built-in support, see Spring Security Reference: 19. CORS Spring Framework provides first class support for CORS. CORS must be processed before Spring Security because the pre-flight request will not contain any cookies (i.e. the JSESSIONID). If the request does not contain any cookies and Spring Security is first, the request will determine the user is not authenticated (since there are no cookies in the request) and reject it. The easiest way to ensure that CORS is handled first is to use the CorsFilter. 
Users can integrate the CorsFilter with Spring Security by providing a CorsConfigurationSource using the following: @EnableWebSecurity public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { http // by default uses a Bean by the name of corsConfigurationSource .cors().and() ... } @Bean CorsConfigurationSource corsConfigurationSource() { CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(Arrays.asList("https://example.com")); configuration.setAllowedMethods(Arrays.asList("GET","POST")); UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } } A: Since Spring Security 4.1, this is the proper way to make Spring Security support CORS (also needed in Spring Boot 1.4/1.5): @Configuration public class WebConfig extends WebMvcConfigurerAdapter { @Override public void addCorsMappings(CorsRegistry registry) { registry.addMapping("/**") .allowedMethods("HEAD", "GET", "PUT", "POST", "DELETE", "PATCH"); } } and: @Configuration public class SecurityConfig extends WebSecurityConfigurerAdapter { @Override protected void configure(HttpSecurity http) throws Exception { // http.csrf().disable(); http.cors(); } @Bean public CorsConfigurationSource corsConfigurationSource() { final CorsConfiguration configuration = new CorsConfiguration(); configuration.setAllowedOrigins(ImmutableList.of("*")); configuration.setAllowedMethods(ImmutableList.of("HEAD", "GET", "POST", "PUT", "DELETE", "PATCH")); // setAllowCredentials(true) is important, otherwise: // The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. configuration.setAllowCredentials(true); // setAllowedHeaders is important! 
Without it, OPTIONS preflight request // will fail with 403 Invalid CORS request configuration.setAllowedHeaders(ImmutableList.of("Authorization", "Cache-Control", "Content-Type")); final UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource(); source.registerCorsConfiguration("/**", configuration); return source; } } Do not do any of below, which are the wrong way to attempt solving the problem: http.authorizeRequests().antMatchers(HttpMethod.OPTIONS, "/**").permitAll(); web.ignoring().antMatchers(HttpMethod.OPTIONS); Reference: http://docs.spring.io/spring-security/site/docs/4.2.x/reference/html/cors.html
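Why the ordering matters can be modelled outside Spring entirely: a preflight is an OPTIONS request carrying Origin and Access-Control-Request-Method headers but no credentials, so a pipeline that enforces authentication before CORS handling rejects it with 401. A minimal, framework-free sketch of that decision (hypothetical logic, written in Python only for illustration):

```python
def handle_request(method, headers, require_auth=True):
    """Toy request pipeline: the CORS preflight must be answered
    before authentication is enforced (hypothetical logic)."""
    is_preflight = (
        method == 'OPTIONS'
        and 'Origin' in headers
        and 'Access-Control-Request-Method' in headers
    )
    if is_preflight:
        # Preflights carry no cookies/credentials; answer them directly.
        return 200, {'Access-Control-Allow-Origin': headers['Origin']}
    if require_auth and 'Authorization' not in headers:
        return 401, {}   # the symptom seen in the question
    return 200, {}
```

Flipping the two if-blocks reproduces the question's failure: the credential-less OPTIONS request hits the auth check first and comes back 401, which is exactly why Spring's CorsFilter (or the permitAll OPTIONS matcher) must run before security.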
Q: Compiler Issue with Generics and Inheritance I have 2 classes with the following declarations: abstract class ClassBase<T, S> where T : myType where S : System.Data.Objects.DataClasses.EntityObject abstract class ServiceBase<T> where T : myType and I have 2 other classes, that inherit one from each, we can call ClassInherited and ServiceInherited. Note that the two Service classes are not in the same project as the other two. The idea is that in the ServiceBase class I can declare a property like protected ClassBase<T,System.Data.Objects.DataClasses.EntityObject> Class { get; set; } and then in the inherited service`s constructor something like this.Class = ClassInheritedInstance I already implemented the idea but it gives me this error when assigning the Class property in the ServiceInherited class constructor: Cannot implicitly convert type 'ClassInherited' to 'ClassBase< T, S>' Note that ClassInherited is indeed an specification of Class<T,S>... it's just that the compiler doesn't seem to be able to tell the types correctly. Also changing the declaration of the class property to protected ClassBase<T, EntityObjectInherited> works, and EntityObjectInherited is an implementation of System.Data.Objects.DataClasses.EntityObject... I don't see why is there a problem. Update 1 Note that at compile time the type of ClassInherited is known, as its declaration is public class ClassInherited : ClassBase<myTypeInherited, EntityObjectInherited> A: INITIAL ANSWER The reason that you cannot use protected ClassBase<T,S> Class { get; set; } in the ServiceInherited-class is that you do not know the S-type that is needed to declare a type of the property Class. You have to options: Include the S type in the specification of the Service-type: abstract class ServiceBase<T, S> where T : myType where S : System.Data.Objects.DataClasses.EntityObject Implement an interface for ClassBase with only the T-type, so that you can refer to a class-inherited-object without using the S-type. 
Then you CAN have a property in the service class (of the interface-type), since you do not need to specify the S-type. Note that generic-type-checking is not checked at run-time, but at compile-time. Else it wouldn't be strong-typing. UPDATE The reason the cast won't work is that type ClassBase<T, EntityObjectInherited> is not equal or castable to ClassBase<T, System.Data.Objects.DataClasses.EntityObject>. Covariance doesn't work on class-types, only on interface-types. I think the solution here is to work with interfaces. Use an interface for class-base, say IClassBase<T>. That way you can omit the S-type in the signature of the class, and only have it in the interface. UPDATE (2) One thing you can do is to create an interface for the Class property. You can define the following interface. public interface IClass<T> where T : myType { // TODO // Define some interface definition, but you cannot use the // EntityObject derived class, since they are not to be known // in the service class. } If you implement this interface on your ClassBase class, and add a constructor on your ServiceBase class which accepts an object of type IClass, then you can push this object to property Class in the base-class. Like this: public abstract class ClassBase<T, S> : IClass<T> where T : MyType where S : EntityObject { } public abstract class ServiceBase<T> where T : MyType { protected ServiceBase(IClass<T> classObject) { Class = classObject; } protected IClass<T> Class { get; set; } } public class ServiceInherited : ServiceBase<MyTypeDerived> { public ServiceInherited(IClass<MyTypeDerived> classObject) : base(classObject) { } } One thing to note, is not to expose the S-type of the ClassBase to the interface. Since you do not want the Service-classes to know this type, they cannot actively call any methods or use properties that somehow have the S-type in their definition.
Q: Cannot delete a docker image because repository is missing I am not able to delete a docker image library/memcached with tag 1.4.22. There are only three versions available for library/memcached on Docker Hub: 1.4.24, 1.4 and 1. When I try to delete it, it throws an error "repository not found". Since it's not a locally created image, removing everything from /var/lib/docker also did not help. I need to clean up docker from my server including all the mapped devices. Please help. A: Finally I was able to remove this image. As I mentioned, there was no source repository on Docker Hub for the memcached version I had, so it was not allowing me to delete it. I deleted the complete docker folder from /var/lib and restarted docker. It created the docker folder again with all the relevant folders.
Q: Call custom controller action from view when I click on a button Rails I'm struggling with a simple action call in Rails and I cannot find what is wrong and why many solutions don't work in my case. I should mention that I'm new to Rails, coming from the Java world. The problem is this: I want to have a button in my view which points to a controller action, an action that changes a column in a table. routes.rb post 'punch/userout' => 'punch#userout', :as => :userout view: punch\out.erb <%= link_to('Out', userout_path, method: :post) %> controller: punch_controller.rb class PunchController < ApplicationController before_filter :authorize_admin, only: :index layout 'application' layout false, :except => :new # GET method to get all products from database def index #@punchins = Punchin.all @filterrific = initialize_filterrific( Punchin, params[:filterrific] ) or return @punchins = @filterrific.find.page(params[:page]) respond_to do |format| format.html format.js end rescue ActiveRecord::RecordNotFound => e # There is an issue with the persisted param_set. Reset it. puts "Had to reset filterrific params: #{ e.message }" redirect_to(reset_filterrific_url(format: :html)) and return end # GET method for the new product form def new @punchin = Punchin.new if current_user.admin redirect_to root_path =begin elsif current_user.punched_in redirect_to punch_out_path =end end end # POST method for processing form data def create #@punchin.user_id = current_user.id #@punchin = Punchin.new(punch_params) @punchin = current_user.punchins.build(punch_params) @punchin.server_time = Time.now.strftime("%Y-%m-%d %H:%M") #@punchin.is_punched = true; #get current user from punchin @user = @punchin.user #set punched on user with true @user.punched_in = true; #update user @user.save #@punchin.user.punched_in = true; if @punchin.save flash[:notice] = 'Punched In!' 
# Tell the Punchinailer to send a notification email after save PunchinMailer.punchin_email(@punchin).deliver_later redirect_to punch_in_path else flash[:error] = 'Failed to edit Punch!' render :new end end # PUT method for updating in database a product based on id def update @punchin = Punchin.find(params[:id]) if @punchin.update_attributes(punch_params) flash[:notice] = 'Punchin updated!' redirect_to root_path else flash[:error] = 'Failed to edit Punchin!' render :edit end end # DELETE method for deleting a product from database based on id def destroy @punchin = Punchin.find(params[:id]) if @punchin.delete flash[:notice] = 'Punchin deleted!' redirect_to root_path else flash[:error] = 'Failed to delete this Punchin!' render :destroy end end private # we used strong parameters for the validation of params def punch_params params.require(:punchin).permit(:server_time, :address_geoloc, :work_type, :work_desc, :user_id) end def show # method level rendering @punchin = Punchin.find(params[:id]) end #when punched in def in end def userout if user_signed_in? current_user.update_attributes(:punched_in => false) else redirect_to new_user_session_path, notice: 'You are not logged in.' end end end And for info: one punch belongs_to :user and users has_many :punches I have a column in users table that says punched_in: true/false, and I only want that column to be set at false when I click on the link/button from view. I have tried many solution, with link_to, button_to, different routes etc. In this case, I get this error: The action 'userout' could not be found for PunchController In other cases, my button worked but cannot reach the action that I want. Thanks! A: Your action is private. Move it above the private line and it will work
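The failure mode in the accepted answer is plain Ruby method visibility, not Rails magic: everything defined after the bare private keyword is unreachable through an explicit receiver, which is why the dispatcher reports the action as not found. A Rails-free sketch (class and method names are made up):

```ruby
# Minimal model of a controller whose action sits below `private`.
class PunchLike
  def index
    'public action'
  end

  private

  def userout
    'hidden action'
  end
end

c = PunchLike.new
PUBLIC_RESULT = c.index        # fine: index is public

# Calling a private method with an explicit receiver raises
# NoMethodError, mirroring "The action 'userout' could not be found".
PRIVATE_ERROR = begin
  c.userout
  nil
rescue NoMethodError
  'NoMethodError'
end
```

Moving the method definition above the private keyword (or marking only the true helpers private) makes it dispatchable again.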
Q: Qt Creator. Why do I get an error about glut when compiling release version, but not debug? I'm rather new to Qt and I'm having trouble building a standalone executable for my application. It has OpenGL widgets and I've used the glut32 library. The thing is that I've been building it fine in debug mode, but following the steps given here: How to create executable file for a Qt Application? It gives an error when building it in release mode: "undefined reference to ___glutInitWithExit" It's the very same code. I don't understand why it is having trouble under release. Thanks in advance for any tips to help me build a standalone executable. A: For static single executables you will first need a commercial license for Qt. This being said, you can change the mkspecs in $QTDIR\mkspecs\$COMPILERVERSION\qmake.conf to use static linking. For example win32-msvc2008 (2008 Visual Studio Compiler) set CFLAGS and CXXFLAGS from QMAKE_CFLAGS_RELEASE = -O2 -MD to QMAKE_CFLAGS_RELEASE = -O2 -MT (MD => MT, MDd => MTd). Furthermore, you will need static libs of Qt and GLUT. These typically end in .lib (not .dll). You probably will need to compile Qt from source to get static-compatible libraries. When compiling Qt, remember to compile with opengl support. There is good documentation at http://qt-project.org/wiki/Build_Static_Qt_For_Windows_With_Gcc_German for gcc, and you can also find examples for MSVC.
Q: Ignore calls to dispatch_async when a request is being handled I have an iPhone app where based on some parameters an image gets recreated. Since this image recreation can take some time, I use a separate thread to create the image. dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0); // costly drawing UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); dispatch_async(dispatch_get_main_queue(), ^{ self.image = newImage; }); }); The parameters affecting the recreation of this image can vary faster than the image can get recreated, so I'd like to "pause" recreation when needed and only perform one such dispatch_async call at a time. Then as more requests to recreate the image arrive, only remember the last one (with the most up to date parameters), and as soon as the image recreation finished start one for those parameters. It doesn't matter that all the other calls are never done, the image would be overwritten anyway. What's the best way to achieve this? A: You may want to consider using NSOperationQueue since you can cancel existing queue items every time a new one is added. Using dispatch_async will run whatever you place in the block until completion (unless you suspend the entire queue) so there's not a great way of stopping prior queue items without setting some sort of a cancellation flag (in which case, they are just short-circuited but the block is still run to completion). NSOperationQueue is built on top of GCD so it provides the same backgrounding capabilities, it just gives you more control over the queue, which is what you need in this case. It can also be run concurrently on multiple threads, but you shouldn't need that.
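The coalescing behaviour the question asks for ("only remember the last one") can also be sketched platform-neutrally, independent of GCD or NSOperationQueue: a single worker thread that, each time it wakes, grabs only the newest pending item and silently drops stale ones. Illustrative Python, since the pattern itself is what matters here:

```python
import threading

class LatestOnlyWorker:
    """Process only the newest submitted item; stale ones are dropped.

    A sketch of the "coalesce to latest parameters" pattern from the
    question; names and API are invented for illustration.
    """
    def __init__(self, func):
        self.func = func
        self.pending = None           # slot holding only the latest request
        self.cond = threading.Condition()
        self.results = []
        self.done = False
        self.thread = threading.Thread(target=self._run)
        self.thread.start()

    def submit(self, item):
        with self.cond:
            self.pending = item       # overwrite any stale request
            self.cond.notify()

    def stop(self):
        with self.cond:
            self.done = True
            self.cond.notify()
        self.thread.join()

    def _run(self):
        while True:
            with self.cond:
                while self.pending is None and not self.done:
                    self.cond.wait()
                if self.pending is None and self.done:
                    return
                item, self.pending = self.pending, None
            # Do the slow work (image redraw, etc.) outside the lock.
            self.results.append(self.func(item))
```

The single "pending" slot is the whole trick: a burst of submissions collapses into at most one queued request, so the last-submitted parameters are always the ones that get rendered.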
Q: Is it possible for a Chrome Extension to get a user's full browsing history? I'm looking for a yes / no answer here. I've gathered that it's possible to get the pages linked to by 'back' and 'forward' buttons but I'm looking for a list of all the entries. I won't know the domains ahead of time, so if I needed to use a search parameter it would need to be a wildcard. I was hoping there'd be some HTML5 permission similar to geolocation or microphone / webcam recording, but I'm seeing Chrome Extensions as the promising approach to get these enhanced permissions. Not to bash other browsers, it's just what I personally use. A: Yes it is using the history api: https://developer.chrome.com/extensions/history chrome.history.search({text: '', maxResults: 10}, function(data) { data.forEach(function(page) { console.log(page.url); }); }); from: How to get browsing history using history API in Chrome extension
Q: How can I preload .mp3 in iOS while playing a song? Is there a way to preload a song (the next one in the game) while another one is already playing? I want the second song to start immediately after the first one. How can I achieve this effect? A: Create an AVQueuePlayer and queue up all the tracks you want. Set actionAtItemEnd = AVPlayerActionAtItemEndAdvance. This should then play them with no break. If you still get a break, try creating two separate AVPlayers, and then switch between them programmatically.
Q: Apply multiple functions with map I have 2D data that I want to apply multiple functions to. The actual code uses xlrd and an .xlsx file, but I'll provide the following boilerplate so the output is easy to reproduce. class Data: def __init__(self, value): self.value = value class Sheet: def __init__(self, data): self.data = [[Data(value) for value in row.split(',')] for row in data.split('\n')] self.ncols = max(len(row) for row in self.data) def col(self, index): return [row[index] for row in self.data] Creating a Sheet: fake_data = '''a, b, c, 1, 2, 3, 4 e, f, g, 5, 6, i, , 6, , , , , ''' sheet = Sheet(fake_data) In this object, data contains a 2D array of strings (per the input format) and I want to perform operations on the columns of this object. Nothing up to this point is in my control. I want to do three things to this structure: transpose the rows into columns, extract value from each Data object, and try to convert the value to a float. If the value isn't a float, it should be converted to a str with stripped white-space. from operator import attrgetter # helper function def parse_value(value): try: return float(value) except ValueError: return str(value).strip() # transpose raw_cols = map(sheet.col, range(sheet.ncols)) # extract values value_cols = (map(attrgetter('value'), col) for col in raw_cols) # convert values typed_cols = (map(parse_value, col) for col in value_cols) # ['a', 1.0, 'e', 5.0, '', ''] # ['b', 2.0, 'f', 6.0, 6.0, ''] # ['c', 3.0, 'g', 'i', '', ''] # ['', 4.0, '', '', '', ''] It can be seen that map is applied to each column twice. In other circumstances, I want to apply a function to each column more than two times. Is there a better way to map multiple functions to the entries of an iterable? Moreover, is there a way to avoid the generator comprehension and directly apply the mapping to each inner iterable? Or, is there a better and extensible way to approach this altogether?
Note that this question is not specific to xlrd; it is only the current use-case. A: It appears that the simplest solution is to roll your own function that will apply multiple functions to the same iterable. def map_many(iterable, function, *other): if other: return map_many(map(function, iterable), *other) return map(function, iterable) The downside here is that the argument order is reversed from map(function, iterable) and it would be awkward to extend map to accept arguments (like it can in Python 3.X). Usage: map_many([0, 1, 2, 3, 4], str, lambda s: s + '0', int) # [0, 10, 20, 30, 40] A: You can easily club the last two map calls together using a lambda: typed_cols = (map(lambda element: parse_value(element.value), col) for col in raw_cols) While you could similarly stick the parsing and extracting inside Sheet.col, IMO that would affect the readability of the code.
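A variation on the same idea, for readers who prefer to keep a single map call, is to compose the functions once and then map the composed function; a small sketch:

```python
from functools import reduce

def compose(*functions):
    """Compose left-to-right: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), functions, x)

def map_many(iterable, *functions):
    """Apply each function, in order, to every element of iterable."""
    return map(compose(*functions), iterable)
```

This keeps the recursive version's call signature while only walking the iterable once per element.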
Q: Hide background image in certain media query values I'm trying to hide the background image I'm using only in mobile view. html{ background: #fff url('//www.xxxxxxxxx.xxx/gray.jpg') center top no-repeat; background-attachment: initial; background-size: contain; background-position-y: 0; } But when I do this: @media (max-width: 425px){ background: transparent !important; } The background doesn't recognize the query rule set for HTML. How can I do this? What am I doing wrong? A: You must use a CSS selector inside the media query aswell. In this case add html { ... }. Try this: @media (max-width: 425px){ html { background: transparent !important; } }
Q: Why does operator= return an object of the same class instance it is called on? Suppose there is a class that manages some resource, and we have the following code for the assignment operator. Resource& Resource::operator=(const Resource& rhs) { this->someProperty = rhs.someProperty; return *this; } The question is: why do we need to return *this? We performed the assignment and everything is fine: Resource resource1; Resource resource2; resource1 = resource2; Why return the object itself as well? A: This is needed so that you can write: first = second = third If you declare void as the return type, you will not be able to assign the obtained result to anything. A small example: class Int { public : int Variable; void operator= (const int& rhs); }; void Int::operator= (const int& rhs) { this->Variable = rhs; } Int i, g; i.Variable = 10; g.Variable = 10; i = 30; // this works i = g = 30; // but this does not
Q: Retrieve analyzed tokens from ElasticSearch documents Trying to access the analyzed/tokenized text in my ElasticSearch documents. I know you can use the Analyze API to analyze arbitrary text according to your analysis modules. So I could copy and paste data from my documents into the Analyze API to see how it was tokenized. This seems unnecessarily time-consuming, though. Is there any way to instruct ElasticSearch to return the tokenized text in search results? I've looked through the docs and haven't found anything. A: This question is a little old, but I think an additional answer is necessary. With ElasticSearch 1.0.0 the Term Vector API was added, which gives you direct access to the tokens ElasticSearch stores under the hood on a per-document basis. The API docs are not very clear on this (it is only mentioned in the example), but in order to use the API you have to first indicate in your mapping definition that you want to store term vectors, with the term_vector property on each field. A: Have a look at this other answer: elasticsearch - Return the tokens of a field. Unfortunately it requires reanalyzing the content of your field on the fly using the script provided. It should be possible to write a plugin to expose this feature. The idea would be to add two endpoints: one to read the Lucene TermsEnum, like the Solr TermsComponent does, useful for making auto-suggestions too. Note that it wouldn't be per document, just every term on the index with term frequency and document frequency (potentially expensive with a lot of unique terms). And one to read the term vectors if enabled, like the Solr TermVectorComponent does. This would be per document but requires storing the term vectors (you can configure it in your mapping) and also allows retrieving positions and offsets if enabled. A: You may want to use scripting; however, your server should have scripting enabled.
curl 'http://localhost:9200/your_index/your_type/_search?pretty=true' -d '{ "query" : { "match_all" : { } }, "script_fields": { "terms" : { "script": "doc[field].values", "params": { "field": "field_x.field_y" } } } }' The default setting for allowing the script depends on the elastic search version, so please check that out from the official documentation.
Q: Is there an event for when features are selected in OpenLayers 3? http://ol3js.org/en/master/examples/select-features.html Given the above examples, what extension points are there for hooking into when features are selected? A: Here is a solution that might be more intuitive than Danny's, and also seems to be the "official" way, see this issue on ol3's GitHub. Simply add the listener to the collection of selected features: mySelectInteraction.getFeatures().on('change:length', function(e) { if (e.target.getArray().length === 0) { alert("no selected feature"); } else { var feature = e.target.item(0); alert(feature.getId()); //or do something better with the feature ! } }); A: You can bind a precompose event to your layer when a singleclick event is triggered on your map. From here you can dispatch a change event on your select interaction. yourmap.on('singleclick', function(event) { layer.once('precompose', function(event) { yourSelectInteraction.dispatchChangeEvent(); }); }); yourSelectInteraction.on('change', function() { /* Do stuff with your selected features here */ });
Q: .NET Config File: How to check if ConfigSection is present Consider: The line: <section name="unity" /> The block: <unity> <typeAliases /> <containers /> </unity> Say the line is present in the .config file while the block is missing. How can I programmatically check whether the block exists or not? [EDIT] For the geniuses who were quick to mark the question as negative: I have already tried ConfigurationManager.GetSection() and var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); var section = config.GetSection("unity"); var sInfo = section.SectionInformation; var isDeclared = sInfo.IsDeclared; Correct me if I'm mistaken, but the above does not return null if the <configSections> entry is defined (even though the actual unity block is missing). A: I found this post while searching for the answer myself. I thought I would come back and post the answer now that I have solved it. Since ConfigurationSection inherits from ConfigurationElement, you can use ElementInformation to tell if the actual element was found after deserialization. Use this method to detect whether the ConfigurationSection element is missing in the config file. The following method in ConfigurationSection comes from its inheritance of ConfigurationElement: //After Deserialization if(!customSection.ElementInformation.IsPresent) Console.WriteLine("Section Missing"); To determine if a certain element is missing, you can use the property within the section (let's pretend it's called 'PropName'), get PropName's ElementInformation property and check the IsPresent flag: if(!customSection.propName.ElementInformation.IsPresent) Console.WriteLine("Configuration Element was not found."); Of course if you want to check whether the <configSections> definition is missing, use the following method: CustomSection mySection = config.GetSection("MySection") as CustomSection; if(mySection == null) Console.WriteLine("ConfigSection 'MySection' was not defined."); -Hope this helps
Q: Parallel programming approach to solve pandas problems I have a dataframe of the following format. df A B Target 5 4 3 1 3 4 I am finding the correlation of each column (except Target) with the Target column using pd.DataFrame(df.corr().iloc[:-1,-1]). But the issue is that the size of my actual dataframe is (216, 72391), which takes at least 30 minutes to process on my system. Is there any way to parallelize it using a GPU? I need to find values of a similar kind multiple times, so I can't wait for the normal processing time of 30 minutes each time. A: Here, I have tried to implement your operation using numba import numpy as np import pandas as pd from numba import jit, int64, float64 # #------------You can ignore the code starting from here--------- # # Create a random DF with cols_size = 72391 and row_size =300 df_dict = {} for i in range(0, 72391): df_dict[i] = np.random.randint(100, size=300) target_array = np.random.randint(100, size=300) df = pd.DataFrame(df_dict) # ----------Ignore code till here. This is just to generate dummy data------- # Assume df is your original DataFrame target_array = df['target'].values # You can choose to restore this column later # But for now we will remove it, since we will # call the df.values and find correlation of each # column with target df.drop(['target'], inplace=True, axis=1) # This function takes in a numpy 2D array and a target array as input # The numpy 2D array has the data of all the columns # We find correlation of each column with target array # numba's Jit required that both should have same columns # Hence the first 2d array is transposed, i.e. it's shape is (72391,300) # while target array's shape is (300,) def do_stuff(df_values, target_arr): # Just create a random array to store result # df_values.shape[0] = 72391, equal to no.
of columns in df result = np.random.random(df_values.shape[0]) # Iterator over each column for i in range(0, df_values.shape[0]): # Find correlation of a column with target column # In order to find correlation we must transpose array to make them compatible result[i] = np.corrcoef(np.transpose(df_values[i]), target_arr.reshape(300,))[0][1] return result # Decorate the function do_stuff do_stuff_numba = jit(nopython=True, parallel=True)(do_stuff) # This contains all the correlation result_array = do_stuff_numba(np.transpose(df.T.values), target_array) Link to colab notebook.
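As an aside, the per-column np.corrcoef loop in the answer above can also be replaced by a single vectorized computation in plain NumPy, which is often fast enough without numba or a GPU; a sketch (the array shapes are assumptions matching the answer: rows are samples, columns are features):

```python
import numpy as np

def corr_with_target(X, y):
    """Pearson correlation of each column of X with vector y, vectorized.

    X: (n_samples, n_features) array; y: (n_samples,) array.
    Returns an array of n_features correlation coefficients.
    """
    Xc = X - X.mean(axis=0)            # center each column
    yc = y - y.mean()                  # center the target
    cov = Xc.T @ yc                    # unnormalized covariance per column
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return cov / denom
```

One matrix-vector product replaces tens of thousands of small corrcoef calls, so most of the work runs inside optimized BLAS.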
Q: how to apply security rules to specific collections I have a question about the security rules. I know that we can do this to prevent unauthorized users from modifying any node in Firestore service cloud.firestore { match /databases/{database}/documents { match /{document=**} { allow read, write: if request.auth.uid != null; } } } What if I want to remove this restriction from certain root collections only? I mean, let's say I have 2 root collections called Tracking and Incoming. Anyone can write to or read from those, as no authentication is required for them. But all other collections need read/write restricted to authenticated users only. How can I achieve that? A: Just call them out and give access. The most permissive rules will override all others. Here, everyone has full access to documents in the collection called all-access: service cloud.firestore { match /databases/{database}/documents { match /{document=**} { allow read, write: if request.auth.uid != null; } match /all-access/{id} { allow read, write: if true; } } } But you may want to consider if this is really a good idea. Anyone could jam billions of documents into the collection with these rules. Think carefully about what you want everyone to be able to do here.
Q: Sort Rails Model Attribute by Method I have a model that uses a method to display an attribute on a view. I've managed to get them sorted, but they are not sorting the way I need them to be sorted. I have the red quantities at the top, which I want, but I need the green and yellow quantities reversed. The order should be red, yellow then green. Here is the method that adds the colors the column: def get_quantity_text_class case when quantity_on_hand > reorder_quantity then 'text-success' when quantity_on_hand > p_level then 'text-warning' else 'text-danger' end end And here is the method that creates the column: def quantity_on_hand ppkb.sum(:quantity) end Here is the sort algorithm I'm using: sort_by{ |item| item.get_quantity_text_class } I feel like I'm so close, but I just can't figure out how to reverse the green and yellow numbers. A: It is currently sorted based on the string values text-danger, text-success and text-warning. To sort it the way you want, try sorting it based on numeric values: sort_by do |item| case item.get_quantity_text_class when 'text-danger' 0 when 'text-warning' 1 else 2 end end
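The same trick, mapping category strings to numeric ranks before sorting, works in any language; a hypothetical Python sketch of the idea (the class names mirror the Rails answer, the function name is made up):

```python
# Lower rank sorts first: red, then yellow, then green.
RANK = {'text-danger': 0, 'text-warning': 1, 'text-success': 2}

def sort_by_severity(items, get_class):
    """Sort items red-first, then yellow, then green.

    get_class(item) should return one of the keys in RANK;
    unknown classes sort last.
    """
    return sorted(items, key=lambda item: RANK.get(get_class(item), 3))
```

A dict lookup as the sort key is usually clearer than a case expression inside the sort, and adding a new category is a one-line change.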
Q: How many divisors of N end in 5 I need to know how to find how many divisors of N end in 5. In my exercise, I have $N=63'000 = 2^3*3^2*5^3*7$ and I can find the number of divisors of N using $(3+1)*(2+1)*(3+1)*(1+1)=96$ Among these 96 divisors, how many end in 5? How can I calculate this? Thank you so much A: Take out the $2$s, because when multiplied by $5$, the result will end with $0$. Find the number of divisors of $3^2\cdot7^1$, which is $(2+1)\cdot(1+1)=6$: $3^0\cdot7^0$ $3^1\cdot7^0$ $3^2\cdot7^0$ $3^0\cdot7^1$ $3^1\cdot7^1$ $3^2\cdot7^1$ Multiply each divisor by each one of the following $3$ powers of $5$: $5^1$ $5^2$ $5^3$ Hence you have $6\cdot3=18$ divisors which end with $5$: $3^0\cdot7^0\cdot5^1$ $3^1\cdot7^0\cdot5^1$ $3^2\cdot7^0\cdot5^1$ $3^0\cdot7^1\cdot5^1$ $3^1\cdot7^1\cdot5^1$ $3^2\cdot7^1\cdot5^1$ $3^0\cdot7^0\cdot5^2$ $3^1\cdot7^0\cdot5^2$ $3^2\cdot7^0\cdot5^2$ $3^0\cdot7^1\cdot5^2$ $3^1\cdot7^1\cdot5^2$ $3^2\cdot7^1\cdot5^2$ $3^0\cdot7^0\cdot5^3$ $3^1\cdot7^0\cdot5^3$ $3^2\cdot7^0\cdot5^3$ $3^0\cdot7^1\cdot5^3$ $3^1\cdot7^1\cdot5^3$ $3^2\cdot7^1\cdot5^3$
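The counting argument is easy to double-check by brute force; a short Python verification for N = 63000:

```python
def divisors_ending_in_5(n):
    """Return the divisors of n whose decimal representation ends in 5."""
    return [d for d in range(1, n + 1) if n % d == 0 and d % 10 == 5]
```

Enumerating all divisors of 63000 and keeping those ending in 5 should give exactly the 18 products listed in the answer.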
Q: django post_save causing IntegrityError - duplicate entry I need help to fix this issue: IntegrityError at /admin/gp/schprograms/add/ (1062, "Duplicate entry '65' for key 'PRIMARY'") I am trying to insert a row into a table SchProgramForStates (whenever a new entry gets added to the model SchPrograms) with two columns: state_id (taken from the Django session) and program_id (taken from the SchPrograms model class). It works fine when I only save the SchPrograms table, so I feel the problem is with the code below. Please help me fix this. @receiver(post_save, sender=SchPrograms, dispatch_uid="my_unique_identifier") def my_callback(sender, instance, created, *args, **kwargs): state_id = state_id_filter #it's a global variable if created and not kwargs.get('raw', False): pfst_id = SchProgramForStates.objects.create(program_id=instance.program_id, state_id=state_id) pfst_id.save(force_insert=True) A: if created and not kwargs.get('raw', False): try: pfst_id = SchProgramForStates.objects.create(program_id=instance.program_id, state_id=state_id) except: pass Try with a try block and see, or you can use a get_or_create method if created and not kwargs.get('raw', False): pfst_id = SchProgramForStates.objects.get_or_create(program_id=instance.program_id, state_id=state_id)
Q: Excel to CSV with UTF8 encoding I have an Excel file that has some Spanish characters (tildes, etc.) that I need to convert to a CSV file to use as an import file. However, when I do Save As CSV it mangles the "special" Spanish characters that aren't ASCII characters. It also seems to do this with the left and right quotes and long dashes that appear to be coming from the original user creating the Excel file in Mac. Since CSV is just a text file I'm sure it can handle a UTF8 encoding, so I'm guessing it is an Excel limitation, but I'm looking for a way to get from Excel to CSV and keep the non-ASCII characters intact. A: A simple workaround is to use Google Spreadsheet. Paste (values only if you have complex formulas) or import the sheet then download CSV. I just tried a few characters and it works rather well. NOTE: Google Sheets does have limitations when importing. See here. NOTE: Be careful of sensitive data with Google Sheets. EDIT: Another alternative - basically they use VB macro or addins to force the save as UTF8. I have not tried any of these solutions but they sound reasonable. A: I've found OpenOffice's spreadsheet application, Calc, is really good at handling CSV data. In the "Save As..." dialog, click "Format Options" to get different encodings for CSV. LibreOffice works the same way AFAIK. A: Save the Excel sheet as "Unicode Text (.txt)". The good news is that all the international characters are in UTF16 (note, not in UTF8). However, the new "*.txt" file is TAB delimited, not comma delimited, and therefore is not a true CSV. (optional) Unless you can use a TAB delimited file for import, use your favorite text editor and replace the TAB characters with commas ",". Import your *.txt file in the target application. Make sure it can accept UTF16 format. If UTF-16 has been properly implemented with support for non-BMP code points, that you can convert a UTF-16 file to UTF-8 without losing information. 
I leave it to you to find your favourite method of doing so. I use this procedure to import data from Excel to Moodle.
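The tab-to-comma and re-encoding steps of the accepted answer can be scripted; here is a sketch using only the Python standard library that converts Excel's UTF-16 "Unicode Text" export into a UTF-8 CSV (the file names are placeholders):

```python
import csv

def utf16_tsv_to_utf8_csv(src_path, dst_path):
    """Convert a UTF-16 tab-delimited file (Excel's 'Unicode Text'
    export) into a UTF-8 encoded, comma-delimited CSV file."""
    with open(src_path, encoding='utf-16', newline='') as src, \
         open(dst_path, 'w', encoding='utf-8', newline='') as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src, delimiter='\t'):
            writer.writerow(row)
```

Using the csv module rather than a plain tab-to-comma text replacement keeps fields that themselves contain commas correctly quoted.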
Q: Deleting an object in a SpringBoot 2.0.5.RELEASE app using Spring Data JPA I have a basic SpringBoot 2.0.5.RELEASE app. Using Spring Initializer, JPA, embedded Tomcat, Thymeleaf template engine, and package as an executable JAR file. I have this class: public class User implements Serializable { @OneToMany( cascade = CascadeType.ALL,orphanRemoval = true, fetch = FetchType.EAGER,mappedBy = "user") @JsonIgnore private List<Wallet> wallets = new ArrayList<Wallet>(); .. } and this one: public class Wallet implements Serializable { @ManyToOne(fetch = FetchType.EAGER) @JoinColumn(name = "invoice_id") @JsonIgnore @NotNull private Invoice invoice; @OneToMany(mappedBy = "wallet", cascade = CascadeType.ALL, orphanRemoval = true, fetch = FetchType.LAZY) @JsonIgnore private Set<Purchase> purchases = new HashSet<>(); @ManyToOne(fetch = FetchType.EAGER) @JoinColumn(name = "user_id" , nullable=false) @JsonIgnore private User user; .. } and this other one: public class Purchase implements Serializable { @ManyToOne(fetch = FetchType.EAGER) @JoinColumn(name = "wallet_id") @JsonIgnore Wallet wallet; ... 
} But when I delete a wallet from the controller (one that has an invoice and purchases and belongs to a user), the wallet is not deleted from the DB: walletService.delete(walletService.findById(id).get()); This is the service method: @Transactional public void delete(Wallet wallet) { if (LOG.isDebugEnabled()) { LOG.debug("deleting Wallet [ " + wallet + " ]"); } wallet .getPurchases() .parallelStream() .forEach(p -> purchaseService.delete(p)); walletRepository.delete(wallet); } and @Transactional public void delete (Purchase purchase ) { purchaseRepository.delete (purchase); } In the properties file: spring.jpa.show-sql=true and the last query I see in the console is this one: select purchases0_.wallet_id as wallet_i8_13_0_, purchases0_.id as id1_13_0_, purchases0_.id as id1_13_1_, purchases0_.amount as amount2_13_1_, purchases0_.wallet_id as wallet_i8_13_1_ from t_purchase purchases0_ where purchases0_.wallet_id=? and there is no DELETE, and no exceptions! A: try this in the controller: user.getWallets().remove(wallet); walletService.delete(wallet); userService.save(user);
Q: Perceptron branch predictor implementation in C I was reading the paper, http://www.cs.utexas.edu/~lin/papers/hpca01.pdf, on Dynamic Branch Prediction with Perceptrons. I was wondering how to implement the perceptron branch predictor in C if given a list of 1000 PC addresses (word addresses) and the 1000 actual outcomes of the branches, which are recorded in a trace line. Essentially, I want to use these traces to measure the accuracy of various predictors. The branch outcomes from the trace file should be used to train your predictors. Any suggestions? A: I think it's fairly simple. Sections 3.2 and 3.3 are all you really have to understand. Section 3.2 says the perceptron output is the sum of the past history bits multiplied by their weighting factors: #define SIZE_N 62 //or whatever, see section 5.3 float history[SIZE_N] = {0}; //Put branch history here, -1 not taken, 1 taken. float weight[SIZE_N] = {0}; //storage for weights float perceptron(void) { int i; float y = 0; for (i = 0; i < SIZE_N; i++) { y += weight[i] * history[i]; } return y; } Then in 3.3 the weighting factors come from training, which simply updates each weight based on a comparison with the actual past result: void train(float result, float y, float theta) //passed result of last branch (-1 not taken, 1 taken), and perceptron value { int i; if (((y < 0) != (result < 0)) || (fabsf(y) < theta)) { for (i = 0; i < SIZE_N; i++) { weight[i] = weight[i] + result * history[i]; } } } So all that's left is theta, which they tell you: float theta = (1.93 * SIZE_N) + 14; So the usage is: y = perceptron(); //make prediction: if (y < 0) predict_not_taken(); else predict_taken(); //get actual result result = get_actual_branch_taken_result(); //must return -1 not taken, 1 taken //train for future predictions train(result, y, theta); //Then you need to shift everything down, newest first, so iterate from the end.... for (i = SIZE_N - 1; i > 0; i--) { history[i] = history[i-1]; //weight[i] = history[i-1]; //toggle this and see what happens :-) } history[0] = result; //record the newest outcome - see section 3.2
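A quick way to sanity-check the mechanics above is to transcribe them into Python and feed the predictor a synthetic trace; this sketch follows the answer's algorithm (with a bias weight added, as in the paper), and is an illustration rather than the paper's exact implementation:

```python
class PerceptronPredictor:
    """Single-perceptron branch predictor sketch.

    History and outcomes are encoded as -1 (not taken) / 1 (taken).
    """

    def __init__(self, n=62):
        self.n = n
        self.weights = [0.0] * (n + 1)  # weights[0] is the bias weight
        self.history = [1] * n          # most recent outcome first
        self.theta = 1.93 * n + 14      # training threshold from the paper

    def predict(self):
        """Perceptron output; >= 0 means predict taken."""
        y = self.weights[0]
        for w, h in zip(self.weights[1:], self.history):
            y += w * h
        return y

    def update(self, outcome):
        """Train on the actual outcome (-1 or 1), then shift history."""
        y = self.predict()
        # Train on a misprediction, or while |y| is below the threshold.
        if (y < 0) != (outcome < 0) or abs(y) < self.theta:
            self.weights[0] += outcome
            for i in range(self.n):
                self.weights[i + 1] += outcome * self.history[i]
        # Newest outcome goes to the front of the history register.
        self.history = [outcome] + self.history[:-1]
```

On a strictly alternating taken/not-taken trace the weights quickly align with the pattern, so accuracy should approach 100% after a short warm-up.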
Q: Why isn't vector::operator[] implemented similarly to map::operator[]? Is there any reason for std::vector's operator[] to just return a reference instead of inserting a new element? The cppreference.com page for vector::operator[] says Unlike std::map::operator[], this operator never inserts a new element into the container. While the page for map::operator[] says "Returns a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist." Why couldn't vector::operator[] be implemented by calling vector::push_back or vector::insert, like how map::operator[] calls insert(std::make_pair(key, T())).first->second;? A: Quite simply: because it doesn't make sense. What do you expect std::vector<int> a = {1, 2, 3}; a[10] = 4; to do? Create a fourth element even though you specified index 10? Create elements 3 through 10 and return a reference to the last one? Neither would be particularly intuitive. If you really want to fill a vector with values using operator[] instead of push_back, you can call resize on the vector to create the elements before setting them. Edit: Or, if you actually want an associative container, where the index is important apart from ordering, std::map<int, YourData> might actually make more sense.
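The behavioral split between the two operators has a close analogue in Python's list and collections.defaultdict, which may make the design intuition concrete:

```python
from collections import defaultdict

# Like std::map::operator[]: indexing a missing key inserts a default.
d = defaultdict(int)
d[10] += 1        # key 10 is silently created with value 0, then set to 1

# Like std::vector: indexing never grows the container.
lst = [1, 2, 3]
try:
    lst[10] = 4   # out-of-range write
    grew = True
except IndexError:
    grew = False  # Python, like a bounds-checked vector access, refuses
```

A map has a natural default to insert and no "gap" problem; a sequence would have to invent elements 3 through 9 to honor index 10, which is exactly the ambiguity the answer points out.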
Q: NoClassDefFoundError static fields @Override protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException { String reqURI = req.getRequestURI(); reqURI = reqURI.replace(req.getContextPath(), ""); try { ServiceFactory factory = ServiceFactory.getInstance(); Service service = factory.getService(reqURI); service.doPost(req, resp); } catch (Exception e) { ROOT_LOGGER.error(e.getMessage(), e); throw new ServletException(e); } } When I try to get the ServiceFactory instance, I get a NoClassDefFoundError. It only happens after deploying the app. If I start it through IntelliJ, nothing goes wrong. What's the problem? public class ServiceFactory { private static final Map<String, Service> SERVICE_MAP = new HashMap<>(); private static final ServiceFactory SERVICE_FACTORY = new ServiceFactory(); private ServiceFactory() { init(); } private static void init() { SERVICE_MAP.put(LOGIN_PAGE_URI, new LoginService()); SERVICE_MAP.put(LOGOUT_PAGE_URI, new LogoutService()); SERVICE_MAP.put(SWITCH_LANGUAGE_URI, new SwitchLanguageService()); SERVICE_MAP.put(USERS_PAGE_URI, new AllUsersService()); SERVICE_MAP.put(REGISTRATION_PAGE_URI, new RegistrationService()); SERVICE_MAP.put(DELETE_USER_PAGE_URI, new DeleteUserService()); SERVICE_MAP.put(NEW_DOCUMENT_PAGE_URI, new NewDocumentService()); SERVICE_MAP.put(GET_FORM_AJAX_PAGE_URI, new GetFormAJAX()); } public static ServiceFactory getInstance() { return SERVICE_FACTORY; } public Service getService(String request) { return SERVICE_MAP.get(request); } A: From the info you are providing, you probably don't include the library that is causing the NoClassDefFoundError in your classpath: java -cp <add-jar-paths-with-file-separator> <class-to-run> <arguments> e.g. java -cp lib/the-jar-you-are-missing.jar;myapp.jar com.mypackage.MyClassWithMain arg1 For Linux, replace the ; character with : If you have a fixed path for your libraries you can also add the paths to your MANIFEST.MF Manifest-Version: 1.0 ...
Main-Class: com.mypackage.MyClassWithMain Class-Path: lib/the-jar-you-are-missing.jar https://docs.oracle.com/javase/7/docs/technotes/tools/windows/classpath.html Intellij won't do that for you automatically. Probably there is a way to do it but unfortunately I don't know that answer.
Q: Discard images from a group of similar images I am generating images (thumbnails) from a video every 3 seconds. Now I need to discard/remove all the similar images. Is there a way I could this? I generate thumbnails using FFMPEG. I read about various image-diff solutions like given in this SO post, but I do not want to do this manually. How and what parameters should be considered that could tell if a particular image is similar to other images present. A: You can calculate the Structural Similarity Index between images and based on the score keep or discard an image. There are other measures you can use, but basically a method that returns a score. Try PIL or OpenCV https://pillow.readthedocs.io/en/3.1.x/reference/ImageChops.html?highlight=difference https://www.pyimagesearch.com/2017/06/19/image-difference-with-opencv-and-python/
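Whatever scoring method is chosen (SSIM, perceptual hashing), the dedup loop itself is the same: keep a thumbnail only if it is sufficiently different from every thumbnail already kept. A stdlib-only sketch of that loop, operating on precomputed hash bit-strings (in practice the hashes would come from a library such as imagehash or an OpenCV comparison, which is an assumption here):

```python
def hamming(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def dedupe(hashes, max_distance=5):
    """Return indices of hashes to keep: drop any hash that is within
    max_distance bits of an already-kept one."""
    kept = []
    for i, h in enumerate(hashes):
        if all(hamming(h, hashes[j]) > max_distance for j in kept):
            kept.append(i)
    return kept
```

The max_distance threshold is the knob: a perceptual-hash distance of a few bits usually means "visually the same frame", so consecutive near-identical thumbnails collapse to one.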
Q: What's the second part of the word "colophon"? According to Wiktionary and Etymonline, I only find the ultimate Greek word "κολοφών", leading to my question. The first part of "colophon" is "colo-", which derives from PIE *kolən-, *koləm-. I want to find out the Greek or PIE root of "-phon", the second part of "colophon". colophon (Wiktionary) from Ancient Greek κολοφών (kolophon, “peak or finishing touch”) colophon (Etymonline) 1774, "publisher's inscription at the end of a book," from L. colophon, from Gk. kolophon "summit, final touch" (see hill). hill From Middle English, from Old English hyll (“hill”), from Proto-Germanic *hulliz (“stone, rock”), from Proto-Indo-European *kolən-, *koləm- (“top, hill, rock”). Cognate with Middle Dutch hille, hulle (“hill”), Low German hull (“hill”), Icelandic hóll (“hill”), Latin collis (“hill”), Old English holm (“rising land, island”). More at holm. A: I think you might be mistaken in splitting colophon into two parts. As you already noted, most sources agree that 'colophon' comes from the Greek city Kolophon, which in turn comes from the Greek work kolophon. Your research shows that kolophon might be related to (though not necessarily derived from) the PIE root for 'hill'. If you're familiar with Perseus, you might be able to get more info from this link. You might also be interested in a book called The Intriguing Derivation of the Word "Colophon". I don't actually have access to this title, so I can't vouch for its contents. Although it isn't relevant to the etymology of colophon, here's the answer to your original question: the -phon suffix is the same as the -phone suffix, which means "sound" and derives from the ancient Greek -ϕωνος. A few examples of this suffix in action: homophone, megaphone, microphone, and saxophone.
Q: Is SOLVIT (european question asked system) fast to answer? I've found these 2 resources which seems to be very interesting: http://ec.europa.eu/solvit/index_en.htm http://europa.eu/europedirect/index_en.htm I've contacted the 1st one (SOLVIT). If I understand properly it's free and it's from the European Commission. Are they fast? A: Summary: The goal is to reach a resolution (positive or negative) within 10 weeks. Actual performance varies but does not seem too far off from this objective. The EU website provides a brief explanation of how Solvit works. It also published a scoreboard with more details and a few stats on actual performance. Basically, there are 3 main steps, each with a specific target/performance standard: Response from your “home” Solvit centre (the centre you contacted): 7 days Case preparation by the “home” centre: 30 days Case handling by the “lead” centre (the centre in the country where the problem happened): 70 days Thus, the goal is to tell you whether your case is accepted within one week and to reach a resolution within 10 weeks. Achieving this target in more than 75% of the cases is considered a good performance by Solvit. An older report mentions an average handling speed of 58 days in 2007 and 69 days in 2008 but that's only an average, meaning that some cases took longer than that. One problem is that handling a case typically involves getting in touch with the local authorities so whether the problem gets resolved promptly depends a lot on how quick and cooperative the local authorities are and that can vary widely. Case complexity also matters obviously. Guessing a bit from your past history, if you submitted a case to the Italian centre about a problem in Austria, you should expect some delay for the initial contact (Italy is very bad in this respect), a quick case preparation (Italy is good there) and an average handling time (Austria is not very fast). 
The good news is that the resolution rate is 90% for Austria, so you should at least get some clarity on your situation (resolution does not necessarily mean that you get satisfaction, if it turns out that there was no breach of EU law).
{ "pile_set_name": "StackExchange" }
Q: How to change the background of a ScrollPane (JavaFX/ScalaFX)? I want to change the background color of a ScrollPane. This is part of my code where I try to do that: val sp=new javafx.scene.control.ScrollPane(new Group(new Text(...))) sp.setPannable(true) sp.setStyle("-fx-background-color: blue") sp.setBackground(new Background(Array(new BackgroundFill(Color.DARKCYAN,new CornerRadii(0),Insets(0))))) Text appears OK, but both attempts to change the background color have no effect, using: Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_05). Inspecting with Scenic View, I discover that two StackPanes have unexpectedly appeared in the scene graph below the ScrollPane, so the hierarchy is: ScrollPane //which I created StackPane //UNEXPECTED -- clips the content StackPane //UNEXPECTED -- full size content Group //which I created Text //which I created If I change the background of either of the StackPane-s to, say, "-fx-background-color: blue" (with Scenic View), it has effect, but not the style of the ScrollPane. But how to do that from code? If I do println(sp.content()) , it says Group@567fa81a Is there a simple way to access the StackPanes or change the background? I could "slap in" a big filled rectangle, but that seems ugly and complicates resizing, what is wrong with the background proper? A: sp.setStyle("-fx-background: blue") instead of: sp.setStyle("-fx-background-color: blue")
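For completeness, the same fix can also live in an external stylesheet instead of setStyle. This is only a sketch based on the default ScrollPane skin's documented style classes (.scroll-pane and its .viewport substructure); adjust if your skin differs:

```css
/* -fx-background is the looked-up color the default ScrollPane skin
   paints behind the content (unlike -fx-background-color). */
.scroll-pane {
    -fx-background: blue;
}

/* Alternatively, make the skin's inner StackPanes transparent and
   style the content node yourself. */
.scroll-pane > .viewport {
    -fx-background-color: transparent;
}
```

Attach the stylesheet with scene.getStylesheets().add(...) as usual.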
{ "pile_set_name": "StackExchange" }
Q: Why can't I construct a string array like this in Java? I am trying to initialize a string array like below but it has an error. public class Account{ private String[] account; public Account() { account = {"A", "B", "C"}; } } Does anyone know why it keeps creating an error? A: The correct syntax to use inside the constructor is account = new String[]{"A", "B", "C"}; The shortcut syntax you are trying to use is only permitted at the point of declaration: private String[] account = {"A", "B", "C"}; As to why the distinction, see Why can array constants only be used in initializers?
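For reference, a compilable sketch showing both legal forms side by side (the getter and main method are added here purely for demonstration):

```java
public class Account {
    // shortcut form: only allowed at the point of declaration
    private String[] defaults = {"A", "B", "C"};

    private String[] account;

    public Account() {
        // inside a constructor (or any method) the array type must be named
        account = new String[]{"A", "B", "C"};
    }

    public String[] getAccount() {
        return account;
    }

    public static void main(String[] args) {
        Account acct = new Account();
        System.out.println(acct.getAccount().length); // 3
    }
}
```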
{ "pile_set_name": "StackExchange" }
Q: When to see autumn colours in and around Kerlingarfjöll, Iceland? I'm considering heading to Kerlingarfjöll. I thought it was pretty barren, which would mean it doesn't matter much whether one travels in summer or autumn, but it looks like it does get some mosses. What would be the optimal time of year to experience autumn colours in Kerlingarfjöll and surrounding areas? A: I would say from late August to mid-September. However, the exact timing, the color intensity and the duration vary greatly from year to year and are impossible to predict. If you want to maximize your chances of seeing autumn colors, I'd say take a 2-week trip in the first half of September, while keeping in mind that a hot and dry summer, if one should occur, will make the colors appear earlier. There are a lot of beliefs about autumn colors, even among locals. For example, temperatures below the freezing point have nothing to do with autumn colors, although you keep hearing such claims over and over again. One good option is to ask people who actually live off of autumn colors and have some responsibility for getting the timing right. There are quite a few companies organizing photography trips in Iceland; calling a few of them and asking specifically about Kerlingarfjöll and this year might give you a better idea of how to time your trip.
{ "pile_set_name": "StackExchange" }
Q: How to search a string from a file and replace it to another with Gulp I'd like to use Gulp to get a selector from a file test1.js and replace it in test2.js test1.js @Component({ selector: 'app-root', ... }) export class AppComponent {} So, I want to get "app-root" from test1.js test2.js var selector = "#selector" function() {} In test2.js, I want to replace "#selector" with "app-root". I know how to replace a string by another string in a file: gulp.src('test2.js', {base: 'src/'}) .pipe(replace(new RegExp(/#selector/, 'g'), (match, p1) => { return 'app-root'; })) .pipe(gulp.dest('dist')); But I don't know how to pass "app-root" to the stream replace. How could I do that with Gulp? Thanks! A: I found how to do that with map-stream var gulp = require('gulp'); var map = require('map-stream'); var fs = require('fs'); gulp.task('default', function() { gulp.src('./test1.js') .pipe(map(function(file, callback) { var test = file.contents.toString().match(/selector: '(.*)'/i); fs.readFile('./test2.js', function read(err, data) { if (test) { fs.writeFile('./test2.js', data.toString().replace('#selector', test[1]), function(err) { if (err) throw err; }); } callback(null, file); }); })); });
{ "pile_set_name": "StackExchange" }
Q: How to Plot a Heatmap in ggplot with "staggered" points I am trying to graphically represent "heat" data for each of these points, i.e. a univariate integer for each position. The image is: I don't have the data yet but I will have it like this where, agreed with the data acquirer, I staggered the positions on the x-axis to help me know spatially where I am looking at (where x and y are effectively co-ordinates and T is the temp). x y T 1 1 5 3 1 5 5 1 6 7 1 5 9 1 6 11 1 7 2 2 7 4 2 5 6 2 4 8 2 5 10 2 6 1 3 7 3 3 8 5 3 8 7 3 7 9 3 8 11 3 9 2 4 9 4 4 13 6 4 13 8 4 9 10 4 9 How can I best visually represent this heat map using ggplot (or a similar tool), please? I was just going to have boxes with blank spaces (as, for example, (1,2) doesn't exist) but the project team doesn't want that! I don't care if the points are round, square/rectangle is fine. Thanks in advance and hopefully my question was clear. A: I would update @TTNK's answer to use round points, like your barrels, and to fix the coordinates to match circular packing, not squares. ggplot(df, aes(x, y, fill = Temp)) + geom_point(shape = 21, size = 21, stroke = 3) + scale_y_continuous(expand = c(0.25,0)) + scale_x_continuous(breaks = 1:11, expand = c(0.25,0)) + coord_fixed(ratio = tan(pi/3)) + theme_classic() While this gets you closer to your schematic, you have to fiddle with the size = and stroke = arguments to get your barrels just touching. If you want that to be automatic and at the right aspect ratio, but don't care as much about roundness, go with geom_hex: ggplot(df, aes(x, y, fill = Temp)) + geom_hex(stat = "identity", colour = "white") + scale_y_continuous(expand = c(0.25,0)) + scale_x_continuous(breaks = 1:11, expand = c(0.25,0)) + coord_fixed(ratio = tan(pi/3)) + theme_classic()
{ "pile_set_name": "StackExchange" }
Q: how to maintain add to cart session My app is basically used for scanning QR codes. First, when the user scans the QR code, the product image, description, price and an add-to-cart button are displayed. When the user clicks the add-to-cart button, it proceeds to another activity where he/she sets the product quantity; on the same activity a more-products icon is set. When the user clicks that icon, more product images are displayed, and here also an add-to-cart button is placed; clicking that button proceeds to another activity, where he/she sets the quantity. My problem is that the add-to-cart session is not maintained: the previously added products are not displayed in his/her shopping bag, only the product selected presently is stored, not the previous ones. Please help me, how could I do this? A: you asked a kind of vague question, so I assume you are looking for a strategy rather than specific code. There are two ways that I can think of that you could approach this. Basically you are looking to maintain information between activities. You could use an application class that stores your clicked items, or you could save the session activity into an sqlite database. I recommend the first option, the application level class. Basically this is a class that sits above all your activities, at application level, and is accessible to all of them as well as to services and broadcast receivers etc. I think of it as being equivalent to using SESSION variables in php. Here's a link to an article on global variables in android via applications: http://trace.adityalesmana.com/2010/08/declare-global-variable-in-android-via-android-app-application/
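The application-class idea can be reduced to plain Java to see the mechanics: one shared object, created once, that every screen reads from and writes to. On Android this state would typically live in a subclass of android.app.Application (or a singleton it owns); the class and method names below are illustrative only, not from any Android API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Plain-Java sketch of application-level cart state shared across screens.
public class CartSession {
    private static final CartSession INSTANCE = new CartSession();
    private final List<String> items = new ArrayList<>();

    private CartSession() { }

    public static CartSession get() {
        return INSTANCE;
    }

    // called from the "add to cart" button of any activity
    public void addItem(String productId) {
        items.add(productId);
    }

    // called from the shopping-bag activity: sees everything added so far
    public List<String> items() {
        return Collections.unmodifiableList(items);
    }
}
```

Because every activity goes through the same instance, the second add-to-cart screen sees what the first one stored.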
{ "pile_set_name": "StackExchange" }
Q: How to split a result from a select column in MySQL into multiple columns I have a column in a MySQL database and would like to extract that column's data and split it into multiple columns. Here is a sample of the data that I would like to split ``` {"1744":"1","1745":"1","1747":"1","1748":"1","1749":"1","1750":"1"} {"1759":"1"} {"47":"1","48":"Ehebr","49":"1479977045596.jpg"} ``` I would like to split that into two columns like so with the first data: as you notice this data comes in different lengths and I would like to be able to split any length of data. I had a look here [How to split a resulting column in multiple columns but I don't think that is what I want; the result I got there was like so. I would also like to trim all the other braces and quotes on the data. Here is my code so far ``` SELECT combined,SUBSTRING_INDEX( combined , ':', 1 ) AS a, SUBSTRING_INDEX(SUBSTRING_INDEX( combined , ':', 2 ),':',-1) AS b, SUBSTRING_INDEX(SUBSTRING_INDEX( combined , ':', -2 ),':',1) AS c, SUBSTRING_INDEX( combined , ':', -1 ) AS d FROM tablefoo WHERE combined is not null; ``` A: If you can live with procedures and cursors drop procedure if exists p; delimiter // CREATE DEFINER=`root`@`localhost` PROCEDURE `p`( IN `instring` varchar(255) ) LANGUAGE SQL NOT DETERMINISTIC CONTAINS SQL SQL SECURITY DEFINER COMMENT '' begin declare tempstring varchar(10000); declare outstring varchar(100); declare c1 varchar(100); declare c2 varchar(100); declare checkit int; declare done int; DECLARE CUR1 CURSOR for SELECT t.col FROM T; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE; drop table if exists occursresults; create table occursresults (col1 varchar(20), col2 varchar(20)); open CUR1; read_loop: LOOP FETCH CUR1 INTO tempstring; if done then leave read_loop; end if; set tempstring = replace(tempstring,'{',''); set tempstring = replace(tempstring,'}',''); set tempstring = replace(tempstring,'"',''); set checkit = 0; #select tempstring; looper: while tempstring is not null and
instr(tempstring,',') > 0 do set checkit = checkit + 1; if checkit > 100 then #In case of infinite loop leave looper; end if; set outstring = substr(tempstring,1,instr(tempstring, ',') - 1); set tempstring = ltrim(rtrim(replace(tempstring,concat(outstring,','),''))); set c1 = substr(outstring,1,instr(outstring, ':') - 1); set c2 = replace(outstring,concat(c1,':'),''); INSERT INTO OCCURSRESULTS (COL1,COL2) VALUES (c1,c2); # select tempstring,outstring,c1,c2; end while; #select tempstring; set outstring = tempstring; set c1 = substr(outstring,1,instr(outstring, ':') - 1); set c2 = replace(outstring,concat(c1,':'),''); INSERT INTO OCCURSRESULTS (Col1,Col2) VALUES (c1,c2); end loop; close cur1; end // delimiter ; MariaDB [sandbox]> select * from t; +---------------------------------------------------------------------+ | col | +---------------------------------------------------------------------+ | {"1744":"1","1745":"1","1747":"1","1748":"1","1749":"1","1750":"1"} | | {"1759":"1"} | | {"47":"1","48":"Ehebr","49":"1479977045596.jpg"} | +---------------------------------------------------------------------+ 3 rows in set (0.00 sec) MariaDB [sandbox]> MariaDB [sandbox]> call p(1); Query OK, 0 rows affected (0.65 sec) MariaDB [sandbox]> MariaDB [sandbox]> SELECT * FROM OCCURSRESULTS; +------+-------------------+ | col1 | col2 | +------+-------------------+ | 1744 | 1 | | 1745 | 1 | | 1747 | 1 | | 1748 | 1 | | 1749 | 1 | | 1750 | 1 | | 1759 | 1 | | 47 | 1 | | 48 | Ehebr | | 49 | 1479977045596.jpg | +------+-------------------+ 10 rows in set (0.00 sec)
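Worth noting: the sample values are themselves valid JSON objects, so outside of a pure-SQL solution the split is a one-liner; newer MySQL versions (5.7+) also ship built-in JSON functions that may fit. A sketch in Python, assuming the rows are fetched as plain strings:

```python
import json

# the three sample values from the question, as fetched strings
rows = [
    '{"1744":"1","1745":"1","1747":"1","1748":"1","1749":"1","1750":"1"}',
    '{"1759":"1"}',
    '{"47":"1","48":"Ehebr","49":"1479977045596.jpg"}',
]

# flatten each JSON object into (col1, col2) pairs, mirroring occursresults
pairs = [(key, value) for row in rows for key, value in json.loads(row).items()]

print(pairs[0])    # ('1744', '1')
print(len(pairs))  # 10
```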
{ "pile_set_name": "StackExchange" }
Q: In XSLT how do you test to see if a variable exists? When using XSLT how do you test to see if a locally scoped variable exists, or is this even possible? A: Considering the XSLT stylesheet as an XML DOM, a variable declaration element makes the variable visible to all following siblings and their descendants. This allows XSLT processors to statically analyze any XPath containing a variable reference to see if the variable exists; if the variable declaration exists on the preceding-sibling or ancestor axis, the variable reference is legal, otherwise it's not. Note that this is entirely dependent on the structure of the XSLT, not the structure of the XML it's processing. The XSLT processor can and should produce an error if an XPath expression uses a variable that doesn't exist. There's no way to check for this condition inside XSLT because this condition isn't legal within XSLT. The situation you described in your comment - "The idea is to set a flag variable if something is output and later on display a different message if nothing was output." - actually should result in a syntax error. For instance, if you do something like this: <xsl:if test="some_condition"> <!-- produce output here --> <xsl:variable name="flag">true</xsl:variable> </xsl:if> <!-- time passes --> <xsl:if test="$flag='true'"> <!-- wouldn't it be nice? --> </xsl:if> you'll get a syntax error: the second xsl:if element is neither a following sibling of the variable declaration nor one of their descendants.
Here's a technique I use a fair amount - this produces variable output based on a variety of different conditions that you don't want to re-check later: <xsl:variable name="output"> <xsl:if test="$condition1='true'"> <p>condition1 is true</p> </xsl:if> <xsl:if test="$condition2='true'"> <p>condition2 is true</p> </xsl:if> <xsl:if test="$condition3='true'"> <p>condition3 is true</p> </xsl:if> </xsl:variable> <!-- we've produced the output, now let's actually *output* the output --> <xsl:copy-of select="$output"/> <!-- time passes --> <xsl:if test="normalize-space($output) != ''"> <p>This only gets emitted if $output got set to some non-empty value.</p> </xsl:if> A: Asking this question indicates that you did not fully grasp the key point of XSLT. :-) It's declarative: nothing can exist unless you declare it. You declare a variable, then it's there, you don't, then it's not. Not once will there be the point where you have to wonder, while coding, if a certain variable exists. XSLT has strict scoping rules, variables exist only within the scope of their parent element, (and not all elements can contain variables to begin with). Once you leave the parent element, the variable is gone. So unless you specify your question/intent some more, the only valid answer is that the question is wrong. You cannot and do not need to check if a variable exists at run-time.
{ "pile_set_name": "StackExchange" }
Q: R-naming the column output from lapply and replace I have ten age columns in my data frame named similarly (i.e. agehhm1, agehhm2, …, agehhm10) that should hold age in years for a person. Currently, they are all strings as some observations include the words "month", "mos", etc. as some people are less than 1 year old. I am trying to use lapply to loop through these columns and replace observations that include these string patterns with a "0" value. I am close but am getting stuck on how to name the new columns I want to assign the lapply output to. I am trying setNames. I am not getting an error, but nothing is changing in my dataframe. I am trying the following. I store the 10 age columns in an object "hhages_varnames". Then I apply lapply to this list of objects, and replace the applicable obs in each one with 0 if I find any of the "month" text patterns. I am trying to create new columns named agehhm1_clean, etc as output. I am open to any other methods that you think are better for any part of this. hhages_varnames is just an object where I store the names of the 10 age columns. So it is just a 1:10 vector with "agehhm1" "agehhm2",..."agehhm10". hhages_varnames <- ls(dataframe_name, pattern = "agehhm.*") setNames(lapply(hhages_varnames, FUN = function(x) (replace(x, grepl("month|MO|mos|days|months", dataframe_name[,x]),"0"))), paste(names(hhages_varnames),"clean", sep="_")) A: UPDATE: Here is the final code that I got to do what I was wanted. It worked to use as.data.frame to make the vectors into a data frame. I also ended up using cbind to add the new columns to my existing dataframe. Thank you! hhages_varnames <- ls(dataframe_name, pattern = "agehhm.*") dataframe_name <- cbind(dataframe_name, setNames(as.data.frame(lapply(hhages_varnames, FUN = function(x) (replace(dataframe_name[,x], grepl("month|MO|mo|days|months", dataframe_name[,x]),"0")))), paste0(as.list(hhages_varnames),"clean")))
{ "pile_set_name": "StackExchange" }
Q: Javascript auto calculate time past 00:00 (midnight) in decimal format I am trying to auto-calculate a time difference and it all works OK if the time difference is on the same day. example starts ends hours 08:00 12:00 4.0 problem: 22:00 01:00 gives as a result -21.0 hours, which is unacceptable; it should be 3.0 hours Source code: FIDDLE LINK <div class="container"> <table id="t1" class="table table-hover"> <tr> <th class="text-center">Start Time</th> <th class="text-center">End Time</th> <th class="text-center">Stunden</th> </tr> <tr id="row1" class="item"> <td><input name="starts[]" class="starts form-control" ></td> <td><input name="ends[]" class="ends form-control" ></td> <td><input name="stunden[]" class="stunden form-control" readonly="readonly" ></td> </tr> <tr id="row2" class="item"> <td><input name="starts[]" class="starts form-control" value="22:00"></td> <td><input name="ends[]" class="ends form-control" value="01:00"></td> <td><input name="stunden[]" class="stunden form-control" readonly="readonly" ></td> </tr> </table> </div> js $(document).ready(function(){ $('.item').keyup(function(){ var starts = $(this).find(".starts").val(); var ends = $(this).find(".ends").val(); var stunden = NaN; s = starts.split(':'); e = ends.split(':'); min = e[1]-s[1]; hour_carry = 0; if(min < 0){ min += 60; hour_carry += 1; } hour = e[0]-s[0]-hour_carry; min = ((min/60)*100).toString() if (hour < 0) { hour += 24; } stunden = hour + "." + min.substring(0,2); if (!isNaN(e[1])){ // && (hour > 0) && (hour < 24) $(this).find(".stunden").val(stunden); } }); }); Code Edited, now it works. A: You already know the solution because you use it for the minutes: if(min < 0){ min += 60; hour_carry += 1; } You have to do the same thing for hours: if (hour < 0) { hour += 24; }
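The same wrap-around logic can be folded into one standalone helper by working in total minutes, so the minute carry and the midnight carry fall out of a single adjustment (the function name and the one-decimal rounding are my own choices, not part of the original code):

```javascript
// Assumes "HH:MM" strings; returns decimal hours, wrapping past midnight.
function hoursBetween(starts, ends) {
  const [sh, sm] = starts.split(':').map(Number);
  const [eh, em] = ends.split(':').map(Number);
  let minutes = (eh * 60 + em) - (sh * 60 + sm);
  if (minutes < 0) minutes += 24 * 60; // the shift ended on the next day
  return Math.round((minutes / 60) * 10) / 10; // one decimal place
}

console.log(hoursBetween('08:00', '12:00')); // 4
console.log(hoursBetween('22:00', '01:00')); // 3
```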
{ "pile_set_name": "StackExchange" }
Q: Django - ModelForm to template not submitting data My django template is not submitting data! I know it is a very basic thing but there is something I cannot realize here! My Model is: class project(models.Model): Project_Name = models.CharField(max_length=50) And my ModelForm is: class create_project(forms.ModelForm): class Meta: model = project fields = ['Project_Name'] views.py def project_create_view(request): form = create_project(request.POST or None) msg = '' if form.is_valid(): form.save() msg = 'Data Submitted' form = create_project() return render(request, 'create_project.html', {'form':form, 'msg':msg}) And my template is: <form action="" method="POST"> {% csrf_token %} <table border="1"> <tr> <td> <div> <label for="id_Project_Name">Project Name</label> <input type="text" name="Project_Name" id="id_Project_Name"> </div> </td> </tr> </table> <input type="submit" value="Submit"> </form> My context dict is 'form'. I tried so many ways and searched online but no luck, can anyone help?... I haven't pasted the whole project as the case is similar for the rest of the fields. A: I managed to solve it this way after taking support from one of my experienced friends: <td> <div> <label for="{{form.Project_Name.name}}">Project Name</label> <input type="text" name="Project_Name" id="{{form.Project_Name.name}}"> </div> </td>
{ "pile_set_name": "StackExchange" }
Q: Tkinter Gui For Astar algorithm I'm trying to make the input maze of the Astar algorithm (an algorithm to find the shortest path between start and destination, where there can be some blockages within the maze; it takes as input a maze representing blockages only, as shown below). From the GUI using the Click1 command in each button, I intend to get an output like this (where I inserted a blockage at [3][2]). 1 represents a blockage which is to be avoided to find the path from start to end. [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] but I get an output like the following; I can't understand why it's blocking the same column of each and every row [[0, 1, 0, 0, 0], [0, 1, 0, 0, 0], [0, 1, 0, 0, 0], [0, 1, 0, 0, 0], [0, 1, 0, 0, 0]] I created the maze in the init(): of class App() using this: def __init__(self, master,dimension,indexes): self.maze=[[0]*self.dimension]*self.dimension this entire thing is within a class App(): for creating the grid of buttons, and storing their reference self.gid = [] for i in range(self.dimension): row = [] Grid.rowconfigure(self.frame1, i + 1, weight=3) for j in range(self.dimension): Grid.columnconfigure(self.frame1, j + 1, weight=3) btn=Button(self.frame1,command=lambda i=i, j=j: self.Click1(i, j)) btn.grid(sticky=N+S+E+W,padx=2,pady=2,ipadx=1,ipady=1) row.append(btn) row[-1].grid(row=i + 1, column=j+1) self.gid.append(row) the Click1 method/Command that is also within this class: def Click1(self, i, j): self.indxes.append((i,j)) if len(self.indxes)==1: self.gid[i][j]["bg"]="blue" #indicates start elif len(self.indxes)==2: self.gid[i][j]["bg"]="green" #indicates destinations else: self.gid[i][j]["bg"] = "black" self.maze[i][j] = 1 #how I insert blockage within the maze A: Try this in your init: def __init__(self, master,dimension,indexes): self.maze = [[0] * self.dimension for _ in range(self.dimension)] The latter * self.dimension call was assigning the same reference to all your inner lists (dimension number of times), meaning when one is changed all will change. The comprehension creates a unique list for each sublist.
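The aliasing difference is easy to demonstrate in isolation:

```python
dim = 3

shared = [[0] * dim] * dim  # all three rows are the SAME list object
shared[0][1] = 1
assert shared == [[0, 1, 0], [0, 1, 0], [0, 1, 0]]  # every row changed

independent = [[0] * dim for _ in range(dim)]  # one fresh list per row
independent[0][1] = 1
assert independent == [[0, 1, 0], [0, 0, 0], [0, 0, 0]]

# the rows of `shared` are literally the same object; the others are not
assert shared[0] is shared[1]
assert independent[0] is not independent[1]
```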
{ "pile_set_name": "StackExchange" }
Q: SQL-Server-2005: Why are results being returned in a different order with(nolock) I have a primary key clustered index on col1. Why, when I run the following statements, are the results returned in a different order? select * from table vs select * from table with(nolock) the results are also different with tablock schema: col1 int not null col2 varchar (8000) A: Without any ORDER BY no order of results is guaranteed. Your question is now heavily truncated but the original version mentioned that you saw a different order of results when using nolock as well as tablock. Both of these locking options allow SQL Server to use an allocation order scan rather than reading along the clustered index data pages in logical order (following pointers along the linked list). That should not be taken as meaning that the order is guaranteed to be in clustered index order without those hints, as the advanced scanning mechanism or parallelism, for example, could both change this.
{ "pile_set_name": "StackExchange" }
Q: Implementing Discussion forum in SDL Tridion We are implementing a website using SDL Tridion. We need to develop/integrate "Discussion Forum" functionality in this website. This website is built in .Net and using SSO for authentication. Please suggest. A: Without specifying what level of integration you need, this is rather hard to answer. However the best answer I can give you (which works almost all of the time) is starting with a question: How would you solve this without SDL Tridion? You will most likely do it with SDL Tridion the exact same way as you would do it without. Only if you have a need for moderation tools inside the CME interface, then you are looking towards a real integration scenario, otherwise I would just call it adding functionality to your website, rather than an integration with SDL Tridion. ps. please also note the FAQ for this site, your question should be practical and answerable. Your last statement being "Please suggest", actually makes it more an open-ended question which diminishes the usefulness of this site.
{ "pile_set_name": "StackExchange" }
Q: preg_match get text I have test.php and on test1.php I have this php code running <?php $Text=file_get_contents("http://inviatapenet.gethost.ro/sop/test.php"); preg_match_all('~fid="(.*?)"~si',$Text,$Match); $fid=$Match[1][1]; echo $fid; ?> What I want to do is to get the text from test.php from this fid='gty5etrf' JavaScript, and I need just the content of fid <script type='text/javascript'>fid='gty5etrf'; v_width=620; v_height=490;</script><script type='text/javascript' src='http://www.reyhq.com/player.js'></script> In test1.php I need to show only the content gty5etrf. What do I have to do? A: preg_match_all('/fid=\'([^\']+)\'/',$Text,$Match); Your regex was wrong. First, you were looking for fid="..." instead of fid='...'. Second, with .*, the regex would match any character further than the end of the fid attribute. Here is the full code: preg_match_all('/fid=\'([^\']+)\'/',$Text,$Match); $fid=$Match[1][0]; echo $fid;
{ "pile_set_name": "StackExchange" }
Q: Requirements for 9600 baud I want to set up a TCP/IP link between my friend's house 7km away from me on 70cm. 1200 baud is a bit slow, and we are looking into something with a faster data rate. I know normal audio does not work for 9600 baud packet. What are the requirements? I have a bunch of Motorola Radius SM50 UHF wideband commercial radios and I am wondering if they would work? They have direct discriminator out on the back. A: In fact, it appears 9600 baud will work on narrowband FM. Here is a description from Amsat of a 9600 baud packet modem which used a bandwidth of 4800 Hz and its board. According to the Amsat article, the design is used in many devices including: PacComm Inc: NB-96 Kantronics: DE-9600 MFJ: MFJ-9600 Tasco: TMB-965 Symek: TNC2-H (These were from 1988 when the article was written; here are some slightly more modern instructions.) Packet is AX.25, which Linux routes natively. Here's an AX.25 howto.
{ "pile_set_name": "StackExchange" }
Q: How does this "bit set" work in C? unsigned int error_bits = ( X && Y ) | ( A == TRUE) << 1 | ( B == TRUE) << 2 | ( C == TRUE && D == TRUE) << 4; I believe the general concept here is to set each of the 32 bits to true or false based on certain conditions - with each bit representing an error of something. With the syntax above, I'm a little confused as to what is being set, shifted and where/why. Any clarification is helpful. Thank You. A: You are right. The layout of the bits after the line is: Bits 5 and above: 0 Bit 4: (C == TRUE && D == TRUE) Bit 3: 0 Bit 2: B == TRUE Bit 1: A == TRUE Bit 0: (X && Y) From most significant to least significant bit. Probably something like this would be more readable (a matter of taste): unsigned int error_bits = 0; if( X && Y ) error_bits |= 1; if( A == TRUE ) error_bits |= 2; if( B == TRUE ) error_bits |= 4; if( C == TRUE && D == TRUE ) error_bits |= 16;
{ "pile_set_name": "StackExchange" }
Q: When I type a decimal number, it strips the comma and adds it up as a whole number static void Main(string[] args) { Console.Write("Digite sua primeira nota: "); double n1 = Convert.ToDouble(Console.ReadLine()); Console.Write("Digite sua segunda nota: "); double n2 = Convert.ToDouble(Console.ReadLine()); double resultado = (n1 + n2) / 2; Console.WriteLine("A Média é {0}", resultado); Console.ReadKey(); } A: You probably need to sort out the culture setting. In any case, several errors can occur during input; if the value cannot be converted correctly, you should not let the calculation run. using static System.Console; public class Program { public static void Main(string[] args) { System.Threading.Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("pt-BR"); Write("Digite sua primeira nota: "); double n1; if (!double.TryParse(ReadLine(), out n1)) { Write("nota digitada errada, estou encerrando, pode tentar novamente"); return; } Write("Digite sua segunda nota: "); double n2; if (!double.TryParse(ReadLine(), out n2)) { Write("nota digitada errada, estou encerrando, pode tentar novamente"); return; } WriteLine($"A Média é {(n1 + n2) / 2}"); } } See it working on ideone. And on .NET Fiddle. I also put it on GitHub for future reference. https://dotnetfiddle.net/AjgRnK A: The source of the problem is your Windows regional settings; my computer uses the American regional format, which uses a point for decimal places, so your program works when points are typed. You can fix this by forcing your application to use System.Globalization.CultureInfo. However, your application will then be fixed to the standard you define; if you distribute it to other regions you will have problems.
{ "pile_set_name": "StackExchange" }
Q: The ValueConverter in ResourceDictionary is the Singleton? If I add a ValueConverter which is defined in a .cs file into the ResourceDictionary, and use it as a static resource many times, will it create new instances or just use the same one? ---------------------------------ValueConverterDefinition------------------------------- internal class DateTimeConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { var date = (DateTime)value; return date.Day; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } ---------------------------------ResourceDictionary------------------------------- <converter:DateTimeConverter x:Key="DateTimeConverter"></converter:DateTimeConverter> <Style x:Key="ToolTipStyle" TargetType="{x:Type ToolTip}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ToolTip"> <Border> <Grid> <TextBlock Foreground="Black"> <TextBlock.Text> <Binding Path="StartDate" Converter="{StaticResource DateTimeConverter}"></Binding> </TextBlock.Text> </TextBlock> <TextBlock Foreground="Black"> <TextBlock.Text> <Binding Path="EndDate" Converter="{StaticResource DateTimeConverter}"></Binding> </TextBlock.Text> </TextBlock> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> A: It's the same instance; adding it is conceptually equivalent to doing this: var converter = new DateTimeConverter(); control.Resources.Add("Key", converter); StaticResource then just looks up that instance via the key. You can however use x:Shared to change that behavior so that every reference creates a new instance.
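If per-reference instances were ever wanted instead, that is opt-in via the x:Shared attribute (a sketch; note that x:Shared only applies inside compiled resource dictionaries):

```xml
<!-- every StaticResource lookup now gets a fresh DateTimeConverter -->
<converter:DateTimeConverter x:Key="DateTimeConverter" x:Shared="False" />
```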
{ "pile_set_name": "StackExchange" }
Q: Gauss-divergence theorem for volume integral of a gradient field I need to make sure that the derivation in the book I am using is mathematically correct. The problem is about finding the volume integral of a gradient field. The author directly uses the Gauss-divergence theorem to relate the volume integral of the gradient of a scalar to the surface integral of the flux through the surface surrounding this volume, i.e. $$\int_{CV}^{ } \nabla \phi \, dV=\int_{\partial CV}^{ } \phi \, d\mathbf{S}$$ The book page is available via this link: http://imgh.us/Esx.jpg Is that true? Is there any mathematical derivation available for the Gauss-divergence theorem (or a similar theorem) when we consider the gradient instead of the divergence? Does that have any physical significance as in the case of divergence? A: The statement is true. It is typically proved using the following property of vectors. Two vectors $\vec{p}, \vec{q} \in \mathbb{R}^n$ are equal to each other if and only if for all vectors $\vec{r} \in \mathbb{R}^n$, $\vec{r}\cdot \vec{p} = \vec{r}\cdot \vec{q}$. Back to our original identity. For any constant vector $\vec{k}$, we have $$\vec{k} \cdot \left(\int_{CV}\nabla\phi dV\right) = \int_{CV} \nabla\cdot(\phi \vec{k}) dV \stackrel{\color{blue}{\verb/div. theorem/}}{=} \int_{\partial CV} \phi \vec{k} \cdot dS = \vec{k} \cdot \left(\int_{\partial CV} \phi dS\right)$$ The first equality holds because $\vec{k}\cdot\nabla\phi = \nabla\cdot(\phi \vec{k}) - \phi(\nabla\cdot \vec{k})$. Additionally, since $\vec{k}$ is a constant vector, $\nabla\cdot \vec{k} = 0$. Hence, $\vec{k}\cdot\nabla\phi = \nabla\cdot(\phi\vec{k})$. Since this is true for all constant vectors $\vec{k}$, the two vectors defined by the integrals are equal to each other, i.e.
$$\int_{CV}\nabla\phi dV = \int_{\partial CV} \phi dS$$ A: $\newcommand{\bbx}[1]{\bbox[8px,border:1px groove navy]{{#1}}\ } \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{\mrm{CV}}\nabla\phi\,\dd V & = \sum_{i}\hat{x}_{i}\int_{\mrm{CV}}\partiald{\phi}{x_{i}}\,\dd V = \sum_{i}\hat{x}_{i}\int_{\mrm{CV}}\nabla\cdot\pars{\phi\,\hat{x}_{i}}\,\dd V = \sum_{i}\hat{x}_{i}\int_{\mrm{\partial CV}}\phi\,\hat{x}_{i}\cdot\dd\vec{S} \\[5mm] & = \sum_{i}\hat{x}_{i}\int_{\mrm{\partial CV}}\phi\,\pars{\dd\vec{S}}_{i} = \int_{\mrm{\partial CV}}\phi\,\sum_{i}\pars{\dd\vec{S}}_{i}\hat{x}_{i} =\ \bbx{\int_{\mrm{\partial CV}}\phi\,\dd\vec{S}} \end{align} One interesting application of this identity is the Archimedes Principle derivation ( the force magnitude over a body in a fluid is equal to the weight of the mass of fluid displaced by the body ): $$ \left\{\begin{array}{rl} \ds{P_{\mrm{atm.}}:} & \mbox{Atmospheric Pressure.} \\[1mm] \ds{\rho:} & \mbox{Fluid Density.} \\[1mm] \ds{g:} & \mbox{Gravity Acceleration}\ds{\ \approx 9.8\ \mrm{m \over sec^{2}}.} \\[1mm] \ds{z:} & \mbox{Depth.} \\[1mm] \ds{m_{\mrm{fluid.}}:} & \ds{\rho V_{\mrm{body}} = \rho\int_{\mrm{CV}}\,\dd V} \end{array}\right. $$ $$ \int_{\mrm{\partial CV}}\pars{P_{\mrm{atm.}} + \rho gz}\pars{-\dd\vec{S}} = -\int_{\mrm{CV}}\nabla\pars{P_{\mrm{atm.}} + \rho gz}\,\dd V = -\int_{\mrm{CV}}\rho g\,\hat{z}\,\dd V = -m_{\mrm{fluid}}\, g\,\hat{z} $$
Q: Reading and writing to/from memory in Python Let's imagine a situation: I have two Python programs. The first one will write some data (str) to computer memory, and then exit. I will then start the second program which will read the in-memory data saved by the first program. Is this possible? A: Sort of. python p1.py | python p2.py If p1 writes to stdout, the data goes to memory. If p2 reads from stdin, it reads from memory. The issue is that there's no "I will then start the second program". You must start both programs so that they share the appropriate memory (in this case, the buffer between stdout and stdin.)
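Because the second program starts only after the first one has exited, purely in-memory hand-off is not possible; the data has to be persisted somewhere that outlives the first process, such as a file. A minimal sketch (the file path and the JSON encoding are my choices for illustration, not anything mandated):

```python
import json
from pathlib import Path

def save_state(path, data):
    """Called by the first program before it exits: persist the data to disk."""
    Path(path).write_text(json.dumps(data))

def load_state(path):
    """Called by the second program later: read back what was saved,
    or return None if the first program never ran."""
    p = Path(path)
    if not p.exists():
        return None
    return json.loads(p.read_text())
```

Other hand-off mechanisms with the same property are a named pipe (if both processes can briefly overlap) or a small database; plain RAM is reclaimed by the OS the moment the first process exits.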
Q: Should we try to use CSS sprites as much as possible? Can we use a whole-page design JPG file as a CSS sprite without slicing it?

A: No. Most designs include one or more images which are:

Content images, which should be included with <img> and have suitable alt text.

and/or

Images which tile in 2 dimensions, which can't be sprited.
Q: AWK: incrementing a counter. Hi all, I have a script that prints network interface names in JSON format, and I need a number printed with each entry. The script:

cat /proc/net/dev $1 | gawk '
BEGIN { ORS = ""; print " [ "}
/Inter-/ {next}
/face/ {next}
/lo/ {next}
{
    printf "%s{\"#IF\": \"%s\"}", separator, $1
    separator = ", "
}
END { print " ] " }
'

Output:

[ {"#IF": "eth0:"}, {"#IF": "lo:"}, {"#IF": "eth0:"} ]

What I need:

[ {"#IF0": "eth0:"}, {"#IF1": "eth1:"}, {"#IF2": "lo:"} ]

A: Then just add the counter:

BEGIN { ORS = ""; print " [ "; i = 0; }
/Inter-/ { next }
/face/ { next }
/lo/ { next }
{
    printf "%s{\"#IF%d\": \"%s\"}", separator, i, $1;
    separator = ", ";
    i++;
}
END { print " ] "; }
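For comparison only, the same skip-the-headers-and-number-the-rest logic can be sketched outside awk. This Python version mirrors the three patterns of the script above; the sample input in the usage is invented, and it checks the interface name itself rather than the whole line for "lo":

```python
def interfaces_json(dev_text):
    """Build the JSON-ish list the awk script prints, numbering each interface.

    Skips the two /proc/net/dev header lines and the loopback interface,
    mirroring the /Inter-/, /face/ and /lo/ patterns of the awk version.
    """
    entries = []
    for line in dev_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        name = fields[0]
        if "Inter-" in line or "face" in line or name.startswith("lo"):
            continue
        entries.append('{"#IF%d": "%s"}' % (len(entries), name))
    return " [ " + ", ".join(entries) + " ] "
```

With a made-up two-interface input, this yields ` [ {"#IF0": "eth0:"}, {"#IF1": "eth1:"} ] `, i.e. the counter is simply the number of entries emitted so far.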
Q: A comma in a shop name. In our town there is a café (or maybe a draught-beer shop) with the wonderful name "Расти(,) пузо" (roughly, "Grow(,) belly"). As far as I remember (I usually only glimpse the sign from a minibus window), there is no comma before "пузо". So is a comma actually needed there or not? On the one hand, it is a form of address to the belly; on the other hand, it somehow isn't.

A: Without the comma it is a cynical address to the potential customer ("grow it, that belly of yours"): advertisers stopped standing on ceremony long ago and address people with the informal "ты". Possibly the ambiguity was deliberate, too (compare "казнить нельзя помиловать", the classic "execute not pardon" sentence whose meaning depends on where the comma goes).
Q: VsFTPd - LDAP - PAM. I am trying to configure a VsFTPd server to authenticate against an LDAP server. It may be easy, but since it is the first time I am using both LDAP and PAM, I have some difficulties. VsFTPd runs on Ubuntu Server 11.04 and the LDAP is OpenLDAP on Ubuntu Server 10.10. I disabled AppArmor on the first one. VsFTPd cannot connect to the LDAP server; in my syslog I have:

vsftpd: pam_ldap: ldap_simple_bind Can't contact LDAP server

The LDAP server is OK since I can do an ldapsearch. Here is my /etc/pam.d/vsftpd file:

auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
@include common-account
@include common-session
@include common-auth
auth required pam_ldap.so
account required pam_ldap.so
session required pam_ldap.so
password required pam_ldap.so

And here is my /etc/ldap.conf file:

base dc=example,dc=com
uri ldapi:///ldap.example.com
ldap_version 3
rootbinddn cn=admin,dc=example,dc=com
pam_password md5
nss_initgroups_ignoreusers a_bunch_of_system_users

Can anyone help me, please? Thank you.

EDIT: new version of the /etc/pam.d/vsftpd file:

auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
account required pam_unix.so
account sufficient pam_ldap.so
session required pam_limits.so
session required pam_unix.so
session optimal pam_ldap.so
auth required pam_env.so
auth sufficient pam_unix.so nullok_secure
auth sufficient pam_ldap.so use_first_pass
auth required pam_shells.so

A: According to man ldap.conf:

URI <ldap[si]://[name[:port]] ...>
The URI scheme may be any of ldap, ldaps or ldapi, which refer to LDAP over TCP, LDAP over SSL (TLS) and LDAP over IPC (UNIX domain sockets), respectively.

So, change uri ldapi:///ldap.example.com to uri ldap://ldap.example.com (note the two slashes, so the hostname lands in the host part of the URI) and try again.
Q: Comparator for an ArrayList of objects in JSP. I am having trouble implementing a Comparator to sort an ArrayList of objects. I have an Empreendimento (development) class, from which an ArrayList is later created and populated by a content management system. I added a compareTo method, and it sorts by the area attribute. But now I also need it to sort by number of bedrooms and bathrooms. How could I do that? Here is the current code:

<%!
public class Empreendimento implements Comparable<Empreendimento>{
    //attributes, getters, setters and constructor

    public int compareTo(Empreendimento Emp) {
        if (this.menorArea < Emp.getMenorArea()) {
            return -1;
        }
        if (this.menorArea > Emp.getMenorArea()) {
            return 1;
        }
        return 0;
    }
%>

After the content manager creates the ArrayList of these objects, I filter it and then sort it with:

Collections.sort(ArrayResultadoBusca);

A: With Java 8, there is a simple and clean solution for chaining multiple comparisons.

Lambda:

Comparator<Empreendimento> comparator = Comparator
    .comparing((Empreendimento e) -> e.getMenorArea())
    .thenComparingInt(e -> e.getQuantidadeQuarto())
    .thenComparingInt(e -> e.getQuantidadeBanheiro());

Method reference:

Comparator<Empreendimento> comparator = Comparator
    .comparing(Empreendimento::getMenorArea)
    .thenComparingInt(Empreendimento::getQuantidadeQuarto)
    .thenComparingInt(Empreendimento::getQuantidadeBanheiro);

It is not recommended to mix HTML view code with Java code; it is better to perform the comparison in a class and pass the result through an Expression Language (EL). An alternative is to implement a Comparator that, instead of receiving the attributes as parameters, receives two objects of the same type and compares their attributes.

E.g.:

Collections.sort(lista, new Comparator<Empreendimento>(){
    public int compare(Empreendimento e1, Empreendimento e2) {
        int comparacao = e1.getMenorArea().compareTo(e2.getMenorArea());
        if(comparacao != 0) {
            return comparacao;
        }
        comparacao = e1.getQuantidadeQuarto().compareTo(e2.getQuantidadeQuarto());
        if(comparacao != 0) {
            return comparacao;
        }
        return e1.getQuantidadeBanheiro().compareTo(e2.getQuantidadeBanheiro());
    }
});

It is important to implement equals and hashCode in the model class, in this case Empreendimento.
Q: Dynamically add layout components (combine XML and manual layout). Is there any way to combine XML-based layout and "manual" layout? For example, I use the "standard" XML layout like this:

setContentView(R.layout.mainscreen);

And then I want to have some more dynamic contents, adding them like this:

LinearLayout layout = new LinearLayout(this);
setContentView(layout);
layout.addView(new CreateItemButton(this, "Button", 1));

I realize of course that I cannot create a new layout like in the line above; I'd probably have to initialize the XML layout in some way. But is it possible, or do I just have to go with a 100% manual layout if I want to dynamically add components? Or is there perhaps another, more elegant/correct way of doing it? (What I want to do is create buttons based on entries fetched from a database. These will vary in number and text/contents, hence the attempt to add them dynamically instead of in the XML layout file.)

A: You can add any element dynamically to your XML layout. You have to have a container in your XML layout where you are going to add your dynamic elements, say an empty LinearLayout with id="container". Alternatively, you can build everything dynamically and call setContentView(yourView), where yourView is a root layout element with the other child elements added. Example:

Button myButton = new Button(this);
myButton.setLayoutParams(params);
LinearLayout container = (LinearLayout)findViewById(R.id.container);
container.addView(myButton);

or

LinearLayout myLayout = new LinearLayout(this);
...
container.addView(myLayout);
Q: Reverse Proxy for all ports I am working on a system where each of my clients have a separate raspberry pi assigned to them, which they can run whatever they want on them (eg. Game Server or web server with a custom port). How would I reverse proxy each one? According to this question, I can use a NAT. What is that, and how do I set it up? Also, can I use it to block specific ports, eg. port 25, and get logs, etc? A: How would I reverse proxy each one? In general: you wouldn't. From your question, it sounds like you're trying to share a single public IP address between many client devices. Both NAT and reverse-proxying are mechanisms for that, although they work at different levels. (However, if you can afford a dedicated public IP address for every device, then the problem doesn't exist in the first place and both mechanisms are practically irrelevant.) DNAT (usually called "port forwarding") usually works at transport level – it allows several devices to "share" an IP address by assigning each device a range of TCP or UDP ports. For example, if you own the IP address x.y.z.t, you can forward TCP port 80 (x.y.z.t:80) to device A, port 81 to device B, and so on. Reverse proxying usually works at application level – it allows several devices or services to "share" a single IP:port combination by separating requests based on some identifier found in the protocol. For example, if you have a HTTP reverse proxy running on x.y.z.t:80, you can make it forward HTTP requests to different devices based on what domain name was requested. Reverse proxying has the advantage that the proxy lets you share the same IP:port across multiple domains, but it comes with requirements: The proxy software needs to understand the protocol in question; it needs to be purpose-built for that protocol. This means you can't just arbitrarily proxy "all ports" and have it work for any miscellaneous service your clients would run. 
The protocol needs to actually have some sort of "host" or "domain" identifier as part of its messages. Not all protocols carry such identifiers; in fact most don't. Proxying rewrites lower-layer addresses; your clients will see all connections as coming from the proxy itself, unless the service also has methods for dealing with that (e.g. X-Forwarded-For or the so-called "PROXY protocol"). So proxying for multiple devices is practically limited to HTTP and HTTPS (which have a "Host" header); plus TLS-based services (which have SNI and ALPN); plus maybe DNS and SMTP (based on recipient address); and mayyybe insecure POP3/IMAP/FTP (based on login name). For game servers though it's not an option – you pretty much have to use a dedicated port (or several) for each service on each device. That's usually called "port forwarding" or "DNAT"; it is exactly the same thing as the port forwarding feature in your home router; and it has its own set of problems: It doesn't know about DNS domains and works directly on IP address level. If all your domains resolve to the same single IP address, they all have the exact same port forwarding rules. This means each TCP or UDP port on a given IP address can only be forwarded to one device. For example, if client 1 gets the standard SSH port, say pi1.example.com:22, that means pi2.example.com:22 or pi3.example.com:22 will also go to client 1's device – other clients now cannot use this port for inbound connections at all. (Depending on what software performs NAT, they probably cannot use it for outbound connections either.) There are only ~65k TCP ports and ~65k UDP ports (per IP address), and they're required for both inbound connections (one per service, sometimes more) and for outbound connections (as the source port; generally one per connection). So in practice you can't have more than... say, ~32k port-forwarding rules. Some services require a specific port. 
For example, SMTP for inbound mail delivery always uses TCP port 25; IKE uses UDP ports 500 and 4500; game servers often require a specific range of ports. If one of your clients reserves that specific range, other clients cannot run the same service. As for how to set it up – I suggest starting with DNAT, because most likely you'll need it for the reverse proxy anyway. What is that, and how do I set it up? As mentioned, NAT for incoming connections (DNAT) is often known as "port forwarding" and is configured directly on your router (the device which 'owns' your public IP address), and it will surely be explained in the device's manual. (If the router just runs regular Linux or FreeBSD, then DNAT rules are added through iptables or pf, same as firewall rules.) Also, can I use it to block specific ports, eg. port 25, and get logs, etc? Yes and no. Those are features of your firewall. Any decent firewall will have them (I mean, that's what a firewall does), but they'll be there alongside NAT, but not part of NAT.
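The contrast between the two mechanisms boils down to what key selects the backend. As an illustration only (the hostnames, ports and addresses below are invented), a reverse proxy keys on a protocol-level name, while DNAT keys on the destination port:

```python
def route_http(host_header, routes):
    """Pick a backend from the HTTP Host header, as a reverse proxy does.
    Many domains can share one IP:port because the protocol carries the name."""
    return routes.get(host_header.lower())

def route_port(dest_port, forwards):
    """Pick a backend from the destination port, as DNAT/port forwarding does.
    Each port on the shared IP can only ever map to one device."""
    return forwards.get(dest_port)
```

This is why a single IP:port can serve many domains behind a proxy (for protocols that carry a name), while under DNAT every port maps to exactly one device and unlisted ports go nowhere.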
Q: How to invalidate users' sessions on logout? I spent a lot of time on this problem, yet still couldn't get it to work. I am using Spring Security. The application will run on multiple servers. I use the "remember me" option on login to save persistent logins in my database. If a user is connected to server 1, he has a session id in the browser cookies. I start another server and this user authenticates there; the browser cookies now hold this session id along with the session id from the server 1 connection. When this user logs out on one server or the other, he should be redirected to the login page on all servers. I tried to remove the cookies from the browser, without success. How can I make this work? Any help? Example scenario: in Gmail, if you have 2 tabs open in your account and you log out from one of them, the other tab automatically logs out too. Server 1 doesn't know server 2's information; I think my problem is here, but I don't know how to solve it. This is my security config:

<http auto-config="false" use-expressions="true" disable-url-rewriting="true">
    <intercept-url pattern="/login.do" access="permitAll" />
    <intercept-url pattern="/**" access="hasRole('ROLE_USER')" />
    <remember-me data-source-ref="dataSource" />
    <form-login login-page="/login.do" />
    <custom-filter position="CONCURRENT_SESSION_FILTER" ref="concurrencyFilter" />
    <custom-filter position="LOGOUT_FILTER" ref="logoutFilter" />
    <session-management session-authentication-strategy-ref="sas" />
</http>

<!-- <logout logout-url="/j_spring_security_logout" logout-success-url="/" invalidate-session="true" /> -->

<beans:bean id="logoutFilter" class="org.springframework.security.web.authentication.logout.LogoutFilter">
    <beans:constructor-arg value="/login.do" />
    <beans:constructor-arg>
        <beans:list>
            <beans:ref bean="rememberMeServices"/>
            <beans:ref bean="logoutHandler"/>
        </beans:list>
    </beans:constructor-arg>
    <!-- <beans:property name="filterProcessesUrl" value="/login.do" /> -->
</beans:bean>
<beans:bean id="sessionRegistry" class="org.springframework.security.core.session.SessionRegistryImpl" /> <beans:bean id="concurrencyFilter" class="org.springframework.security.web.session.ConcurrentSessionFilter"> <beans:property name="sessionRegistry" ref="sessionRegistry" /> <beans:property name="expiredUrl" value="/login.do" /> </beans:bean> <beans:bean id="sas" class="org.springframework.security.web.authentication.session.ConcurrentSessionControlStrategy"> <beans:constructor-arg name="sessionRegistry" ref="sessionRegistry" /> <beans:property name="maximumSessions" value="1" /> </beans:bean> <authentication-manager alias="authenticationManager"> <authentication-provider user-service-ref="jdbcUserService" /> </authentication-manager> A: Here are 3 solutions for your multiple-server scenario: Use sticky sessions on your load balancer so the user keeps going back to the same server. Then you just invalidate the session when they log out. This is usually coupled with a session failover solution (Tomcat example) so if a server goes down a user can get redirected to a new server that picks up their old session. Use a distributed cache for sessions (for example Terracotta Web Sessions). Then when they logout invalidate the session and it will be invalidated everywhere. Another solution is to use a customized Spring Security TokenBasedRememberMeServices as your "login" cookie. If the user does not select remember me, go ahead and set the cookie, but make it a browser session cookie instead of a persistent cookie. All servers will recognize the user and create a session for it. When the user logs out, drop the cookie. You'll also need a custom RememberMeAuthenticationFilter that looks for a authentication token in the session and a missing RememberMe cookie, invalidating the session and clearing security context if that is the case. A: I would recommend you to have a look at SessionRegistry .You can check this here . 
There has been a discussion on this at "Is it possible to invalidate a spring security session?". Check this out too: Spring sessions are stored as JSESSIONID cookies, and there is a separate discussion on cookie removal. The same query has also been discussed at "Invalid a session when user makes logout (Spring)".
Q: DownloadString skips newline characters. I want to import text data from Google Finance, and I use this HTTP address as a parameter to DownloadString: http://www.google.com/finance/getprices?i=1200&p=1d&f=d,o,h,l,c,v&df=cpct&q=AAPL . However, the resulting string seems to be missing the newline characters, so it is really difficult to parse. Any ideas?

A: The line ends returned from the stream are \n, as opposed to the default Windows line ends \r\n (which is what Environment.NewLine represents on Windows). Try to split on all of the possible combinations of \r and \n:

WebClient wc = new WebClient();
string s = wc.DownloadString("http://www.google.com/finance/getprices?i=1200&p=1d&f=d,o,h,l,c,v&df=cpct&q=AAPL");
string[] lines = s.Split(new string[] { Environment.NewLine, "\n", "\r" }, StringSplitOptions.None);
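The same pitfall exists outside C#: splitting only on the platform's native line ending misses the server's bare \n. A Python sketch of the answer's normalise-then-split idea, for illustration only:

```python
def split_any_newline(text):
    """Split text into lines regardless of line-ending convention.

    Normalises \r\n and bare \r to \n first, then splits; this is the
    analogue of passing every separator variant to String.Split in C#.
    (Python's built-in str.splitlines() does the same normalisation.)
    """
    normalized = text.replace("\r\n", "\n").replace("\r", "\n")
    return normalized.split("\n")
```

So a response mixing conventions still splits cleanly, e.g. "a\nb\r\nc\rd" becomes ["a", "b", "c", "d"].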
Q: How to add pagination to FilterView. I have a filtering view using django_filters.FilterSet which is called right from urls.py:

url(r'^$', FilterView.as_view(filterset_class=ProductFilter, template_name='products/products.html'), name='products'),

and it has no pagination. But when I add paginate_by = 20, as in:

url(r'^$', FilterView.as_view(filterset_class=ProductFilter, template_name='products/products.html'), paginate_by = 20, name='products'),

it adds my custom pagination page, but it does not handle the data restricted by the filters. I can apply a few filters and reduce the data to, say, 40 rows, but clicking on the second page loads all my data without any filter. Could I somehow specify that I want to paginate the data after filtering?

A: In the end I decided to create a separate view and add the queryset directly to the context object, like:

class ProductView(ListView):
    model = Product
    template_name = 'products/products.html'
    paginate_by = 5

    def get_context_data(self, **kwargs):
        context = super().get_context_data()
        context['product_list'] = ProductFilter(self.request.GET, queryset=Product.objects.order_by('id')).qs
        return context
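Whichever view ends up serving the page, there is a second half to this fix worth noting: the pagination links themselves must carry the active filter parameters, otherwise page 2 arrives with an empty GET and no filters applied. A framework-free sketch of rebuilding such a link's querystring (the parameter names here are assumed, not from the question):

```python
from urllib.parse import urlencode, parse_qsl

def page_querystring(current_query, page):
    """Rebuild the querystring for a pagination link, keeping every active
    filter and replacing only the 'page' parameter.  In a template, page
    links built this way still see the filters on page 2, 3, ..."""
    params = [(k, v) for k, v in parse_qsl(current_query) if k != "page"]
    params.append(("page", str(page)))
    return urlencode(params)
```

For example, with the browser at ?name=foo&price=10&page=1, the "next" link would carry name=foo&price=10&page=2.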
Q: Drupal: limit the number of menu items in primary links. Is there a way to set a limit on how many menu items users can add to the Primary Links menu? I'm working on a Drupal site and I have a horizontal primary links nav bar. There is only room for no more than 7-8 links in the nav bar, and I don't want the future maintainer of the site to add more than 8 items to the menu. Is there a way I can set a limit on that? Some module or override function? Thanks,

A: You could try this: http://api.drupal.org/api/function/menu_primary_links/6 Then, using hook_form_alter, do:

$menu_links = count(menu_primary_links());
if ($menu_links > 8) {
    unset($form['menu']);
}

But we must also protect nodes that are already in the menu. So:

$menu_links = count(menu_primary_links());
if ($menu_links > 8 && !($form['menu']['mlid']['#value'] != 0 && $form['menu']['#item']['menu-name'] == 'primary-links')) {
    unset($form['menu']);
}

That will remove the menu option from a node form only if that node has no existing menu entry in the primary links menu. It checks by looking to see if the node you are editing has an mlid, and if so, whether it is in the primary links menu.

hook_form_alter: http://api.drupal.org/api/function/hook_form_alter

But how will our users know what happened? Let's tell them:

if ($menu_links > 8 && !($form['menu']['mlid']['#value'] != 0 && $form['menu']['#item']['menu-name'] == 'primary-links')) {
    unset($form['menu']);
    drupal_set_message('The maximum limit of links in the primary menu has been reached.', 'status', FALSE);
}

You could expand on that message by listing $menu_links too, so the user knows which nodes need removing before other nodes can be added. Also, this is a little tricky if they make use of secondary links or other menus, in which case you would need more programming to replace the tree within the menu options, but that's a bit more involved at the moment. They could always add nodes to secondary menus through Admin > Build > Menus.
Q: C: analyzing the bottlenecks of C programs. My C program is efficiency-critical. Some functions are called millions of times, so I would like to know how much time is spent in each function, giving me something like this:

Total time: 100s
forward(): 20s;
align(): 15s;
...
others: 1s.

Is there any debugger that can perform such an analysis? I am using Eclipse CDT on Ubuntu, and using gdb to debug. Someone suggested Valgrind, but I did not find anything suitable. I found there are some questions talking about C#, PHP or Perl profiling; any suggestions for C? Thanks.

===========================================

Follow up: thanks very much for all the help, gprof seems really nice. Here is a manual link: http://www.cs.utah.edu/dept/old/texinfo/as/gprof_toc.html. A question about interpreting the summary:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls  us/call  us/call  name
61.29      9.18      9.18                             bwa_print_sam_SQ
21.96     12.47      3.29                             bwt_sa
 4.01     13.07      0.60                             bns_coor_pac2real
 3.87     13.65      0.58                             bwt_match_exact_alt
 2.60     14.04      0.39                             bwa_read_seq
 1.00     14.19      0.15                             bwt_match_gap
 0.80     14.45      0.12                             seq_reverse

If I am not wrong, it says the function bwa_print_sam_SQ takes 61.29% of the total time. But my program runs for 96.24 seconds, so this function should run for around 60 seconds. Why is the "cumulative seconds" column only 9.18? The manual says:

cumulative seconds
This is the cumulative total number of seconds the computer spent executing this function, plus the time spent in all the functions above this one in this table.

And I use the parameters "gprof -f pe_sai2sam_se_core -f bwa_print_sam_SQ -f seq_reverse ./peta > gprof", where the function "pe_sai2sam_se_core" calls "bwa_print_sam_SQ" in a big while loop. Why does the report say:

index % time    self  children    called     name
                                                 <spontaneous>
[1]     61.3    9.18    0.00                 bwa_print_sam_SQ [1]
-----------------------------------------------
                                                 <spontaneous>
[8]      0.8    0.12    0.00                 seq_reverse [8]
-----------------------------------------------

It did not say anything about pe_sai2sam_se_core... Why?

A: You don't need a debugger. What you need is called a profiler. Since you mention Ubuntu, you probably want to start with gprof. Here's how you can use gprof:

Disable all compiler optimizations for your program (-O0) - optional of course
Add the -g and the -pg flags
Rebuild and run the program as usual
At this point your program should have produced a gmon.out file in the cwd
Use gprof to inspect data: gprof ./your_program > prof

Now you can view the prof file. It begins with the flat profile, which simply tells you how much time the program is spending in various functions.
Q: Sanitizing MSSQL (and/or putting HEX into a TEXT column)

Overview

I'm in need of a way to properly sanitize my MSSQL data. We all know addslashes() and htmlentities() don't cut it.

Attempted Solution & Problem

After research, I found this thread here on SO. It worked great, until I needed to insert into a column of type text. When trying to insert a HEX literal into that, I get:

Operand type clash: varbinary is incompatible with text

What I Need

So, I need either another solid sanitizing strategy which doesn't involve HEX literals, OR I need help overcoming this error when inserting HEX into text.

My Current Method:

public static function dbSanitize( $str ){
    if( is_numeric( $str ) )
        return $str;
    $unpacked = unpack( 'H*hex', $str );
    return '0x' . $unpacked['hex'];
}

My Query

[INSERT INTO myTable ( C1,Text2,C3,C4,C5,C6,Text7,C8 ) VALUES ( 111,0x3c703e0a0932323232323c2f703e0a,1,1,1,0,0x5b7b2274797065223a2274657874222c226c6162656c223a224669656c64204e616d65222c2264657363223a22222c224669656c644944223a2239373334313036343937227d5d,1316471975 )].

I'm not beyond changing the type of the column, if there's another option for large amounts of text data. Thanks for any help you can provide!!

A: Don't build your query by appending strings. Use bound fields. See: http://www.php.net/function.mssql-bind.php Or the $params variable in: http://www.php.net/function.sqlsrv-query.php if you are using the sqlsrv library (which you should).
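For reference, the hex trick itself is easy to reproduce in any language; this Python sketch mirrors what dbSanitize() produces, so you can see exactly what string ends up in the SQL. It does not make the approach work for text columns, where bound parameters remain the proper fix:

```python
def to_hex_literal(value):
    """Mirror the PHP dbSanitize(): numbers pass through unchanged, strings
    become a 0x... binary literal so no quoting or escaping issues can arise.
    Shown only to illustrate the encoding, not as a recommended practice."""
    if isinstance(value, (int, float)):
        return str(value)
    return "0x" + value.encode("utf-8").hex()
```

For example, to_hex_literal("abc") yields "0x616263", which is why quotes and apostrophes in the input can never break out of the statement.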
Q: ViewParts do not get refreshed the first time after changing the locale. Consider an RCP application with some views. If you change the locale in the .ini file and restart the application, the views do not switch to the expected language until the user clicks on them.

A: This happens because the Eclipse workbench caches its state, including the titles and layout of all parts. View parts are not actually created until they are shown (i.e. the user clicks on them), in order to make Eclipse start faster. So essentially no part code runs at workbench start except for the visible parts, and since that code has not been loaded yet, there is no way to access the message bundles. In my opinion, changing the locale is a rare case, so you can ignore it. Inserting the line below into product.ini will prevent caching:

-clearPersistedState true

However, the customer then can't restore previously opened editors or some view settings after restarting the product. The choice is yours.
Q: Which PHP extension should I use with SQL Server 2000? I am trying to connect to a SQL Server 2000 database using PHP PDO SQLSRV, but when I open the index page, the catch() returns the following error:

SQLSTATE[08001]: [Microsoft][SQL Server Native Client 11.0]SQL Server Native Client 11.0 does not support connections to SQL Server 2000 or earlier versions.

Any tips on how I can solve this?

A: Unfortunately, that version of the SQL Server Native Client does not work with SQL Server 2000. In this case you need to use the Microsoft Drivers 2.0 for PHP for SQL Server together with the Microsoft SQL Server 2008 R2 Native Client. However, to use version 2.0, you can use at most PHP 5.3. Use this table as a comparison:

| Driver version | PHP version      | SQL Server version |
|----------------|------------------|--------------------|
| 3.2            | 5.4, 5.5 and 5.6 | 2005+              |
| 3.1            | 5.4 and 5.5      | 2005+              |
| 3              | 5.3 and 5.4      | 2005+              |
| 2              | 5.2 and 5.3      | 2000+              |

More information here.
Q: Angular - "Expression has changed after it was checked. Previous value..." Occasionally, when an expression's value changes, this error is thrown and the app stops responding. I have this function in my view: {{generalService.timeFromNow(item.creation_time)}} which calls: moment(timestamp, "X").fromNow() Randomly, when the time changed from 35 to 36, this error happened. If I have another timer ticking over in minutes, no error is thrown. This kind of error happens all around my application, and I do not want to micromanage ngOnChanges; I think Angular should manage it.

A: This is a feature of Angular 2 in development mode to help detect bad designs. Getting this error shows that you probably have to redesign. For example, assume you have two fields with this binding, such as a text and a graph: {{generalService.timeFromNow(item.creation_time)}} They can end up showing different values in prod every now and then if you don't address the issue. That would be really difficult to notice in your normal testing, and only a small portion of your users would notice it. The solution is usually to store the result of 'timeFromNow' in some state variable, so that it cannot change over the update cycle.
Q: UIView drawRect with UIImage results in black background. I have the following code:

@implementation MyImageView
@synthesize image; //image is a UIImage

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
    }
    return self;
}

-(void) removeFromSuperview {
    self.image = nil;
    [super removeFromSuperview];
}

- (void)drawRect:(CGRect)rect
{
    // Drawing code
    if (self.image) {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextClearRect(context, rect);
        //the below 2 lines prove that the alpha channel of my UIImage is indeed drawn
        //CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        //CGContextFillRect(context, rect);
        CGContextDrawImage(context, self.bounds, self.image.CGImage);
    }
}
@end

When I ran the code, I realized that the background of my view is black. To test whether it was a problem with my UIImage, I used the 2 lines commented out after CGContextClearRect(context, rect). Indeed, a white background was drawn. Is there any way for me to remove the black background? When I init MyImageView, I have already set backgroundColor to [UIColor clearColor]. Any advice is much appreciated. Thanks

A: Setting the background color to [UIColor clearColor] should work. Also set self.opaque = NO to enable transparency. You should also check that the correct initializer is being called. For example, if the view is part of a XIB file, you need to implement initWithCoder: as well as initWithFrame:, etc.
{ "pile_set_name": "StackExchange" }
Q: Add image into UIAlertController in swift I want to have a simple UIAlertController with image in it. what should I do? let alertMessage = UIAlertController(title: "Service Unavailable", message: "Please retry later", preferredStyle: .Alert) alertMessage.addAction(UIAlertAction(title: "OK", style: .Default, handler: nil)) self.presentViewController(alertMessage, animated: true, completion: nil) A: Try this: let alertMessage = UIAlertController(title: "Service Unavailable", message: "Please retry later", preferredStyle: .Alert) let image = UIImage(named: "myImage") var action = UIAlertAction(title: "OK", style: .Default, handler: nil) action.setValue(image, forKey: "image") alertMessage .addAction(action) self.presentViewController(alertMessage, animated: true, completion: nil)
{ "pile_set_name": "StackExchange" }
Q: When using asp.net-mvc, what is the best way to update multiple page sections with one Html.Action() method I have a pretty big asp.net-mvc site with 100 controllers and thousands of actions. Previously the header image that was defined on the Site.Master page was hardcoded, and I want to make it dynamic. To do so, I added this line to my Site.Master file: <%= Html.Action("GetHeaderTitle", "Home")%> which just returns some HTML for the header title such as: <span style='font-size:15px;'>My Header Title</span> The issue is that <title> also had this same hard-coded value. I could obviously create another Html.Action to have it show the dynamic value in the title, but now I am going back to the server twice for essentially the same information (not the exact same HTML, as I don't want the span info, but the same logic on the server to get the data). Is there any way to have an Html.Action return multiple snippets of HTML that I can update in different places on my master page? A: I think you're looking at it wrong - if retrieving the title is a long operation then just cache the result & write different actions anyway. // controller public string GetTitle() { var title = (string)ControllerContext.HttpContext.Items["CachedTitle"]; if (string.IsNullOrEmpty(title)) { title = "some lengthy retrieval"; ControllerContext.HttpContext.Items["CachedTitle"] = title; } return title; } public ActionResult GetTitleForTitle() { return Content(GetTitle()); } public ActionResult GetHeaderTitle() { return Content("<span>" + GetTitle() + "</span>"); } Alternatively you can cache it directly on the view, which is kind of evil (the simpler the view the better): <% ViewBag.CachedTitle = Html.Action("GetHeaderTitle", "Home"); %> ... <%= ViewBag.CachedTitle %> ... <%= ViewBag.CachedTitle %>
{ "pile_set_name": "StackExchange" }
Q: Access struct from other class I have the following class with a struct: class JsonHelper { //for tests only struct arrivalScan { let TOID: String let UserId: String let GateId: String let TimeStamp: String } func sendArrival(scan: arrivalScan){ //do something } } In my view controller I am trying to create and initialise an arrivalScan: let scan = JsonHelper.arrivalScan.init(TOID:"D/0098/123456",UserId:"O0124553",GateId: "G/0098/536371",TimeStamp: "11/04/2018 11:51:00") and then pass this as an argument to the sendArrival function in JsonHelper: JsonHelper.sendArrival(scan) But I am getting the error 'JsonHelper.arrivalScan' is not convertible to 'JsonHelper' What am I doing wrong? A: There are a few issues: First, always name your classes and structs with an initial capital letter. arrivalScan should be ArrivalScan. This will help you differentiate between a class (or struct) and an instance. Second, the sendArrival function is an instance function, but you are trying to access it as if it were a class function. Create an instance of the JsonHelper class, then call the function on that instance. Third, variable names inside your struct should begin with a lowercase letter. Example: class JsonHelper { struct ArrivalScan { let toId: String let userId: String let gateId: String let timestamp: String } func sendArrival(scan: ArrivalScan) { //do something } } let helper = JsonHelper() let scan = JsonHelper.ArrivalScan(toId: "value", userId: "value", gateId: "value", timestamp: "value") helper.sendArrival(scan: scan)
{ "pile_set_name": "StackExchange" }
Q: I want to make a slider with interchanged instead of moving with JQuery In this slider, I want to make the images interchanged instead of moving, I mean like the first image fades out from its place and the second one fade in replacing the first one - at the place of the first image -, and the second one fades out from its place and the third one fades in replacing the second one - at the place of the second image - <div class="side-imgs"> <div class="eac-img" style="background-image: url(images/Egypt_header_sm.jpg)" > </div> <div class="eac-img" style="background-image: url(images/28273351-chilling-out-sitting-rim-cliff-in-the-mountain.jpg)"> </div> <div class="eac-img" style="background-image: url(images/180933-1-Blue_Lagoon,_Dahab,_Egypt.jpg)"> </div> <div class="eac-img" style="background-image: url(images/cairo2013-700x.jpg)"> </div> <div class="eac-img" style="background-image: url(images/cairo_giza_gizeh_egypt_pyramid_camels_camel_donkey-327500.jpg_d_str7yz.jpg)" ></div> <div class="eac-img" style="background-image: url(images/egypt-tourism-authority-launches-first-new-global-marketing-campaign-in-more-than-four-years-seeking-to-double-number-of-visitors-by-2020.jpg)" > </div> <div class="eac-img" style="background-image: url(images/Egypt_header_sm.jpg)" > </div> <div class="eac-img" style="background-image: url(images/luxorfuntours.png)" > </div> <div class="eac-img" style="background-image: url(images/unnamed.jpg)"> </div> <div class="eac-img" style="background-image: url(images/Egypt_header_sm.jpg)" > </div> <div class="eac-img" style="background-image: url(images/28273351-chilling-out-sitting-rim-cliff-in-the-mountain.jpg)" > </div> </div> I tried using this code but it's not working var i; var theimg = $('.side-imgs .eac-img'); for (i = 0; i < theimg.length; i++) { theimg.eq(i).delay(3000).fadeOut(1000).next().fadeIn(1000).delay(3000); } I also tried this one but it's not working well too (function autoSlider() { $('.side-imgs 
.eac-img').each(function () { $(this).delay(4000).fadeOut(100).next().fadeIn(200); }); autoSlider(); }()); I converted the divs into images to change the src <div class="side-imgs"> <div class="side-overlay"></div> <img class="eac-img " src="images/Egypt_header_sm.jpg" > <img class="eac-img " src="images/28273351-chilling-out-sitting-rim-cliff-in-the-mountain.jpg" > <img class="eac-img " src="images/180933-1-Blue_Lagoon,_Dahab,_Egypt.jpg" > <img class="eac-img " src="images/cairo2013-700x.jpg" > <img class="eac-img " src="images/cairo_giza_gizeh_egypt_pyramid_camels_camel_donkey-327500.jpg_d_str7yz.jpg" > <img class="eac-img " src="images/egypt-tourism-authority-launches-first-new-global-marketing-campaign-in-more-than-four-years-seeking-to-double-number-of-visitors-by-2020.jpg" > <img class="eac-img " src="images/Egypt_header_sm.jpg" > <img class="eac-img " src="images/28273351-chilling-out-sitting-rim-cliff-in-the-mountain.jpg" > <img class="eac-img " src="images/180933-1-Blue_Lagoon,_Dahab,_Egypt.jpg" > <img class="eac-img " src="images/cairo2013-700x.jpg" > <img class="eac-img " src="images/cairo_giza_gizeh_egypt_pyramid_camels_camel_donkey-327500.jpg_d_str7yz.jpg)" > <img class="eac-img" src="images/egypt-tourism-authority-launches-first-new-global-marketing-campaign-in-more-than-four-years-seeking-to-double-number-of-visitors-by-2020.jpg" > <img class="eac-img" src="images/luxorfuntours.png" > </div> and I used this JS code but still not working function slidingImages() { images = ['images/Egypt_header_sm.jpg', 'images/28273351-chilling-out-sitting-rim-cliff-in-the-mountain.jpg','images/180933-1-Blue_Lagoon,_Dahab,_Egypt.jpg','images/cairo2013-700x.jpg','images/cairo_giza_gizeh_egypt_pyramid_camels_camel_donkey-327500.jpg_d_str7yz.jpg','images/egypt-tourism-authority-launches-first-new-global-marketing-campaign-in-more-than-four-years-seeking-to-double-number-of-visitors-by-2020.jpg']; var random = images[Math.floor(Math.random()*images.length)]; 
document.querySelectorAll('.side-imgs .eac-img').src= random; setTimeout(slidingImages, 2000); } slidingImages(); A: For starters, your parent class is "show-img" while in the JavaScript code you showed the parent class is "side-imgs"; consider taking a look at your code before seeking help - you can learn more by doing that. This thread was covered before here. But I made an example of how you can achieve that; find it here: https://jsfiddle.net/edonrexhepi/t9dph1gm/ The above code will iterate over all nodes with the className of images--each, and for each it will change the index of the 'currentSlide' and 'prevSlide' variables, add the 'visible' className to the slide it is going to, and remove it from the one leaving. UPDATE: Based on the comment below, please check this fork: https://jsfiddle.net/edonrexhepi/9vmpjh67/ I've mixed jQuery and vanilla JS; please let me know if this is what you wanted to do, and I'll clean up the code a bit more. BTW, why don't you use something like Swiper or Flickity?
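A rough vanilla-JS version of the "fade in place" idea (a sketch only: the 4-second interval is an assumption, and display toggling stands in for real fades to keep it short):

```javascript
// Pure helper: which slide comes after `current` out of `total` slides?
function nextIndex(current, total) {
  return (current + 1) % total;
}

// Browser-only wiring: hide all but the first slide, then swap the
// visible slide in place on a timer.
function startSlider(selector, intervalMs) {
  var imgs = document.querySelectorAll(selector);
  var current = 0;
  for (var i = 1; i < imgs.length; i++) {
    imgs[i].style.display = "none"; // show only the first slide initially
  }
  setInterval(function () {
    var next = nextIndex(current, imgs.length);
    // jQuery equivalents: $(imgs[current]).fadeOut(); $(imgs[next]).fadeIn();
    imgs[current].style.display = "none";
    imgs[next].style.display = "";
    current = next;
  }, intervalMs || 4000);
}
```

Usage would be something like startSlider('.side-imgs .eac-img'), called once after the DOM is ready.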
{ "pile_set_name": "StackExchange" }
Q: (Node js) How to send notification to specific user? i have code for server server.js var socket = require( 'socket.io' ); var express = require('express'); var app = express(); var server = require('http').createServer(app); var io = socket.listen( server ); var port = process.env.PORT || 3000; var nik = {}; server.listen(port, function () { console.log('Server listening at port %d', port); }); io.on('connection', function (socket) { socket.on( 'new_count_message', function( data ) { io.sockets.emit( 'new_count_message', { new_count_message: data.new_count_message }); }); socket.on( 'update_count_message', function( data ) { io.sockets.emit( 'update_count_message', { update_count_message: data.update_count_message }); }); }); and this is how i use that $.ajax({ type: "POST", url: "(some_url)", data: $("id_form").serialize(), dataType: "json", beforeSend:function(){ alert('bla..bla..'); }, success: function (result) { if (result.status) { var socket = io.connect('http://' + window.location.hostname + ':3000'); socket.emit('new_count_message', { new_count_message: result.new_count_message }); } else if (result.status == false) { alert(error); return false; } }, error: function(xhr, Status, error) { alert(error); } }); that function is working perfectly, but it send to all. how to send notif to specific user? i have the ID user that i want to send the notif Thanks A: Well, With io.sockets.emit you emit a message to all sockets. Instead use io.sockets.in("roomname").emit("message"). As well if you have the socket ID where you want to send the message you can use io.sockets.connected["socketid"].emit("message"). If you are inside the io.on('connection') function and you want to send a message to the same socket you can simply use socket.emit. Another way is: When a new socket connects, add this socket to a specific room socket.join("UniqueUserId") or socket.join("UniqueUserSessionId") ... 
Then use the 1st option io.sockets.in("UniqueUserId").emit("message") or io.sockets.in("UniqueUserSessionId").emit("message") Examples: io.on('connection', function (socket) { //get the unique socket socketId on connection var socketId = socket.id; //you can add this socket id to a Database to use it later, etc... //use sessionStore like Redis or memStore to get a unique sessionId //as well you can extract a cookie with the UserId (you need to secure this to be sure that the user not modified the cookie) (you can use 2 cookies 1 for the userid other for the encrypted password and check if the cookies data is the same than in your users Database) etc etc etc. User Session is a lot better). Read about nodejs session store and socket session. Something like... var cookies = qs.parse(socket.handshake.headers.cookie, "; "); var user_id = cookies.user_id; //or some other cookie name; socket.join(user_id); socket.on( 'new_count_message', function( data ) { //all sockets io.sockets.emit( 'new_count_message', { new_count_message: data.new_count_message }); //same Socket socket.emit( 'new_count_message', { new_count_message: data.new_count_message }); //specific Socket by SocketId //io.sockets.connected["socketid"].emit( 'new_count_message', { io.sockets.connected[socketId].emit( 'new_count_message', { new_count_message: data.new_count_message }); //all sockets in a specific Room //io.sockets.in("roomname").emit( 'new_count_message', { io.sockets.in(user_id).emit( 'new_count_message', { new_count_message: data.new_count_message }); }); });
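The bookkeeping behind "emit to one specific user" can be reduced to a map from user id to socket id (an illustrative sketch, not the socket.io API itself; in real code you would populate it inside io.on('connection') and consult it before emitting):

```javascript
// Minimal registry of connected users (illustrative, not socket.io API).
function createRegistry() {
  var socketsByUser = {};
  return {
    connect: function (userId, socketId) { socketsByUser[userId] = socketId; },
    disconnect: function (userId) { delete socketsByUser[userId]; },
    // Returns the socket id to emit to, or null if the user is offline.
    socketFor: function (userId) { return socketsByUser[userId] || null; }
  };
}
```

On connection you would call registry.connect(user_id, socket.id), and later io.sockets.connected[registry.socketFor(user_id)].emit(...) — or simply use rooms as the answer shows.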
{ "pile_set_name": "StackExchange" }
Q: Is it safe to run a flask server in a development environment? I have a project that I have to present on a Zoom call for my AP Computer Science class. I have a flask site that I am running off of my laptop onto a port forward. When I run the server it says: WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Debug mode: off I only plan to run this for a couple of hours, and it doesn't need to be particularly efficient, but I don't want to open my computer up to attack. (I know it's very dangerous to run it in debug mode like this). The web app doesn't have any sensitive data to be stolen, but I wanted to make sure I wasn't opening my machine to remote code execution, or anything like that. A: My understanding is Flask's built-in server is not recommended for production because of stability rather than security. With a production WSGI server such as uWSGI or Gunicorn, you can utilize multiple threads/processes more effectively and serve multiple requests simultaneously. If you're a beginner it can be daunting to go from Flask's built-in server to uWSGI (Gunicorn is slightly easier, but still has a learning curve). If this is just for a few hours to demo something to your class -- I'd say just go for exposing Flask, with the following caveats. Ensure the port you're exposing to the internet is sufficiently random (don't use 80 or 8080), try something like 48982, or 13892, etc -- this reduces your attack probability immensely :) Don't run Flask as a root user; preferably create a scoped-down user that only has access to the files you wish to expose. Hope that helps.
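The "sufficiently random port" advice can be scripted instead of hand-picked (a small sketch; the 20000-60000 range and the set of ports to avoid are assumptions, not from the answer):

```python
import random

# Well-known/default ports are easy to guess; ports below 1024 need root.
AVOID = {80, 443, 3000, 5000, 8000, 8080, 8888}

def pick_demo_port(rng=random):
    """Pick a random high port for a short-lived demo server."""
    while True:
        port = rng.randint(20000, 60000)
        if port not in AVOID:
            return port

# How it would be used with Flask:
# app.run(host="0.0.0.0", port=pick_demo_port())
```

You would then tell your class which port was chosen, and forward only that port on your router.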
{ "pile_set_name": "StackExchange" }
Q: Update users in AD in specific OU with PowerShell Let's say I have the OU tree below: DataManagement └─Country ├─Germany │ └─Users │ ├─Laptops │ └─Computers ├─France │ └─Users │ ├─Laptops │ └─Computers etc. I would like to update a specific container in an OU, for example, users in the laptops group in France. How do I do that if I would like to import users from CSV? The code below checks and updates all OUs. Unfortunately, I have no idea how to select a specific container. Any suggestions? Import-Module ActiveDirectory $Userscsv = Import-Csv D:\areile\Desktop\adtest.csv foreach ($User in $Userscsv) { Set-ADUser $User.SamAccountName -Replace @{ Division = $User.Division; Office = $User.Office; City = $User.City } } A: Ah, it would have helped if you had shown us (part of) the content of the csv. However, I think this will work for you: Import-Module ActiveDirectory $UsersCsv = Import-Csv D:\areile\Desktop\adtest.csv $SearchBase = "<DISTINGUISHEDNAME-OF-THE-FRENCH-USERS-OU>" foreach ($usr in $UsersCsv) { $adUser = Get-ADUser -Filter "EmailAddress -eq '$($usr.Email)'" -SearchBase $SearchBase -Properties Division,Office,City,EmailAddress if ($null -ne $adUser) { Set-ADUser $adUser.SamAccountName -Replace @{Division = $usr.Division; Office = $usr.Office; City = $usr.City} } }
{ "pile_set_name": "StackExchange" }
Q: Trouble loading model after post-training quantisation I've trained up a model and converted it to a .tflite model. I have done post-training quantization with the following: import tensorflow as tf converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE] tflite_quant_model = converter.convert() But when I try to do inference using the model on a Raspberry Pi I get the following error: Traceback (most recent call last): File "tf_lite_test.py", line 8, in <module> interpreter = tf.lite.Interpreter(model_path="converted_from_h5_model_with_quants.tflite") File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/lite/python/interpreter.py", line 46, in __init__ model_path)) ValueError: Didn't find op for builtin opcode 'CONV_2D' version '2' Registration failed. When I convert the model to tflite without applying any post-training quantization I get no errors. This is the code I use to convert the model without applying post-training quantization: import tensorflow as tf converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir) tflite_quant_model = converter.convert() This is my model: model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(IMG_SHAPE, IMG_SHAPE, 3)), tf.keras.layers.MaxPooling2D(2, 2), tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Conv2D(128, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2), tf.keras.layers.Dropout(0.5), tf.keras.layers.Flatten(), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(3, activation='softmax') ]) How do I apply post-training quantization and load the model without getting this error? A: Maybe you need to rebuild your tflite runtime. It's probably too old to consume this model. See instructions here: https://www.tensorflow.org/lite/guide/build_rpi
{ "pile_set_name": "StackExchange" }
Q: PostgreSQL live and test database I'm working with PostgreSQL now for a few months. Now before going live we usually used the live database for almost everything (creating new columns in the live database tables, executing update and insert queries etc.). But now we want to go live and we have to do things differently before we do that. The best way is to have a test database and live database. Now I created a copy of the live database so we have a test database to run tests on. The problem is that the data is old after 24 hours, so we actually need to create a fresh copy every 24 hours, which is not really smart to do manually. So my question is, are there people over here who know a proper way to handle this issue? I think the most ideal way is: - copy a selection of tables from the live database to the test database (skip tables like users). - make it possible to add columns, rename them or even delete them and when we deploy a new version of the website, transfer those changes from the test database to the live database (net necassary but would be a good feature). A: If your database structure is changing, you do NOT want it automatic. You will blow away dev work and data. You want it manual. I once managed a team that had a situation similar: multi-TiB database, updated daily, and needing to do testing and development against that up-to-date data. Here was the way we solved it: In our database, we defined a function called TODAY(). In our live system, this was a wrapper for NOW(). In our test system, it called out to a one-column table whose only row was a date that we could set. This means that our test system was a time machine, that could pretend any date was the current one. This meant that every function or procedure we wrote had to be time-aware. Should I care about future-scheduled events? How far in the future? This made our functions extremely robust, and made it dead simple to test them against a huge variety of historical data. 
This helped catch a large number of bugs that we would never have thought would happen, but which we saw would indeed occur in our historical data. It's like functional programming for your database! We would still schedule database updates from a live backup, about every month or so. This had the benefit of more data AND testing our backup/restore procedure. Our DBA would run a "post-test-sync" script that would set permissions for developers, so we were damn sure that anything we ran on the test system would work on the live one as well. This is what helped us build our deployment database scripts.
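The TODAY() idea translates directly to application code too (a sketch, not the team's actual implementation): route every "current date" lookup through one function whose answer can be overridden in the test environment.

```python
import datetime

_override = None  # the test system sets this; production leaves it None

def set_today(d):
    """Point the time machine at an arbitrary date (None = live mode)."""
    global _override
    _override = d

def today():
    """TODAY(): the pretend current date in test, the real date in prod."""
    return _override if _override is not None else datetime.date.today()
```

Every date-aware function then calls today() instead of date.today(), mirroring the one-row override table described above.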
{ "pile_set_name": "StackExchange" }
Q: Select same table twice, in single query My table: id | views | date 1 | 100 | 2017-03-09 2 | 150 | 2017-03-10 3 | 300 | 2017-03-11 4 | 350 | 2017-03-12 I need to calculate the visit count difference between days, something like this: 2017-03-12-->Visitors:350 2017-03-11-->Visitors:300 Difference between days:50 2017-03-11-->Visitors:300 2017-03-10-->Visitors:150 Difference between days:150 2017-03-10-->Visitors:150 2017-03-09-->Visitors:100 Difference between days:50 and so on... I managed to get similar results, but not exactly what I wanted: $sql = "SELECT * FROM `table` ORDER BY `table`.`id` DESC"; $result = mysql_query($sql) or die(mysql_error()); while($row = mysql_fetch_array($result)) { $t = $row['views']; $dat = $row['date']; $sql1 = "SELECT * FROM `table` ORDER BY `table`.`id` DESC LIMIT 1, 99"; $result1 = mysql_query($sql1) or die(mysql_error()); while($row1 = mysql_fetch_array($result1)) { $y = $row1['views']; $dat1 = $row1['date']; $d = $t-$y; echo "{$dat}-->Visitors:{$t}"; echo "<br/>"; echo "{$dat1}-->Visitors:{$y}"; echo "<br/>"; echo "Difference between days:{$d}"; echo "<br/><br/><br/>"; } } So I guess I need to select the same table twice with one query. A: No need for SQL acrobatics here. You are displaying the lines in date order. Simply keep the count from the last line in a variable, subtract in PHP, and you have your difference. Remove your nested loops. You only need one loop. $last_views = null; while($row = mysql_fetch_array($result)) { $views = $row['views']; $dat = $row['date']; if( $last_views === null ) $delta_views = ""; else $delta_views = $last_views - $views; $last_views = $views; echo "{$dat}-->Visitors:{$views}"; echo "<br/>"; echo "Difference between days:{$delta_views}"; echo "<br/>"; }
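The answer's single-loop approach, restated in Python terms (a sketch; the row layout mirrors the table in the question, newest first as produced by ORDER BY id DESC):

```python
def daily_differences(rows):
    """rows: (date, views) pairs, newest first.
    Returns (date, views, diff_vs_previous_day) tuples; diff is None for
    the first row, which has nothing to compare against."""
    out = []
    last_views = None
    for date, views in rows:
        diff = None if last_views is None else last_views - views
        out.append((date, views, diff))
        last_views = views
    return out
```

Feeding it the question's data yields the 50 / 150 / 50 differences shown in the expected output.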
{ "pile_set_name": "StackExchange" }
Q: How to read huge text from the Clipboard in WinForms? A WinForms application monitors the clipboard (Clipboard) and watches for data in the HTML Format. This data can be retrieved as a string using the code: string html = System.Windows.Forms.Clipboard.GetText(System.Windows.Forms.TextDataFormat.Html) The data of interest carries a marker attribute which identifies it as data that should be processed. Everything works fine until the user copies a large piece of HTML. For example, this can be reproduced by filling a 50000x10 table in Excel and copying it. In that case System.Windows.Forms.Clipboard.GetText takes ~20 seconds, which is very noticeable to the user. In fact, to identify our data there is no need to load the whole HTML text; it is enough to find the attribute in the first element. Is it possible to read the text from the Clipboard partially? For example, to get a Stream for the HTML Format and read only the required number of bytes? A: The problem with this scenario is that the Clipboard supports delayed loading of data, see Delayed Rendering. That is, receiving a notification that the Clipboard's state has changed does not yet mean that the data has been fully loaded into the Clipboard. The data is loaded there when we request it, for example via System.Windows.Forms.Clipboard.GetText. So in the scenario described above, when we call the System.Windows.Forms.Clipboard.GetText function, the Excel process starts generating the data for the Clipboard and we have to wait until it finishes doing so. Unfortunately, there is no way to influence this.
{ "pile_set_name": "StackExchange" }
Q: How to add % symbol to my textbox value I want to add a percentage symbol to a text box value, and it should not be added to the ng-model. It should be for view-only purposes, and even a soles symbol. Is it possible with AngularJS or CSS? Please help me out. This is my example: $scope.percent=response.percent.toFixed(1); Thanks in advance. A: Well, I understood your problem. You need the '%' symbol always, right? If that is the case then use the following code. https://jsfiddle.net/scottux/sxh22hfz/ .filter('percent', function () { return function (input) { if (!input || isNaN(input)) { return; } else { return input + '%'; } }; });
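The filter body, pulled out as a plain function (a sketch; the AngularJS .filter('percent', ...) registration above wraps exactly this kind of function):

```javascript
// Append '%' for display only; the underlying model value stays untouched
// because filters only transform what the view renders.
function percent(input) {
  if (!input || isNaN(input)) {
    return undefined; // mirrors the bare `return;` for empty/non-numeric input
  }
  return input + "%";
}
```

In a template this would be used as {{response.percent | percent}}, leaving $scope.percent itself a plain number.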
{ "pile_set_name": "StackExchange" }
Q: difference between app access token and user access token What's the difference between app access token and user access token? I noticed that userData from FB is different, but I can make OpenGraph action and post on user wall or send to friends wall. Can I use app access token in OpenGraph action? A: App access_token allows you to make request to the Facebook API on behalf of an App rather than a User. This is useful, for example to: modify the parameters of your App create and manage test users read your application's insights publish content to Facebook on behalf of a user who has granted a publishing permission to your application Now in the Open Graph area: If your app publishes on behalf of its users and requires an access token with no expiration time, you should use an App Access Token. An App Access Token is signed using your app secret and will not expire; it will be invalidated if you re-key/reset your application secret. App Access Tokens should only be used when the posting functions are originated directly from your servers in order to keep them private to the app. ... App Access Tokens are especially useful when publishing instances of “secure Open Graph actions”, Open Graph actions that should only be published by your app, such as achievements and game scores. In this specific example, a user is prevented from gaming his/her score by publishing fake scores/achievements using a user access token. Please read the following documents: Login as an App Using App Access Tokens (Open Graph)
{ "pile_set_name": "StackExchange" }
Q: Not able to perform edit operation using angular 6 and spring boot Actually i want to perform the Edit operation in the form.I am passing the Id to the Spring boot Api i have made using angular 6 in forntend but I am getting the error as: The main code to call update method: { this.selectService.updatenewSelection(this.selection.selectionId,0).subscribe((selection)=>{ console.log(selection); this.router.navigate(['/add-selection']); },(error)=>{ console.log(error); }); } Now the update method in selection.service.ts is updatenewSelection(id: number, value: any): Observable<Object> { return this.http.put(`${this.baseUrl}/${id}`, value); } The api i have made to update in spring boot is:I have tried both method but it is not still working. @PutMapping("/selections/{id}") public ResponseEntity<Selection> updateSelection(@PathVariable("id") long id, @RequestBody Selection selection) { System.out.println("Update Selection with ID = " + id + "..."); Optional<Selection> selectionData = repository.findById(id); if (selectionData.isPresent()) { Selection _selection = selectionData.get(); _selection.setSelectionDate(selection.getSelectionDate()); _selection.setSelectedBy(selection.getSelectedBy()); return new ResponseEntity<>(repository.save(_selection), HttpStatus.OK); } else { return new ResponseEntity<>(HttpStatus.NOT_FOUND); } } @PutMapping("/selections/update") public Selection updatenewSelection(@RequestBody Selection selection) { return repository.save(selection); } I get error when clicked the save button is where the "1" is the Id it is passing: PUT http://localhost:8080/api/selections/1 400 HttpErrorResponse {headers: HttpHeaders, status: 400, statusText: "OK", url: "http://localhost:8080/api/selections/1", ok: false, …} error: {timestamp: "2018-10-09T07:29:16.628+0000", status: 400, error: "Bad Request", message: "JSON parse error: Cannot construct instance of `co…ource: (PushbackInputStream); line: 1, column: 1]", path: "/api/selections/1"} headers: 
HttpHeaders {normalizedNames: Map(0), lazyUpdate: null, lazyInit: ƒ} message: "Http failure response for http://localhost:8080/api/selections/1: 400 OK" name: "HttpErrorResponse" ok: false status: 400 statusText: "OK" url: "http://localhost:8080/api/selections/1" __proto__: HttpResponseBase A: You are sending 0 as value, but the service requires Selection object!
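A sketch of the client-side fix (SelectionDto is a hypothetical mirror of the Spring Selection entity, and the HTTP plumbing is reduced to building a plain request object so the point — send the whole object, not 0 — stands on its own):

```typescript
interface SelectionDto {
  selectionId: number;
  selectionDate: string;
  selectedBy: string;
}

// What the Angular service should hand to http.put: the URL plus the
// full object as the body - not a bare 0, which Spring's Jackson
// cannot deserialize into a Selection (hence the 400 / JSON parse error).
function buildUpdateRequest(baseUrl: string, selection: SelectionDto) {
  return {
    url: `${baseUrl}/${selection.selectionId}`,
    body: selection,
  };
}
```

In the Angular service this becomes return this.http.put(req.url, req.body);.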
{ "pile_set_name": "StackExchange" }
Q: Returning Multiple Record sets using refcursor -- FUNCTION: public.asyncmultiplerecs() -- DROP FUNCTION public.asyncmultiplerecs(); CREATE OR REPLACE FUNCTION public.asyncmultiplerecs() RETURNS SETOF refcursor LANGUAGE 'plpgsql' COST 100.0 AS $function$ DECLARE ref1 refcursor; -- Declare cursor variables ref2 refcursor; ref3 refcursor; ref4 refcursor; BEGIN OPEN ref1 FOR SELECT bk_channel_id,promotion_id FROM cs_promotion_offer_exclusions; RETURN NEXT ref1; OPEN ref2 FOR SELECT mastergroup,promo_grp_id FROM cs_promotion_group_master; RETURN NEXT ref2; OPEN ref3 FOR SELECT promotion_usoc,promotion_duration FROM cs_promotion_target_details; RETURN NEXT ref3; OPEN ref4 FOR SELECT promotion_id,offer_id FROM cs_promotion_details; RETURN NEXT ref4; END; $function$; Above is my function; I want to fetch all the record sets returned by it. A: You get all the cursors with SELECT * FROM asyncmultiplerecs(); Then you use FETCH to fetch results from the cursors. You forgot to assign names to the cursors, so they will be unnamed. Here is a complete example of how this could be done: CREATE FUNCTION asyncmultiplerecs() RETURNS SETOF refcursor LANGUAGE plpgsql AS $$DECLARE ref1 refcursor; BEGIN ref1 := 'c1'; OPEN ref1 FOR VALUES (1), (2); RETURN NEXT ref1; ref1 := 'c2'; OPEN ref1 FOR VALUES (3), (4); RETURN NEXT ref1; END;$$;
{ "pile_set_name": "StackExchange" }
Q: I need to get information from JSON in PHP Here is the result of: var_dump($response): "is_claimed": false, "rating": 4.5, "mobile_url": "http://m.yelp.com/biz/filbert-steps-san-francisco?utm_campaign=yelp_api\u0026utm_medium=api_v2_business\u0026utm_source=NUQkLT4j4VnC6ZR7LI-VWA", "rating_img_url": "https://s3-media2.fl.yelpcdn.com/assets/2/www/img/99493c12711e/ico/stars/v1/stars_4_half.png", "review_count": 208 I want to get the rating value, I tried $response->rating but I got nothing. A: You need to make this json first by using {} at two sides of the string. After decoding (json_decode) you will got an Array of Objects. $json = '{"is_claimed": false, "rating": 4.5, "mobile_url": "http://m.yelp.com/biz/filbert-steps-san-francisco?utm_campaign=yelp_api\\u0026utm_medium=api_v2_business\\u0026utm_source=NUQkLT4j4VnC6ZR7LI-VWA", "rating_img_url": "https://s3-media2.fl.yelpcdn.com/assets/2/www/img/99493c12711e/ico/stars/v1/stars_4_half.png", "review_count": 208}'; $result = json_decode ($json); echo $result->rating; // 4.5 Online Check, and let me know is it works for you or not.
{ "pile_set_name": "StackExchange" }