There are no id attributes in your markup. All you are dealing with is innerText:
$('td').click(function() {
var myItem = $(this).text();
alert('u clicked ' + myItem);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<table>
<tr>
<td>Computer</td>
</tr>
<tr>
<td>maths</td>
</tr>
<tr>
<td>physics</td>
</tr>
</table>
A: You have invalid markup: an extra closing div is added after the last tr closing tag, and you need to remove it.
What you need to alert is the text of the element, not the parent id. Use .text() within the clicked td element's jQuery context:
$('td').click(function(){
var myItem = $(this).text();
alert('u clicked ' +myItem);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
<table>
<tr><td>Computer</td></tr>
<tr><td>maths</td></tr>
<tr><td>physics</td></tr>
</table>
A: try this:
$('td').click(function () {
alert('u clicked ' + $(this).text());
});
A: As I have already commented, you are missing the id on the tr. I have added a few more tds for the demo.
$('td').click(function() {
var myId = $(this).parent().attr("id");
var myItem = $(this).text();
alert('u clicked ' + myId);
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<table>
<tr id="IT">
<td>Computer</td>
</tr>
<tr id="General">
<td>maths</td>
<td>History</td>
</tr>
<tr id="Science">
<td>physics</td>
<td>Chemistry</td>
<td>Biology</td>
</tr>
</table>
Your code is building an invalid URL:
http://api.website.com/apik=123456&q=some+search&l=San+Jose%2c+CA&sort=1&radius=100
Note the /apik=123456 portion: the k parameter has been concatenated straight onto the path, with no ? separating the path from the query string.
var apiKey = "123456";
var Query = "some search";
var Location = "San Jose, CA";
var Sort = "1";
var SearchRadius = "100";
// Build a List of the querystring parameters (this could optionally also have a .ToLookup(qs => qs.key, qs => qs.value) call)
var querystringParams = new [] {
new { key = "k", value = apiKey },
new { key = "q", value = Query },
new { key = "l", value = Location },
new { key="sort", value = Sort },
new { key = "radius", value = SearchRadius }
};
// format each querystring parameter, and ensure its value is encoded
var encodedQueryStringParams = querystringParams.Select (p => string.Format("{0}={1}", p.key, HttpUtility.UrlEncode(p.value)));
// Construct a strongly-typed Uri, with the querystring parameters appended
var url = new UriBuilder("http://api.website.com/api");
url.Query = string.Join("&", encodedQueryStringParams);
This approach will build a valid, strongly-typed Uri instance with UrlEncoded querystring parameters. It can easily be rolled into a helper method if you need to use it in more than one location.
A: Use the UriBuilder class. It will ensure the resulting URI is correctly formatted.
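For comparison, the same encode-each-value-then-join idea can be sketched in plain JavaScript (the endpoint and parameter values below are the hypothetical ones from the question, not a real API):

```javascript
// Build the query string by URL-encoding each value and joining with '&',
// then append it to the path after a '?' (the separator missing in the broken URL).
const params = [
  ['k', '123456'],
  ['q', 'some search'],
  ['l', 'San Jose, CA'],
  ['sort', '1'],
  ['radius', '100'],
];

const query = params
  .map(([key, value]) => `${key}=${encodeURIComponent(value)}`)
  .join('&');

const url = `http://api.website.com/api?${query}`;
console.log(url);
// http://api.website.com/api?k=123456&q=some%20search&l=San%20Jose%2C%20CA&sort=1&radius=100
```

As in the C# version, encoding each value individually keeps reserved characters (the space and comma in the location) from corrupting the URL.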
First of all, you'll need to identify which neighborhood you are clicking. Adding a rel attribute (like in the options) might work:
...
<li class="evillage">
<a href="#" rel="asm0option0">East Village/LES</a>
</li>
...
Then, add this script:
$(".search-map a").click(function(event) {
event.preventDefault();
var hood = $(this).attr('rel');
var $option = $('option[rel=' + hood + ']');
$option.change();
return false;
});
Based on this asmselect example. You just need to get the corresponding option depending on the clicked link, and then call the change(); trigger on it.
EDITED:
The above code failed big time :P Here's the new version (With a working fiddle):
$(".search-map a").click(function() {
var $city = $(this).html();
var $rel = $(this).attr('rel');
var disabled = $('#asmSelect0 option[rel=' + $rel + ']:first').attr('disabled');
if (typeof disabled !== 'undefined' && disabled !== false) {
return false;
}
var $option = $('<option></option>').text($city).attr({'selected': true, 'rel': $rel});
$('#asmSelect0').append($option).change();
$('#asmSelect0 option[rel=' + $rel + ']:first').remove();
return false;
});
I had to do something I don't really like, but I couldn't find any other way of doing this without conflicting with the plugin. Anyway, it's not that bad. Basically I get the name of the neighborhood and the rel to identify the option, generate a new <option>, change(); it, and delete the old option tag. The plugin didn't like having the tag changed in place without creating a new one. I had to add the if (...) check because when a neighborhood was added (disabling it on the select), clicking the same neighborhood again re-enabled it on the select (and we don't want that, since it's already selected). With that condition, if we click a link whose neighborhood is disabled, we do nothing and the option remains the same.
Sorry if my explanation sucks; it took me a while to get the problem, I had to build everything up again, and my English is lacking xD Hope it works!
The $scope.id attribute is probably not getting set.
Try setting the id in the scope in your showProduct function or whichever function initializes the $scope variables for the item.
$scope.showProduct = function(product){
$scope.id = product.id;
..............
I think the problem is on this line:
OleDbDataAdapter dataadapter = new OleDbDataAdapter(sql, connectionString);
You add your parameters to your command, but you are still passing the raw sql string (which expects parameters and their values) to the OleDbDataAdapter constructor.
Use your command (with its Connection set) instead of your sql query; OleDbDataAdapter has no (OleDbCommand, string) constructor:
OleDbDataAdapter dataadapter = new OleDbDataAdapter(command);
And use a using statement to dispose your OleDbConnection, OleDbCommand and OleDbDataAdapter automatically.
As for your second problem, from the documentation of OleDbCommand.Parameters:
The OLE DB .NET Provider does not support named parameters for passing parameters to an SQL statement or a stored procedure called by an OleDbCommand when CommandType is set to Text.
Actually, it does support named parameters, but the names are ignored; only the parameter order matters. Since you add the @ID parameter to your command first, it will be bound to the first parameter placeholder in your command text, which is @grade. As you can see, this will cause a problem.
Change your parameter order as well:
command.Parameters.Add("@grade", OleDbType.VarChar).Value = grade;
command.Parameters.Add("@ID", OleDbType.VarChar).Value = ID;
In an IAuthorizationPolicy.Evaluate() OperationContext.Current.InstanceContext is not always null.
I started with Carlos Figueira's WCF Extensibility test program, which prints a message to the command line each time a WCF extension is called, and added a custom IAuthorizationPolicy. The output and program are below.
The output shows that:
*IAuthorizationPolicy.Evaluate() is called after the service instance is created
*OperationContext.Current.InstanceContext is not null in IAuthorizationPolicy.Evaluate()
The test program is self hosted using BasicHttpBinding. It's quite possible that in other hosting environments IAuthorizationPolicy.Evaluate() does not have access to the service instance, but it is at least possible in this one.
Why would you do this?
From an architecture point of view, an IAuthorizationPolicy should be dealing with claims. The application should be consuming the claims on the ClaimsPrincipal. Having an IAuthorizationPolicy tightly coupled to a particular service breaks the very intentional separation of concerns in the Windows Identity architecture. In other words, I think this is a bad idea.
Test Program
This is a Windows command-line program, all in one file.
using System;
using System.Globalization;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;
using System.Reflection;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Text;
using System.Threading;
using System.Collections.Generic;
namespace WcfRuntime
{
class MyDispatchMessageInspector : IDispatchMessageInspector
{
public MyDispatchMessageInspector()
{
}
public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return null;
}
public void BeforeSendReply(ref Message reply, object correlationState)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
}
class MyDispatchMessageFormatter : IDispatchMessageFormatter
{
IDispatchMessageFormatter inner;
public MyDispatchMessageFormatter(IDispatchMessageFormatter inner)
{
this.inner = inner;
}
public void DeserializeRequest(Message message, object[] parameters)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
this.inner.DeserializeRequest(message, parameters);
}
public Message SerializeReply(MessageVersion messageVersion, object[] parameters, object result)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.SerializeReply(messageVersion, parameters, result);
}
}
class MyClientMessageInspector : IClientMessageInspector
{
public MyClientMessageInspector()
{
}
public void AfterReceiveReply(ref Message reply, object correlationState)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
public object BeforeSendRequest(ref Message request, IClientChannel channel)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return null;
}
}
class MyClientMessageFormatter : IClientMessageFormatter
{
IClientMessageFormatter inner;
public MyClientMessageFormatter(IClientMessageFormatter inner)
{
this.inner = inner;
}
public object DeserializeReply(Message message, object[] parameters)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.DeserializeReply(message, parameters);
}
public Message SerializeRequest(MessageVersion messageVersion, object[] parameters)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.SerializeRequest(messageVersion, parameters);
}
}
class MyDispatchOperationSelector : IDispatchOperationSelector
{
public string SelectOperation(ref Message message)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
string action = message.Headers.Action;
string method = action.Substring(action.LastIndexOf('/') + 1);
return method;
}
}
class MyParameterInspector : IParameterInspector
{
ConsoleColor consoleColor;
bool isServer;
public MyParameterInspector(bool isServer)
{
this.isServer = isServer;
this.consoleColor = isServer ? ConsoleColor.Cyan : ConsoleColor.Yellow;
}
public void AfterCall(string operationName, object[] outputs, object returnValue, object correlationState)
{
ColorConsole.WriteLine(this.consoleColor, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
public object BeforeCall(string operationName, object[] inputs)
{
ColorConsole.WriteLine(this.consoleColor, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return null;
}
}
class MyCallContextInitializer : ICallContextInitializer
{
public void AfterInvoke(object correlationState)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
public object BeforeInvoke(InstanceContext instanceContext, IClientChannel channel, Message message)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return null;
}
}
class MyOperationInvoker : IOperationInvoker
{
IOperationInvoker inner;
public MyOperationInvoker(IOperationInvoker inner)
{
this.inner = inner;
}
public object[] AllocateInputs()
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.AllocateInputs();
}
public object Invoke(object instance, object[] inputs, out object[] outputs)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.Invoke(instance, inputs, out outputs);
}
public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.InvokeBegin(instance, inputs, callback, state);
}
public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.InvokeEnd(instance, out outputs, result);
}
public bool IsSynchronous
{
get
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.IsSynchronous;
}
}
}
class MyInstanceProvider : IInstanceProvider
{
Type serviceType;
public MyInstanceProvider(Type serviceType)
{
this.serviceType = serviceType;
}
public object GetInstance(InstanceContext instanceContext, Message message)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return Activator.CreateInstance(this.serviceType);
}
public object GetInstance(InstanceContext instanceContext)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return Activator.CreateInstance(this.serviceType);
}
public void ReleaseInstance(InstanceContext instanceContext, object instance)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
}
class MyInstanceContextProvider : IInstanceContextProvider
{
IInstanceContextProvider inner;
public MyInstanceContextProvider(IInstanceContextProvider inner)
{
this.inner = inner;
}
public InstanceContext GetExistingInstanceContext(Message message, IContextChannel channel)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.GetExistingInstanceContext(message, channel);
}
public void InitializeInstanceContext(InstanceContext instanceContext, Message message, IContextChannel channel)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
this.inner.InitializeInstanceContext(instanceContext, message, channel);
}
public bool IsIdle(InstanceContext instanceContext)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return this.inner.IsIdle(instanceContext);
}
public void NotifyIdle(InstanceContextIdleCallback callback, InstanceContext instanceContext)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
this.inner.NotifyIdle(callback, instanceContext);
}
}
class MyInstanceContextInitializer : IInstanceContextInitializer
{
public void Initialize(InstanceContext instanceContext, Message message)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
}
class MyChannelInitializer : IChannelInitializer
{
ConsoleColor consoleColor;
public MyChannelInitializer(bool isServer)
{
this.consoleColor = isServer ? ConsoleColor.Cyan : ConsoleColor.Yellow;
}
public void Initialize(IClientChannel channel)
{
ColorConsole.WriteLine(this.consoleColor, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
}
class MyClientOperationSelector : IClientOperationSelector
{
public bool AreParametersRequiredForSelection
{
get
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return false;
}
}
public string SelectOperation(MethodBase method, object[] parameters)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return method.Name;
}
}
class MyInteractiveChannelInitializer : IInteractiveChannelInitializer
{
public IAsyncResult BeginDisplayInitializationUI(IClientChannel channel, AsyncCallback callback, object state)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
Action act = new Action(this.DoNothing);
return act.BeginInvoke(callback, state);
}
public void EndDisplayInitializationUI(IAsyncResult result)
{
ColorConsole.WriteLine(ConsoleColor.Yellow, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
}
private void DoNothing() { }
}
class MyErrorHandler : IErrorHandler
{
public bool HandleError(Exception error)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
return error is ArgumentException;
}
public void ProvideFault(Exception error, MessageVersion version, ref Message fault)
{
ColorConsole.WriteLine(ConsoleColor.Cyan, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
MessageFault messageFault = MessageFault.CreateFault(new FaultCode("FaultCode"), new FaultReason(error.Message));
fault = Message.CreateMessage(version, messageFault, "FaultAction");
}
}
public class MyAuthorizationPolicy : IAuthorizationPolicy
{
string id;
public MyAuthorizationPolicy()
{
id = Guid.NewGuid().ToString();
}
public bool Evaluate(EvaluationContext evaluationContext, ref object state)
{
ColorConsole.WriteLine(ConsoleColor.Green, "{0}.{1}", this.GetType().Name, ReflectionUtil.GetMethodSignature(MethodBase.GetCurrentMethod()));
if (OperationContext.Current.InstanceContext != null)
{
var instance = (Service)OperationContext.Current.InstanceContext.GetServiceInstance();
ColorConsole.WriteLine(ConsoleColor.Green, "Got the service instance. Name={0}", instance.Name);
}
else
{
ColorConsole.WriteLine(ConsoleColor.Green, "OperationContext.Current.InstanceContext is null");
}
// Return true, indicating that this method does not need to be called again.
return true;
}
public ClaimSet Issuer
{
get { return ClaimSet.System; }
}
public string Id
{
get { return id; }
}
}
[ServiceContract]
public interface ITest
{
[OperationContract]
int Add(int x, int y);
[OperationContract(IsOneWay = true)]
void Process(string text);
}
[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class Service : ITest
{
public Service() { ColorConsole.WriteLine(ConsoleColor.Green, "Created Service Instance"); }
public string Name { get { return "MyService instance 1234"; } }
public int Add(int x, int y)
{
ColorConsole.WriteLine(ConsoleColor.Green, "In service operation '{0}'", MethodBase.GetCurrentMethod().Name);
if (x == 0 && y == 0)
{
throw new ArgumentException("This will cause IErrorHandler to be called");
}
else
{
return x + y;
}
}
public void Process(string text)
{
ColorConsole.WriteLine(ConsoleColor.Green, "In service operation '{0}'", MethodBase.GetCurrentMethod().Name);
}
}
class MyBehavior : IOperationBehavior, IContractBehavior
{
public void AddBindingParameters(OperationDescription operationDescription, BindingParameterCollection bindingParameters)
{
}
public void ApplyClientBehavior(OperationDescription operationDescription, ClientOperation clientOperation)
{
clientOperation.Formatter = new MyClientMessageFormatter(clientOperation.Formatter);
clientOperation.ParameterInspectors.Add(new MyParameterInspector(false));
}
public void ApplyDispatchBehavior(OperationDescription operationDescription, DispatchOperation dispatchOperation)
{
dispatchOperation.CallContextInitializers.Add(new MyCallContextInitializer());
dispatchOperation.Formatter = new MyDispatchMessageFormatter(dispatchOperation.Formatter);
dispatchOperation.Invoker = new MyOperationInvoker(dispatchOperation.Invoker);
dispatchOperation.ParameterInspectors.Add(new MyParameterInspector(true));
}
public void Validate(OperationDescription operationDescription)
{
}
public void AddBindingParameters(ContractDescription contractDescription, ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
{
}
public void ApplyClientBehavior(ContractDescription contractDescription, ServiceEndpoint endpoint, ClientRuntime clientRuntime)
{
clientRuntime.ChannelInitializers.Add(new MyChannelInitializer(false));
clientRuntime.InteractiveChannelInitializers.Add(new MyInteractiveChannelInitializer());
clientRuntime.MessageInspectors.Add(new MyClientMessageInspector());
clientRuntime.OperationSelector = new MyClientOperationSelector();
}
public void ApplyDispatchBehavior(ContractDescription contractDescription, ServiceEndpoint endpoint, DispatchRuntime dispatchRuntime)
{
dispatchRuntime.ChannelDispatcher.ChannelInitializers.Add(new MyChannelInitializer(true));
dispatchRuntime.ChannelDispatcher.ErrorHandlers.Add(new MyErrorHandler());
dispatchRuntime.InstanceContextInitializers.Add(new MyInstanceContextInitializer());
dispatchRuntime.InstanceContextProvider = new MyInstanceContextProvider(dispatchRuntime.InstanceContextProvider);
dispatchRuntime.InstanceProvider = new MyInstanceProvider(dispatchRuntime.ChannelDispatcher.Host.Description.ServiceType);
dispatchRuntime.MessageInspectors.Add(new MyDispatchMessageInspector());
dispatchRuntime.OperationSelector = new MyDispatchOperationSelector();
}
public void Validate(ContractDescription contractDescription, ServiceEndpoint endpoint)
{
}
}
static class ColorConsole
{
static object syncRoot = new object();
public static void WriteLine(ConsoleColor color, string text, params object[] args)
{
if (args != null && args.Length > 0)
{
text = string.Format(CultureInfo.InvariantCulture, text, args);
}
lock (syncRoot)
{
Console.ForegroundColor = color;
Console.WriteLine("[{0}] {1}", DateTime.Now.ToString("HH:mm:ss.fff", CultureInfo.InvariantCulture), text);
Console.ResetColor();
}
Thread.Sleep(50);
}
public static void WriteLine(string text, params object[] args)
{
Console.WriteLine(text, args);
}
public static void WriteLine(object obj)
{
Console.WriteLine(obj);
}
}
static class ReflectionUtil
{
public static string GetMethodSignature(MethodBase method)
{
StringBuilder sb = new StringBuilder();
sb.Append(method.Name);
sb.Append("(");
ParameterInfo[] parameters = method.GetParameters();
for (int i = 0; i < parameters.Length; i++)
{
if (i > 0) sb.Append(", ");
sb.Append(parameters[i].ParameterType.Name);
}
sb.Append(")");
return sb.ToString();
}
}
class Program
{
static void Main(string[] args)
{
string baseAddress = "http://" + Environment.MachineName + ":8000/Service";
using (ServiceHost host = new ServiceHost(typeof(Service), new Uri(baseAddress)))
{
ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(ITest), new BasicHttpBinding(), "");
endpoint.Contract.Behaviors.Add(new MyBehavior());
foreach (OperationDescription operation in endpoint.Contract.Operations)
{
operation.Behaviors.Add(new MyBehavior());
}
var polices = new List<IAuthorizationPolicy>();
polices.Add(new MyAuthorizationPolicy());
host.Authorization.ExternalAuthorizationPolicies = polices.AsReadOnly();
host.Open();
ColorConsole.WriteLine("Host opened");
using (ChannelFactory<ITest> factory = new ChannelFactory<ITest>(new BasicHttpBinding(), new EndpointAddress(baseAddress)))
{
factory.Endpoint.Contract.Behaviors.Add(new MyBehavior());
foreach (OperationDescription operation in factory.Endpoint.Contract.Operations)
{
operation.Behaviors.Add(new MyBehavior());
}
ITest proxy = factory.CreateChannel();
ColorConsole.WriteLine("Calling operation");
ColorConsole.WriteLine(proxy.Add(3, 4));
//ColorConsole.WriteLine("Called operation, calling it again, this time it the service will throw");
//try
//{
// ColorConsole.WriteLine(proxy.Add(0, 0));
//}
//catch (Exception e)
//{
// ColorConsole.WriteLine(ConsoleColor.Red, "{0}: {1}", e.GetType().Name, e.Message);
//}
//ColorConsole.WriteLine("Now calling an OneWay operation");
//proxy.Process("hello");
((IClientChannel)proxy).Close();
}
}
ColorConsole.WriteLine("Done");
}
}
}
Program Output
Look for the ---->.
[12:33:04.218] MyOperationInvoker.get_IsSynchronous()
[12:33:04.269] MyOperationInvoker.get_IsSynchronous()
Host opened
[12:33:04.322] MyChannelInitializer.Initialize(IClientChannel)
Calling operation
[12:33:04.373] MyClientOperationSelector.get_AreParametersRequiredForSelection()
[12:33:04.424] MyClientOperationSelector.SelectOperation(MethodBase, Object[])
[12:33:04.474] MyParameterInspector.BeforeCall(String, Object[])
[12:33:04.524] MyClientMessageFormatter.SerializeRequest(MessageVersion, Object[])
[12:33:04.574] MyClientMessageInspector.BeforeSendRequest(Message&, IClientChannel)
[12:33:04.632] MyInteractiveChannelInitializer.BeginDisplayInitializationUI(IClientChannel, AsyncCallback, Object)
[12:33:04.684] MyInteractiveChannelInitializer.EndDisplayInitializationUI(IAsyncResult)
[12:33:04.788] MyChannelInitializer.Initialize(IClientChannel)
[12:33:04.838] MyInstanceContextProvider.GetExistingInstanceContext(Message, IContextChannel)
[12:33:04.888] MyDispatchOperationSelector.SelectOperation(Message&)
[12:33:04.939] MyInstanceContextProvider.InitializeInstanceContext(InstanceContext, Message, IContextChannel)
[12:33:04.990] MyInstanceContextInitializer.Initialize(InstanceContext, Message)
[12:33:05.040] MyDispatchMessageInspector.AfterReceiveRequest(Message&, IClientChannel, InstanceContext)
[12:33:05.091] MyInstanceProvider.GetInstance(InstanceContext, Message)
[12:33:05.142] Created Service Instance
[12:33:05.192] MyCallContextInitializer.BeforeInvoke(InstanceContext, IClientChannel, Message)
[12:33:05.242] MyOperationInvoker.AllocateInputs()
[12:33:05.293] MyDispatchMessageFormatter.DeserializeRequest(Message, Object[])
[12:33:05.344] MyParameterInspector.BeforeCall(String, Object[])
[12:33:05.394] MyAuthorizationPolicy.Evaluate(EvaluationContext, Object&)
---->[12:33:05.445] Got the service instance. Name=MyService instance 1234
[12:33:05.495] MyOperationInvoker.Invoke(Object, Object[], Object[]&)
[12:33:05.547] In service operation 'Add'
[12:33:05.597] MyParameterInspector.AfterCall(String, Object[], Object, Object)
[12:33:05.648] MyDispatchMessageFormatter.SerializeReply(MessageVersion, Object[], Object)
[12:33:05.698] MyCallContextInitializer.AfterInvoke(Object)
[12:33:05.748] MyDispatchMessageInspector.BeforeSendReply(Message&, Object)
[12:33:05.803] MyInstanceContextProvider.IsIdle(InstanceContext)
[12:33:05.804] MyClientMessageInspector.AfterReceiveReply(Message&, Object)
[12:33:05.854] MyInstanceProvider.ReleaseInstance(InstanceContext, Object)
[12:33:05.855] MyClientMessageFormatter.DeserializeReply(Message, Object[])
[12:33:05.905] MyParameterInspector.AfterCall(String, Object[], Object, Object)
A: IAuthorizationPolicy implements Evaluate, which in turn provides an evaluation context.
From Microsoft:
public bool Evaluate(EvaluationContext evaluationContext, ref object state)
{
bool bRet = false;
CustomAuthState customstate = null;
// If state is null, then this method has not been called before, so
// set up a custom state.
if (state == null)
{
customstate = new CustomAuthState();
state = customstate;
}
else
customstate = (CustomAuthState)state;
Console.WriteLine("Inside MyAuthorizationPolicy::Evaluate");
// If claims have not been added yet...
if (!customstate.ClaimsAdded)
{
// Create an empty list of Claims.
IList<Claim> claims = new List<Claim>();
// Iterate through each of the claim sets in the evaluation context.
foreach (ClaimSet cs in evaluationContext.ClaimSets)
// Look for Name claims in the current claim set.
foreach (Claim c in cs.FindClaims(ClaimTypes.Name, Rights.PossessProperty))
// Get the list of operations the given username is allowed to call.
foreach (string s in GetAllowedOpList(c.Resource.ToString()))
{
// Add claims to the list.
claims.Add(new Claim("http://example.org/claims/allowedoperation", s, Rights.PossessProperty));
Console.WriteLine("Claim added {0}", s);
}
// Add claims to the evaluation context.
evaluationContext.AddClaimSet(this, new DefaultClaimSet(this.Issuer,claims));
// Record that claims have been added.
customstate.ClaimsAdded = true;
// Return true, which indicates this need not be called again.
bRet = true;
}
else
{
// This point should not be reached, but just in case...
bRet = true;
}
return bRet;
}
See for example
http://msdn.microsoft.com/en-us/library/ms729794(v=vs.110).aspx
Getting the name of the current assembly using reflection
Assembly SampleAssembly;
// Instantiate a target object.
Int32 Integer1 = new Int32();
Type Type1;
// Set the Type instance to the target class type.
Type1 = Integer1.GetType();
// Instantiate an Assembly class to the assembly housing the Integer type.
SampleAssembly = Assembly.GetAssembly(Integer1.GetType());
// Display the name of the assembly currently executing
Console.WriteLine("GetExecutingAssembly=" + Assembly.GetExecutingAssembly().FullName);
When you spin an EC2 instance up, the root volume is ephemeral - that is, when the instance is terminated, the root volume is destroyed** (taking any data you put there with it). It doesn't matter how you partition that ephemeral volume and where you tuck your data on it - when it is destroyed, everything contained in that volume is lost.
So if the data in the volume is entirely transient and fully recoverable/retrievable from somewhere else the next time you need it, there's no problem; terminate the instance, then spin a new one up and re-acquire the data you need to carry on working.
However, if the data is NOT transient, and needs to be persisted so that work can carry on after an instance crash (and by crash, I mean something that terminates the instance or otherwise renders it inoperable and unrecoverable) then your data MUST NOT be on the root volume, but should be on another EBS volume which is attached to the instance. If and when that instance terminates or breaks irretrievably, your data is safe on that other volume - it can then be re-attached to a new instance for work to continue.
** the exception is where your instance is EBS-backed and you swapped root volumes - in this case, the root volume is left behind after the instance terminates because it wasn't part of the 'package' created by the AMI when you started it.
A: The other volume would be needed in case your server gets broken and you cannot start it. In such a case you would just remove the initial server, create a second one, and attach the additional storage to the new server. You cannot attach the root volume of one server to another.
Your code looks incomplete. You've just got placeholders in the methods getGroup, getGroupId and getGroupCount. They should reference your groupElements array.
The fact that getGroupCount currently returns zero would be enough for the ExpandableListView to not display anything.
A: You probably should set the return value of getGroupCount() to groupElements.length.
The 0 currently returned indicates that you don't have any groups, so there is nothing to show. | unknown | |
d8309 | train | problems :
*
*'%$q%': inside single quotes, $q is not interpolated
*->get(); is missing, so the query is never executed
public function showsearchpage($q)
{
$query = Product::where('product_name','LIKE','%'.$q.'%')->get();
return view('search',['searchbox'=>$query]);
}
A: You forgot get();
public function showsearchpage()
{
$query = Product::where('product_name','LIKE','%$q%')->get();
return view('search',['searchbox'=>$query]);
}
A:
I found out the answer: actually my "q" was not taking input properly. Thanks guys for the help.
public function showsearchpage(Request $request)
{
$q=$request->input('q');
$query = Product::where('product_name', 'like', "%$q%")->get();
return view('search',compact('query'));
}
} | unknown | |
d8310 | train | You must also add the following item to <Metadata />:
<Item Key="AccessTokenResponseFormat">json</Item>
See this blog post for more information.
A: You have to add this as well...
<Metadata>
<Item Key="AccessTokenResponseFormat">json</Item>
</Metadata>
<OutputClaims>
<OutputClaim ClaimTypeReferenceId="identityProviderAccessToken" PartnerClaimType="{oauth2:access_token}" />
</OutputClaims> | unknown | |
d8311 | train | "There is a one time $25 registration fee"
Extracted from here | unknown | |
d8312 | train | The KVM daemon is running as root. Otherwise it changes its uid; there is no way to change the owner. But you can change its permission to 665 or 664 so that you can access it, or change its ACL for more security | unknown | 
d8313 | train | Best bet might be using a service like Fontello where you can "create" your own custom icon font and upload the custom icons there in addition to selecting the icons you need from Font Awesome. | unknown | |
d8314 | train | Your MainActivity is not the Application class.
For activities use the @AndroidEntryPoint. See more on https://dagger.dev/hilt/android-entry-point
The annotation @HiltAndroidApp is for the Application class. See more on https://dagger.dev/hilt/application | unknown | |
d8315 | train | Technically it's not that the BashOperator doesn't work, it's just that you don't see the stdout of the Bash command in the Airflow logs. This is a known issue and a ticket has already been filed on Airflow's issue tracker: https://issues.apache.org/jira/browse/AIRFLOW-2674
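What the logs are missing is only the captured stdout; outside Airflow you can see the same command's output directly (a plain-subprocess sketch, not the Airflow API):

```python
import subprocess

# Run a bash command the way an operator would, capturing stdout explicitly
result = subprocess.run(["bash", "-c", "echo step done"],
                        capture_output=True, text=True)
print(result.stdout)  # "step done": the text that does not appear in the Airflow logs
```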
The proof of the fact that BashOperator does work is that if you run your sleep operator with
airflow test tutorial sleep 2018-01-01
you will have to wait 5 seconds before it terminates, which is the behaviour you'd expect from the Bash sleep command. | unknown | |
d8316 | train | I was having the same issue with CBSA and Place data from 2010 Census full geometry shapes. These are not the clipped carto files.
IBM850 Did not work correctly for me. On a whim, I tried latin1 and it worked perfectly.
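Whether IBM850 or Latin-1 is the right choice depends on which byte values actually occur in the file; a quick Python 3 comparison (the byte string is made up for illustration):

```python
raw = b"Caf\x82"             # hypothetical bytes from a .dat record

print(raw.decode("cp850"))   # IBM850 (Python codec "cp850"): 0x82 -> 'é', so "Café"
print(raw.decode("latin1"))  # Latin-1 maps 0x82 to the invisible control char U+0082
```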
A: The US Census cartographic boundary files use the IBM850 character encoding. Python code to properly encode these strings would be as follows:
unicode(featurestring.decode("IBM850"))  # Python 2; in Python 3, featurestring.decode("IBM850") already returns str | unknown | 
d8317 | train | Properly pass parameters:
import java.awt.*;   // Color, Graphics

public class Board {
    public static void main(String[] args){
        ...
        DrawingPanel panel = new DrawingPanel(300, 300);
        Graphics g = panel.getGraphics();
        int size = 300 / N;              // pixel size of one square
        for (int i = 0; i < N; i++){
            for (int j = 0; j < N; j++){
                int x = i * size;
                int y = j * size;
                square(g, x, y, size);
                g.setColor(Color.RED);
                circle(g, x, y, size);
                g.setColor(Color.BLUE);
                circle(g, x, y, size);
            }
        }
    }
    // The helpers receive the Graphics context and coordinates instead of
    // recreating the panel (and redeclaring x, y, m) on every call.
    public static void square(Graphics g, int x, int y, int size){
        g.setColor(Color.BLACK);
        g.drawRect(x, y, size, size);
    }
    public static void circle(Graphics g, int x, int y, int size){
        g.fillOval(x, y, size, size);
    }
} | unknown | 
d8318 | train | You may need setInterval.
Also replace Math.rand() with Math.random()
let colors = ["yellow", "blue", "green", "red"];
setInterval(() => {
  document.querySelectorAll(".foo").forEach(word => {
    word.style.color = colors[Math.floor(Math.random() * colors.length)];
  });
}, 500); // the delay belongs to setInterval, not to forEach
<div class='foo'>1</div>
<div class='foo'>1</div>
<div class='foo'>1</div>
<div class='foo'>1</div>
<div class='foo'>1</div>
A: You can use setInterval to control the loop segment time. Below I've set intervals to 250ms to get an idea of how frequent the updates occur:
let INTERVAL_IDS = []
document.querySelector('#start').addEventListener('click',start)
document.querySelector('#stop').addEventListener('click',stop)
function start(){
let colors = ["yellow", "blue", "green", "red"];
let textBoxes = document.querySelectorAll(".foo");
INTERVAL_IDS.push(setInterval(function(){
textBoxes.forEach(word =>
word.style.color = colors[Math.floor(Math.random() * colors.length)]
)
},250))
}
function stop(){
clearInterval(INTERVAL_IDS.pop())
}
<button id="start">start</button><button id="stop">stop</button>
<span class='foo'>H</span>
<span class='foo'>e</span>
<span class='foo'>l</span>
<span class='foo'>l</span>
<span class='foo'>o</span>
A: Yeah, a while loop with a counter sounds reasonable. All inside of a setInterval function to fire at so many seconds.
let counter = 0
while (counter < 10) {  // counter >= 10 would never enter the loop
  // your code
  counter++;
}
d8319 | train | try this
IdAccess = from x in OffAcs where
x.AccessDeccription == Combobox.SelectedText
select x.IdAccess;
or this:
IdAccess = OffAcs.First(x=>x.AccessDeccription == Combobox.SelectedText).IdAccess; | unknown | |
d8320 | train | Try following :
const string FILENAME = @"c:\temp\test.txt";
static void Main(string[] args)
{
Dictionary<int, Dictionary<string, int>> dict = new Dictionary<int, Dictionary<string, int>>();
StreamReader reader = new StreamReader(FILENAME);
string input = "";
while ((input = reader.ReadLine()) != null)
{
//string input = "11=205129022,453=8,448=CompanyID,447=D,452=63,448=userid,447=D,452=11,448=CompanyName,447=D,452=13,448=W,447=D,452=54,77=O,555=2";
List<string> strArray = input.Split(new char[] { ',' }).ToList();
//or dictionary
Dictionary<string, int> rowDict = strArray.Select(x => x.Split(new char[] { '=' })).GroupBy(x => x.Last(), y => int.Parse(y.First())).ToDictionary(x => x.Key, y => y.FirstOrDefault());
int id = rowDict["CompanyID"];
dict.Add(id, rowDict);
}
} | unknown | |
d8321 | train | Uncheck "Offline work" in Android Studio:
File -> Settings -> Gradle -> Global Gradle Settings
or in OSX:
Preferences -> Gradle -> Global Gradle Setting | unknown | |
d8322 | train | As you mentioned in the comments you are using RStudio. It is not specified why it has to be the console in R, but I assume there is a good reason to display the links within RStudio and I assume the viewer pane on the right next to the console also works for you.
If that is the case you could do the following:
library(DT) # for datatable() function
library(shiny) # for tags$a() function
data <- data.frame(link = toString(tags$a(href = paste0("http://google.de"), "google")))
datatable(data, escape = FALSE)
Very close to the console ;)
A: It would depend on how the output is being viewed. If this output is going to an HTML file, then all you need to do is set this as the location for a hyperlink.
<a href="https://en.wikipedia.org/wiki/Statistics">https://en.wikipedia.org/wiki/Statistics</a>
But if, for example, you are just viewing the output in notepad, then I don't believe notepad has the functionality for hyperlinks.
I don't believe the R console directly supports hyperlinks. But from this: http://rmarkdown.rstudio.com/lesson-2.html it looks like you can maybe use R Markdown to do what you want.
A: This is not currently possible in RStudio, unfortunately. However, it is an open issue which you can upvote if you think it should be prioritised. | unknown | |
d8323 | train | When all 3 combo boxes are set it will enable the checkbox. Once the value for any combo box is updated it calls a common function which checks whether all combo boxes have a value assigned and accordingly set the checkbox.
Private Sub cmbClientContact_AfterUpdate()
Call SetCheckBox
End Sub
Private Sub cmbClientName_AfterUpdate()
Call SetCheckBox
End Sub
Private Sub cmbProjectManager_AfterUpdate()
Call SetCheckBox
End Sub
Private Sub SetCheckBox()
If Nz(Me.cmbClientContact, "") <> "" And Nz(Me.cmbClientName, "") <> "" And Nz(Me.cmbProjectManager, "") <> "" Then
Me.Check25 = True
Me.Text27.Enabled = True
Else
Me.Check25 = False
End If
End Sub
Enable/disable the textbox based on the value of the checkbox
Private Sub Check25_AfterUpdate()
If Nz(Me.Check25, False) Then
Me.Text27.Enabled = True
Else
Me.Text27.Enabled = False
End If
End Sub
A: I would recommend using the AfterUpdate event of all 3 combo boxes. Since the code is going to be the same (you're checking if all 3 combo boxes have a value), you can create one function to handle the check, and set that function to the AfterUpdate event of all 3 combo boxes when the form loads.
The function to update the controls (both the text box and check box) would be something like this:
Private Function UpdateControls()
Me.Text1.Enabled = Not (IsNull(Me.Combo1) Or IsNull(Me.Combo2) Or IsNull(Me.Combo3))
Me.Check1.Value = Not (IsNull(Me.Combo1) Or IsNull(Me.Combo2) Or IsNull(Me.Combo3))
End Function
You can call this function when the form initially loads, so the checkbox will be unchecked and the textbox will be disabled:
Private Sub Form_Load()
' update controls initially when the form loads
UpdateControls
End Sub
To make sure the same update happens whenever one of the combo box's values are updated, you can set each combobox's AfterUpdate event to the same function, like this:
Private Sub Form_Load()
' set each combo box's AfterUpdate event to run the check
Me.Combo1.AfterUpdate = "=UpdateControls()"
Me.Combo2.AfterUpdate = "=UpdateControls()"
Me.Combo3.AfterUpdate = "=UpdateControls()"
End Sub
So your final code might be something like this:
Private Sub Form_Load()
' set each combo box's AfterUpdate event to run the check
Me.Combo1.AfterUpdate = "=UpdateControls()"
Me.Combo2.AfterUpdate = "=UpdateControls()"
Me.Combo3.AfterUpdate = "=UpdateControls()"
' update controls initially when the form loads
UpdateControls
End Sub
Private Function UpdateControls()
Me.Text1.Enabled = Not (IsNull(Me.Combo1) Or IsNull(Me.Combo2) Or IsNull(Me.Combo3))
Me.Check1.Value = Not (IsNull(Me.Combo1) Or IsNull(Me.Combo2) Or IsNull(Me.Combo3))
End Function
A: Without knowing more specifics about the name schemas of your objects, this is my semi-vague answer:
One option (of many) is to use an On Click event procedure with the following:
If Not IsNull(Me.Combo1) _
And Not IsNull(Me.Combo2) _
And Not IsNull(Me.Combo3) Then
Me.Check1 = True
Me.Text1.Enabled = True
Else
Me.Check1 = False
Me.Text1.Enabled = False
End If
This assumes that the checkbox is named Check1 and the textbox is named Text1 and the comboboxes are Combo1, Combo2, and Combo3
It is a little confusing whether you meant Enabled or Visible, but if you meant Visible, just change the lines that say .Enabled to .Visible | unknown | |
d8324 | train | var html = "<table border=0 align=center id=mytable5>";
html = Regex.Replace(html, @"=\s*(\S+?)([ >])", "=\"${1}\"${2}", RegexOptions.IgnoreCase);
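The same substitution works in any PCRE-style engine; for example, a quick check of the pattern in Python:

```python
import re

html = '<table border=0 align=center id=mytable5>'
fixed = re.sub(r'=\s*(\S+?)([ >])', r'="\1"\2', html)
print(fixed)  # <table border="0" align="center" id="mytable5">
```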
A: I got it
String pattern = @"([a-z]+)=([a-z0-9_-]+)([ >])";
String replacePattern = "${1}=\"${2}\"${3}";
html = Regex.Replace(html, pattern, replacePattern, RegexOptions.IgnoreCase);
will get
<table border=0 align="center" id="mytable5">
corrected to this:
<table border="0" align="center" id="mytable5">
thanks to King King who showed me the path | unknown | 
d8325 | train | is it like JS? if yes :
var userObj= JSON.parse(user);
userObj.skills["HTML/CSS"] = 8.0;  // bracket access, since the key contains '/'
user = JSON.stringify(userObj);
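For comparison, the same parse, mutate, serialize round-trip in Python's json module (the document shape is hypothetical):

```python
import json

user = '{"user_name": "chicken_01", "skills": {"HTML/CSS": 7.5}}'
obj = json.loads(user)
obj["skills"]["HTML/CSS"] = 8.0   # bracket access, since the key contains '/'
user = json.dumps(obj)
print(user)
```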
A: db.users.update(
{'user_name' : 'chicken_01'},
{'$set' :
{
"skills.HTML/CSS":8.0
}
})
Except with the name of your collection and not db.users. | unknown | |
d8326 | train | Keep reading. Kent Beck is a very smart guy. He either has a very good reason why he created the example that way, and it will be clear later on, or it's a poor solution.
"Reduce" is a very good name if map-reduce is the ultimate goal. | unknown | |
d8327 | train | The text function will set text, not HTML.
You need to replace the newlines in the generated HTML:
$("#some-div").text($("#some-textarea").val())
.html(function(index, old) { return old.replace(/\n/g, '<br />') });
Note that you cannot set the HTML directly from the textarea, because that won't escape HTML tags.
Also, unlike PHP, Javascript uses regex literals, so you cannot put a regex in a string.
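The escape-first, replace-newlines-second order is the key point in any language; e.g. a Python sketch of the same idea:

```python
import html

text = "1 < 2\nsecond line"
safe = html.escape(text).replace("\n", "<br />")  # escape tags, then add breaks
print(safe)  # 1 &lt; 2<br />second line
```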
A: Rather than treating HTML as a string, I would suggest creating DOM nodes. Normalize the textarea value (IE uses \r\n rather than \n), split the textarea value on line breaks and create text nodes separated by <br> elements:
var div = $("#some-div")[0];
var lines = $("#some-textarea").val().replace(/\r\n/g, "\n").split("\n");
for (var i = 0, len = lines.length; i < len; ++i) {
if (i) div.appendChild(document.createElement("br"));
div.appendChild(document.createTextNode(lines[i]));
} | unknown | |
d8328 | train | I think what you would like is:
=GETPIVOTDATA("Qty",HighPiv,"Item",A55,"Week",H50)
I find the easiest way to write such a formula is to start by ensuring that Pivot Table Tools > Options > PivotTable – Options, Generate GetPivotData is checked then in the desired cell enter = and select the required entry from the PT (here63). That would show (for example) “SPN010977-204” and 11333 or ”11333” but these can be changed to A55 and H50. | unknown | |
d8329 | train | I got it working after expanding the default_app.asar distributed with the Electron build. The instructions on the page linked above neglected to mention that the package.json should contain something like:
{
"name": "electron",
"productName": "Electron",
"main": "main.js"
}
The only file that needs to be in resources/app is the package.json file. You can set main to the location of your app's entry point script and you can put any other files wherever you want. | unknown | 
d8330 | train | When the item group is getting focus, it adds the active class. So you can do something like this:
.item.active .item__third:nth-child(1), .item.active .item__third:nth-child(3) {
width:10%;
}
.item.active .item__third:nth-child(2) {
width:80%;
}
Or generally on .item .item__third class:
.item .item__third:nth-child(1), .item .item__third:nth-child(3) {
width:10%;
}
.item .item__third:nth-child(2) {
width:80%;
} | unknown | |
d8331 | train | It can be done the way you propose, but the idea for callbacks sent as an argument to another function is to make them non-static callable objects (fitted for any purpose) instead of one implementation per use-case. Also, you don't always have access to invoke the "callback" function (called the way you intend to) due to scope restrictions. | unknown | |
d8332 | train | Yes, the interpreter can always release the GIL; it will give it to some other thread after it has interpreted enough instructions, or automatically if it does some I/O. Note that since recent Python 3.x, the criteria is no longer based on the number of executed instructions, but on whether enough time has elapsed.
To get a different effect, you'd need a way to acquire the GIL in "atomic" mode, by asking the GIL not to be released until you release it explicitly. This is impossible so far (but see https://bitbucket.org/arigo/cpython-withatomic for an experimental version).
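The time-based criterion is visible from Python itself; CPython 3.2+ exposes the GIL switch interval (default 5 ms):

```python
import sys

print(sys.getswitchinterval())   # 0.005 by default: a thread may lose the GIL every ~5 ms
sys.setswitchinterval(0.001)     # request more frequent hand-offs
print(sys.getswitchinterval())   # 0.001
```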
A: As Armin said, the GIL can be released inside PyEval_EvalCode. When it returns, it is of course acquired again.
The best way is just to make sure that your code can handle that. For example, incref any objects you hold C pointers to before the GIL might get released. Also, be careful if there might be cases where the Python code calls the very same function again. If you have another mutex there, you can easily end up in a deadlock. Use recursion-safe mutexes, and while waiting on them you should release the GIL so that the original thread can release such mutexes. | unknown | 
d8333 | train | I believe the license you purchased to use Jira gives you access to the api without further cost.
First steps?
The second link you gave in your post relating to the api (docs.atlassian.com/jira/REST/cloud/) gives you everything you need to know if you understand its content.
Googling nodejs jira api gave a number of package results that would make interacting with the api very easy. At the time node-jira was top of the list and looked like it suited your needs. There are other packages too so worth looking around.
General pointers:
*
*Start on a list of nodejs packages you will need to build your app from what you know and package searches. Initialize your node project and start adding those packages to package.json.
*Identify the Jira authentication method you are going to use.
*
*The api supports basic over https or oauth and cookie once authenticated.
*Find examples of how the package you are using handles authentication. It should be easy in the package readme or with google.
*Identify the API calls that will give you the data you need.
*
*The options are easy to find in node-jira readme if using it or use the api docs.
*The jira api documentation will give you the expected json response schema that you will need to access the json you get back.
*An example would be the Projects api definition. It gives you an example response and the full response schema.
*The api options are described as 'expandable' which means you only get what you ask for, if you want more you have to ask for it. (see expand option for each api call)
*Consider what you need to process the data you get back and display it in whatever format you require.
*
*Again more package options, json processing, templating.
*If it is a web page you might need something like express.
*Use that information to start coding (not in any specific order).
*
*Code for getting requests (say a web page).
*Code for authentication and api calls.
*Code for templating each data view of api response data.
*Code the overall app structure.
*Give yourself some debug messages that can be turned on and off so you can see process sequence which can help a lot in troubleshooting.
*Write test scripts! Change code.... run the test/s, got a new feature... write a test then code to the test. Retest before release.
There are lots of package options, information, and examples. Use Google lots, search npmjs.com for packages, use the api docs. | unknown | |
d8334 | train | Drive-Uploady is mainly a sender for Uploady. Therefore, you are able to use most of Uploady's functionality. In this case, the upload context method -
import React from "react";
import DriveUploady, { useUploady } from "drive-uploady";
const MyButton = () => {
const { upload } = useUploady();
const onUpload = async () => {
const myBlob = await getBlobFromSomewhere();
upload(myBlob);
};
return <button onClick={onUpload}>Upload</button>
};
const App = () => {
return <DriveUploady
...
>
<MyButton />
</DriveUploady>
};
This will initiate the upload process once the button is clicked. Where you get the blob is your responsibility of course. | unknown | |
d8335 | train | The documentation says
If IgnoreCase is TRUE, Expression must be uppercase.
Note that, per your comments, you misunderstood the case-sensitivity parameter. It is IgnoreCase not CaseSensitive.
As for the results:
*
*Lower-case expression with IgnoreCase set to TRUE - won't work
*Lower-case expression, IgnoreCase set to FALSE, upper case pattern - won't match
*Lower-case expression with IgnoreCase set to TRUE - won't work
*Lower-case expression with IgnoreCase set to TRUE - won't work
Just really bad luck that not a single one worked :) | unknown | |
d8336 | train | Please troubleshoot your issue along the following aspects:
1. Use one git repository
*
*If you mean 3 gits are 3 git repositories, you should keep only one git repo to manage your project. Only keep the repo whose directory contains all the files you want to manage.
As in the example below, if your project (the files you want to manage in the git repo) is under Dir2 (not containing its parent Dir1), then you should remove the other two repos for Dir1 and Dir3 by removing the folders Dir1/.git and Dir1/Dir2/Dir3/.git.
Dir1
|___.git
|___ …
|___Dir2
|___.git
|___ …
|___Dir3
|___.git
|___ …
*If you mean 3 gits are 3 git branches for the same git repo, before switching branches, you should make sure the work directory is clean (save and commit changes before switching branches).
2. Make sure user.name and user.email have already been set
You can use the command git config -l to check if the output contains:
user.name=<username>
user.email=<email address>
If the output does not contain user.name and user.email, you should set as below:
git config --global user.name yourname
git config --global user.email [email protected] | unknown | |
d8337 | train | This is not going to work. The command handler for the Publish action (org.eclipse.wst.server.ui.internal.view.servers.ServerActionHandler) expects the current selection to be a server and doesn't do anything if it is not. So you have to be in the server view for it to work. | unknown | |
d8338 | train | I think you can initialize the Email property in your User model :
public string Email { get; set; } = "unchanged";
you can do it also in the default constructor . | unknown | |
d8339 | train | You can pivot with conditional aggregation:
select
year(d_date) yr,
sum(case when month(d_date) = 1 then amount end) Jan,
sum(case when month(d_date) = 2 then amount end) Feb,
sum(case when month(d_date) = 3 then amount end) Mar,
...
sum(case when month(d_date) = 12 then amount end) Dec,
sum(amount) total
from mytable
group by year(d_date)
order by yr | unknown | |
d8340 | train | The link to the airfoil database contains the coordinates of a NACA0012 in the Lednicer format, while the code in AeroPython Lesson was written for an airfoil in Selig's format.
(The Notebook computes the flow around an airfoil using a source-panel method.)
Selig's format starts from the trailing edge of the airfoil, goes over the upper surface, then over the lower surface, to go back to the trailing edge.
Lednicer's format lists points on the upper surface (from leading edge to trailing edge), then points on the lower surface (from leading edge to trailing edge).
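A toy example of the two orderings (schematic x-coordinates only, not real airfoil data):

```python
upper = [0.0, 0.5, 1.0]          # leading edge -> trailing edge
lower = [0.0, 0.5, 1.0]

lednicer = upper + lower         # upper LE->TE, then lower LE->TE
selig = upper[::-1] + lower[1:]  # TE->LE over the upper surface, then LE->TE over the lower

print(lednicer)  # [0.0, 0.5, 1.0, 0.0, 0.5, 1.0]
print(selig)     # [1.0, 0.5, 0.0, 0.5, 1.0]
```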
You can load the Selig format (skipping the header "NACA 0012 AIRFOILS" with skiprows=1 in numpy.loadtxt) as follows:
import urllib.request

import numpy
# Retrieve and save geometry to file.
selig_url = 'http://airfoiltools.com/airfoil/seligdatfile?airfoil=n0012-il'
selig_path = 'naca0012-selig.dat'
urllib.request.urlretrieve(selig_url, selig_path)
# Load coordinates from file.
with open(selig_path, 'r') as infile:
x1, y1 = numpy.loadtxt(infile, unpack=True, skiprows=1)
The NACA0012 airfoil here contains 131 points and you will see that the trailing edge has a finite thickness:
print('Number of points:', x1.size) # -> 131
print(f'First point: ({x1[0]}, {y1[0]})') # -> (1.0, 0.00126)
print(f'Last point: ({x1[-1]}, {y1[-1]})') # -> (1.0, -0.00126)
If you do the same with the Lednicer's format (with skiprows=2 for the header), you will load 132 points (leading-edge point is duplicated) with points on the upper surface being flipped (from leading edge to trailing edge).
(That's why you observe this line in the middle with pyplot.plot; the line connects the trailing edge from the upper surface to the leading edge from the lower surface.)
One way to re-orient the points to follow Selig's format is to skip the leading edge on the upper surface (i.e., skip the duplicated point) and flip the points on the upper surface.
Here is a possible solution:
import numpy
# Retrieve and save geometry to file.
lednicer_url = 'http://airfoiltools.com/airfoil/lednicerdatfile?airfoil=n0012-il'
lednicer_path = 'naca0012-lednicer.dat'
urllib.request.urlretrieve(lednicer_url, lednicer_path)
# Load coordinates from file (re-orienting points in Selig format).
with open(lednicer_path, 'r') as infile:
# Second line of the file contains the number of points on each surface.
_, info = (next(infile) for _ in range(2))
# Get number of points on upper surface (without the leading edge).
num = int(info.split('.')[0]) - 1
# Load coordinates, skipping the first point (leading edge on upper surface).
x2, y2 = numpy.loadtxt(infile, unpack=True, skiprows=2)
# Flip points on the upper surface.
x2[:num], y2[:num] = numpy.flip(x2[:num]), numpy.flip(y2[:num])
You will end up with 131 points, oriented the same way as the Selig's format.
print('Number of points:', x2.size) # -> 131
print(f'First point: ({x2[0]}, {y2[0]})') # -> (1.0, 0.00126)
print(f'Last point: ({x2[-1]}, {y2[-1]})') # -> (1.0, -0.00126)
Finally, we can also check that the coordinates are the same with numpy.allclose:
assert numpy.allclose(x1, x2, rtol=0.0, atol=1e-6) # -> True
assert numpy.allclose(y1, y2, rtol=0.0, atol=1e-6) # -> True | unknown | |
d8341 | train | You should put all your formcontrols in a formGroup
myFormGroup: FormGroup = this.fb.group({
name: new FormControl('name'),
description: new FormControl('description'),
price: new FormControl('price'),
inventory: new FormControl('inventory'),
category: new FormControl('category'),
image_url: new FormControl('image_url'),
});
In order to do so, you need to be able to make a formGroup, thus have the FormBuilder dependency injected.
constructor(..., private fb: FormBuilder) {}
The above grouping of form controls can be written more concisely like this:
myFormGroup: FormGroup = this.fb.group({
name: [''],
description: [''],
price: [''],
    inventory: [''],
category: [''],
image_url: [''],
});
You will need to add the formGroup to your HTML template as well
<form [formGroup]="myFormGroup" (ngSubmit)="updateItem()" autocomplete="off" novalidate>
...
</form>
Lastly, you can print the values in your submit function like this
updateItem() {
if (this.myFormGroup.valid) //Not necessary since you don't use validators
console.log(this.myFormGroup.value)
}
All this information and more can be found on the Reactive Forms Documentation of Angular | unknown | |
d8342 | train | It is not ideal, but you can downgrade a bit the version of ApexCharts.
This bug appeared with v3.36.1, so it was not in v3.36.0.
let options = {
series: [{
name: 'Series',
data: [10, 20, 15]
}],
chart: {
type: 'bar',
height: 350
},
dataLabels: {
enabled: false
},
xaxis: {
categories: ['Category 1', 'Category 2', 'Category 3'],
title: {
text: 'Axis title'
}
}
};
let chart = new ApexCharts(document.querySelector('#chart'), options);
chart.render();
<script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
<div id="chart"></div> | unknown | |
d8343 | train | Solved it today..
Just add property "homepage" : "./" to package.json,
check this issue comment on create-react-app | unknown | |
d8344 | train | Instead of:
Element.extend(elt);
Try:
elt = Element.extend(elt);
or
elt = $(elt);
As for how to do the traversing before you've inserted the node, here are some random examples that illustrate a few features of Prototype:
var elt = new Element('div', {
className: 'someClass'
});
elt.insert(new Element('ul'));
var listitems = ['one', 'two', 'three'];
listitems.each(function(item){
var elm = new Element('li');
elm.innerHTML = item;
elt.down('ul').insert(elm);
});
elt.getElementsBySelector('li'); //=> returns all LIs
elt.down('li'); //=> returns first li
elt.down('ul').down('li'); //=> also returns first li
elt.down('ul').down('li', 2); //=> should return the third if I'm not mistaken
// all before inserting it into the document!
Check the brand new API documentation.
Answering Ollie: my code above can be tested here, as you can see it works under IE 6.
A: I don't think it is possible to select nodes that are not in the document, because the selector depends on the document node.
And you should build new elements this way :
var elt = new Element("div"); | unknown | |
d8345 | train | It is because, refModels.once('value').then is async meaning that JS starts its execution and continues to next line which is console.log and by the time console.log is executed $scope.theModel hasn't been populated with data yet.
I suggest you read this
Asynchronous vs synchronous execution, what does it really mean?
You can still access your $scope.theModel in other functions, but you have to make sure it is loaded first.
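The ordering pitfall is the same in any asynchronous runtime; a minimal illustration in Python's asyncio:

```python
import asyncio

async def main():
    order = []

    async def fetch():            # stands in for the pending Firebase call
        await asyncio.sleep(0)    # yields control, like waiting on the network
        order.append("callback")  # like the code inside .then(...)

    task = asyncio.ensure_future(fetch())
    order.append("after-call")    # runs first, like the console.log outside .then
    await task
    print(order)                  # ['after-call', 'callback']

asyncio.run(main())
```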
Edit
refModels.once('value').then(function (snapshot) {
snapshot.forEach(function (snapshot) {
var usersValue = snapshot.val();
console.log("users values", usersValue.carModel);
$scope.theModel = usersValue.carModel;
console.log("The model" + $scope.theModel); //output "e90"
});
$scope.processData();
});
$scope.processData = function() {
// here $scope.theModel exists
// do some code execution here with $scope.theModel
// call your other firebase.database.ref("models") here
};
A: Ok so finally got it working, with Bunyamin Coskuner solution, the only thing I had to fix is to return $scope.processData;
var UserID = firebase.auth().currentUser.uid;
var refModels = firebase.database().ref("/users/" + UserID);
refModels.once('value').then(function(snapshot) {
console.log(snapshot)
snapshot.forEach(function(childSnapshot) {
console.log(childSnapshot)
var usersValue = childSnapshot.val();
// var theModel = usersValue.carModel;
console.log("The model", usersValue.carModel); //output "e90"
$scope.theModel = usersValue.carModel;
    return $scope.processData; // we have to return it here
});
$scope.processData();
});
$scope.processData = function (){
console.log("Process Model" + $scope.theModel); // now returns the value e90
$scope.carDetails = [];
firebase.database().ref("models").once('value').then(function(snapshot) {
snapshot.forEach(function(userSnapshot) {
var models = userSnapshot.val();
console.log(models);
if (models.brand == $scope.theModel){
$scope.carDetails.push(models);
}
});
});
}
This async behaviour seems to be a very common source of mistakes for new JavaScript developers, so it's good to read about it.
In this solution we called a function from inside the then callback. That way, we know for sure the data is there and can be used in the other function.
A: Your $scope.theModel gets set inside the "then" block as expected, but being async, this happens after the second console.log (outside the function) is invoked.
If you use angular components, you can init theModel inside $onInit method:
angular.module('app', []).component('myComponent', {
bindings: {
title: '@'
},
  controller: function($scope) {   // inject $scope, which the code below uses
    this.$onInit = function() {
      // Init the variable here
      $scope.theModel = null;
    };
    //Following code should not be in the controller, but in a service (here just as sample)
    var refModels = firebase.database().ref("/users/" + $scope.UserID);
    refModels.once('value').then(function (snapshot) {
      snapshot.forEach(function (snapshot) {
        var usersValue = snapshot.val();
        console.log("users values", usersValue.carModel);
        // From this moment your variable is set and available outside
        $scope.theModel = usersValue.carModel;
      });
    });
  },
templateUrl: 'template.html'
});
A: You can watch the value changing. Async behaviour is explained in another answer.
$scope.$watch('theModel', function(newVal){
console.log(newVal);
});
Also I would use $evalAsync() on $scope.theModel = usersValue.carModel;
$scope.$evalAsync(function(){
$scope.theModel = usersValue.carModel;
}); | unknown | |
d8346 | train | It seems like you are not passing the query parameters properly in your HTTP request; with Angular's HttpClient an HttpParams object goes in the params option, try the following code,
register(user : any, key : string) : Promise<any>{
        let parametros = new HttpParams().set("command", "register");
        user.verif = key;
        return this.http.post(this.url, user, { params: parametros }).toPromise();
    }
Also, note that custom headers with the HttpClient package are set via an HttpHeaders object under the lowercase headers option.
You can prevent some issues (like CORS Access-Control errors) by using axios or a similar package | unknown | 
d8347 | train | The issue was that I needed CascadeType.MERGE on the Aircraft entity:
@ManyToOne(cascade={CascadeType.PERSIST, CascadeType.MERGE}, fetch = FetchType.EAGER)
private Client owner;
@ManyToOne(cascade={CascadeType.PERSIST, CascadeType.MERGE}, fetch = FetchType.EAGER)
private Client operator;
Essentially, when JSON is input to a create operation, it appears to be indistinguishable from a MERGE operation (where a new entity is created and put under management, unlike PERSIST) and so a MERGE operation is passed into Client, not PERSIST (as I would have expected). This is why Person and Contact were not being persisted. I still don't understand why JSON input is being treated like a merge though - it doesn't make sense. | unknown | |
d8348 | train | As far as cropping images is concerned, you can use the WriteableBitmapEx library on CodePlex.
Now you just need to draw a rectangle on a canvas containing the image to describe the crop region. | unknown | |
d8349 | train | Simply use the contour function with a 2nd argument of desired values (it is a vector of 2 elements instead of a scalar, to distinguish the function call from another mode):
some_value = .5;
[x y] = meshgrid(linspace(0,4*pi,30),linspace(0,4*pi,30));
z = cos(x)+cos(y);
contour(x, y, z, [some_value, some_value])
A: This worked for me:
contourf(aX, aY, NM(:, :, k+1), 'ShowText','on', 'LevelStep', 0.4); | unknown | |
d8350 | train | To resolve the issue with COALESCE/IFNULL still returning NULL for the WITH ROLLUP placeholders, you need to GROUP BY the table column names, rather than the aliased column expressions.
The issue is caused by the GROUP BY clause being specified on the aliased column expressions, because the aliases are assigned after the column expression is processed.
Resulting in the WITH ROLLUP NULL placeholders not being in the recordset to be evaluated by COALESCE.
Meaning the aliases DEALER, SERVICE_ADVISOR in the GROUP BY do not exist until after IFNULL/COALESCE have already been executed.
See MySQL Handling of GROUP BY for more details.
Example DB-Fiddle
CREATE TABLE foo (
`amount` INTEGER,
`created` INTEGER
);
INSERT INTO foo
(`amount`, `created`)
VALUES
('1', '2019'),
('2', '2019');
Query #1 (Reproduce Issue)
SELECT
SUM(amount) AS amounts,
COALESCE(created, 'Total') AS created_coalesce
FROM foo
GROUP BY created_coalesce WITH ROLLUP;
| amounts | created_coalesce |
| ------- | ---------------- |
| 3 | 2019 |
| 3 | |
Query #2 (Corrected)
SELECT
SUM(amount) AS amounts,
COALESCE(created, 'Total') AS created_coalesce
FROM foo
GROUP BY foo.created WITH ROLLUP;
| amounts | created_coalesce |
| ------- | ---------------- |
| 3 | 2019 |
| 3 | Total |
Use-Case Specific
Example DB-Fiddle
SELECT
COALESCE(usergroups.name, 'GROUP') AS DEALER,
COALESCE(users.name, 'TOTAL') AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED,
/* ... */
GROUP BY usergroups.name, users.name WITH ROLLUP;
Query #1 (Original)
SELECT
COALESCE(usergroups.name, 'GROUP') AS DEALER,
COALESCE(users.name, 'TOTAL') AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED
/* ... */
GROUP BY DEALER, SERVICE_ADVISOR WITH ROLLUP;
| DEALER | SERVICE_ADVISOR | COMPLETED |
| ------ | --------------- | --------- |
| Foo | Jane Doe | 1 |
| Foo | John Doe | 1 |
| Foo | | 2 |
| | | 2 |
Query #2 (Corrected)
SELECT
COALESCE(usergroups.name, 'GROUP') AS DEALER,
COALESCE(users.name, 'TOTAL') AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED
/* ... */
GROUP BY usergroups.name, users.name WITH ROLLUP;
| DEALER | SERVICE_ADVISOR | COMPLETED |
| ------ | --------------- | --------- |
| Foo | Jane Doe | 1 |
| Foo | John Doe | 1 |
| Foo | TOTAL | 2 |
| GROUP | TOTAL | 2 |
Considerations
* With MySQL 5.7+ and ONLY_FULL_GROUP_BY enabled, selected non-aggregate columns that are not specified in the GROUP BY clause will fail. Meaning the following query will not work as expected: DB-Fiddle
SELECT COALESCE(YEAR(foo), 'foo') /* ... */ GROUP BY YEAR(foo) WITH ROLLUP
-> ER_WRONG_FIELD_WITH_GROUP: Expression #2 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'test.foo_bar.foo' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
* COALESCE, IFNULL, IF(... IS NULL) and CASE WHEN ... IS NULL will all function similarly. IFNULL is proprietary to MySQL and is a less functional replacement for COALESCE, as COALESCE can accept more than two parameters to check against NULL, returning the first non-NULL value.
mysql> SELECT COALESCE(NULL, NULL, 1, NULL);
-> 1
mysql> SELECT IFNULL(NULL, 1);
-> 1
mysql> SELECT IF(NULL IS NULL, 1, '');
-> 1
mysql> SELECT CASE WHEN NULL IS NULL THEN 1 END;
-> 1
* Nullable columns in the GROUP BY, as either aliases or column names, will result in the NULL values being displayed as the WITH ROLLUP placeholder. This applies to using WITH ROLLUP in general. For example, if users.name can return NULL: DB-Fiddle
| DEALER | SERVICE_ADVISOR | COMPLETED |
| ------ | --------------- | --------- |
| Foo | TOTAL | 1 |
| Foo | Jane Doe | 1 |
| Foo | John Doe | 1 |
| Foo | TOTAL | 3 |
| GROUP | TOTAL | 3 |
Prevent NULL column values from being displayed
To ensure that nullable columns are not accidentally included, you would need to specify in the criteria to exclude them.
Example DB-Fiddle
SELECT
COALESCE(usergroups.name, 'GROUP') AS DEALER,
COALESCE(users.name, 'TOTAL') AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED
FROM vcrs
LEFT JOIN users
ON users.id = vcrs.uid
LEFT JOIN usergroups
ON usergroups.id = users.group_id
WHERE vcrs.vcrSubStatus = 4
AND users.name IS NOT NULL
GROUP BY usergroups.name, users.name
WITH ROLLUP;
Result
| DEALER | SERVICE_ADVISOR | COMPLETED |
| ------ | --------------- | --------- |
| Foo | Jane Doe | 1 |
| Foo | John Doe | 1 |
| Foo | TOTAL | 2 |
| GROUP | TOTAL | 2 |
Since LEFT JOIN is used on the vcrs table, IS NOT NULL must be
applied to the WHERE clause, instead of the ON clause. As LEFT JOIN returns NULL for non-matching criteria. To circumvent the
issue, use an INNER JOIN to limit the resultset to only those with
matching ON criteria.
/* ... */
INNER JOIN users
ON users.id = vcrs.uid
AND users.name IS NOT NULL
/* ... */
WHERE vcrs.vcrSubStatus = 4
GROUP BY usergroups.name, users.name
WITH ROLLUP;
Include NULL column values
To explicitly include nullable column values, without duplicating the WITH ROLLUP placeholder name, you would need to utilize a derived table subquery to substitute the NULL value as a textual value.
Example DB-Fiddle
SELECT
COALESCE(v.usergroup_name, 'GROUP') AS DEALER,
COALESCE(v.user_name, 'TOTAL') AS SERVICE_ADVISOR,
COUNT(DISTINCT v.uid) AS COMPLETED
FROM (
SELECT
usergroups.name AS usergroup_name,
COALESCE(users.name, 'NULL') AS user_name,
vcrs.uid
FROM vcrs
LEFT JOIN users
ON users.id = vcrs.uid
LEFT JOIN usergroups
ON usergroups.id = users.group_id
WHERE vcrs.vcrSubStatus = 4
) AS v
GROUP BY v.usergroup_name, v.user_name
WITH ROLLUP;
Result
| DEALER | SERVICE_ADVISOR | COMPLETED |
| ------ | --------------- | --------- |
| Foo | Jane Doe | 1 |
| Foo | John Doe | 1 |
| Foo | NULL | 1 |
| Foo | TOTAL | 3 |
| GROUP | TOTAL | 3 |
You can also optionally replace the 'NULL' textual placeholder as desired and even display it as NULL.
SELECT
COALESCE(v.usergroup_name, 'GROUP') AS DEALER,
CASE v.user_name WHEN 'NULL' THEN NULL ELSE COALESCE(v.user_name, 'TOTAL') END AS SERVICE_ADVISOR,
COUNT(DISTINCT v.uid) AS COMPLETED
FROM (
/* ... */
) AS v
GROUP BY v.usergroup_name, v.user_name
WITH ROLLUP;
A: I'm only 2 years too late, but since I came across the same issue as @the_gimlet I thought I'd post the answer.
I don't know if this is a MySQL versioning thing or something else, but using MySQL 5.6 I get the same problem: IFNULL will not replace the ROLLUP 'nulls'.
Simply get around this by making your rollup a subquery, and doing the ifnulls in the main select... annoying to repeat the select, but it works!
e.g. for the example above:
SELECT
IFNULL(`DEALER`, 'GROUP') AS DEALER,
IFNULL(`SERVICE_ADVISOR`, 'TOTAL') AS SERVICE_ADVISOR,
`COMPLETED`,
/* .......... */
FROM (SELECT
usergroups.name AS DEALER,
users.name AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED,
/* .......... */
AND vcrs.vcrSubStatus = 4
GROUP BY DEALER, SERVICE_ADVISOR WITH ROLLUP) AS sub;
A: Do you want something like this?
SELECT COALESCE(usergroups.name, 'GROUP') AS DEALER,
COALESCE(users.name, IF(usergroups.name IS NULL, 'TOTAL', 'SUBTOTAL')) AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED,
..........
..........
AND vcrs.vcrSubStatus = 4
GROUP BY DEALER, SERVICE_ADVISOR with ROLLUP;
Test:
mysql;root@localhost(playground)> select * from t;
+------+----------+-------+--------+
| id | car | state | tstamp |
+------+----------+-------+--------+
| 1 | toyota | new | 1900 |
| 2 | toyota | old | 1950 |
| 3 | toyota | scrap | 1980 |
| 4 | mercedes | new | 1990 |
| 5 | mercedes | old | 2010 |
| 6 | tesla | new | 2013 |
+------+----------+-------+--------+
6 rows in set (0.04 sec)
mysql;root@localhost(playground)> select car, sum(tstamp) from t group by car with rollup;
+----------+-------------+
| car | sum(tstamp) |
+----------+-------------+
| mercedes | 4000 |
| tesla | 2013 |
| toyota | 5830 |
| NULL | 11843 |
+----------+-------------+
4 rows in set (0.03 sec)
mysql;root@localhost(playground)> select coalesce(car, 'huhu'), sum(tstamp) from t group by car with rollup;
+-----------------------+-------------+
| coalesce(car, 'huhu') | sum(tstamp) |
+-----------------------+-------------+
| mercedes | 4000 |
| tesla | 2013 |
| toyota | 5830 |
| huhu | 11843 |
+-----------------------+-------------+
4 rows in set (0.00 sec)
A: What you're looking for is a case statement.
What you're saying is: under a certain condition, replace the found value with the one specified. You can use multiple when/then statements depending on how customised you'd like the replacements to be.
select IFNULL(usergroups.name, 'GROUP') AS DEALER,
case when(users.name is null) then 'TOTAL' else users.name end AS SERVICE_ADVISOR,
COUNT(DISTINCT vcrs.uid) AS COMPLETED,
..........
..........
and vcrs.vcrSubStatus = 4
group by DEALER, SERVICE_ADVISOR with ROLLUP; | unknown | |
d8351 | train | Configure the servlet like this. There was some syntax error, I think; it's working fine now:
<?xml version="1.0" encoding="UTF-8"?>
<web-app id="WebApp_ID" version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
<display-name>student-portlet</display-name>
<servlet>
<servlet-name>JSON Web Service Servlet</servlet-name>
<servlet-class>
com.liferay.portal.kernel.servlet.PortalClassLoaderServlet
</servlet-class>
<init-param>
<param-name>servlet-class</param-name>
<param-value>com.liferay.portal.servlet.JSONServlet</param-value>
</init-param>
<load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>JSON Web Service Servlet</servlet-name>
<url-pattern>/api/jsonws/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
<servlet-name>JSON Web Service Servlet</servlet-name>
<url-pattern>/api/secure/jsonws/*</url-pattern>
</servlet-mapping>
<jsp-config>
<taglib>
<taglib-uri>http://java.sun.com/portlet_2_0</taglib-uri>
<taglib-location>/WEB-INF/tld/liferay-portlet.tld</taglib-location>
</taglib>
</jsp-config>
</web-app> | unknown | |
d8352 | train | You should try this e.g. enum_classes.js:
Java.perform(function () {
    Java.enumerateLoadedClasses({
        onMatch: function (className) {
            console.log(className);
        },
        onComplete: function () {}
    });
});
And load this js with Frida on the following way:
frida -U -l enum_classes.js --no-pause -f <package-name>
Run this command from the same directory where you put enum_classes.js, or add the path before the filename (e.g. /path/where/you/store/this/frida/script/enum_classes.js).
You can get the package name:
frida-ps -U | unknown | |
d8353 | train | Code should work. Only problems that might occur is your ListBox databinding is incorrectly defined.
I don't see any .ItemsSource = or ItemsSource={Binding some_collection}
Another thing is make sure that photo.filename is returning the correct file.
Set a string debug_string = "Assets/Content/" + photo.filename + ".jpg";
Make sure everything is correct.
The last thing is to make sure the files are actually in the Assets folder inside the project and their Build Action is set to Content, like so | unknown | |
d8354 | train | I would use onchange and javascript to solve your question. Here is some documentation:
<select name="newsletter" onchange="newsletterChanged()">
then you can add javascript to hide or show the html you wish to:
function newsletterChanged() {
//do work whenever newsletter changes.
};
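For instance, a sketch of such a handler could toggle visibility based on the selected value (the element id "extra-field" and the option value "yes" are made up here; adapt them to your markup):

```javascript
// Pure decision logic, kept separate from the DOM so it is easy to test.
function shouldShowExtraField(value) {
  return value === "yes"; // hypothetical option value
}

function newsletterChanged() {
  var select = document.querySelector('select[name="newsletter"]');
  var extra = document.getElementById("extra-field"); // hypothetical element
  // Show the extra element only when the chosen option requires it.
  extra.style.display = shouldShowExtraField(select.value) ? "" : "none";
}
```

Splitting the decision from the DOM update keeps the handler trivial to unit-test.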
Hope this helps you on the way to solve your question. Happy coding :-) | unknown | |
d8355 | train | The following should work:
import itertools
result=[]
for k in range(2,len(values)+1):
temp=[tuple(x[0] for x in i) for i in list(itertools.combinations(values,k))if sum([p[1] for p in i]) >0.5]
result.append(temp)
result=sum(result, [])
print(result)
Output:
[('DNO', 'Equinor'), ('Equinor', 'Petoro'), ('Equinor', 'Total'), ('DNO', 'Equinor', 'Petoro'), ('DNO', 'Equinor', 'Total'), ('DNO', 'Petoro', 'Total'), ('Equinor', 'Petoro', 'Total'), ('DNO', 'Equinor', 'Petoro', 'Total')]
A: You can use a list comprehension like this:
Explanation:
*The first for defines the length of the combinations. Every length from 2 to the length of values is used.
*The second for creates the actual combinations
*The if uses a generator method to sum the numbers of the items
from itertools import combinations
combis = [
item
for length in range(2, len(values)+1)
for item in combinations(values, length)
if sum(i[1] for i in item) >= 0.5
] | unknown | |
d8356 | train | Your connection string is malformed.
It should probably be:
Driver={MySQL ODBC 5.2w Driver};Server=server_name;Database=database_name;User=my_user_id;Password=my_pwd
User instead of uid and Password instead of pwd.
See connectionstrings.com for the different options.
A: For some reason this error only happens when I run the application from Visual Studio and not when it is deployed in IIS.
Thank you for your help! | unknown | |
d8357 | train | Although the code posted above should work, another way to connect to a socket.io server is to call the connect() method on the client.
Socket.io Client
const io = require('socket.io-client');
const socket = io.connect('http://website.com');
socket.on('connect', () => {
console.log('Successfully connected!');
});
Socket.io Server w/ Express
const express = require('express');
const app = express();
const server = require('http').Server(app);
const io = require('socket.io')(server);
const port = process.env.PORT || 1337;
server.listen(port, () => {
console.log(`Listening on ${port}`);
});
io.on('connection', (socket) => {
// add handlers for socket events
});
Edit
Added Socket.io server code example. | unknown | |
d8358 | train | Those are just padding 0x00 characters added at the end, because the input length for that cryptographic algorithm has to be a multiple of 16 bytes (with a 128-bit block size).
Indeed, if you add at the end of your code:
var_dump(bin2hex(Cipher::decrypt($emailAddress, $iv)));
You can see that the last 6 characters are all 0's (which means there are 3 0x00 bytes at the end).
To remove them, just run:
$decrypted = rtrim($decrypted, "\0"); | unknown | |
d8359 | train | It's all a matter of tradeoffs -- in this case, you want just enough complexity to handle a reasonable number of cases.
If there are only two options, I think that if statement looks just fine. A 'case' statement (aka a switch statement) can be DRYer, and you may want to explicitly say "movie.txt", e.g.
@word = (case file_name
when "word.txt"
Word
when "movie.txt"
Movie
else
raise "Unknown file name #{file_name}"
end).new(file_name)
or extract a class_for_file_name method if you don't like putting a case statement in parens, even though it's pretty fun.
If your set of polymorphic classes gets bigger, you might get more mileage out of a lookup table:
class_for_file_name = {
"word.txt" => Word,
"movie.txt" => Movie,
"sandwich.txt" => Sandwich,
}
@word = class_for_file_name[file_name].new(file_name)
And of course, maybe you're oversimplifying and you really want to switch on the file extension, not the file name, which you could get with File.extname(file_name).
You could also get even fancier and ask each concrete class whether it handles this file name; this feels more OO -- it puts the knowledge about foos inside the Foo class -- but is arguably more complicated:
class Word
def self.good_for? file_name
file_name == "word.txt"
end
end
class Movie
def self.good_for? file_name
file_name == "movie.txt"
end
end
...
def class_for_file_name file_name
[Word, Movie].detect{|k| k.good_for? file_name}
end
BTW this problem more or less fits the design pattern called Abstract Factory; you may want to look it up and see if any of its analyses inspire you.
A: Try the FileMagic library.
gem install ruby-filemagic
Example:
require 'filemagic'
type = FileMagic.new.file file_name | unknown | |
d8360 | train | But if I receive FulfillmentResult.PurchaseReverted, then what
happened? How did the user just revert the purchase? Am I meant to
withdraw their Coins/Gems/Potatoes?
The value PurchaseReverted means the transaction is canceled on the backend and users get their money back. So you should disable the user's access to the cosumable content (withdraw the Coins/Gems/Potatoes) as necessary.
What are scenarios behind the other error messages?
NothingToFulfill: The transaction id has already been fulfilled or is otherwise complete.
PurchasePending: The purchase is not complete. At this point it is still possible for the transaction to be reversed due to provider failures and/or risk checks. It means the purchase has not yet cleared and could still be revoked.
ServerError: There was an issue receiving fulfillment status. It might be a problem on the Store side.
Succeeded: The fulfillment is complete and your Coins/Gems/Potatoes can be offered again.
Here is the documentation about FulfillmentResult Enum | unknown | |
d8361 | train | How should I assign new documents to these topics?
Once you have a trained model you can query the model for your document with:
doc_bow = model.id2word.doc2bow(doc.split()) # convert to bag of words format first
doc_topics, word_topics, phi_values = model.get_document_topics(doc_bow, per_word_topics=True)
This code is going to provide you with per-doc and per-word information about the level of belonging to a particular topic. This means the per-word calculations are done for you automatically.
How do I assign the main keywords that denote the topic?
it is difficult to understand what you mean. The keywords denoting a topic along with their weights are the actual LDA model that you got from the training using a corpus.
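If the goal is to label each new document with its dominant topic, a simple sketch is to take the highest-probability pair returned by get_document_topics and then look up that topic's keywords with model.show_topic(topic_id). The helper below only needs the (topic_id, probability) pairs, so the pairs here are made up for illustration:

```python
def dominant_topic(doc_topics):
    """Return the (topic_id, probability) pair with the highest probability."""
    return max(doc_topics, key=lambda pair: pair[1])

# Made-up pairs in the shape returned by get_document_topics:
doc_topics = [(0, 0.10), (3, 0.65), (7, 0.25)]
topic_id, prob = dominant_topic(doc_topics)
print(topic_id, prob)  # -> 3 0.65
# With a real model you would then call: model.show_topic(topic_id)
```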
I suppose you may be interested in reviewing the following notebook [*] for more information on how to query the model for specific information regarding a document (per-word topic information, etc.).
[*] from which I took the excerpt of the code above | unknown | |
d8362 | train | try
2748.ToString("X")
A: If you want exactly 3 characters and are sure the number is in range, use:
i.ToString("X3")
If you aren't sure if the number is in range, this will give you more than 3 digits. You could do something like:
(i % 0x1000).ToString("X3")
Use a lower case "x3" if you want lower-case letters.
A: Note: This assumes that you're using a custom, 12-bit representation. If you're just using an int/uint, then Muxa's solution is the best.
Every four bits corresponds to a hexadecimal digit.
Therefore, just match the first four digits to a letter, then >> 4 the input, and repeat.
A: The easy C solution may be adaptable:
char hexCharacters[17] = "0123456789ABCDEF";
void toHex(char * outputString, long input)
{
outputString[0] = hexCharacters[(input >> 8) & 0x0F];
outputString[1] = hexCharacters[(input >> 4) & 0x0F];
outputString[2] = hexCharacters[input & 0x0F];
}
You could also do it in a loop, but this is pretty straightforward, and a loop has pretty high overhead for only three conversions.
I expect C# has a library function of some sort for this sort of thing, though. You could even use sprintf in C, and I'm sure C# has an analog to this functionality.
-Adam | unknown | |
d8363 | train | We can use the column names of df to check whether each file is %in% each column inside an sapply. This will give us a square matrix which tells us whether each file contains every other file.
This way, it is straightforward to use array indexing to get the files which contain other files:
tab <- `rownames<-`(sapply(df, function(x) names(df) %in% x), names(df))
ind <- which(tab, arr.ind = TRUE)
AinB <- data.frame(item = names(df)[ind[,2]], contains = names(df)[ind[,1]])
AinB
#> item contains
#> 1 cy1.CSV cy1.CSV
#> 2 cy1.CSV cy24.CSV
#> 3 cy1.CSV cy6.CSV
#> 4 cy2.CSV cy2.CSV
#> 5 cy24.CSV cy1.CSV
#> 6 cy24.CSV cy24.CSV
#> 7 cy24.CSV cy6.CSV
#> 8 cy3.CSV cy3.CSV
#> 9 cy6.CSV cy1.CSV
#> 10 cy6.CSV cy24.CSV
#> 11 cy6.CSV cy6.CSV
#> 12 dlt.CSV dlt.CSV
#> 13 dm.CSV dm.CSV
#> 14 dov.CSV dov.CSV
#> 15 dov.CSV dov_1.CSV
#> 16 dov_1.CSV dov_1.CSV
#> 17 ds.CSV ds.CSV
#> 18 ds.CSV ds_1.CSV
#> 19 ds_1.CSV ds.CSV
#> 20 ds_1.CSV ds_1.CSV
To find instances in which A is in B but not vice versa, we do the same thing except we are looking for the indices where tab is different from its transpose:
ind2 <- which(tab & !t(tab), arr.ind = TRUE)
AinBnotBinA <- data.frame(item = names(df)[ind2[,2]],
contains = names(df)[ind2[,1]])
AinBnotBinA
#> item contains
#> 1 dov.CSV dov_1.CSV
Created on 2020-11-25 by the reprex package (v0.3.0) | unknown | |
d8364 | train | You can't print binary with printf. You could print hex which is quite easy to relate to its binary representation (with %02X e.g.).
If you insist on printing binary, you would have to write a function for it. The function would be quite simple. If you have n bits, you could loop n times, do a shift by 1 and based on the carry either print 0 or 1. A bit more efficiently, you could store the result in a buffer and then print it all at once with %s.
Depending on how much memory you have (since it's x86 it's unlikely (impossible?) that it's embedded, so you probably do have a lot of memory), you can also store a table with the '0'-'1' representations of the bytes. That table would take 256*(8+1) bytes (one for '\0' if you use %s) for 8-bit values and you would need to hard-code it (perhaps generate with another program).
So far, I told you the two extremes: calculate all vs store all. The first can be slow and the second take a lot of space (and also tedious to generate).
To get the best of both worlds, you can have a table that stores the '0'-'1' representations of nibbles (4 bits). That table would be a mere 16*(4+1) bytes. Printing a byte would then be printing first its most significant and then its least significant nibble.
In C, this would look like this:
const char *bit_rep[16] = {
[ 0] = "0000", [ 1] = "0001", [ 2] = "0010", [ 3] = "0011",
[ 4] = "0100", [ 5] = "0101", [ 6] = "0110", [ 7] = "0111",
[ 8] = "1000", [ 9] = "1001", [10] = "1010", [11] = "1011",
[12] = "1100", [13] = "1101", [14] = "1110", [15] = "1111",
};
void print_byte(uint8_t byte)
{
printf("%s%s", bit_rep[byte >> 4], bit_rep[byte & 0x0F]);
} | unknown | |
d8365 | train | You could always just first slice the array into two parts (assuming it's not bigger than two times those rows). After that, encode each part and join them together again. In case that isn't a solution, you need to increase your memory limit.
Here is an example, test it here. On @GeertvanDijk's suggestion, I made this a flexible function, in order to increase functionality!
<?php
$bigarray = array(1,2,3,4,5,6,7,8,9);
function json_encode_big($bigarray, $split_over = 6670){
    $parts = ceil(count($bigarray) / $split_over);
    $start = 0;
    $jsonencoded = [];
    for($part = 0; $part < $parts; $part++){
        // encode each slice and strip the surrounding brackets
        $slice = array_slice($bigarray, $start, $split_over);
        $jsonencoded[] = ltrim(rtrim(json_encode($slice), "]"), "[");
        $start += $split_over;
    }
    return "[".implode(",", $jsonencoded)."]";
}
print_r(json_encode_big($bigarray));
?>
I have now tested this with more than 6,670 rows. You can test it online here as well.
Now, I have to mention that I tested the normal json_encode() with a million rows, no problem. Yet, I still hope this solves your problem...
In case you run out of memory, you can set the memory_limit to a higher value. I would advise against this; instead I would retrieve the data in parts, and process those in parts. As I don't know how you retrieve this data, I can't give an example of how to regulate that. Here is how you change the memory limit in case you need to (sometimes it is the only option, and still "good").
ini_set("memory_limit","256M");
In this case, it is double the default; you can see that the default is 128M in this documentation
A: Possibly depends on your PHP version and the memory allowed for use by PHP (and possibly the actual size of all the data in the array, too).
What you could do if all else fails, is this:
Write a function that checks the size of a given array, then split off a smaller part that will encode, then keep doing this until all parts are encoded, then join them again. If this is the route you end up taking, post a comment on this answer, and perhaps I can help you out with it. (This is based on the answer by Nytrix)
EDIT, example below:
function encodeArray($array, $threshold = 6670) {
$json = array();
while (count($array) > 0) {
$partial_array = array_slice($array, 0, $threshold);
$json[] = ltrim(rtrim(json_encode($partial_array), "]"), "[");
$array = array_slice($array, $threshold);
}
$json = '[' . implode(',', $json) . ']';
return $json;
} | unknown | |
d8366 | train | To sync Azure AD users to SQL, make sure each property stored in the data source maps properly to an AD user's attribute.
This article by Adam Bertram has code and the whole process to sync Azure AD users to a SQL database.
As per the official documentation:
Azure role-based access control (Azure RBAC) applies only to the portal and is not propagated to SQL Server.
Note
Database users (with the exception of administrators) cannot be
created using the Azure portal. Azure roles are not propagated to the
database in SQL Database, the SQL Managed Instance, or Azure Synapse.
Azure roles are used for managing Azure Resources, and do not apply to
database permissions. For example, the SQL Server Contributor role
does not grant access to connect to the database in SQL Database, the
SQL Managed Instance, or Azure Synapse. The access permission must be
granted directly in the database using Transact-SQL statements. | unknown | |
d8367 | train | Not sure why you're nesting calls to URLWithString:
[NSURL URLWithString:[NSURL URLWithString:@"http://properfrattire.com/Classifi/CRN_JSON.json"]]];
Once will do:
[NSURL URLWithString:@"http://properfrattire.com/Classifi/CRN_JSON.json"];
Also, you should use dataWithContentsOfURL:options:error: so you can see any error. | unknown | |
d8368 | train | Try something along these lines:
SELECT Employee.EmployeeId,Employee.FirstName,Employee.LastName,Employee.Salary
FROM Employee
LEFT JOIN Services
ON Employee.EmployeeId = Services.EmployeeId
WHERE Services.EmployeeId IS NULL
Do not forget that MS Access has a Find Unmatched query wizard.
You might like to look at:
Fundamental Microsoft Jet SQL for Access 2000
Intermediate Microsoft Jet SQL for Access 2000
Advanced Microsoft Jet SQL for Access 2000 | unknown | |
d8369 | train | You can use time.Time:
CreatedAt time.Time `json:"created_at" bson:"created_at"`
However, I would recommend that you store Epoch Unix timestamp (the number of seconds since Jan 1st 1970) because it is universal:
CreatedAt int64 `json:"created_at" bson:"created_at"`
I have tried in the past to store time.Time in MongoDB through Golang but then I had trouble when I parsed the same information into a datetime object in Python. If you would like to be compatible across languages and technologies, storing the Epoch Unix timestamp would be a great option. | unknown | |
d8370 | train | JavaScript has an Array.find(callback) function that you can use to look up values in an array, and inside the callback you define the matching criteria.
Here the first parameter to the Array.find() function is the callback, in this case findProducts(), and additional values for it are given as another argument following the comma (in this case [p.id]).
You can pass any number of arguments while calling the function, and they'll all be accessible using an array indexing approach. For example, you can do
somefunction(someOtherFunction, a, b, c)...
...
Later you can access these values:
a will be this[0]
b will be this[1]
c will be this[2]
A: this.products.find(this.findProducts, [p.id])
findProducts(p) {
return p.id === this[0];
}
The above function is the expansion of
this.products.find(product=>product.id == p.id)
It will iterate through all array elements and return the first product matching the given p.id
A: The Mozilla documentation gives some insight into what the second argument to find is, namely thisArg:
Object to use as this when executing callback.
So in your example, the callback is findProducts and thisArg is an array containing one element (which is a slightly weird approach to be honest).
Without thisArg, this inside findProducts would not be that array.
Below is a working demo. Note that without thisArg, this in findProducts is the window object. With thisArg, it is the second array parameter:
var arr = [
{id: 1, name: 'a'},
{id: 2, name: 'b'},
{id: 3, name: 'c'},
{id: 4, name: 'd'}
];
function findProducts(p) {
console.log('in findProducts', p);
console.log(this);
return p.id === this[0];
}
var f = arr.find(p => findProducts(p));
console.log('without thisArg', f);
f = arr.find(findProducts, [1]);
console.log('with thisArg', f);
A: You can do this in your findProducts fn:
findProducts(p) {
let result = this.products.findIndex(item=>item.id === p.id);
return result // here the result will contain -1 if value is not present else it will contain some value
}
read more about find and findIndex | unknown | |
d8371 | train | This is quite a troublesome problem. I recommend videos from Brian Lagunas himself where he provides a solution and explanation. For example this one.
https://app.pluralsight.com/library/courses/prism-problems-solutions/table-of-contents
Watch it if you can; if not, I will try to explain.
The problem, I believe, is that the IRegionManager from the container is a singleton: whenever you use it, it is the same instance. So when you try to inject a region into an already injected region it will not work, and you need a separate RegionManager for nested views.
This should fix it.
Create two interfaces
public interface ICreateRegionManagerScope
{
bool CreateRegionManagerScope { get; }
}
public interface IRegionManagerAware
{
IRegionManager RegionManager { get; set; }
}
Create a RegionManagerAwareBehaviour
public class RegionManagerAwareBehaviour : RegionBehavior
{
public const string BehaviorKey = "RegionManagerAwareBehavior";
protected override void OnAttach()
{
Region.Views.CollectionChanged += Views_CollectionChanged;
}
void Views_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
{
if (e.Action == NotifyCollectionChangedAction.Add)
{
foreach (var item in e.NewItems)
{
IRegionManager regionManager = Region.RegionManager;
// If the view was created with a scoped region manager, the behavior uses that region manager instead.
if (item is FrameworkElement element)
{
if (element.GetValue(RegionManager.RegionManagerProperty) is IRegionManager scopedRegionManager)
{
regionManager = scopedRegionManager;
}
}
InvokeOnRegionManagerAwareElement(item, x => x.RegionManager = regionManager);
}
}
else if (e.Action == NotifyCollectionChangedAction.Remove)
{
foreach (var item in e.OldItems)
{
InvokeOnRegionManagerAwareElement(item, x => x.RegionManager = null);
}
}
}
private static void InvokeOnRegionManagerAwareElement(object item, Action<IRegionManagerAware> invocation)
{
if (item is IRegionManagerAware regionManagerAwareItem)
{
invocation(regionManagerAwareItem);
}
if (item is FrameworkElement frameworkElement)
{
if (frameworkElement.DataContext is IRegionManagerAware regionManagerAwareDataContext)
{
// If a view doesn't have a data context (view model) it will inherit the data context from the parent view.
// The following check is done to avoid setting the RegionManager property in the view model of the parent view by mistake.
if (frameworkElement.Parent is FrameworkElement frameworkElementParent)
{
if (frameworkElementParent.DataContext is IRegionManagerAware regionManagerAwareDataContextParent)
{
if (regionManagerAwareDataContext == regionManagerAwareDataContextParent)
{
// If all of the previous conditions are true, it means that this view doesn't have a view model
// and is using the view model of its visual parent.
return;
}
}
}
invocation(regionManagerAwareDataContext);
}
}
}
}
Create ScopedRegionNavigationContentLoader
public class ScopedRegionNavigationContentLoader : IRegionNavigationContentLoader
{
private readonly IServiceLocator serviceLocator;
/// <summary>
/// Initializes a new instance of the <see cref="RegionNavigationContentLoader"/> class with a service locator.
/// </summary>
/// <param name="serviceLocator">The service locator.</param>
public ScopedRegionNavigationContentLoader(IServiceLocator serviceLocator)
{
this.serviceLocator = serviceLocator;
}
/// <summary>
/// Gets the view to which the navigation request represented by <paramref name="navigationContext"/> applies.
/// </summary>
/// <param name="region">The region.</param>
/// <param name="navigationContext">The context representing the navigation request.</param>
/// <returns>
/// The view to be the target of the navigation request.
/// </returns>
/// <remarks>
/// If none of the views in the region can be the target of the navigation request, a new view
/// is created and added to the region.
/// </remarks>
/// <exception cref="ArgumentException">when a new view cannot be created for the navigation request.</exception>
public object LoadContent(IRegion region, NavigationContext navigationContext)
{
if (region == null) throw new ArgumentNullException("region");
if (navigationContext == null) throw new ArgumentNullException("navigationContext");
string candidateTargetContract = this.GetContractFromNavigationContext(navigationContext);
var candidates = this.GetCandidatesFromRegion(region, candidateTargetContract);
var acceptingCandidates =
candidates.Where(
v =>
{
var navigationAware = v as INavigationAware;
if (navigationAware != null && !navigationAware.IsNavigationTarget(navigationContext))
{
return false;
}
var frameworkElement = v as FrameworkElement;
if (frameworkElement == null)
{
return true;
}
navigationAware = frameworkElement.DataContext as INavigationAware;
return navigationAware == null || navigationAware.IsNavigationTarget(navigationContext);
});
var view = acceptingCandidates.FirstOrDefault();
if (view != null)
{
return view;
}
view = this.CreateNewRegionItem(candidateTargetContract);
region.Add(view, null, CreateRegionManagerScope(view));
return view;
}
private bool CreateRegionManagerScope(object view)
{
bool createRegionManagerScope = false;
if (view is ICreateRegionManagerScope viewHasScopedRegions)
createRegionManagerScope = viewHasScopedRegions.CreateRegionManagerScope;
return createRegionManagerScope;
}
/// <summary>
/// Provides a new item for the region based on the supplied candidate target contract name.
/// </summary>
/// <param name="candidateTargetContract">The target contract to build.</param>
/// <returns>An instance of an item to put into the <see cref="IRegion"/>.</returns>
protected virtual object CreateNewRegionItem(string candidateTargetContract)
{
object newRegionItem;
try
{
newRegionItem = this.serviceLocator.GetInstance<object>(candidateTargetContract);
}
catch (ActivationException e)
{
throw new InvalidOperationException(
string.Format(CultureInfo.CurrentCulture, "Cannot create navigation target {0}", candidateTargetContract),
e);
}
return newRegionItem;
}
/// <summary>
/// Returns the candidate TargetContract based on the <see cref="NavigationContext"/>.
/// </summary>
/// <param name="navigationContext">The navigation contract.</param>
/// <returns>The candidate contract to seek within the <see cref="IRegion"/> and to use, if not found, when resolving from the container.</returns>
protected virtual string GetContractFromNavigationContext(NavigationContext navigationContext)
{
if (navigationContext == null) throw new ArgumentNullException(nameof(navigationContext));
var candidateTargetContract = UriParsingHelper.GetAbsolutePath(navigationContext.Uri);
candidateTargetContract = candidateTargetContract.TrimStart('/');
return candidateTargetContract;
}
/// <summary>
/// Returns the set of candidates that may satisfy this navigation request.
/// </summary>
/// <param name="region">The region containing items that may satisfy the navigation request.</param>
/// <param name="candidateNavigationContract">The candidate navigation target as determined by <see cref="GetContractFromNavigationContext"/></param>
/// <returns>An enumerable of candidate objects from the <see cref="IRegion"/></returns>
protected virtual IEnumerable<object> GetCandidatesFromRegion(IRegion region, string candidateNavigationContract)
{
if (region == null) throw new ArgumentNullException(nameof(region));
return region.Views.Where(v =>
string.Equals(v.GetType().Name, candidateNavigationContract, StringComparison.Ordinal) ||
string.Equals(v.GetType().FullName, candidateNavigationContract, StringComparison.Ordinal));
}
}
In your App.xaml.cs
protected override void RegisterTypes(IContainerRegistry containerRegistry)
{
containerRegistry.RegisterSingleton<IRegionNavigationContentLoader,ScopedRegionNavigationContentLoader>();
}
protected override void ConfigureDefaultRegionBehaviors(IRegionBehaviorFactory regionBehaviors)
{
base.ConfigureDefaultRegionBehaviors(regionBehaviors);
regionBehaviors.AddIfMissing(RegionManagerAwareBehaviour.BehaviorKey, typeof(RegionManagerAwareBehaviour));
}
Finally, to wire it all up:
Now in your ViewModelB implement IRegionManagerAware and have it as a normal property
public IRegionManager RegionManager { get; set; }
Then at your ViewB implement ICreateRegionManagerScope and have it as a get property
public bool CreateRegionManagerScope => true;
Now it should work.
Again, I truly recommend the videos at Pluralsight from Brian on Prism. He has a couple of videos that help a lot when you are starting with Prism.
d8372 | train | You don't need to use apply; you can just use your conditionals as boolean masks and do your operations that way.
mask = df["seg"] == df["seg2"]
true_rows = df.loc[mask]
false_rows = df.loc[~mask]
changed_rows = false_rows.assign(seg=false_rows.seg2)
df1 = pd.concat([true_rows, false_rows, changed_rows], ignore_index=True)
print(df1)
seg cod seg2
0 [55792, TRIM] A [55792, TRIM]
1 [ZQFC, DYWH] A [MEIL, 1724]
2 [MEIL, 1724] A [MEIL, 1724] | unknown | |
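For a self-contained run, here are the same steps with a small constructed frame (the column values below are placeholders, not the asker's data):

```python
import pandas as pd

# Small made-up frame mirroring the seg/cod/seg2 layout above
df = pd.DataFrame({
    "seg":  ["a", "b", "c"],
    "cod":  ["A", "A", "A"],
    "seg2": ["a", "x", "c"],
})

mask = df["seg"] == df["seg2"]
true_rows = df.loc[mask]                                # rows where seg already matches seg2
false_rows = df.loc[~mask]                              # rows kept with their original seg
changed_rows = false_rows.assign(seg=false_rows.seg2)   # copies with seg replaced by seg2

df1 = pd.concat([true_rows, false_rows, changed_rows], ignore_index=True)
print(df1)
```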
d8373 | train | I believe that contains functionality can only be used in tables configured to use/support Full Text Search -- an elderly feature of SQL Server that I have little experience with. If you are not using Full Text Search, I'm pretty sure contains will not work.
A: Before CONTAINS will work against a column, you need to set up a full-text index. Full-text search is actually a different engine which runs as a separate service in SQL Server, so make sure it is installed and running.
Once you're sure the component is installed (it may already be installed), you need to do the following:
*
*Create a Full-Text Catalogue
*Create a Full-Text Index (you can have multiple of these in the same catalogue) against the tables/columns you want to be able to use full-text keywords
*Run a population, which will crawl the index created and populate the catalogue (these are separate files which SQL Server needs to maintain in addition to the mdf/ldf)
There's an ok tutorial on how to do this by using SSMS in SQL Server 2008 by Pinal Dave on CodeProject. SQL Server 2014 should be very similar.
You can also perform these steps with TSQL:
*
*CREATE FULLTEXT CATALOG
*CREATE FULLTEXT INDEX
*ALTER FULLTEXT INDEX | unknown | |
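As a minimal T-SQL sketch of those statements (the catalog, table, column, and index names here are hypothetical; adjust them to your schema):

```sql
-- Hypothetical object names; adjust to your schema.
CREATE FULLTEXT CATALOG MyCatalog;

CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
    KEY INDEX PK_MyTable          -- requires a unique, single-column, non-nullable index
    ON MyCatalog
    WITH CHANGE_TRACKING AUTO;    -- AUTO starts a full population automatically

-- With CHANGE_TRACKING OFF you would kick off the crawl yourself:
-- ALTER FULLTEXT INDEX ON dbo.MyTable START FULL POPULATION;
```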
d8374 | train | You can use the .map() operator and map to the type you want:
const data: Observable<NestedObject[]> = getInitialObservable()
.map((response: Response) => <FlatObject[]>response.json().results)
.map((objects: FlatObject[]) => {
// example implementation, consider using hashes for faster lookup instead
const result: NestedObjects[] = [];
for (const obj of objects) {
// get all attributes except "group" into "props" variable
const { group, ...props } = obj;
let nestedObject = result.find(o => o.group === group);
if (!nestedObject) {
nestedObject = { group, objectProps: [] };
result.push(nestedObject);
}
nestedObject.objectProps.push(props);
}
return result;
}); | unknown | |
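The grouping logic in the second .map can be exercised on its own, outside the Observable chain; here is a plain-JavaScript version with made-up input (the group/objectProps property names follow the snippet above):

```javascript
// Same grouping as the second .map above, as a standalone function.
function groupObjects(objects) {
  const result = [];
  for (const obj of objects) {
    // pull out "group", keep every other attribute in "props"
    const { group, ...props } = obj;
    let nested = result.find(o => o.group === group);
    if (!nested) {
      nested = { group, objectProps: [] };
      result.push(nested);
    }
    nested.objectProps.push(props);
  }
  return result;
}

const grouped = groupObjects([
  { group: "a", id: 1 },
  { group: "b", id: 2 },
  { group: "a", id: 3 },
]);
console.log(JSON.stringify(grouped));
// [{"group":"a","objectProps":[{"id":1},{"id":3}]},{"group":"b","objectProps":[{"id":2}]}]
```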
d8375 | train | Check that the two sides of the comparison match, meaning that PublishedClause_ClauseId should have the same BigInt data type as the @EntityKeyValue1 parameter you are using. Mismatching them causes the query optimizer to either scan or not use indexes. Match them, then redeploy.
d8376 | train | You need to override OnActivityResult. In its arguments you will get an Intent containing the data you requested with StartActivityForResult.
From the Intent you get back, you can get the Uri for the file you have picked by reading its Data property. From there you will be able to get whatever you need.
d8377 | train | Going through the logic of the given code we can see that the animation-duration is always set to the same amount (7s) on every click - it never changes after the first click:
var increasePlus = document.getElementById("plus");
increasePlus.addEventListener('click', () => {
var sec= 5 + "s";
if(sec=="5s"){//this is always true as sec has just been set to 5s
sec = 6 + "s";//so sec is set to 6s
increasePlus.style.animationDuration = sec;
}
if(sec=="6s"){//this is always true as sec has (just) been set to 6s
sec = 7 + "s";//so sec is now set to 7s
increasePlus.style.animationDuration = sec;//and so the animation-duration is ALWAYS set to 7s on a click
}
});
It is difficult to click on a moving object which is what the given code seems to require (the clickable element id plus is also the one given the animation duration in that code) so in this snippet the plus element gets clicked and that updates the animation duration of a separate object which is the one that moves.
const increasePlus = document.getElementById("plus");
const theObject = document.getElementById('object');
increasePlus.addEventListener('click', () => {
//get the current animation-duration
//remember this has an s at the end so we need to get rid of that so we can add to it
let sec= window.getComputedStyle(theObject, null)["animationDuration"].replace('s','');
//add 1 to it
sec++;
//and set the animation-duration
theObject.style.animationDuration = sec + 's';
});
#plus{
font-size: 60px;
}
#object {
position: relative;
animation-name: move;
animation-duration: 5s;
animation-iteration-count: infinite;
animation-timing-function: linear;
width: 100px;
height: 100px;
background-color: magenta;
}
@keyframes move {
0% {
top: 0;
}
100% {
top: 30vh;
}
}
<button id="plus">+</button>
<div id="object"></div>
A: I think this can fail because of this line
var sec= 5 + "s";
this line always executes when you click the button, so in the first if condition you will always get sec = '5s', and it will then be bumped to '6s' (and, by the second if, straight on to '7s').
If you want that code to work, you could place var sec = 5 + 's' above the listener declaration, so the value is kept between clicks.
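The cascading behaviour is easy to reproduce in isolation: sec is reset at the top of the handler, and each if re-tests the value the previous one just assigned, so the result never changes:

```javascript
// Minimal reproduction of the original click handler's logic:
// sec is reset to "5s" on every call, the first if bumps it to "6s",
// and the second if immediately sees "6s" and bumps it to "7s".
function buggyNextDuration() {
  let sec = 5 + "s";
  if (sec === "5s") {
    sec = 6 + "s";
  }
  if (sec === "6s") {
    sec = 7 + "s";
  }
  return sec;
}

console.log(buggyNextDuration()); // "7s"
console.log(buggyNextDuration()); // "7s" again, no matter how many clicks
```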
d8378 | train | Your logic is a bit suspect in if grosspay > range(1000, 1500). What would it mean to be "greater" than a range of numbers? My guess is that the grosspay you input is, in fact, within the range [1000, 1500), so it hits this logic bug in your code and fails to assign it to anything.
The usual way to check if a number is within a range is to use the in operator.
if some_num in range(1, 10):
print("some_num is 1, 2, 3, 4, 5, 6, 7, 8, or 9")
However you'll notice that some_num MUST be contained in the integer range [1, 9] for this to trigger. If some_num is 7.5, this will fail. This is incredibly likely in the case of gross pay. What are the chances of someone's pay coming out to an exactly even dollar amount?
Instead what you could do is:
if grosspay <= 1000:
taxrate = 0.90
elif 1000 < grosspay <= 1500:
taxrate = 0.78
elif 1500 < grosspay:
taxrate = 0.63
Using elif instead of a series of ifs makes the code slightly more efficient, since if/elif/else is by definition one block whose branches are mutually exclusive. In other words:
a = 1
b = 2
if a == 1:
print("This gets done!")
if b == 2:
print("This gets done!")
if a == 1:
print("This gets done!")
elif b == 2:
print("BUT THIS DOESN'T!")
else:
print("(this doesn't either...)") | unknown | |
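The same if/elif chain, wrapped in a function so each pay band can be checked directly (the band boundaries and rates are taken from the snippet above):

```python
def tax_rate(grosspay):
    # if/elif/else makes the branches mutually exclusive by construction
    if grosspay <= 1000:
        return 0.90
    elif grosspay <= 1500:   # only reached when grosspay > 1000
        return 0.78
    else:                    # grosspay > 1500
        return 0.63

print(tax_rate(999.50), tax_rate(1200.75), tax_rate(2000))  # 0.9 0.78 0.63
```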
d8379 | train | This can be achieved using CSS clip-path with a polygon as the parameter. Here is an example:
<div class="dialog"></div>
And the CSS
.dialog{
position: absolute;
top: 10px;
left: 10px;
right: 10px;
bottom: 10px;
width: 500px;
height: 200px;
background-color: #d3d0c9;
background-image: url(http://lorempixel.com/400/200/sports/1/Dummy-Text/);
background-size: cover;
background-position: center center;
-webkit-clip-path: polygon(0% 25%, 85% 25%, 100% 0%, 100% 100%, 0% 100%);
clip-path: polygon(0% 25%, 85% 25%, 100% 0%, 100% 100%, 0% 100%);
}
<div class="dialog"></div>
Browser support is limited to modern browsers though.
You can play around using this tool : http://bennettfeely.com/clippy/
A: Here's a solution that uses transforms to accomplish the desired corner effect. Although the solution is more complicated than the accepted one, it can be implemented on pretty much all modern browsers. By using several of the transform polyfills, the solution can be implemented across the board.
The algorithm behind this solution achieves a corner element via skewX() transform that is equally applied on the image (set as a background) and its container, just in different directions (e.g., -63.43deg and 63.43deg, depending on the dimensions of the corner element). Then the generated corner element is perfectly aligned with the heading's background.
Here's a fiddle: http://jsfiddle.net/bLbt11a3/.
HTML:
<div class = "popup">
<header>
<h1>DIY Gardener</h1>
<div class = "corner-holder">
<div class = "corner"></div>
</div>
</header>
</div>
CSS:
* {
margin: 0;
padding: 0;
border: 0;
}
body {
padding: 10px;
background-color: #eee;
}
.popup {
box-shadow: 0 0 10px #ccc;
height: 240px;
width: 186px;
position: fixed;
top: 50px;
background-color: #fff;
}
.popup h1 {
font: bold 20px/3 Sans-Serif;
color: #fff;
padding: 0 20px;
background: url(http://thebusstopsherefoundation.com/images/bettis_wave.jpg)
no-repeat
-80px -90px/600px;
}
.popup header {
position: relative;
}
.corner-holder {
position: absolute;
right: 0;
top: 0;
height: 30px;
width: 60px;
overflow: hidden;
transform: translateY(-100%);
}
.corner-holder .corner,
.corner-holder .corner:before {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
transform-origin: bottom left;
/* webkit trick to get rid of jagged edges */
-webkit-backface-visibility: hidden;
}
.corner-holder .corner {
overflow: hidden;
transform: skewX(-63.43deg);
}
.corner-holder .corner:before {
content: "";
background: url(http://thebusstopsherefoundation.com/images/bettis_wave.jpg)
no-repeat
-206px -60px/600px;
transform: skewX(63.43deg);
} | unknown | |
d8380 | train | Here's one on RoseIndia's site that shows how to create an area chart in JSP http://www.roseindia.net/chartgraphs/areachart-jsppage.shtml
Now, just replace the charting code with the one for making bar charts and you are done: http://www.geodaq.com/jfreechart-sample/bar_chart_code.jsp | unknown | |
d8381 | train | Found it! The div #map needed the Bootstrap class:
class = "mx-auto"
A:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<title>JS Bin</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
</head>
<body>
<div class="container text-center">
<div class="row">
<div class="col-md-12">
<h1>My Map</h1>
</div>
</div>
<div class="row mt-5">
<div class="col-md-12">
<div id="map">
<iframe src="https://www.google.com/maps/embed?pb=!1m14!1m12!1m3!1d14475.259566651797!2d91.88062475!3d24.90429495!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!5e0!3m2!1sen!2sbd!4v1563361490340!5m2!1sen!2sbd" width="100%" height="450" frameborder="0" style="border:0" allowfullscreen></iframe>
</div>
</div>
</div>
</div>
</body>
</html>
Wrap your <h1> and <div> tags within .row > .col
*
*Containers provide a means to center and horizontally pad your
site’s contents. Use .container for a responsive pixel width or
.container-fluid for width: 100% across all viewport and device
sizes.
*Rows are wrappers for columns. Each column has horizontal padding
(called a gutter) for controlling the space between them. This
padding is then counteracted on the rows with negative margins. This
way, all the content in your columns is visually aligned down the
left side.
*In a grid layout, content must be placed within columns and only
columns may be immediate children of rows.
For reference go through https://getbootstrap.com/docs/4.0/layout/grid/ this link | unknown | |
d8382 | train | You can have your .htaccess like this:
DirectoryIndex index.php
RewriteEngine On
# request is not for a file
RewriteCond %{REQUEST_FILENAME} !-f
# request is not for a directory
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^([0-9a-zA-Z-]+)/?$ /show.php?id=$1 [L,QSA] | unknown | |
d8383 | train | You can not use session_start() or header() after content has been sent to the browser (<!DOCTYPE html> in your case).
Here, even if you are using ob_start() to buffer the output, what came before has not been buffered and is sent to the browser, which prevents header() and session_start() from working.
From the PHP documentation :
To use cookie-based sessions, session_start() must be called before outputting anything to the browser.
Remember that header() must be called before any actual output is sent, either by normal HTML tags, blank lines in a file, or from PHP.
The fact that it works on your local computer but not on your Web hosting provider's server is most likely due to differences between your configuration of PHP or of your Web server. For instance, output buffering may be enabled by default on your local installation (output_buffering = 4096 for instance), and be disabled on your Web hosting (output_buffering = Off). | unknown | |
d8384 | train | Can I define easier location for my published app.
Well inside of your connection string you can specify the location of the database under Data Source.
Take your database and move it wherever you want, and then update the Data Source inside of your connection string to point to that path. You might have to play with it a few times to get the path right, but this should do what you are wanting to do.
VB.net seem to place my database file into /userprofile/local settings/apps/2.0/data/random/random/appname/data/ folder.
If you are making an installer then you will want to keep the database close to the application, most likely inside a sub-folder in the application's directory (like the data folder). That is why VS (not VB .Net) tends to place a created database inside of the data folder. | unknown | |
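As a sketch, such a connection string might live in app.config like this (the name, path, and provider below are placeholders; use whatever matches your actual database type):

```xml
<!-- Hypothetical example; adjust the path and provider to your setup -->
<connectionStrings>
  <add name="MyAppDb"
       connectionString="Data Source=C:\MyAppData\MyDatabase.sdf"
       providerName="System.Data.SqlServerCe.4.0" />
</connectionStrings>
```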
d8385 | train | In SQLite, autoincrementing fields are intended to be used as actual primary keys for their records.
You should just use it as the ID for your orders table.
If you really want to have an atomic counter independent of corresponding table records, use a table with a single record.
ACID is ensured with transactions:
BEGIN;
SELECT number FROM MyTable;
UPDATE MyTable SET number = ? + 1;  -- bind the value returned by the SELECT
COMMIT;
A: ok, looks like sqlite either doesn't have what I need, or I am missing it. Here's what I came up with:
*
*declare zorder as integer primary key autoincrement, zuid integer in orders table
this means every new row gets an ascending number, starting with 1
*generate a random number:
rnd = int(random.random() * 1000000) # unseeded python uses system time
*create new order (just the SQL for simplicity):
'INSERT INTO orders (zuid) VALUES ('+str(rnd)+')'
*find that exact order number using the random number:
'SELECT zorder FROM orders WHERE zuid = '+str(rnd)
*pack away that number as the new order number (newordernum)
*clobber the random number to reduce collision risks
'UPDATE orders SET zuid = 0 WHERE zorder = '+str(newordernum)
...and now I have a unique new order, I know what the correct order number is, the risk of a read collision is reduced to negligible, and I can prepare that order without concern that I'm trampling on another newly created order.
Just goes to show you why DB authors implement sequences, lol. | unknown | |
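The whole sequence can be sketched with Python's built-in sqlite3 module; parameterized queries are swapped in for the string concatenation above (which is unsafe in general), but the scheme is the same:

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (zorder INTEGER PRIMARY KEY AUTOINCREMENT, zuid INTEGER)"
)

rnd = random.randrange(1, 1_000_000)        # 1. random tag for this insert
conn.execute("INSERT INTO orders (zuid) VALUES (?)", (rnd,))   # 2. create order
newordernum = conn.execute(                 # 3. find our row via the random tag
    "SELECT zorder FROM orders WHERE zuid = ?", (rnd,)
).fetchone()[0]
conn.execute(                               # 4. clobber the tag again
    "UPDATE orders SET zuid = 0 WHERE zorder = ?", (newordernum,)
)
conn.commit()
print(newordernum)  # 1 on a fresh table; ascending from there
```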
d8386 | train | I think you asked this question before, and it's also quite clear from your code sample that you are using GSView, not Ghostscript.
Now, while GSView does use Ghostscript to do the heavy lifting, it's a concern that you are unable to differentiate between these two applications.
You still haven't provided an example PDF file to look at, nor a command line, though you have now at least managed to quote the Ghostscript version. You need to also give a command line (no, I'm not prepared to assemble it from reading your code) and you should try this from the command line, not inside your own application, in order to show that it's not your application making the error.
You should consider upgrading Ghostscript to the current version.
Note that a quick perusal of your code indicates that you are specifying a number of command line options (eg -dPDFSETTINGS) which are only appropriate for converting a file into PDF, not for any other purpose (such as printing).
So as I said before, provide a specimen file to reproduce the problem, and a command line (preferably a Ghostscript command line) which causes the problem. Knowing which printer you are using would probably be useful too, although it's highly unlikely I will have a duplicate to test on.
A: Answer - UPDATE 16/12/2013
I managed to get it fixed and wanted to enclose the working solution in case it helps others. Special thanks to 'KenS' since he spent a lot of time guiding me.
To summarize, I finally decided to use GSView along with GhostScript to print PDF to bypass Adobe. The core logic is given below;
//PrintParamter is a custom data structure to capture file related info
private void PrintDocument(PrintParamter fs, string printerName = null)
{
if (!File.Exists(fs.FullyQualifiedName)) return;
var filename = fs.FullyQualifiedName ?? string.Empty;
printerName = printerName ?? GetDefaultPrinter(); //get your printer here
var processArgs = string.Format("-dAutoRotatePages=/All -dNOPAUSE -dBATCH -sPAPERSIZE=a4 -dFIXEDMEDIA -dPDFFitPage -dEmbedAllFonts=true -dSubsetFonts=true -dPDFSETTINGS=/prepress -dNOPLATFONTS -sFONTPATH=\"C:\\Program Files\\gs\\gs9.10\\fonts\" -noquery -dNumCopies=1 -all -colour -printer \"{0}\" \"{1}\"", printerName, filename);
try
{
var gsProcessInfo = new ProcessStartInfo
{
WindowStyle = ProcessWindowStyle.Hidden,
FileName = gsViewEXEInstallationLocation,
Arguments = processArgs
};
using (var gsProcess = Process.Start(gsProcessInfo))
{
gsProcess.WaitForExit();
}
    }
    catch (Exception ex)
    {
        // log or otherwise handle the failure to launch GSView
        Console.WriteLine(ex.Message);
    }
}
A: You could use GSPRINT.
I've managed to make it work by only copying gsprint.exe/gswin64c.exe/gsdll64.dll into a directory and launching it from there.
sample code :
// This uses gsprint (mind the paths)
private const string gsPrintExecutable = @"C:\gs\gsprint.exe";
private const string gsExecutable = @"C:\gs\gswin64c.exe";
string pdfPath = @"C:\myShinyPDF.PDF";
string printerName = "MY PRINTER";
string processArgs = string.Format("-ghostscript \"{0}\" -copies=1 -all -printer \"{1}\" \"{2}\"", gsExecutable, printerName, pdfPath );
var gsProcessInfo = new ProcessStartInfo
{
WindowStyle = ProcessWindowStyle.Hidden,
FileName = gsPrintExecutable ,
Arguments = processArgs
};
using (var gsProcess = Process.Start(gsProcessInfo))
{
gsProcess.WaitForExit();
}
A: Try the following command within Process.Start():
gswin32c.exe -sDEVICE=mswinpr2 -dBATCH -dNOPAUSE -dNOPROMPT -dNoCancel -dPDFFitPage -sOutputFile="%printer%\\[printer_servername]\[printername]" "[filepath_to_pdf]"
It should look like this in C#:
string strCmdText = "gswin32c.exe -sDEVICE=mswinpr2 -dBATCH -dNOPAUSE -dNOPROMPT -dNoCancel -dPDFFitPage -sOutputFile=\"%printer%\\\\[printer_servername]\\[printername]\" \"[filepath_to_pdf]\"";
System.Diagnostics.Process.Start("CMD.exe", "/C " + strCmdText); // /C tells cmd to run the command and then exit
This will place the specified PDF file into the print queue.
Note- your gswin32c.exe must be in the same directory as your C# program. I haven't tested this code. | unknown | |
d8387 | train | Android Studio doesn't read environment variables, so this approach won't work. Also, using the projectDir scheme in settings.gradle will probably cause problems. Android Studio has a limitation that all of its modules need to be located underneath the project root. If you have libraries that are used in multiple projects and they can't be placed under a single project root, the best advice is to have them publish JARs or AARs to a local Maven repository that individual projects can pick up.
read more Environment variable in settings.gradle not working with Android Studio
A: It works for me with the following steps:
*
*Set your variable in Windows
*Reboot
*Reach it in the Gradle build: "$System.env.MYVARIABLE"
d8388 | train | *
*The ability for a client application to connect is almost entirely unrelated to the state of a sender channel. (I say almost because theoretically you could use up all the resources in your queue manager by having loads of retrying senders and then maybe they could affect clients).
*When a client application makes a connection to a queue manager, the network connection is first caught by the listener, and then a running channel of type SVRCONN is started. This is a different type from a SENDER channel, and so there is no requirement to have a SENDER channel running for the client connection to be successful.
*Sender channel status does not matter for the client to be able to connect.
Let's try to diagnose your two problems. Look in the queue manager AMQERR01.LOG (found in the data directory under \Qmgrs\<qm-name>\errors) and edit your question to add the errors you see in there. There should be errors that explain why the sender channel is retrying, and some to explain why the client cannot connect.
It is possible that the problem with the client not being able to connect is because it is not even reaching the queue manager machine - in which case there will be nothing about that in the queue manager error log. In this case, you should also look in the AMQERR01.LOG on the client machine, this time in the data directory just under the errors folder (as no queue manager name there). You should also have seen some sort of error message or MQRC Reason code from the client application - you should tell us that too. | unknown | |
d8389 | train | Try using the one available in marketplace https://github.com/jitterbit/get-changed-files#get-all-changed-files | unknown | |
d8390 | train | I believe you actually want
cal:Bind.Model="{Binding SelectedAudit}"
Otherwise you are trying to do viewmodel-first resolution in which case Caliburn Micro will look to resolve a view for the VM instead of using the view that you have provided.
e.g.
<aura:AuditView Grid.Row="0" x:Name="SelectedAudit" cal:Bind.Model="{Binding SelectedAudit}" /> | unknown | |
d8391 | train | If malloc() returns NULL it means that the allocation was unsuccessful. It's up to you to deal with this error case. I personally find it excessive to exit your entire process because of a failed allocation. Deal with it some other way.
A: In library code, it's absolutely unacceptable to call exit or abort under any circumstances except when the caller broke the contract of your library's documented interface. If you're writing library code, you should gracefully handle any allocation failures, freeing any memory or other resources acquired in the attempted operation and returning an error condition to the caller. The calling program may then decide to exit, abort, reject whatever command the user gave which required excessive memory, free some unneeded data and try again, or whatever makes sense for the application.
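A minimal C sketch of that library-side pattern, where the helper reports failure to its caller instead of exiting (duplicate_string is an illustrative name, not a known API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Library-style helper: on allocation failure it returns NULL and leaves
 * the exit/abort/retry decision to the caller. */
char *duplicate_string(const char *s) {
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;          /* report the error; do not exit() or abort() */
    strcpy(copy, s);
    return copy;
}
```

The caller can then decide whether a failed allocation means aborting, saving state, or dropping the one operation that needed the memory.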
In all cases, if your application is holding data which has not been synchronized to disk and which has some potential value to the user, you should make every effort to ensure that you don't throw away this data on allocation failures. The user will almost surely be very angry. It's best to design your applications so that the "save" function does not require any allocations, but if you can't do that in general, you might instead want to perform frequent auto-save-to-temp-file operations or provide a way of dumping the memory contents to disk in a form that's not the standard file format (which might for example require ugly XML and ZIP libraries, each with their own allocation needs, to write) but instead a more "raw dump" which you application can read and recover from on the next startup.
A: Use Both?
It depends on whether the core file will be useful. If no one is going to analyze it, then you may as well simply _exit(2) or exit(3).
If the program will sometimes be used locally and you intend to analyze any core files produced, then that's an argument for using abort(3).
You could always choose conditionally, so, with --debug use abort(3) and without it use exit. | unknown | |
d8392 | train | I am not really fond of playing with points and superview. What I can suggest is to make a subclass of UITapGestureRecognizer, as follows, which can hold extra data. In your case it would be an index path
class CustomGesture: UITapGestureRecognizer {
var indexPath: NSIndexPath? = nil // var (not let), so it can be assigned after creation
}
And then in your cellForRowAtIndexPath you can add the index path to the newly created CustomGesture, which would look like:
func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell
{
cell.imageView.userInteractionEnabled = true
let tapImageView = CustomGesture(target: self, action: #selector(HomeFeedViewController.tapImageView(_:)))
tapImageView.indexPath = indexPath// Add the index path to the gesture itself
cell.imageView.addGestureRecognizer(tapImageView)
return cell as CastleCell
}
Now, since you have added the indexPath, you don't need to play around with superviews and you can access the cell like this:
func tapImageView(gesture: CustomGesture) {
let indexPath = gesture.indexPath!
let cell = self.tableView.cellForRowAtIndexPath(indexPath) as! CastleCell // indexPath was already unwrapped above
performSegueWithIdentifier("SegueName", sender: self)
} | unknown | |
d8393 | train | I figured this out by using postman in my app:
import request from 'postman-request'
const formData = {
'your-name': name,
'your-email': email,
'your-subject': inquiries.find(x=> x.value === inquiry).text,
'file-871': file
}
request.post('https://your-domain/wp-json/contact-form-7/v1/contact-forms/your-form-id/feedback',{ form: formData}, function(err,httpResponse,body){
if(err) {
console.log(err)
}
else {
console.log(body)
}
}) | unknown | |
d8394 | train | Although the latest Spark doc says that it has support for Python 2.7+/3.4+, it actually doesn't support Python 3.8 yet. According to this PR, Python 3.8 support is expected in Spark 3.0. So, either you can try out the Spark 3.0 preview release (assuming you're not gonna do a production deployment) or 'temporarily' fall back to Python 3.6/3.7 for Spark 2.4.x.
A: Spark 3.0 has been released for a while now and is compatible with Python 3.8+.
The error you experienced is no longer reproducible. | unknown | |
d8395 | train | I'm building exactly this as an open source project on GitHub juliofruta/CodableCode. Feel free to submit a Pull request since this does not support all cases as specified in the comments. I'm copy and pasting my current solution here:
import Foundation
enum Error: Swift.Error {
case invalidData
}
let identation = " "
extension String {
var asType: String {
var string = self
let firstChar = string.removeFirst()
return firstChar.uppercased() + string
}
var asSymbol: String {
var string = self
let firstChar = string.removeFirst()
return firstChar.lowercased() + string
}
mutating func lineBreak() {
self = self + "\n"
}
func makeCodableTypeArray(anyArray: [Any], key: String, margin: String) throws -> String {
var types = Set<String>()
var existingTypes = Set<String>()
var structCodeSet = Set<String>()
try anyArray.forEach { jsonObject in
var type: String?
// check what type is each element of the array
switch jsonObject {
case _ as String:
type = "String"
case _ as Bool:
type = "Bool"
case _ as Decimal:
type = "Decimal"
case _ as Double:
type = "Double"
case _ as Int:
type = "Int"
case let dictionary as [String: Any]:
let objectData = try JSONSerialization.data(withJSONObject: dictionary, options: [])
let objectString = String(data: objectData, encoding: .utf8)!
let dummyTypeImplementation = try objectString.codableCode(name: "TYPE", margin: "")
// if the existing type does not contain the dummy type implementation
if !existingTypes.contains(dummyTypeImplementation) {
// insert it
existingTypes.insert(dummyTypeImplementation)
// keep a count
if existingTypes.count == 1 {
type = key.asType
} else {
type = key.asType + "\(existingTypes.count)"
}
// and get the actual implementation
let typeImplementation = try objectString.codableCode(name: type!, margin: margin + identation)
structCodeSet.insert(typeImplementation)
}
default:
type = ""
assertionFailure() // unhandled case
}
if let unwrappedType = type {
types.insert(unwrappedType)
}
}
// write type
var swiftCode = ""
if types.isEmpty {
swiftCode += "[Any]"
} else if types.count == 1 {
swiftCode += "[\(types.first!)]"
} else {
swiftCode += "\(key.asType)Options"
// TODO: Instead of enum refactor to use optionals where needed.
// TODO: Build Swift Build package plugin
// Use diffing algorithm to introduce optionals?
// Introduce strategies:
// create
// 1. enum with associated types
// 2. optionals where needed
// 3. optionals everywhere
// add support to automatically fix when keys collide with reserved words, for example:
// let return: Return // this does not compile and is part of the bitso api
// so add support for coding keys
//
// struct Landmark: Codable {
// var name: String
// var foundingYear: Int
// var location: Coordinate
// var vantagePoints: [Coordinate]
//
// enum CodingKeys: String, CodingKey {
// case name = "return"
// case foundingYear = "founding_date"
//
// case location
// case vantagePoints
// }
// }
// create enum
swiftCode.lineBreak()
swiftCode.lineBreak()
swiftCode += margin + identation + "enum \(key.asType)Options: Codable {"
types.forEach { type in
swiftCode.lineBreak()
// enum associatedTypes
swiftCode += margin + identation + identation + "case \(type.asSymbol)(\(type))"
}
swiftCode.lineBreak()
swiftCode += margin + identation + "}"
}
// write implementations
structCodeSet.forEach { implementation in
swiftCode.lineBreak()
swiftCode.lineBreak()
swiftCode += implementation
swiftCode.lineBreak()
}
return swiftCode
}
/// Compiles a valid JSON to a Codable Swift Type as in the following Grammar spec: https://www.json.org/json-en.html
/// - Parameter json: A valid JSON string
/// - Throws: Not sure if it should throw right now. We can check if the JSON is valid inside
/// - Returns: The string of the type produced by the JSON
public func codableCode(name: String, margin: String = "") throws -> String {
var swiftCode = ""
swiftCode += margin + "struct \(name.asType): Codable {"
guard let data = data(using: .utf8) else {
throw Error.invalidData
}
if let dictionary = try JSONSerialization.jsonObject(with: data, options: []) as? [String: Any] {
try dictionary
.sorted(by: { $0.0 < $1.0 })
.forEach { pair in
let (key, value) = pair
swiftCode.lineBreak()
swiftCode += margin + identation + "let \(key.asSymbol): "
switch value {
case _ as Bool:
swiftCode += "Bool"
case _ as String:
swiftCode += "String"
case _ as Decimal:
swiftCode += "Decimal"
case _ as Double:
swiftCode += "Double"
case _ as Int:
swiftCode += "Int"
case let jsonObject as [String: Any]:
let objectData = try JSONSerialization.data(withJSONObject: jsonObject, options: [])
let objectString = String(data: objectData, encoding: .utf8)!
swiftCode += "\(key.asType)"
swiftCode.lineBreak()
swiftCode.lineBreak()
swiftCode += try objectString.codableCode(name: key, margin: margin + identation)
swiftCode.lineBreak()
case let anyArray as [Any]:
swiftCode += try makeCodableTypeArray(anyArray: anyArray, key: key, margin: margin)
// TODO: Add more cases like dates
default:
swiftCode += "Any"
}
}
}
swiftCode.lineBreak()
swiftCode += margin + "}"
return swiftCode
}
public var codableCode: String? {
try? codableCode(name: "<#SomeType#>")
}
} | unknown | |
d8396 | train | I've figured out the answer for this:
When jQuery loads, it assigns the event handler via $(".accordion .accordion-trigger-all.open").on('click', function(), so at that moment it only binds to whichever element is open. The selector is never evaluated again, so after the .open class changes the handler is not attached to the element that now matches.
Simple solution: $(".accordion").on('click','.accordion-trigger-all.open', function()
which will add the event handler to the whole .accordion element instead of only the element that had the .open class when the handler was bound.
The same should be done for the opposite: $(".accordion").on('click','.accordion-trigger-all:not(.open)', function() | unknown | |
d8397 | train | On the WCF client, you would have access to
HttpContext.Current.Request
Now this Request object contains cookies. You could loop over the cookie collection and remove the one you need.
foreach(var cookie in request.Cookies) { // }
There is an excellent article at CodeProject which explains cookie management on a WCF client.
UPDATE
HttpContext is only available on the server side, so my previous answer was incorrect, as pointed out by Phil.
The correct way to do it would be rather clumsy, as you have to get hold of the HttpRequest itself
MyWebServiceClient client = new MyWebServiceClient();
using ( new OperationContextScope( client.InnerChannel ) )
{
HttpRequestMessageProperty request = new HttpRequestMessageProperty();
//get the instance of your AuthCookie and make it blank
request.Headers["AuthCookie"] = "";
OperationContext.Current.OutgoingMessageProperties[
HttpRequestMessageProperty.Name] = request;
client.InvokeSomeMethod();
}
Found this example here | unknown | |
d8398 | train | Things don't just happen, they happen for a reason. If they look as if they just happen, then that just means you don't know the reason... which is why you are asking. So...
The problem has to be with either your html or your css. As you don't give us much to go on, there isn't much that anyone can say.
You could put a page up, and provide a link to that, for people to look at the html and the css. No reason you can't do that.
Things to look out for in your css? floats that haven't been cleared for example.
Things to look out for in the Html? Unclosed tags.
You can diagnose the above kind of problem by right clicking on the offending area and doing "Inspect Element" when viewing the page in either Google Chrome or FireFox?
Are you assigning css classes using jquery or similar? Probably not, given your inline js code in the a tag...
All the above or a combination thereof are possible candidates, but in the absence of a page to look at, your not going to get much help.
A: I may not be correct but what i interpreted from the description is you are appending some HTML from code behind into your place holder.
If it is possible you can append the elements into your div
through jquery.
$('div.warning').append("your content")
A: Just guessing here, are you overriding the page's render method? And ff you try and change the containing div to runat="server" what happens? | unknown | |
d8399 | train | I have done some research into fast KD-tree implementations a few months ago, and I agree with Anony-Mousse that quality (and "weight" of libraries) varies strongly. Here are some of my findings:
kdtree2 is a little known and pretty straightforward KD-tree implementation I found to be quite fast for 3D problems, especially if you allow it to copy and re-sort your data. Also, it is small and very easy to incorporate and adapt. Here is a paper on the implementation by the author (don't be put off by the mentioning of Fortran in the title). This is the library I ended up using. My colleagues benchmarked its speed for 3D points against VLFeat's KD-trees and another library I don't remember (many FLANN, see below) and it won.
FLANN has a reputation of being fast, and is used and recommended quite often recently. It aims at the higher dimensional case, where it offers approximate algorithms, but is also used in the Point Cloud Library which deals with 3D problems.
I did not experiment with CGAL since I found the library to be too heavyweight. I agree that CGAL has a good reputation, but I feel it occasionally suffers from oversophistication.
A: From my experience, implementation quality varies widely, unfortunately. I have, however, never looked at the CGAL implementation.
The worst case for the k-d-tree usually is when due to incremental changes it becomes too unbalanced, and should be reloaded.
However, in general such trees are most efficient when you don't know the data distribution.
In your case it sounds as if a simple grid-based approach may be the best choice. If you want, you can consider a texture to be a dense 2d grid. So maybe you can find a 2d projection where a grid works good, and then intersect with this projection.
A: Answers are not the place to ask questions, but your question is not a question, but a statement that the kd-tree of CGAL sucks.
Reading 1.8mio points of a geological data model, and computing the 50 closest points for each of these points, has the following performance on my Dell Precision, Windows7, 64bit, VC10:
*reading the points from a file: 10 sec
*Construction of the tree: 1.3 sec
*1.8 mio queries reporting the 50 closest points: 17 sec
Do you have similar performance? Would you expect a kd-tree to be faster?
Also I am wondering where your query points are, that is, close to the surface or close to the skeleton.
A: Take a look at ApproxMVBB library under the MPL licence:
https://github.com/gabyx/ApproxMVBB:
The kdTree implementation should be comparable to PCL(FLANN) and might be even faster. (tests with PCL seemed to be faster with my implementation!)
Disclaimer: I am the owner of this library, and by no means does this library claim to be any faster, and serious performance tests have not been conducted yet, but I am using this library successfully in granular rigid body dynamics where speed is king!
However, this library is very small, and the kdTree implementation is very generic (see the examples) and lets you have custom splitting heuristics and other fancy stuff :-).
Similar improvements and considerations as in nanoflann (direct data access etc., generic data, n-dimensional) are implemented (see the KdTree.hpp header).
Some Updates on Timing:
The example kdTreeFiltering contains some small benchmarks:
The Stanford bunny with 35947 points is loaded (fully working example in the repo out of the box):
The results:
Bunny.txt
Loaded: 35947 points
KDTree:: Exotic point traits , Vector3* + id, start: =====
KdTree build took: 3.1685ms.
Tree Stats:
nodes : 1199
leafs : 600
tree level : 11
avg. leaf data size : 29.9808
min. leaf data size : 0
max. leaf data size : 261
min. leaf extent : 0.00964587
max. leaf extent : 0.060337
SplitHeuristics Stats:
splits : 599
avg. split ratio (0,0.5] : 0.5
avg. point ratio [0,0.5] : 0.22947
avg. extent ratio (0,1] : 0.616848
tries / calls : 599/716 = 0.836592
Neighbour Stats (if computed):
min. leaf neighbours : 6
max. leaf neighbours : 69
avg. leaf neighbours : 18.7867
(Built with methods: midpoint, no split heuristic optimization loop)
Saving KdTree XML to: KdTreeResults.xml
KDTree:: Simple point traits , Vector3 only , start: =====
KdTree build took: 18.3371ms.
Tree Stats:
nodes : 1199
leafs : 600
tree level : 10
avg. leaf data size : 29.9808
min. leaf data size : 0
max. leaf data size : 306
min. leaf extent : 0.01
max. leaf extent : 0.076794
SplitHeuristics Stats:
splits : 599
avg. split ratio (0,0.5] : 0.448302
avg. point ratio [0,0.5] : 0.268614
avg. extent ratio (0,1] : 0.502048
tries / calls : 3312/816 = 4.05882
Neighbour Stats (if computed):
min. leaf neighbours : 6
max. leaf neighbours : 43
avg. leaf neighbours : 21.11
(Built with methods: midpoint, median,geometric mean, full split heuristic optimization)
Lucy.txt model with 14 million points:
Loaded: 14027872 points
KDTree:: Exotic point traits , Vector3* + id, start: =====
KdTree build took: 3123.85ms.
Tree Stats:
nodes : 999999
leafs : 500000
tree level : 25
avg. leaf data size : 14.0279
min. leaf data size : 0
max. leaf data size : 159
min. leaf extent : 2.08504
max. leaf extent : 399.26
SplitHeuristics Stats:
splits : 499999
avg. split ratio (0,0.5] : 0.5
avg. point ratio [0,0.5] : 0.194764
avg. extent ratio (0,1] : 0.649163
tries / calls : 499999/636416 = 0.785648
(Built with methods: midpoint, no split heuristic optimization loop)
KDTree:: Simple point traits , Vector3 only , start: =====
KdTree build took: 7766.79ms.
Tree Stats:
nodes : 1199
leafs : 600
tree level : 10
avg. leaf data size : 11699.6
min. leaf data size : 0
max. leaf data size : 35534
min. leaf extent : 9.87306
max. leaf extent : 413.195
SplitHeuristics Stats:
splits : 599
avg. split ratio (0,0.5] : 0.297657
avg. point ratio [0,0.5] : 0.492414
avg. extent ratio (0,1] : 0.312965
tries / calls : 5391/600 = 8.985
Neighbour Stats (if computed):
min. leaf neighbours : 4
max. leaf neighbours : 37
avg. leaf neighbours : 12.9233
(Built with methods: midpoint, median,geometric mean, full split heuristic optimization)
Take care with the interpretation, and look at the settings used in the example file.
However comparing with results from other people: ~3100ms for 14*10⁶ points is quite slick :-)
Processor used: Intel® Core™ i7 CPU 970 @ 3.20GHz × 12 , 12GB Ram
A: If the kdtree is fast for small sets, but "slow" for large (>100000?) sets, you may be suffering from flushing the processor caches. If the top few nodes are interleaved with rarely used leaf nodes then you will fit fewer heavily used nodes in the processor cache(s). This can be improved by minimising the size of the nodes and careful layout of the nodes in memory.. but eventually you will be flushing a fair number of nodes anyway. You can end up with access to main memory being the bottle-neck.
Valgrind tells me one version of my code executes 5% fewer instructions, but I believe the stopwatch when it tells me it is about 10% slower for the same input. I suspect valgrind doing a full cache simulation would tell me the right answer.
If you are multi-threaded, you may want to encourage the threads to be doing searches in similar areas to reuse the cache... that assumes a single multi-core processor - multiple processors might want the opposite approach.
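On the point about minimising node size and layout: on a typical 64-bit platform, replacing child pointers with 32-bit indices into a flat node array shrinks each node considerably. A sketch (field names made up, not taken from any of the libraries above):

```cpp
#include <cstdint>

// Two equivalent k-d tree node layouts (illustrative only).
// On a common 64-bit platform the pointer version is 24 bytes after
// padding, while the index version fits in 16, so roughly twice as
// many nodes fit per 64-byte cache line.
struct PointerNode {
    float split;        // split coordinate
    std::uint8_t axis;  // 0, 1 or 2
    PointerNode* left;
    PointerNode* right;
};

struct CompactNode {
    float split;
    std::uint32_t left;   // indices into a flat std::vector<CompactNode>;
    std::uint32_t right;  // reserve one value (e.g. UINT32_MAX) for "no child"
    std::uint8_t axis;
};
```

The indices also make the flat array trivially relocatable, which helps when laying the heavily used top nodes out contiguously for locality.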
Hint: You pack more 32bit indexes in memory than 64bit pointers. | unknown | |
d8400 | train | You can use this code, which reloads the page after five seconds: setTimeout(() => window.location.reload(true), 5000); | unknown |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.