ThreadCtl() execution time?
 
________________________________________________________________________

Applicable Environment
________________________________________________________________________
  • Topic: ThreadCtl(), I/O privileges
  • SDP: 6.4.0
  • Target: All supported targets
________________________________________________________________________

Question
________________________________________________________________________

Calling ThreadCtl(_NTO_TCTL_IO, 0) takes up to 900 ms to execute. If we remove source files / linked libraries from the target executable, the time required for ThreadCtl() to return decreases. A simple test case that only calls ThreadCtl(_NTO_TCTL_IO, 0) returns in about 1-2 ms. What is the expected behavior of ThreadCtl()? Are there any known issues that cause ThreadCtl(_NTO_TCTL_IO, 0) to take longer to execute as the size of the executable grows?
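
For reference, a minimal sketch of such a timing test (assuming the program is run as root on the target and using CLOCK_MONOTONIC for the measurement) might look like this:

/*
 * time_threadctl.c - measure how long ThreadCtl(_NTO_TCTL_IO, 0) takes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/neutrino.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);

    /* Request I/O privileges; this is the call being measured. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl(_NTO_TCTL_IO)");
        return EXIT_FAILURE;
    }

    clock_gettime(CLOCK_MONOTONIC, &end);

    printf("ThreadCtl(_NTO_TCTL_IO, 0) took %.3f ms\n",
           (end.tv_sec - start.tv_sec) * 1000.0 +
           (end.tv_nsec - start.tv_nsec) / 1.0e6);

    return EXIT_SUCCESS;
}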

________________________________________________________________________

Answer
________________________________________________________________________

This is a side effect of having to lock down pages. Starting in 6.4 (I think that's when it came in), the memory manager lazily puts in mappings, so to keep drivers from suddenly faulting in their ISRs we overloaded ThreadCtl() to do essentially an mlockall(MCL_CURRENT | MCL_FUTURE) before returning. Since every driver that calls InterruptAttach() is already required to call ThreadCtl() for I/O privity, this was the least invasive place to do it. The size of the binary likely changes the amount of data / process map that has to be locked down, and that in turn increases the time the call takes to complete.
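
For context, the reason the overhead shows up in drivers is the usual initialization order: ThreadCtl() first, then InterruptAttach(). A minimal sketch of that pattern follows (the interrupt number, handler body, and flags are placeholders for illustration, not taken from the article):

#include <stdio.h>
#include <stdlib.h>
#include <sys/neutrino.h>

#define MY_IRQ 5    /* placeholder interrupt number for illustration */

static const struct sigevent *isr_handler(void *area, int id)
{
    /* Runs at interrupt time; it must not touch pageable memory,
     * which is why the process is locked down beforehand. */
    return NULL;
}

int main(void)
{
    int id;

    /* Gain I/O privileges. As described above, this also locks the
     * process's memory (effectively mlockall(MCL_CURRENT | MCL_FUTURE)),
     * so a larger process pays a larger one-time cost here. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl");
        return EXIT_FAILURE;
    }

    /* With the memory already locked, the ISR can't page-fault. */
    id = InterruptAttach(MY_IRQ, isr_handler, NULL, 0, 0);
    if (id == -1) {
        perror("InterruptAttach");
        return EXIT_FAILURE;
    }

    /* ... driver main loop ... */

    InterruptDetach(id);
    return EXIT_SUCCESS;
}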

The relevant kernel source:
 
case _NTO_TCTL_IO:
    if(act != op)
        return ENOTSUP;

    if(!kerisroot(act)) {
        return ENOERROR;
    }
    // Check with the memory manager to make sure that
    // it's locked all the memory for the process (so we don't
    // page fault after an InterruptDisable/Lock has been done).
    r = memmgr.mlock(act->process, 0, 0, -1);
    if(r == -1) {
        // The memmgr has blocked this thread and sent
        // a pulse to lock the memory down. Restart the
        // kernel call - when we come in again all should be well.
        KERCALL_RESTART(act);
        return ENOERROR;
    }
    if(r != EOK) return r;
    cpu_thread_priv(act);
    act->flags |= _NTO_TF_IOPRIV;
    break;

You can also verify this by booting with procnto -mL, which causes memory to be locked at mmap() time. In that case memmgr.mlock should have nothing to do, so the cost of the call should be very small (on the order of microseconds).
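
If changing the procnto options isn't practical, a similar effect can be approximated from within the process itself: assuming the mlockall() equivalence described above, locking memory explicitly before requesting I/O privileges moves the cost out of ThreadCtl(). A sketch (not from the original article):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/neutrino.h>

int main(void)
{
    /* Lock current and future mappings up front; the lock-down cost is
     * paid here instead of inside ThreadCtl(). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {
        perror("mlockall");
        return EXIT_FAILURE;
    }

    /* Memory is already locked, so the kernel's lock step should have
     * little or nothing left to do and this call should return quickly. */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1) {
        perror("ThreadCtl");
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}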

________________________________________________________________________
NOTE: This entry has been validated against the SDP version listed above. Use caution when considering this advice for any other SDP version. For supported releases, please reach out to QNX Technical Support if you have any questions/concerns.
________________________________________________________________________


Related Attachments
 None Found